repo_name: stringlengths 4-136
issue_id: stringlengths 5-10
text: stringlengths 37-4.84M
facebook/react-native
389990743
Title: Warnings in Xcode Question: username_0: Xcode shows multiple warnings when either `RNTester/RNTester/xcodeproj` or `template/ios/HelloWorld.xcodeproj/` (scaffold for new apps) is built. These warnings have accumulated over the years (https://github.com/facebook/react-native/issues/19628), and it is [one of the concerns with the most reactions](https://github.com/react-native-community/discussions-and-proposals/issues/64#issuecomment-444835761) in a recent survey of React Native users. This issue will track all efforts to reduce the number of warnings in our Xcode projects. If you send a pull request that removes a warning listed in this issue, please let us know so we can take a look at your proposed change. # RNTester ## Targets Xcode breaks down the warnings by target. - [ ] double-conversion - [ ] jsi - [ ] third-party (folly, glog) - [ ] cxxreact - [ ] jsiexecutor - [ ] React - [ ] RCTNetwork - [ ] RCTImage - [ ] RNTester Note that some of these warnings may originate in third party libraries. Further, these warnings may be resolved in newer versions of these libraries. As such, you may need to upgrade some of these libraries to resolve a warning. Careful testing will be needed in order to make sure no breaking changes are introduced as part of these upgrades. ## Detailed list of warnings As of this writing, Xcode 10.1 presents warnings on the following files when building `RNTester`: <details> <summary> Target: double-conversion </summary> ``` third-party/double-conversion-1.1.6/src/double-conversion.cc:825:10: Declaration shadows a local variable ``` </details> <details> <summary> Target: jsi </summary> ``` Semantic Issue Group ReactCommon/jsi/JSCRuntime.cpp:575:22: Unused parameter 'ctx' ReactCommon/jsi/JSCRuntime.cpp:612:22: Unused parameter 'ctx' third-party/folly-2018.10.22.00/folly/Memory.h:51:30: Possible misuse of comma operator here ReactCommon/jsi/JSIDynamic.cpp:6:10: In file included from ReactCommon/jsi/JSIDynamic.cpp:6: ReactCommon/jsi/JSIDynamic.h:8:10: In file included from ReactCommon/jsi/JSIDynamic.h:8: third-party/folly-2018.10.22.00/folly/dynamic.h:67:10: In file included from third-party/folly-2018.10.22.00/folly/dynamic.h:67: third-party/folly-2018.10.22.00/folly/container/F14Map.h:38:10: In file included from third-party/folly-2018.10.22.00/folly/container/F14Map.h:38: third-party/folly-2018.10.22.00/folly/container/detail/F14Policy.h:23:10: In file included from third-party/folly-2018.10.22.00/folly/container/detail/F14Policy.h:23: third-party/folly-2018.10.22.00/folly/Memory.h:51:21: Cast expression to void to silence warning third-party/folly-2018.10.22.00/folly/Memory.h:51:50: Possible misuse of comma operator here ReactCommon/jsi/JSIDynamic.cpp:6:10: In file included from ReactCommon/jsi/JSIDynamic.cpp:6: ReactCommon/jsi/JSIDynamic.h:8:10: In file included from ReactCommon/jsi/JSIDynamic.h:8: third-party/folly-2018.10.22.00/folly/dynamic.h:67:10: In file included from third-party/folly-2018.10.22.00/folly/dynamic.h:67: third-party/folly-2018.10.22.00/folly/container/F14Map.h:38:10: In file included from third-party/folly-2018.10.22.00/folly/container/F14Map.h:38: third-party/folly-2018.10.22.00/folly/container/detail/F14Policy.h:23:10: In file included from third-party/folly-2018.10.22.00/folly/container/detail/F14Policy.h:23: [Truncated] ``` API Misuse (Apple) Group Libraries/Network/RCTNetInfo.m:66:60: Dictionary value cannot be nil ``` </details> <details> <summary> Target: RCTImage </summary> ``` Core Foundation/Objective-C Group 
Libraries/Image/RCTImageCache.m:41:3: Instance variable used while 'self' is not set to the result of '[(super or self) init...]' ``` </details> # Extra Credit Help track our progress and prevent regressions by logging the number of warnings produced by our Xcode projects. An increase in the number of warnings on a clean project should be considered a regression in CI. You may consider adding a build script that [outputs the warnings to a file](https://stackoverflow.com/questions/20047112/export-all-warnings-in-file-in-xcode), and use our existing dangerbot infrastructure to flag PRs that regress. See [React's sizebot for an example](https://github.com/facebook/react/blob/master/dangerfile.js). Answers: username_1: @username_0 I ticked off what I believe has been done. Is it really everything and are the warnings gone or do we have warnings remaining? As a next step, could we make warnings land blocking so that we don't end up introducing new warnings? There is no point in having these warnings if we don't act on them. username_2: There is just one more Xcode 10.1 warning about the RNTester project. It's `warning: All interface orientations must be supported unless the app requires full screen.` which should be solved with something from https://stackoverflow.com/questions/37168888/ios-9-warning-all-interface-orientations-must-be-supported-unless-the-app-req. There are still some for `React`, `yoga` and `third-party`. There are some yellow boxes popping up when we run RNTester. Also, for Xcode 10.2 we can do a new PR when it's out of beta I guess? Status: Issue closed username_1: There may be a few warnings here and there that are introduced, but overall we made great progress and got rid of almost all existing warnings. Let's make sure we don't introduce new warnings! username_0: @username_1 I have a PR that will fail CI builds on warnings: https://github.com/facebook/react-native/pull/24035 username_1: Oh nice!!
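As a hedged sketch of the "extra credit" warning-tracking idea above: the script below counts `warning:` lines in an xcodebuild log and fails when the count exceeds a stored baseline. The file names (`build.log`, `warning-baseline.txt`) are hypothetical, not part of React Native's CI.

```python
#!/usr/bin/env python3
"""Compare the number of Xcode warnings in a build log against a stored baseline.

Minimal sketch of the regression-tracking idea discussed in this issue;
`build.log` and `warning-baseline.txt` are hypothetical names, not files
that exist in the React Native repository.
"""
import re
import sys
from pathlib import Path

WARNING_RE = re.compile(r"\bwarning:", re.IGNORECASE)

def count_warnings(log_path: Path) -> int:
    # xcodebuild prints one "warning:" token per diagnostic line.
    return sum(1 for line in log_path.read_text(errors="replace").splitlines()
               if WARNING_RE.search(line))

def main() -> int:
    log = Path(sys.argv[1] if len(sys.argv) > 1 else "build.log")
    baseline_file = Path("warning-baseline.txt")
    current = count_warnings(log)
    baseline = int(baseline_file.read_text()) if baseline_file.exists() else current
    print(f"warnings: {current} (baseline: {baseline})")
    if current > baseline:
        print("Regression: warning count increased.", file=sys.stderr)
        return 1
    # Ratchet the baseline down whenever warnings are removed.
    baseline_file.write_text(str(current))
    return 0

if __name__ == "__main__":
    sys.exit(main())
```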
composer/composer
285066296
Title: composer repository with allow_ssl_downgrade and custom certificate
Question:
username_0: I am using a private Satis instance on a custom top-level domain. That's the way our internal boxes work, therefore I use them. It's also the reason why I can't use Let's Encrypt certificates, because those don't work with custom TLDs.

To whitelist this single composer repository I used `allow_ssl_downgrade`.

My `composer.json`:
```json
{
    "name" : "custom/myapp",
    "config" : {
        "optimize-autoloader" : true,
        "platform" : {
            "php" : "7.0"
        }
    },
    "require" : {
        "custom/package1" : "^4.0@dev",
        "custom/package2" : "^2.0@dev"
    },
    "repositories" : [{
        "type" : "composer",
        "url" : "https://satis.deployphp16",
        "allow_ssl_downgrade": true
    }]
}
```

Output of `composer diagnose`:
```
composer diagnose
Checking platform settings: OK
Checking git settings: OK
Checking http connectivity to packagist: OK
Checking https connectivity to packagist: OK
Checking github.com rate limit: OK
Checking disk free space: OK
Checking pubkeys:
Tags Public Key Fingerprint: 57815BA2 7E54DC31 7ECC7CC5 573090D0 87719BA6 8F3BB723 4E5D42D0 84A14642
Dev Public Key Fingerprint: <KEY> 0C708369 153E328C AD90147D AFE50952
OK
Checking composer version: OK
```

When I run this command:
```
composer update -vvv
```

I get the following output:
```
[Composer\Downloader\TransportException]
The "https://satis.deployphp16/packages.json" file could not be downloaded: SSL operation failed with code 1. OpenSSL Error messages:
error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed
Failed to enable crypto
failed to open stream: operation failed
```

And I expected this to happen: no SSL error, because `allow_ssl_downgrade` is set to `true`.

Am I using the wrong option to allow a self-signed cert for this single repo?
Answers:
username_0: Ah, it seems the correct options are:
```
"repositories" : [{
    "type" : "composer",
    "url" : "https://satis.deployphp16",
    "options": {
        "ssl": {
            "verify_peer": false,
            "verify_peer_name": false,
            "allow_self_signed": true
        }
    }
}]
```
found in the example of https://github.com/composer/composer/issues/4786
Status: Issue closed
username_1: `allow_self_signed: true` is the only option I needed
```
"repositories" : [{
    "type" : "composer",
    "url" : "https://satis.deployphp16",
    "options": {
        "ssl": {
            "allow_self_signed": true
        }
    }
}]
```
username_2: Also had similar issues on Travis CI; my fix is a combination of disabling HTTP fallback, IPv6 and also using a different DNS provider. :roll_eyes: https://twitter.com/IEMIXER/status/1158378635037949952
xinthink/cocmvc
65163214
Title: java.lang.NullPointerException: at Editor.java:1927 Question: username_0: time: 2015-02-12T04:51:14Z(1423716674431) app version: [241]2.4.1 device: d2ltetmo, OS version: 18 java.lang.NullPointerException: at android.widget.Editor.onTouchUpEvent(Editor.java:1927) or nativeCrash: or find [details here](https://play.google.com/apps/publish?dev_acc=08732249859506944557#ErrorClusterDetailsPlace:p=com.thirdrock.fivemiles&lr=LAST_6_MONTHS&sh=false&s=new_status_desc&ed=1423716674431&et=CRASH&ecn=java.lang.NullPointerException&tf=Editor.java&tc=android.widget.Editor&tm=onTouchUpEvent).<issue_closed> Status: Issue closed
GCTC-NTGC/TalentCloud
761364312
Title: Task - Manager/HR - Remove "Don't Meet Essential Criteria" in Applicant list
Question:
username_0: # Description
After the implementation of the "Timeline" application design, it's impossible for applications to come in with skills at a lower level than required. This means the "Don't meet essential criteria" section for such applications will now sit empty. We should remove this bucket to avoid confusing managers.

# Images
Place image attachments (wireframes, etc.) here.
![image](https://user-images.githubusercontent.com/16977254/101795955-74af7400-3ad6-11eb-9dbb-7f5da9399fe3.png)

- [ ] This task requires a unit test.
- [ ] This task is complex and needs acceptance criteria.

# Acceptance criteria
- [ ] Requirement one
- [ ] Requirement two
- [ ] Requirement three
bbc/hive-scheduler
192332593
Title: Question on where Appium should be running in a Hive CI setup
Question:
username_0: Hi, sorry, this is the only avenue available to communicate with the team regarding doubts on Hive CI.

I have set it up locally on my Mac and it successfully ran a GitHub-based UI automation on iPhone and Android mobile devices connected to my Mac. In this case, scheduler and runner were in the same box, and Appium was running as well. The test script was pointing to http://0.0.0.0:4723/wd/hub and ran fine.

Second case: I am also able to start a runner on another Mac (say Mac2) and point it to my scheduler's private IP on Mac1, and I could see the iPhone attached to Mac2 via http://localhost:3000/workers on Mac1 (scheduler).

My questions:
- In the second case, when I run a test via "test batch", do I have to run Appium on the runner or the scheduler?
- Also, I believe that if I have more than 1 runner and I want my test to run on all of them, I don't have to have any parallelization built into the test. Just use env variables to run the same test pointing to different Appium servers?
Answers:
username_1: Hi @username_0,
1. There should be only one runner running on one Mac/Linux machine. Depending on the runner specified (like android or ios) it will detect the devices connected to the machine. We can check which runner is configured, and the device list and queue names, by running `hived status`. [Refer](https://github.com/bbc/hive-runner)
2. Regarding how to start the Appium server in hive-ci: it is done through scripts. Hive starts it in a different process. How to do it in hive? [Refer](http://bbc.github.io/hive-ci/documentation/appium_test.html)
3. Batch is the trigger point for tests. When we specify the queue name, build and other details in a batch and submit, it should start tests on the device associated with the queue.
4. [Refer](http://bbc.github.io/hive-ci/documentation/running-your-tests.html) to this to create a script, project and batch in hive.

At a very high level:
`Script` runs on the command line. It should include any pre-execution setup, like starting an Appium node, and the execution command.
`Project` includes the script and other information like the queue name.
`Batch` is the trigger point for jobs/tasks/tests.
username_1: Just to add on to that, we don't have to do any parallelization as that is handled by Hive in a different process. We just need to assign a queue name for each device.
username_0: Thanks @username_1, my main question was where Appium should be running; running it on the machine where the runner runs is what I find:
`scheduler <-> runner 1 (runner + xcode + appium, etc)`
`          <-> runner 2 (runner + xcode + appium, etc)`
username_1: Yes, it should be running where the runner is running. Any testing tool we use is independent of Hive-Scheduler (it's the web interface to create and schedule jobs).
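To illustrate the pre-execution setup username_1 describes (starting an Appium node from the hive `Script` before the execution command), here is a minimal sketch; the port, wait loop, and pytest command are illustrative assumptions, not hive-ci defaults.

```python
#!/usr/bin/env python3
"""Start a local Appium server, wait for it, run tests, then shut it down.

Sketch only: the port, readiness check, and test command are assumptions;
real hive-ci scripts are configured per project.
"""
import socket
import subprocess
import sys
import time

APPIUM_PORT = 4723

def wait_for_port(port: int, timeout: float = 30.0) -> bool:
    # Poll until something accepts TCP connections on the port.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with socket.socket() as sock:
            if sock.connect_ex(("127.0.0.1", port)) == 0:
                return True
        time.sleep(0.5)
    return False

appium = subprocess.Popen(["appium", "--port", str(APPIUM_PORT)])
try:
    if not wait_for_port(APPIUM_PORT):
        sys.exit("Appium did not come up in time")
    # Run the actual test suite against http://127.0.0.1:4723/wd/hub.
    sys.exit(subprocess.call([sys.executable, "-m", "pytest", "tests/"]))
finally:
    appium.terminate()
    appium.wait()
```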
symfony/symfony
770876791
Title: [Notifier] [Slack] [DX] Improve the DX
Question:
username_0: This is the full DSN: `SLACK_DSN=slack://TOKEN@default?channel=CHANNEL`

I tried a lot to figure out **WHAT** you need to pass exactly:
* If you want to write to the channel `#support` -> `?channel=support` ✅
* If you want to write to the user `@username_0` -> `?channel=@username_0` ✅
* If you want to write to the user `@username_0` you can also use the _UserId_, which can be found in your profile -> `?channel=U68.....` 🤔 ✅

![ss](https://user-images.githubusercontent.com/995707/102617522-a01b0a00-4139-11eb-9ce7-321d0dbc3992.png)

So far this looks OK, but there are some pitfalls we can avoid. One would think you could just use:
* If you want to write to channel `#support` -> `?channel=#support` ❌ **NO # sign allowed** 😮
* If you want to write to a user `@username_0` -> `?channel=username_0` ❌ **NO, @ sign needed** 😮
* If you want to write to a user `@username_0` with their _UserId_ -> `?channel=@U68.....` ❌ **NO @ sign allowed** 😮

**My proposal:** A new `user=` (or `user_handle=`) and `user_id=` parameter. We can then validate that a channel must not start with `#`, a user must start with `@`, and a user_id must not start with `@`.

### The correct token
**The Problem:** Because we switched back and forth from a token to a webhook_id, which can be considered a "token" too, it can be hard to find out if you are using the correct token.

**My proposal:** Let's validate the token syntax in the transport and give a clear error message. Slack has a [clear syntax](https://api.slack.com/authentication/token-types#granular_bot) for their tokens which lets us validate the syntax before we perform a request.
* Bot user token strings begin with `xoxb-`
* User token strings begin with `xoxp-`
* Workspace access token strings begin with `xoxa-2`

cc @malteschlueter as we both had some trouble in the past
Answers:
username_1: `?channel=#channel` works if you escape the `#`
username_0: Oh, nice to know 😄
username_2: If we have a dedicated `user` option, the `@` character feels redundant.
```diff
- slack://TOKEN@default?user=@username_0
+ slack://TOKEN@default?user=username_0
```
username_3: AFAIK, Slack deprecates the Web API methods for `channels.*`, `groups.*`, `im.*` (direct messages) etc. in favor of a unified API that handles all of them.
* https://api.slack.com/docs/conversations-api
* https://api.slack.com/changelog/2020-01-deprecating-antecedents-to-the-conversations-api

Maybe we should follow the same move, and instead of having multiple query params available, have only `conversation`. The DSN could be:
* slack://TOKEN@default?conversation=#support
* slack://TOKEN@default?conversation=@username_0
* slack://TOKEN@default?conversation=U68XXXX
username_0: I like your idea a lot @username_3 👍
username_0: ![CleanShot 2020-12-18 at 15 38 41](https://user-images.githubusercontent.com/995707/102626490-27bb4580-4147-11eb-8008-9aebd6ea3313.png)
I would then create a new `symfony/slack-conversations-notifier` instead of reworking the current Slack notifier and start dealing with different DSNs.
username_0: I don't have so much time right now, let's close this for now 👍
Status: Issue closed
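To make the proposed validation concrete, here is a hedged sketch of the rules above, written in Python purely for illustration (the real change would live in Symfony's Slack transport, and the function name is made up):

```python
"""Illustrative sketch of the validation proposed above (not Symfony code).

Checks the Slack token prefix and the channel/user syntax before any
request is made, so misconfiguration fails fast with a clear message.
"""

VALID_TOKEN_PREFIXES = ("xoxb-", "xoxp-", "xoxa-2")

def validate_slack_dsn(token: str, recipient: str) -> None:
    if not token.startswith(VALID_TOKEN_PREFIXES):
        raise ValueError(
            f"Slack token must start with one of {VALID_TOKEN_PREFIXES}, "
            f"got {token[:6]!r}..."
        )
    if recipient.startswith("#"):
        raise ValueError("Channel names must be given without the leading '#'.")
    if recipient.startswith("@U"):
        # Naive heuristic: user IDs (U...) must be passed without '@',
        # while user handles need the '@'.
        raise ValueError("User IDs must be given without the leading '@'.")

# Example: validate_slack_dsn("xoxb-123", "support") passes, while
# validate_slack_dsn("xoxb-123", "#support") raises.
```

The `@U` check is only a heuristic for the sketch; a real implementation would match Slack's documented ID format.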
MatMaul/pynetgear
1091285036
Title: Feature request: Add operation to retrieve firewall logs
Question:
username_0: This request is to add functionality to retrieve firewall logs as shown under **Administration > Logs** in the Netgear GUI. I was able to test a call to the **GetSystemLogs** API (referenced from https://github.com/MatMaul/pynetgear/issues/20#issuecomment-428465492) on an R8000 (V1.0.4.76_10.1.82), but the response returns an incomplete subset of the logs (always returns 22 vs. 200+ lines shown in the GUI). Also, the last line gets truncated, and the logs in the response do not seem to get refreshed as frequently as when viewing them in the GUI.

```python
def get_logs(self):
    success, response = self._make_request(
        SERVICE_DEVICE_INFO, "GetSystemLogs"
    )
    if not success:
        return None

    success, node = _find_node(
        response.text,
        ".//GetSystemLogsResponse/NewLogDetails")
    if not success:
        return None

    logs = node.text.split('\n')
    return logs
```

Example response snippet showing the truncated ending line:
```
[Admin login] from source 192.168.2.66, Thursday, Dec 30,2021 12:11:43
[Site allowed: firetvcaptiveportal.com] from source 192.168.2.52, Thursday, Dec 30,2021 12:11:18
[Site allowed: clientconfig.akamai.steamstatic.com] from source 192
```
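For context, a hedged usage sketch of the proposed method: `Netgear(password=...)` is pynetgear's real entry point, but `get_logs()` is the method proposed in this issue, not part of the released API.

```python
from pynetgear import Netgear

# Hypothetical usage: get_logs() is the method proposed above,
# not part of the released pynetgear API.
netgear = Netgear(password="router-admin-password")
logs = netgear.get_logs()
if logs is None:
    print("GetSystemLogs call failed")
else:
    for line in logs:
        print(line)
```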
apache/airflow
1166812271
Title: dag_processing code needs to handle OSError("handle is closed") in poll() and recv() calls Question: username_0: ### Apache Airflow version 2.1.4 ### What happened The problem also exists in the latest version of the Airflow code, but I experienced it in 2.1.4. This is the root cause of problems experienced in [issue#13542](https://github.com/apache/airflow/issues/13542). I'll provide a stack trace below. The problem is in the code of airflow/dag_processing/processor.py (and manager.py), all poll() and recv() calls to the multiprocessing communication channels need to be wrapped in exception handlers, handling OSError("handle is closed") exceptions. If one looks at the Python multiprocessing source code, it throws this exception when the channel's handle has been closed. This occurs in Airflow when a DAG File Processor has been killed or terminated; the Airflow code closes the communication channel when it is killing or terminating a DAG File Processor process (for example, when a dag_file_processor_timeout occurs).This killing or terminating happens asynchronously (in another process) from the process calling the poll() or recv() on the communication channel. This is why an exception needs to be handled. A pre-check of the handle being open is not good enough, because the other process doing the kill or terminate may close the handle in between your pre-check and actually calling poll() or recv() (a race condition). ### What you expected to happen Here is the stack trace of the occurence I saw: ``` [2022-03-08 17:41:06,101] {taskinstance.py:914} DEBUG - <TaskInstance: staq_report_daily.gs.wait_staq_csv_file 2022-03-06 17:15:00+00:00 [running]> dependency 'Not In Retry Period' PASSED: True, The context specified that being in a retry p eriod was permitted. 
[2022-03-08 17:41:06,101] {taskinstance.py:904} DEBUG - Dependencies all met for <TaskInstance: staq_report_daily.gs.wait_staq_csv_file 2022-03-06 17:15:00+00:00 [running]> [2022-03-08 17:41:06,119] {scheduler_job.py:1196} DEBUG - Skipping SLA check for <DAG: gdai_gcs_sync> because no tasks in DAG have SLAs [2022-03-08 17:41:06,119] {scheduler_job.py:1196} DEBUG - Skipping SLA check for <DAG: unity_creative_import_process> because no tasks in DAG have SLAs [2022-03-08 17:41:06,119] {scheduler_job.py:1196} DEBUG - Skipping SLA check for <DAG: sales_dm_to_bq> because no tasks in DAG have SLAs [2022-03-08 17:44:50,454] {settings.py:302} DEBUG - Disposing DB connection pool (PID 1902) Process ForkProcess-1: Traceback (most recent call last): File "/opt/python3.8/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap self.run() File "/opt/python3.8/lib/python3.8/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/opt/python3.8/lib/python3.8/site-packages/airflow/dag_processing/manager.py", line 370, in _run_processor_manager processor_manager.start() File "/opt/python3.8/lib/python3.8/site-packages/airflow/dag_processing/manager.py", line 610, in start return self._run_parsing_loop() File "/opt/python3.8/lib/python3.8/site-packages/airflow/dag_processing/manager.py", line 671, in _run_parsing_loop self._collect_results_from_processor(processor) File "/opt/python3.8/lib/python3.8/site-packages/airflow/dag_processing/manager.py", line 981, in _collect_results_from_processor if processor.result is not None: File "/opt/python3.8/lib/python3.8/site-packages/airflow/dag_processing/processor.py", line 321, in result if not self.done: File "/opt/python3.8/lib/python3.8/site-packages/airflow/dag_processing/processor.py", line 286, in done if self._parent_channel.poll(): File "/opt/python3.8/lib/python3.8/multiprocessing/connection.py", line 255, in poll self._check_closed() File "/opt/python3.8/lib/python3.8/multiprocessing/connection.py", line 136, in _check_closed raise OSError("handle is closed") OSError: handle is closed ``` This corresponded in time to the following log entries: ``` % kubectl logs airflow-scheduler-58c997dd98-n8xr8 -c airflow-scheduler --previous | egrep 'Ran scheduling loop in|[[]heartbeat[]]' [2022-03-08 17:40:47,586] {scheduler_job.py:813} DEBUG - Ran scheduling loop in 0.56 seconds [2022-03-08 17:40:49,146] {scheduler_job.py:813} DEBUG - Ran scheduling loop in 0.56 seconds [2022-03-08 17:40:50,675] {base_job.py:227} DEBUG - [heartbeat] [Truncated] cloud.google.com/gke-nodepool: scheduler-pool containers: - name: gcs-syncd resources: limits: memory: 2Gi ``` ### Anything else On the below checkbox of submitting a PR, I could submit one, but it'd be untested code, I don't really have the environment setup to test the patch. ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md) Answers: username_1: Feel free to submit a pull request to handle the exception! We can figure out how to test the solution in the review process. BTW I donโ€™t know what your current fix looks like, but `OSError` has an `errno` attribute, checking that in the error handling code may be appropriate as well. (Not sure, I donโ€™t even know what errno this error has right now.) username_0: I plan to submit a PR within the next two weeks. username_2: # Trying to explain things... 
Our team has run into this issue time and time again. We have tried different combinations of both Airflow and Python versions to no avail.

## TL;DR
When a `DAGFileProcessor` hangs and is killed due to a timeout, we believe the `self.waitables` and `self._processors` attributes of the `DAGFileProcessorManager` are not being updated as they should be. This causes an unhandled exception when trying to receive data on a pipe end (i.e. file descriptor) which has already been closed.

## The long read...
We're running a decoupled Airflow deployment within a k8s cluster. We are currently using a 3-container *pod* where one of them runs the *Web Server*, another one executes the *Scheduler* and the third one implements *Flower* (we're using the *CeleryExecutor*). The backbone of the deployment is implemented through a *StatefulSet* that runs the Celery executors themselves.

The trace we were seeing on the scheduler time and time again was:

```
Process ForkServerProcess-1:
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/usr/local/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "/home/airflow/.local/lib/python3.7/site-packages/airflow/dag_processing/manager.py", line 370, in _run_processor_manager
    processor_manager.start()
  File "/home/airflow/.local/lib/python3.7/site-packages/airflow/dag_processing/manager.py", line 610, in start
    return self._run_parsing_loop()
  File "/home/airflow/.local/lib/python3.7/site-packages/airflow/dag_processing/manager.py", line 671, in _run_parsing_loop
    self._collect_results_from_processor(processor)
  File "/home/airflow/.local/lib/python3.7/site-packages/airflow/dag_processing/manager.py", line 981, in _collect_results_from_processor
    if processor.result is not None:
  File "/home/airflow/.local/lib/python3.7/site-packages/airflow/dag_processing/processor.py", line 321, in result
    if not self.done:
  File "/home/airflow/.local/lib/python3.7/site-packages/airflow/dag_processing/processor.py", line 286, in done
    if self._parent_channel.poll():
  File "/usr/local/lib/python3.7/multiprocessing/connection.py", line 255, in poll
    self._check_closed()
  File "/usr/local/lib/python3.7/multiprocessing/connection.py", line 136, in _check_closed
    raise OSError("handle is closed")
OSError: handle is closed
```

This has been thrown by Airflow 2.1.3, but we've seen very similar (if not equal) variations with versions all the way up to Airflow 2.2.4.

Given we traced the problem down to the way multiprocessing synchronisation was being handled, we played around with `multiprocessing`'s [*start method*](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods) through the `mp_start_method` configuration parameter, which wasn't included in the stock configuration example: https://github.com/apache/airflow/blob/f309ea78f7d8b62383bc41eac217681a0916382b/airflow/utils/mixins.py#L27-L38

The containers we are using leverage `fork` as the default way of creating new processes. After trying that one out we moved on to using `spawn` and ended up settling on `forkserver`. No matter the *start method* we leveraged, we ran into the same issue over and over again.

For a while we coped with this behaviour by just restarting the Airflow deployment on an hourly basis, but we decided to set some time apart today to delve a bit deeper into all this. The good news is, after a thorough investigation, we noticed a pattern that preceded the crash.
In order to pin it down we ran [`ps(1)`](https://www.man7.org/linux/man-pages/man1/ps.1.html) on the scheduler container. We also monitored the *DAG Processor Manager* log (which we have at `/opt/airflow/logs/dag_processor_manager/dag_processor_manager.log` given our Airflow home is `/opt/airflow`) and we took a look at the scheduler's log through `kubectl logs` given it's sent to *stdout/stderr*.

The pattern itself goes something like:

1. A `DAGFileProcessor` gets stuck for longer than `dag_file_processor_timeout` as seen on `ps`' output.
2. As soon as the timeout is exceeded, the `DAGFileProcessorManager` kills the stuck `DAGFileProcessor`.
3. When the `DAGFileProcessorManager` tries to collect results back from the different `DAGFileProcessor`s it crashes.

The above led us to believe something was a bit off in the way the `DAGFileProcessor`s were being killed. Given our Docker-based deployment allowed for it, we retrieved a copy of the stock [`manager.py`](https://github.com/apache/airflow/blob/v2-1-stable/airflow/dag_processing/manager.py) and [`processor.py`](https://github.com/apache/airflow/blob/v2-1-stable/airflow/dag_processing/processor.py) files and added a bit of logging through `self.log.debug()`. The following appeared in our `DAGFileProcessorManager` log:

```
[2022-03-21 13:01:00,747] {manager.py:1163} DEBUG - FOO - Looking for DAG processors to terminate due to timeouts!
[2022-03-21 13:01:00,748] {manager.py:1172} ERROR - Processor for /opt/airflow/dags/dags-airflow/spark_streaming.py with PID 965 started at 2022-03-21T13:00:10.536124+00:00 has timed out, killing it.
[2022-03-21 13:01:00,748] {manager.py:1178} DEBUG - FOO - # of waitables BEFORE killing timed out processor: 2
[2022-03-21 13:01:00,748] {manager.py:1180} DEBUG - FOO - # of waitables AFTER killing timed out processor: 2
```

[Truncated]

```python
self.log.warning("Killing DAGFileProcessorProcess (PID=%d)", self._process.pid)
os.kill(self._process.pid, signal.SIGKILL)

# Reap the resulting zombie! Note the call to `waitpid()` blocks unless we
# leverage the `WNOHANG` (https://docs.python.org/3/library/os.html#os.WNOHANG)
# option. Given we were just playing around we decided not to bother with that...
self.log.warning(f"FOO - Waiting to reap the zombie spawned from PID {self._process.pid}")
os.waitpid(self._process.pid, 0)
self.log.warning(f"FOO - Reaped the zombie spawned from PID {self._process.pid}")

if self._parent_channel:
    self._parent_channel.close()
```

From what we could see, the above reaped the zombie like we initially expected it to. So, after all this nonsense we just wanted to end up by saying that we believe it's the way the `DAGFileProcessorManager`'s attributes are being cleaned up that crashes Airflow for us. In our experience this is triggered by a `DAGFileProcessor` being forcefully terminated after a timeout.

We would also like to thank everybody making Airflow possible: it's one heck of a tool!

Feel free to ask for more details and, if we got anything wrong (it wouldn't be the first time), please do let us know!
username_3: @username_2 - I :heart: your detailed description and explanation. It reads like a good crime story :dagger: Fantastic investigation and outcome. How about you clean it up a bit and submit a PR fixing it?
username_2: Hi @username_3! I'm glad you found our investigation useful and that you had fun reading through it.
Reading so many Agatha Christie books has to pay off at some point 😜 I would be more than happy to polish it all up and open a Pull Request so that the changes are incorporated into Airflow itself. I'll do my best to find some time to do it throughout the week. And thanks a ton for the kind words! I really appreciate it 😋
username_3: Cool. If you have any questions during the contribution process - happy to help - just "@" me. And even if you are not sure about some of the decisions, we can discuss it in the PR and iterate before we merge (and drag in more of the minds here to make it really good)
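As a minimal sketch (not the actual Airflow patch) of the fix both reporters converge on — treating `OSError` from a concurrently closed pipe end as "processor finished" instead of crashing:

```python
from multiprocessing import Pipe
from multiprocessing.connection import Connection

def processor_is_done(channel: Connection) -> bool:
    """Sketch of the defensive check discussed above.

    In Airflow the channel would be the processor's _parent_channel;
    here it is a bare Connection.
    """
    try:
        # poll() raises OSError("handle is closed") if another process
        # killed the DAG file processor and closed the handle between
        # our call site and this line.
        return channel.poll()
    except OSError:
        return True  # Channel gone: the processor was killed/terminated.

if __name__ == "__main__":
    parent, child = Pipe()
    parent.close()  # Simulate the race: handle closed before poll().
    print(processor_is_done(parent))  # True instead of an unhandled OSError
```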
patrickkwang/cruncher
158810262
Title: sliders for confidence intervals Question: username_0: for example, iCDF(0.05) to iCDF(0.95) -> [a,b] Answers: username_1: I don't understand what you want. Is this one slider with two points on it to define a range under which we want to compute the iCDF, or two or more different sliders? username_0: Two points, one slider. It would also be super neat if we could tie the two sliders together such that the user can move either one, but the other stays the same distance from 0.5 (e.g. 0.05 and 0.95, or 0.2 and 0.8, but not 0.2 and 0.95, because that would be a ridiculous confidence interval). The iCDF is computed for both points, e.g. a 90% (0.05/0.95) confidence interval could be [1,11] if the mean is 5. username_1: 1db49cb Status: Issue closed
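For the record, a small sketch of the tied-slider math (scipy-based, illustrative; the mean and standard deviation are arbitrary examples): one handle at p determines the other at 1 − p, and the interval is the inverse CDF evaluated at both points.

```python
from scipy.stats import norm

def confidence_interval(p, mean=5.0, sd=3.0):
    """Return (iCDF(p), iCDF(1 - p)) with the two points tied around 0.5.

    Moving either handle keeps the other mirrored, so (0.05, 0.95) and
    (0.2, 0.8) are reachable but (0.2, 0.95) is not.
    """
    p = min(p, 1.0 - p)  # normalize: treat the input as the lower tail
    dist = norm(loc=mean, scale=sd)
    return dist.ppf(p), dist.ppf(1.0 - p)

print(confidence_interval(0.05))  # ~ (0.065, 9.935): a 90% interval around 5
```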
conda-forge/staged-recipes
467877051
Title: Package wasmer Question: username_0: `wasmer` is: ``` ...a $X library for executing WebAssembly binaries: Easy to use: The wasmer API mimics the standard WebAssembly API, Fast: wasmer executes the WebAssembly modules at native speed, Safe: All calls to WebAssembly will be fast, but more importantly, completely safe and sandboxed. ``` ...where it looks like `$X` can be replaced by: ``` - Rust -C/C++ - PHP - Python - Ruby - Go ``` links: - https://pypi.org/project/wasmer/ - https://wasmer.io/ - https://news.ycombinator.com/item?id=19670826 related feedstocks: - https://github.com/conda-forge/staged-recipes/pull/8816 - https://github.com/conda-forge/snapshottest-feedstock/pull/1 Answers: username_0: As of a few days ago, (some parts) of `wasmer@master` will compile with `rust` stable. Some of the language-specific extensions (notably python) still require nightly. Still very interesting! username_0: It looks like wasmer is working up to release a `1.0.0` (currently in alpha). Now that our rust situation is a bit more normalized, it's probably worth taking another shot at this... username_0: [Wasmer 1.0 is out](https://medium.com/wasmer/wasmer-1-0-3f86ca18c043)! I'll probably take a look at this soon, don't know how much has changed since #9311 username_1: @username_0 count me in! very interested in this as well! username_0: @username_1 my current best-effort is here: https://github.com/conda-forge/staged-recipes/pull/13622 - wasmer expects a newer osx, so kinda hard blocked there - haven't tried windows :blush: - haven't done the whole third-party license dance, sorted, though have made pretty good progress on e.g. [geckodriver](https://github.com/conda-forge/geckodriver-feedstock/blob/master/recipe/meta.yaml) on tightening up that process I think to get it really working to my satisfaction, at least [wasmer-python](https://github.com/wasmerio/wasmer-python) would need to be tackled... having at least the runtime (and maybe `wapm`) up would be huge. username_1: Awesome. I've been thinking we should have a wasm architecture for conda and ship our core library stack for wasm as well. I played around with emscripten and emcc and it seems quite doable :) But there are also a lot of unsolved questions username_0: I think this is a very worthy goal. To my knowledge, there isn't a "The Package Manager" for webassembly: wasmer has `wamp`, and pyodide has... it's own thing. On jyve, which only targets the browser, I just (ab)use `npm`/`webpack`. Whether we're talking a baseline of `conda-on-pyodide` or `microwamba`, it seems like it would be a significant step forward if it worked with all/most of the existing conda tooling e.g. `conda-lock`, and could be a "free" output of a feedstock, after a migration. username_0: If anybody's tracking this, but not the current 1.0.1 PR (#13622), it's ready for review! username_0: Welp, we did it: https://github.com/conda-forge/wasmer-feedstock Please chime in over there, anybody listening, if you have things you want to see... and better still if you can help! There's a lot to do! Status: Issue closed
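For anyone landing here, a minimal usage sketch of the packaged runtime following wasmer-python's documented 1.0 API; the `sum.wasm` module and its exported two-argument `sum` function are placeholders for illustration.

```python
from wasmer import engine, Store, Module, Instance
from wasmer_compiler_cranelift import Compiler

# Compile and instantiate a WebAssembly module. `sum.wasm` (exporting a
# two-argument `sum` function) is a hypothetical example module.
store = Store(engine.JIT(Compiler))
module = Module(store, open("sum.wasm", "rb").read())
instance = Instance(module)
print(instance.exports.sum(5, 37))  # -> 42
```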
robertyuyang/StylisticFingerprinting
527789708
Title: java.lang.NullPointerException: at Editor.java:1927 — wait, correction: Title: RuntimeError: maximum recursion depth exceeded
Question:
username_0: Hello. I've been analyzing some projects. I'm getting a "maximum recursion depth exceeded" error when analyzing the following function:

```cpp
StatusOr<XlaOp> XlaBuilder::AddInstruction(
    HloInstructionProto&& instr, HloOpcode opcode,
    tensorflow::gtl::ArraySlice<XlaOp> operands) {
  // rest of the code
}
```

`IsRValueType` is called recursively and the recursion depth exceeds the limit. Removing the whitespace in `StatusOr<XlaOp> XlaBuilder::AddInstruction` resolves the error, but that changes the style, doesn't it? Fixing this would be greatly appreciated. :)
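Until the recursion in `IsRValueType` is rewritten iteratively, a common stopgap on the caller's side (a sketch, not a fix in the project itself) is to raise Python's recursion ceiling around the offending call:

```python
import sys

def with_recursion_limit(limit, func, *args):
    """Run func(*args) under a temporarily raised recursion ceiling.

    Stopgap only: deep-but-finite recursion (as in IsRValueType here) can
    be accommodated this way, but the real fix is an iterative traversal.
    """
    old = sys.getrecursionlimit()
    sys.setrecursionlimit(limit)
    try:
        return func(*args)
    finally:
        sys.setrecursionlimit(old)

# e.g., with a hypothetical entry point:
# with_recursion_limit(10000, analyze_file, "xla_builder.cc")
```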
amzn/selling-partner-api-docs
1064507718
Title: [BUG] ItemProcurement in Listing Item object is returned from api as array type instead of an object
Question:
username_0: In the Listings Items swagger model (here: [LINK](https://github.com/amzn/selling-partner-api-models/blob/main/models/listings-items-api-model/listingsItems_2021-08-01.json)), the ItemProcurement property of the Item object is marked as an object, but when I call the get-listing API, if there is no procurement, the JSON returned is an array:

`"procurement":[ ]`
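Until the model and the API agree, client code can normalize the field defensively. A hedged sketch (the field name and the empty-array payload come from this report; everything else is illustrative):

```python
def normalize_procurement(item):
    """Return the procurement object, tolerating the array the API emits.

    The model declares `procurement` as an object, but the live API
    returns `[]` when there is none, so accept a dict, an empty list,
    or a single-element list.
    """
    value = item.get("procurement")
    if isinstance(value, dict):
        return value
    if isinstance(value, list):
        return value[0] if value else None
    return None

print(normalize_procurement({"procurement": []}))                # None
print(normalize_procurement({"procurement": {"cost": "1.00"}}))  # {'cost': '1.00'}
```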
vim-airline/vim-airline
414584655
Title: tabline with tab count in no-buffers mode Question: username_0: #### environment - vim: neovim v0.3.4 - vim-airline: 1c3ae6077af7 - OS: Win10 + WSL - Have you reproduced with a minimal vimrc: yes - What is your airline configuration: if you are using terminal: - terminal: wsltty - $TERM variable: xterm - color configuration (:set t_Co?): if you are using Neovim: - does it happen in Vim: yes #### actual behavior When only showing tabs in the tabline (show_tabs=1, show_buffers=0) the total tab count in the right upper corner is missing. This patch copies the relevant lines from `tabline#buffers#get()`: ``` diff --git a/autoload/airline/extensions/tabline/tabs.vim b/autoload/airline/extensions/tabline/tabs.vim index 7412cdd48b3d..fe848be1372d 100644 --- a/autoload/airline/extensions/tabline/tabs.vim +++ b/autoload/airline/extensions/tabline/tabs.vim @@ -93,6 +93,10 @@ function! airline#extensions#tabline#tabs#get() endif endif + if tabpagenr('$') > 1 + call b.add_section_spaced('airline_tabmod', printf('%s %d/%d', "tab", tabpagenr(), tabpagenr('$'))) + endif + let s:current_bufnr = curbuf let s:current_tabnr = curtab let s:column_width = &columns ``` #### expected behavior tabline shows tab counts are shown in either buffer or tab-only mode Answers: username_1: care to create a PR for that? username_0: here you go :) Status: Issue closed username_2: is expected behavior necessarily. Having the tab count visible when `show_tabs=0` is set makes sense, and that's apparently the reason why it was introduced in the first place: #1329. username_3: After updating, I find the addition of this tab count most unwelcome. I can see the tabs I have open and the tab I'm on, so it's duplicate information for my uses. Can we please make this optional? Example of the duplication: ![screen-shot-2019-03-15-17-15-03](https://user-images.githubusercontent.com/1057635/54467967-e9108400-4745-11e9-9ed4-80530f3c5f61.png) username_1: Okay, I heard you. Fixed. username_3: Thank you! That fix is working perfectly. My first comment seemed a bit harsh, now that I read it again. Sorry about that. The tabline is a sacred place, but one can argue that more respectfully.
mewmew/cfa
723780379
Title: gonum issues, with gollvm Question: username_0: @korstchak Answers: username_0: @korstchak username_1: Gonum is not tested on gollvm and is not officially supported. AFAIU Go assembly does not work with gollvm (certainly this is true with gccgo). Try building with the `-tags noasm` tag. username_2: Hi @username_0, I like your adventurous take on jumping in and trying to compile this project. I'm not sure when I last used it, as I've mainly used it for experimentation to build intuition; as summarized in [Evaluation of Methods for Effective Control Flow Recovery](https://raw.githubusercontent.com/decomp/doc/master/presentation/control_flow_analysis/cfa_presentation.pdf). As such, I would not expect this project to compile successfully, without at least some manual intervention. If you are interested in decompilation and control flow recovery then I would recommend that you check out [Rellic](https://github.com/lifting-bits/rellic) which implements the *pattern-independent structuring* control flow recovery algorithm. Wish you all the best and happy reversing! Cheers, Robin P.S. I'll close this issue for now as the project is not likely to compile in the foreseeable future. Status: Issue closed
DistanceDevelopment/spatial-workshops
244039087
Title: Introduction to the sperm whale data Question: username_0: Are we going to fit detection functions to the sperm whale data? Wherever we start talking about it we need to ensure that we give some background. I have that info but not sure where to put it. Answers: username_1: of course detection functions are going to be fitted [exercise 6](https://github.com/DistanceDevelopment/spatial-workshops/blob/master/exercises/bookdown/06-detection-functions.Rmd) does so Monday morning (LJT) is going to have some form of detection function fitting username_0: Right, so the question is now whether @lenthomas will fit the spermwhale detection functions in his lectures/practicals or if that needs to go into the DSM material... username_1: Best approach is to leave detection function modelling of sperm whales inside the practical 6. I suspect little attention needs be spent to lecturing about detection functions. I would like to believe participants would be up to speed about detection function modelling courtesy of background reading they are asked to do (for intermediate workshop) along with detection function exposure they receive on Monday of workshop. username_0: Resolved. Status: Issue closed
sbuggay/sbuggay.github.io
295271511
Title: Code block improvements Question: username_0: - [ ] Line numbers - [ ] Line number offset - [ ] Highlighted lines - [ ] Highlighted line annotations Answers: username_0: Most of this is already supported through the codeblock tag: ``` /** * Code block tag * * Syntax: * {% codeblock [title] [lang:language] [url] [link text] [line_number:(true|false)] [highlight:(true|false)] [first_line:number] [mark:#,#-#] %} * code snippet * {% endcodeblock %} */ ``` Status: Issue closed
Kotlin/kotlinx.serialization
505012603
Title: Serializing into dictionary Question: username_0: **What is your use-case and why do you need this feature?** Serializing from JSON object and generate a dictionary for it **Describe the solution you'd like** Able to serialize any JSON object/string to become a dictionary. So it is not necessarily to create a serializable data class Is this feature supported already? Thanks! Answers: username_1: Yes. Have a look at `JsonObject`. Status: Issue closed
FWDekker/intellij-randomness
285671212
Title: Upgrade to JUnit 5 Question: username_0: Upgrade from JUnit 4 to JUnit 5. Status: Issue closed Answers: username_0: Not worth the trouble. username_0: With the integration tests having been merged into the regular test module (#78) and the partial dependency upgrades from #60, it may now be more feasible to upgrade to JUnit 5. I will re-investigate. username_0: Upgrade from JUnit 4 to JUnit 5.
PolySync/oscc-joystick-commander
258926698
Title: After wiring up all the modules the system does not work Question: username_0: I wired up and flashed all the modules without any errors. I installed the joystickcommander package and followed the steps with no errors. When I run the ./joystickcommand, the command window initializes and when I press start button, the values of steering, brake and throttle are displayed and when the left stick is toggled, the steering value updates, but after a second, program closes by giving disable controls message and there is no change in the steering wheel position of the car even when the values are updated. I built the can-gateway module by stacking 2 CAN shields and connecting one of them to Kvaser converter and then to laptop (Modified CS pin of one of the shields to pin 10 --> Connected to Kvaser converter, did not change the resistor) Connected the steering module as given in GitHub. Powered all the modules through the laptop itself. Connected the joystick to the laptop. Should there be a control can bus separately or can I just use the laptop as control can bus? This is my current connection: STEERING --> CAN GATEWAY --> OBD CAN LAPTOP (CONTROL CAN) --> KVASER --> CAN GATEWAY ALL MODULES POWERED THROUGH ARDUINO connected to LAPTOP I don't have a physical control can bus. I am assuming my laptop will function as a control can bus. No non-e power source. Please let me know where I am wrong. Status: Issue closed Answers: username_1: Please see answer here: https://github.com/PolySync/oscc/issues/201
quic/aimet
791036761
Title: Multiple inputs
Question:
username_0: If my model is designed to accept two inputs, what should I put in the `input_shape` argument of `ModelCompressor.compress_model()`?
Answers:
username_1: Hi @username_0 Assuming this is a PyTorch model: yes, we would need to pass a **list of tuples** of shapes, corresponding to each of the input nodes. Please let us know if you have any issue using it.
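A hedged sketch of what that looks like in practice. The two-input model, the shapes, and the trailing `compress_model` arguments are placeholders; only the "list of tuples, one per input" convention comes from the answer above.

```python
import torch

class TwoInputModel(torch.nn.Module):
    """Placeholder standing in for the user's two-input network."""
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(10, 2)

    def forward(self, image, features):
        return self.fc(features)

# One shape tuple per model input, per the answer above.
input_shape = [(1, 3, 224, 224), (1, 10)]

model = TwoInputModel()
dummy = tuple(torch.rand(*s) for s in input_shape)
print(model(*dummy).shape)  # confirms the shapes line up with forward()

# The compression call would then receive the same list; the other
# arguments here are hypothetical placeholders, not a checked signature:
# ModelCompressor.compress_model(model, eval_callback=..., eval_iterations=...,
#                                input_shape=input_shape, ...)
```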
samisnotinsane/podster
582587097
Title: Clicking on episode should start playing audio Question: username_0: Currently, we have to click on episode and then on play to start playing. Update should make it so that: - Clicking on episode immediately starts playing audio for that episode - Display show notes on the right sidebar Status: Issue closed Answers: username_0: Currently, we have to click on episode and then on play to start playing. Update should make it so that: - Clicking on episode immediately starts playing audio for that episode - Display show notes on the right sidebar username_0: Issue has regressed in latest commit at the time of writing
spring-projects/spring-boot
56509686
Title: ActiveMQ not automatically reconnecting using pooling
Question:
username_0: When setting the Spring property spring.activemq.pooled to true, the ActiveMQ client does not automatically reconnect if it loses the connection to the broker. Using Spring Boot 1.1.10.RELEASE with activemq-client 5.9.1 and activemq-pool 5.9.1.
Answers:
username_1: I am not sure it has anything to do with Spring Boot actually. We're just creating a regular `ConnectionFactory` and the `pooled` option is wrapping that in a `PooledConnectionFactory`. I can't see why the behaviour would be inconsistent. How do you reproduce that? Shutting down the broker, starting it up again and making sure the listeners are recovering?
username_0: I shut down the broker, started it again, and then I cannot send stuff with my JmsTemplate. Getting exceptions that the connection is closed.
username_0: Also: any listeners are not reconnecting to the queue.
username_1: This was reported a long time ago and I wonder if that's still applicable. If that's the case, can you please share a sample that we could run ourselves?
Status: Issue closed
username_0: I've been using the ActiveMQ component with 1.4.3 and it seems to reconnect fine when the pooled connection factory is used. I think this can be closed.
neoclide/coc-eslint
697225103
Title: No diagnostics Question: username_0: Installed the extension and have config at `~/.eslintrc` but no diagnostics. Checked logs and looks OK: ``` 2020-09-10T00:08:41.772 INFO (pid:19356) [services] - service tsserver started 2020-09-10T00:08:41.783 INFO (pid:19356) [services] - eslint langserver state change: starting => running 2020-09-10T00:08:41.785 INFO (pid:19356) [services] - service eslint started ``` Answers: username_0: Urgh! Stupid "smart" quotes were messing up ESLint config. Still got an issue but it relates to ESLint not being able to find airbnb config ๐Ÿคทโ€โ™‚๏ธ Status: Issue closed
AlphaWallet/alpha-wallet-android
700912242
Title: Confusing "Setting Subtitle" message Question: username_0: 1. Enable TS debugging. 2. Drop this file to the AlphaWallet directory on the device; https://github.com/AlphaWallet/TokenScript/blob/access-cards-from-web/tests/EntryToken.xml 3. Observe in "TokenScript Management" there is "Setting Subtitle" ![Screenshot_20200914-185512_AlphaWallet](https://user-images.githubusercontent.com/548435/93065369-fa7c7880-f6bb-11ea-80d9-ed2c59fe01eb.jpg) Expected: The newly added TokenScript should look like the others.<issue_closed> Status: Issue closed
ReproNim/datalad-nda
956817552
Title: Trouble Adding S3 URLs from NDA manifest Question: username_0: Hello, I am attempting to use the `datalad-nda` CLI in order to produce a datalad dataset based on the 09 2020 3165 ABCD release found on NDA. I have the manifest as an uncompressed text file, so I came up with the following command: `/home/faird/shared/code/external/utilities/datalad-nda/scripts/datalad-nda add2datalad \ -i <(cat /spaces/ngdr/workspaces/hendr522/ABCD/datalad-ABCD-BIDS/abcd316520200818/datastructure_manifest.txt) \ -d /spaces/ngdr/workspaces/hendr522/ABCD/datalad2.0-ABCD-BIDS -J 10 --fast --drop-after`. I've attached the messages sent to STDOUT and STDERR in separate files [datalad2.0-nda_ABCD-BIDS_5371074_STDOUT.txt](https://github.com/ReproNim/datalad-nda/files/6908294/datalad2.0-nda_ABCD-BIDS_5371074_STDOUT.txt) [datalad2.0-nda_ABCD-BIDS_5371074_STDERR.txt](https://github.com/ReproNim/datalad-nda/files/6908296/datalad2.0-nda_ABCD-BIDS_5371074_STDERR.txt) I am using the most recent version of the `datalad-nda` CLI (I pulled the repo maybe a week ago) and am running datalad version 0.14.4 Answers: username_1: unrelated but you don't need `<(cat ...)` if it is already uncompressed. <details> <summary>the error at the end of STDERR</summary> ```shell [INFO] -> Adding URLs Traceback (most recent call last): File "/home/faird/shared/code/external/utilities/datalad-nda/scripts/datalad-nda", line 442, in <module> main() File "/home/umii/hendr522/SW/miniconda3/envs/datalad/lib/python3.8/site-packages/click/core.py", line 1137, in __call__ return self.main(*args, **kwargs) File "/home/umii/hendr522/SW/miniconda3/envs/datalad/lib/python3.8/site-packages/click/core.py", line 1062, in main rv = self.invoke(ctx) File "/home/umii/hendr522/SW/miniconda3/envs/datalad/lib/python3.8/site-packages/click/core.py", line 1668, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/home/umii/hendr522/SW/miniconda3/envs/datalad/lib/python3.8/site-packages/click/core.py", line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) File "/home/umii/hendr522/SW/miniconda3/envs/datalad/lib/python3.8/site-packages/click/core.py", line 763, in invoke return __callback(*args, **kwargs) File "/home/faird/shared/code/external/utilities/datalad-nda/scripts/datalad-nda", line 250, in add2datalad out = ds.addurls( File "/home/umii/hendr522/SW/miniconda3/envs/datalad/lib/python3.8/site-packages/datalad/distribution/dataset.py", line 503, in apply_func return f(**kwargs) File "/home/umii/hendr522/SW/miniconda3/envs/datalad/lib/python3.8/site-packages/datalad/interface/utils.py", line 486, in eval_func return return_func(generator_func)(*args, **kwargs) File "/home/umii/hendr522/SW/miniconda3/envs/datalad/lib/python3.8/site-packages/datalad/interface/utils.py", line 474, in return_func results = list(results) File "/home/umii/hendr522/SW/miniconda3/envs/datalad/lib/python3.8/site-packages/datalad/interface/utils.py", line 459, in generator_func raise IncompleteResultsError( datalad.support.exceptions.IncompleteResultsError: Command did not complete successfully. 1 failed: [{'action': 'addurls', 'message': 'First positional argument should be mapping ' '[addurls.py:format:79]', 'path': '/spaces/ngdr/workspaces/hendr522/ABCD/datalad2.0-ABCD-BIDS', 'status': 'error', 'type': 'dataset'}] ``` </details> filed https://github.com/datalad/datalad/pull/5850 to help make that message a bit more informative. overall, please double check that your .txt is a .tsv, i.e. 
tab separated (that is what I had) since ATM we just hardcode that assumption: https://github.com/ReproNim/datalad-nda/blob/master/scripts/datalad-nda#L78 username_1: although probably it is not that -- since it would have then just failed here https://github.com/ReproNim/datalad-nda/blob/master/scripts/datalad-nda#L92 instead of nicely getting those out... I should try to redo a sample run on what I had to see if may be something changed in datalad which rendered it not working. username_0: Let me know if you need anything from me. I'm using an updated manifest.txt from when you originally built the ABCD-BIDS datalad dataset, so maybe that is part of it? If it is useful, I am happy to send you the manifest file that I am using via BOX or something. I imagine you have an active ABCD DUC, right? username_1: Sorry about delay, I will try to get try this today, and let you know if we need to sync up on manifests ;-) username_1: I have pushed fefeb27420a7fe1696729620214e22705c3db2ba which should address that exception you (and I) were getting with 0.14.4 -- we have managed to change something on datalad end which stopped doing automagic treatment of pathlib's Path's as str where desired. But the main problem came later since I seem can't get access to any of those prior S3 urls -- may be they turned off that access approach already? but may be it would somehow work for you? username_1: FWIW, better upgrade to 0.14.7 datalad, just in case ;) username_0: Thanks for the push, I ran it with this and datalad version 0.14.6 (because that is the most recent conda had). Here is the information I have gotten from STDERR: [INFO] Creating a new dataset at /spaces/ngdr/workspaces/hendr522/ABCD/data/datalad/ABCD-BIDS-3165 [INFO] Creating a new annex repo at /spaces/ngdr/workspaces/hendr522/ABCD/data/datalad/ABCD-BIDS-3165 [INFO] Creating a helper procedure /spaces/ngdr/workspaces/hendr522/ABCD/data/datalad/ABCD-BIDS-3165/.datalad/procedures/cfg_nda [INFO] Running procedure cfg_nda [INFO] == Command start (output follows) ===== [INFO] == Command exit (modification check follows) ===== [INFO] Reading entire file [INFO] Read 4764349 lines [INFO] Loaded 4764347 records from 290 submissions for 290 datasets. [INFO] Creating a new annex repo at /spaces/ngdr/workspaces/hendr522/ABCD/data/datalad/ABCD-BIDS-3165/sourcedata [INFO] Running procedure cfg_nda [INFO] == Command start (output follows) ===== [INFO] == Command exit (modification check follows) ===== [INFO] Processing for submission 21948 [INFO] Getting records only for submission_id=21948 [INFO] Got 331758 records [INFO] Processed entire file: got 331758 files, 5668 subdatasets with 7 files having multiple URLs to possibly reach them [WARNING] - dataset_description.json: 45848 records | - README: 45848 records | - CHANGES: 45848 records | - task-SST_bold.json: 1191 records | - task-rest_bold.json: 1470 records | - task-MID_bold.json: 1235 records | - task-nback_bold.json: 1197 records [INFO] Saved output to <_io.TextIOWrapper name='/spaces/ngdr/workspaces/hendr522/ABCD/data/datalad/ABCD-BIDS-3165/sourcedata/submission-21948.csv' mode='w' encoding='UTF-8'> [INFO] -> Saving new submission [INFO] -> Adding URLs [INFO] Creating a new annex repo at /spaces/ngdr/workspaces/hendr522/ABCD/data/datalad/ABCD-BIDS-3165/derivatives/abcd-hcp-pipeline [INFO] Running procedure cfg_nda [INFO] == Command start (output follows) ===== [INFO] == Command exit (modification check follows) ===== Is that looking like you expect? 
If so, can you give me some advice on the resources a job like this will need to run to completion? Currently I have specified 10 CPUs, 20gb RAM, and 24 hours. username_1: Did it annex any files? Could you please take a head (eg 100 rows) of the file and run till completion, check that files are annexed and could be retrieved if you clone the result and remove origin remote (so it just doesn't fetch from where you already fetched to) Resources - hard to tell, depends if you use --fast mode (no checksums in annex keys, suboptimal), bandwidth etc. Iirc a full run took like a week for me and that is with iirc 10 parallel across subdatasets jobs and a really good bandwidth. Do your manifests contain md5 checksums? If so, I might better finally add support (recent git annex has needed functionality already) to just trust those, and then avoiding most if not all traffic - we would just populate datasets without actually having data. Will only query NDA / s3 for file sizes afaik username_0: I'm not totally sure what file that you are wanting me to take a "head" of, but it doesn't look that it was able to at least annex one file within the "sourcedata" folder named "submission-21948.csv" which appears to be a manifest as a CSV file. It does not look like the manifest I used has md5 checksums, the file headers are: - "submission_id" - "dataset_id" - "submission_id" - "manifest_name" - "manifest_file_name" - "associated_file" Here is the command that I used for the most recent execution: /home/faird/shared/code/external/utilities/datalad-nda/scripts/datalad-nda add2datalad \ -i /spaces/ngdr/workspaces/hendr522/ABCD/data/datalad/abcd316520200818/datastructure_manifest.txt \ -d /spaces/ngdr/workspaces/hendr522/ABCD/data/datalad/ABCD-BIDS-3165 -J 10 --fast --drop-after username_1: ```bash head -n 100 < /spaces/ngdr/workspaces/hendr522/ABCD/data/datalad/abcd316520200818/datastructure_manifest.txt > /spaces/ngdr/workspaces/hendr522/ABCD/data/datalad/abcd316520200818/datastructure_manifest-100.txt /home/faird/shared/code/external/utilities/datalad-nda/scripts/datalad-nda add2datalad -i /spaces/ngdr/workspaces/hendr522/ABCD/data/datalad/abcd316520200818/datastructure_manifest-100.txt -d /spaces/ngdr/workspaces/hendr522/ABCD/data/datalad/ABCD-BIDS-3165-test100 -J 10 --fast --drop-after ``` username_1: checked mine -- the same... I wonder what was some other file I had tried to work with. oh well -- I guess no "speedy" way for now (unless we see a checksum to be added to manifest by NDA - they might be computing them upon upload to ensure data integrity etc) username_0: Okay, what I am hearing you say is that it appears to be working, but it is going to take a lot longer than anticipated because it needs to create the md5 checksums. Maybe I'll do a little research into how Tobias Kadelka was able to pre-generate the md5 checksums via: [https://github.com/TobiasKadelka/build_hcp](https://github.com/TobiasKadelka/build_hcp) and [http://handbook.datalad.org/en/inm7/usecases/HCP_dataset.html](http://handbook.datalad.org/en/inm7/usecases/HCP_dataset.html) username_0: Just thinking out loud, I could ask the NDA help desk if they distribute md5 checksums for releases. The other thing that I could do is that I have the entire ABCD-BIDS collection 3165 as a read-only folder at Minnesota. I could crawl all of the files and pull out the md5 checksums for each file. What do you think? 
username_1: It is a "danger zone" since assumes that that folder has it 100% identical to what is in NDA and no evil software/hardware/human bug caused divergence. I would say that running something like `cd /path/to/3165; find -type f | parallel --jobs 10 md5sum > ~/mine/3165.md5sums` could be useful later to compare to what we would get from NDA. But before then we should see if they have checksum and which one (could be something else than md5, e.g. sha256, etc) username_1: without any data file under any of those directories? if so -- means that it failed to fetch any file, most likely "good old direct S3 access" is no longer possible. username_0: Right, no data files underneath any of the directories. It appears that the this only file that is underneath any of the given directories is "submission-22640.csv". So what's next if "good old direct S3 access" is no longer possible? By the way, how does your script go about authenticating users? I've been following along with the tutorial at: [http://handbook.datalad.org/en/latest/usecases/HCP_dataset.html#data-retrieval-and-interacting-with-the-repository](http://handbook.datalad.org/en/latest/usecases/HCP_dataset.html#data-retrieval-and-interacting-with-the-repository). There it suggests that I explicitly add something to the ".config/providers/nda-s3.cfg" for authentication, but it seems like you are going about a totally different strategy. username_1: datalad comes with this https://github.com/datalad/datalad/blob/master/datalad/downloaders/configs/nda.cfg (I don't spot any "suggests" you mention present in the handbook) so it used nda-s3 type of credential defined here https://github.com/datalad/datalad/blob/master/datalad/downloaders/credentials.py#L353 which the dance to get the token to access S3... datalad should ask you for user/password, store those, and then mint a new token whenever needed You know -- I have tried now again -- and it seemed to work out as it all should have. May be things are back in "old normal"??? please try again, and if it doesn't work (how? any errors?) -- we would indeed need to work out based on the same manifest. My invocation was ``` datalad-nda/scripts/datalad-nda --pdb add2datalad -i datastructure_manifest-100.txt -d testds-datalad2.0-ABCD-BIDS-100 -J 10 --drop-after ``` on a sample of top 100 rows in manifest I had, and I had datalad 0.14.7 (FWIW: we do have now updated datalad and git-annex in conda-forge... would be interested to discover if that matter)... before that - you could try just plain direct `datalad download-url s3://NDAR_Central_1/submission_22640/derivatives/abcd-hcp-pipeline/sub-SENSORED/ses-baselineYear1Arm1/img/DVARS_and_FD_task-nback01.png` (find a correct GUID ;)) -- if that works, datalad-nda helper should work. If doesn't -- we need to troubleshoot at this level first to get download working for you username_0: I've tried this, but it seems to just hang. 
Based on our current discussion I get: `datalad download-url s3://NDAR_Central_1/submission_22640/derivatives/abcd-hcp-pipeline/sub-SENSORED/ses-baselineYear1Arm1/img/DVARS_and_FD_task-nback01.png [INFO ] Downloading 's3://NDAR_Central_1/submission_22640/derivatives/abcd-hcp-pipeline/sub-SENSORED/ses-baselineYear1Arm1/img/DVARS_and_FD_task-nback01.png' into '/spaces/ngdr/workspaces/hendr522/ABCD/data/datalad/'` And based on our discussion that I started with <NAME>, on the HCP-D data: `datalad download-url s3://NDAR_Central_4/submission_33171/SENSORED_V1_MR/unprocessed/Diffusion/SENSORED_V1_MR_dMRI_dir99_AP_SBRef.nii.gz [INFO ] Downloading 's3://NDAR_Central_4/submission_33171/SENSORED_V1_MR/unprocessed/Diffusion/SENSORED_V1_MR_dMRI_dir99_AP_SBRef.nii.gz' into '/spaces/ngdr/workspaces/hendr522/HCP-D/'` In both cases it does not proceed any further. username_1: eh, I bet it is due to https://github.com/datalad/datalad/issues/5099 ... I will look into at least making the situation more obvious. Original intention is to lock for querying credentials, and "theoretically" it should not hang unless there is another datalad process on the same box asking for credentials Please rerun with `datalad -l debug download-url ...` -- it will state before "hanging" the lock file path (on linux - `~/.cache/datalad/locks/downloader-auth.lck`, don't know on OSX). If that is where it hangs - ctrl-C that process - if on linux -- test if you don't have that lock used by another process (`fuser -v that-lock-file`) and kill it - if nothing really holds it, and it is just stale somehow... please just `rm` it and try to `download-url` again username_0: Deleting the locked file definitely did the trick. The only problem I am now having is that despite entering the correct NDA username and password `datalad download-url` does not accept it. Any ideas? Here are the messages sent to the debugger: ``` (datalad_and_nda) hendr522@ln0005 [/spaces/ngdr/workspaces/hendr522/ABCD/data/datalad] % datalad -l debug download-url s3://NDAR_Central_1/submission_22640/README [DEBUG ] Command line args 1st pass for DataLad 0.14.7. 
Parsed: Namespace() Unparsed: ['download-url', 's3://NDAR_Central_1/submission_22640/README'] [DEBUG ] Discovering plugins [DEBUG ] Building doc for <class 'datalad.core.local.status.Status'> [DEBUG ] Building doc for <class 'datalad.core.local.save.Save'> [DEBUG ] Building doc for <class 'datalad.interface.download_url.DownloadURL'> [DEBUG ] Parsing known args among ['/home/umii/hendr522/SW/miniconda3/envs/datalad_and_nda/bin/datalad', '-l', 'debug', 'download-url', 's3://NDAR_Central_1/submission_22640/README'] [DEBUG ] Async run: | cwd=None | cmd=['git', '--git-dir=', 'config', '-z', '-l', '--show-origin'] [DEBUG ] Launching process ['git', '--git-dir=', 'config', '-z', '-l', '--show-origin'] [DEBUG ] Process 3556229 started [DEBUG ] Waiting for process 3556229 to complete [DEBUG ] Process 3556229 exited with return code 0 [DEBUG ] Determined class of decorated function: <class 'datalad.interface.download_url.DownloadURL'> [DEBUG ] parseParameters: Given "", we split into [] [DEBUG ] parseParameters: Given "credential: Credential, optional | Provides necessary credential fields to be used by authenticator | authenticator: Authenticator, optional | Authenticator to use for authentication.", we split into [('credential', 'credential: Credential, optional\n Provides necessary credential fields to be used by authenticator'), ('authenticator', 'authenticator: Authenticator, optional\n Authenticator to use for authentication.')] [DEBUG ] parseParameters: Given "", we split into [] [DEBUG ] parseParameters: Given "credential: Credential, optional | Provides necessary credential fields to be used by authenticator | authenticator: Authenticator, optional | Authenticator to use for authentication.", we split into [('credential', 'credential: Credential, optional\n Provides necessary credential fields to be used by authenticator'), ('authenticator', 'authenticator: Authenticator, optional\n Authenticator to use for authentication.')] [DEBUG ] parseParameters: Given "", we split into [] [DEBUG ] parseParameters: Given "credential: Credential, optional | Provides necessary credential fields to be used by authenticator | authenticator: Authenticator, optional | Authenticator to use for authentication.", we split into [('credential', 'credential: Credential, optional\n Provides necessary credential fields to be used by authenticator'), ('authenticator', 'authenticator: Authenticator, optional\n Authenticator to use for authentication.')] [DEBUG ] parseParameters: Given "", we split into [] [DEBUG ] parseParameters: Given "method : callable | A callable, usually a method of the same class, which we decorate | with access handling, and pass url as the first argument | url : string | URL to access | *args, **kwargs | Passed into the method call", we split into [('method', 'method : callable\n A callable, usually a method of the same class, which we decorate\n with access handling, and pass url as the first argument'), ('url', 'url : string\n URL to access'), ('*args, **kwargs', '*args, **kwargs\n Passed into the method call')] [DEBUG ] Reading files: ['/home/umii/hendr522/SW/miniconda3/envs/datalad_and_nda/lib/python3.9/site-packages/datalad/downloaders/configs/crawdad.cfg', '/home/umii/hendr522/SW/miniconda3/envs/datalad_and_nda/lib/python3.9/site-packages/datalad/downloaders/configs/crcns.cfg', '/home/umii/hendr522/SW/miniconda3/envs/datalad_and_nda/lib/python3.9/site-packages/datalad/downloaders/configs/dockerio.cfg', 
'/home/umii/hendr522/SW/miniconda3/envs/datalad_and_nda/lib/python3.9/site-packages/datalad/downloaders/configs/figshare.cfg', '/home/umii/hendr522/SW/miniconda3/envs/datalad_and_nda/lib/python3.9/site-packages/datalad/downloaders/configs/hcp.cfg', '/home/umii/hendr522/SW/miniconda3/envs/datalad_and_nda/lib/python3.9/site-packages/datalad/downloaders/configs/indi.cfg', '/home/umii/hendr522/SW/miniconda3/envs/datalad_and_nda/lib/python3.9/site-packages/datalad/downloaders/configs/kaggle.cfg', '/home/umii/hendr522/SW/miniconda3/envs/datalad_and_nda/lib/python3.9/site-packages/datalad/downloaders/configs/loris.cfg', '/home/umii/hendr522/SW/miniconda3/envs/datalad_and_nda/lib/python3.9/site-packages/datalad/downloaders/configs/nda.cfg', '/home/umii/hendr522/SW/miniconda3/envs/datalad_and_nda/lib/python3.9/site-packages/datalad/downloaders/configs/nitrc.cfg', '/home/umii/hendr522/SW/miniconda3/envs/datalad_and_nda/lib/python3.9/site-packages/datalad/downloaders/configs/nsidc.cfg', '/home/umii/hendr522/SW/miniconda3/envs/datalad_and_nda/lib/python3.9/site-packages/datalad/downloaders/configs/openfmri.cfg', '/home/umii/hendr522/SW/miniconda3/envs/datalad_and_nda/lib/python3.9/site-packages/datalad/downloaders/configs/providers.cfg'] [DEBUG ] Assigning credentials into 21 providers [DEBUG ] Returning provider Provider(authenticator=<<S3Authenticato++27 chars++one)>>, credential=<<NDA_S3(name='N++40 chars++'>>)>>, name='NDA', url_res=<<['s3://(ndar_c++27 chars++*)']>>) for url s3://NDAR_Central_1/submission_22640/README [INFO ] Downloading 's3://NDAR_Central_1/submission_22640/README' into '/spaces/ngdr/workspaces/hendr522/ABCD/data/datalad/' [DEBUG ] Acquiring a currently existing lock to establish download session. If stalls - check which process holds b'/home/umii/hendr522/.cache/datalad/locks/downloader-auth.lck' [DEBUG ] S3 session: Reconnecting to the bucket [DEBUG ] Importing keyring [DEBUG ] Generating token for NDA user hendr522 using <datalad.support.third.nda_aws_token_generator.NDATokenGenerator object at 0x7fb8cf6cfca0> talking to https://nda.nih.gov/DataManager/dataManager DEBUG :datalad.downloaders.credentials:Generating token for NDA user hendr522 using <datalad.support.third.nda_aws_token_generator.NDATokenGenerator object at 0x7fb8cf6cfca0> talking to https://nda.nih.gov/DataManager/dataManager ERROR:root:response had error message: Invalid username and/or password [DEBUG ] Access was denied: invalid username and/or password [credentials.py:_nda_adapter:330] DEBUG :datalad.downloaders:Access was denied: invalid username and/or password [credentials.py:_nda_adapter:330] Access to s3://NDAR_Central_1/submission_22640/README has failed. Do you want to enter other credentials in case they were updated? (choices: yes, no): yes You need to authenticate with 'NDA' credentials. https://ndar.nih.gov/access.html provides information on how to gain access user: hendr522 password: password (repeat): INFO:datalad.ui.dialog:Clear progress bars download_url(error): /spaces/ngdr/workspaces/hendr522/ABCD/data/datalad/ (file) [Password not found [file_base.py:delete_password:180]] INFO:datalad.ui.dialog:Refresh progress bars [DEBUG ] could not perform all requested actions: Command did not complete successfully. 
1 failed: [{'action': 'download_url', 'exception_traceback': '[download_url.py:__call__:186,base.py:download:520,base.py:access:210,base.py:_handle_authentication:255,base.py:_enter_credentials:337,credentials.py:enter_new:269,credentials.py:refresh:277,credentials.py:delete:184,keyring_.py:delete:62,core.py:delete_password:65,file_base.py:delete_password:180]', 'message': 'Password not found [file_base.py:delete_password:180]', 'path': '/spaces/ngdr/workspaces/hendr522/ABCD/data/datalad/', 'status': 'error', 'type': 'file'}] [utils.py:generator_func:459] DEBUG :datalad.cmdline:could not perform all requested actions: Command did not complete successfully. 1 failed: [{'action': 'download_url', 'exception_traceback': '[download_url.py:__call__:186,base.py:download:520,base.py:access:210,base.py:_handle_authentication:255,base.py:_enter_credentials:337,credentials.py:enter_new:269,credentials.py:refresh:277,credentials.py:delete:184,keyring_.py:delete:62,core.py:delete_password:65,file_base.py:delete_password:180]', 'message': 'Password not found [file_base.py:delete_password:180]', 'path': '/spaces/ngdr/workspaces/hendr522/ABCD/data/datalad/', 'status': 'error', 'type': 'file'}] [utils.py:generator_func:459] ``` username_1: quick workaround (BTW please share output of `datalad wtf --decor html_details`) might be to get to your OS credentials manager and remove anything you find for datalad and NDA, and retry. More detail on the fresh issue https://github.com/datalad/datalad/issues/5889 username_0: ``` (datalad_and_nda) hendr522@ln0005 [/spaces/ngdr/workspaces/hendr522/HCP-D] % datalad wtf --decor html_details <details><summary>DataLad 0.14.7 WTF (configuration, credentials, datalad, dependencies, environment, extensions, git-annex, location, metadata_extractors, metadata_indexers, python, system)</summary> # WTF ## configuration <SENSITIVE, report disabled by configuration> ## credentials - keyring: - active_backends: - PlaintextKeyring with no encyption v.1.0 at /home/umii/hendr522/.local/share/python_keyring/keyring_pass.cfg - config_file: /home/umii/hendr522/.config/python_keyring/keyringrc.cfg - data_root: /home/umii/hendr522/.local/share/python_keyring ## datalad - full_version: 0.14.7 - version: 0.14.7 ## dependencies - annexremote: 1.5.0 - appdirs: 1.4.4 - boto: 2.49.0 - cmd:7z: 16.02 - cmd:annex: 8.20210803-g99bb214 - cmd:bundled-git: UNKNOWN - cmd:git: 2.32.0 - cmd:system-git: 2.32.0 - cmd:system-ssh: 8.6p1 - exifread: 2.1.2 - humanize: 3.11.0 - iso8601: 0.1.16 - keyring: 23.0.1 - keyrings.alt: 4.0.2 - msgpack: 1.0.2 - mutagen: 1.45.1 - requests: 2.26.0 - wrapt: 1.12.1 ## environment - LANG: en_US.UTF-8 - PATH: /home/umii/hendr522/SW/miniconda3/envs/datalad_and_nda/bin:/home/umii/hendr522/SW/miniconda3/condabin:/panfs/roc/msisoft/fsl/6.0.1/bin:/panfs/roc/msisoft/R/4.0.0/bin:/home/umii/hendr522/SW/aws-cli/bin:/home/dhp/public/storage/s3policy_bin:/home/umii/hendr522/SW/sublime_text_3:/home/umii/hendr522/SW/workbench/bin_rh_linux64:/panfs/roc/msisoft/rclone/1.38/bin:/panfs/roc/groups/3/umii/hendr522/SW/VSCode-linux-x64/bin:/home/umii/hendr522/SW/pycharm-community-2021.1.1/bin:/home/umii/hendr522/bin:/home/faird/shared/CBRAIN_distro/cbrain_git_ruby_gems/ruby/2.7.0/bin:/panfs/roc/msisoft/ruby/2.7.0/bin:/opt/msi/bin:/usr/share/Modules/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/ibutils/bin:/opt/puppetlabs/bin:/home/umii/hendr522/.rvm/bin:/home/umii/hendr522/.rvm/bin:/panfs/roc/groups/3/umii/hendr522/SW/simNIBS/bin ## extensions ## git-annex - build flags: - Assistant - 
Webapp - Pairing - Inotify - DBus - DesktopNotify - TorrentParser - MagicMime - Feeds - Testsuite - S3 - WebDAV - dependency versions: - aws-0.22 - bloomfilter-2.0.1.0 - cryptonite-0.26 - DAV-1.3.4 - feed-1.3.0.1 - ghc-8.8.4 - http-client-0.6.4.1 [Truncated] - load_error: No module named 'libxmp' [xmp.py:<module>:20] - module: datalad.metadata.extractors.xmp ## metadata_indexers ## python - implementation: CPython - version: 3.9.6 ## system - distribution: centos/7/Core - encoding: - default: utf-8 - filesystem: utf-8 - locale.prefered: UTF-8 - max_path_length: 294 - name: Linux - release: 3.10.0-1160.36.2.el7.x86_64 - type: posix - version: #1 SMP Wed Jul 21 11:57:15 UTC 2021 </details> ``` username_1: removed verbatim decoration in WTF -- looks how it looks now ;) so -- did you succeed with credentials workaround. FWIW, submitted a PR to fix it up so would not be needed: https://github.com/datalad/datalad/pull/5892 (you can `python3 -m pip install git+https://github.com/username_1/datalad@bf-delete-credential` and if `datalad --version` reports `0.14.6+60.g3cddd7c4a` -- you got it)
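Returning to the checksum idea raised earlier in this thread: if NDA ever ships checksums in the manifest, comparing them against the locally computed `3165.md5sums` could look roughly like the sketch below. This is only a sketch: the `md5` manifest column is hypothetical (the thread establishes that the current `datastructure_manifest.txt` has no checksum column), and it assumes the `associated_file` entries are comparable to the paths in the `md5sum` output.

```python
# Sketch only: compares `md5sum` output against a hypothetical NDA manifest
# that has an "md5" column (the real datastructure_manifest.txt does not, yet).
import csv

def load_local_sums(md5sums_path):
    # md5sum lines look like: "<digest>  ./sub-X/ses-Y/file.nii.gz"
    sums = {}
    with open(md5sums_path) as fh:
        for line in fh:
            digest, _, relpath = line.rstrip("\n").partition("  ")
            sums[relpath.lstrip("./")] = digest
    return sums

def find_mismatches(manifest_path, local_sums):
    mismatches = []
    with open(manifest_path) as fh:
        for row in csv.DictReader(fh, delimiter="\t"):
            name = row["associated_file"].lstrip("./")  # column name from the thread
            expected = row.get("md5")                   # hypothetical column
            actual = local_sums.get(name)
            if expected and actual and actual != expected:
                mismatches.append(name)
    return mismatches

if __name__ == "__main__":
    local = load_local_sums("3165.md5sums")
    print(find_mismatches("datastructure_manifest.txt", local))
```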
gigaSproule/swagger-gradle-plugin
334057380
Title: SpringMvc: Create default tags for @RestController
Question: username_0: When no special annotations are provided, all operations have no tags and the Swagger UI will list them under "default". In the case of Spring MVC (and also JAX-RS), operations are normally put into context with the class they belong to.
Allow configuring a property so that all operations within a class get a tag with the simple class name of the class they belong to.
Answers: username_1: As with the other issue, feel free to raise a PR if you have the time. However, do you have a simple example I can quickly chuck into a test when I have some spare time, to make sure I don't screw it up?
username_0: Will try to provide a PR next week
Status: Issue closed
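In case it helps with such a test, a minimal example of the situation might look like the following (class name and paths are made up for illustration): a plain `@RestController` with no Swagger annotations, whose operations currently land under "default" and would get a `SampleController` tag under the proposal.

```java
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

// No Swagger @Api/@ApiOperation tags anywhere: both operations are currently
// listed under "default" in the Swagger UI. The proposal would tag both of
// them with the simple class name, i.e. "SampleController".
@RestController
@RequestMapping("/sample")
public class SampleController {

    @RequestMapping(value = "/hello", method = RequestMethod.GET)
    public String hello() {
        return "hello";
    }

    @RequestMapping(value = "/goodbye", method = RequestMethod.GET)
    public String goodbye() {
        return "goodbye";
    }
}
```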
origo-map/origo
180004952
Title: removeOverlays does not remove overlays
Question: username_0: The removeOverlays function in viewer.js does not actually remove overlays. The overlay elements remain in the document. I suggest changing the way the overlays variable is set from `var overlays = map.getOverlays();` to `var overlays = map.getOverlays().getArray();`
Answers: username_1: I recall I've seen the problem. I have made a PR that fixes it: #93. To make it work I also had to improve Popup a little bit. Now removeOverlays will clear all overlays in the map, including the popup bound to its overlay. To remove a specific overlay, the ol.Map removeOverlay method can be used.
Status: Issue closed
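For reference, a sketch of what the suggested change could look like; only the `getArray()` call comes from the report, and the surrounding function shape (and the `map` variable being in scope in viewer.js) is assumed.

```javascript
// Sketch of the suggested fix; the surrounding function shape is assumed.
function removeOverlays() {
  // getOverlays() returns an ol.Collection; getArray() exposes the plain
  // array, and slice() copies it so we don't mutate the collection while
  // iterating over it.
  var overlays = map.getOverlays().getArray().slice();
  overlays.forEach(function(overlay) {
    map.removeOverlay(overlay);
  });
}
```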
bootstrap-vue/bootstrap-vue
469409298
Title: Tooltips broken on dropdown
Question: username_0: ### Describe the bug
When trying to add a tooltip to a dropdown menu, the tooltip will move behind the menu on click (when it should disappear), then move back in front of the menu when you hover over the menu items (when it should not be displayed).
### Steps to reproduce the bug
```
<b-dropdown variant="link" v-b-tooltip.hover.bottom title="My Dropdown" id="my-dropdown">
  <template slot="button-content">
    My Dropdown
  </template>
  <b-dropdown-item>Item 1</b-dropdown-item>
  <b-dropdown-item>Item 2</b-dropdown-item>
</b-dropdown>
```
### Expected behavior
The expected behavior is that the tooltip should only show when hovering over the dropdown button, not when hovering over the menu items.
![image](https://user-images.githubusercontent.com/3223296/61405663-35ec9e80-a88f-11e9-8465-676d810c511d.png)
### Versions
**Libraries:**
- BootstrapVue: 2.0.0-rc.14
- Bootstrap: 4.3.1
- Vue: 2.6.8
**Environment:**
- Device: MacBook Pro
- OS: Mojave
- Browser: Chrome
- Version: 75.0.3770.100
### Additional context
I think a good fix that would allow better customization of the dropdown could be to allow full access to use a custom button, rather than just a <slot> for the button text. If we could use a custom button, we could simply add the tooltip directive to the button instead of the entire dropdown element.
Answers: username_1: The reason the tooltip moves is because the menu is inside the root wrapper div element.
It wouldn't be easy to allow a custom button, as we need to be able to bind event listeners to the button, and since we wouldn't be controlling the button (or even know if it rendered at all) we couldn't add the event listener to it, nor set the disabled state when the dropdown is disabled.
Tooltips listen for bubbled-up events on the element they are applied to (so we can handle when elements inside the trigger element are hovered).
We might be able to add in a filter on the events (for tooltip/popover) to see if they are triggered on children of a `dropdown-menu` container, and ignore them.
username_0: I think that would be a reasonable solution. Not sure if it's too opinionated, but I can't imagine a case where someone would want the current behavior.
Status: Issue closed
username_1: BootstrapVue v2.0.0-rc.27 has been released
username_3: The issue kind of persists, but instead of showing the Bootstrap tooltip, it shows the browser tooltip. As a workaround, I suggest adding an empty element with the tooltip, and making it occupy the whole space:
```
// html:
<template #button-content>
  <icon name="toggle"/>
  <div class="__toggle-tooltip" v-b-tooltip title="Toggle menu"></div>
</template>

// css:
.dropdown-toggle[aria-expanded="false"] > .__toggle-tooltip {
  position: absolute;
  top: 0;
  left: 0;
  bottom: 0;
  right: 0;
}
```
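A rough sketch of the event filter proposed above; the trigger element and helper names here are made up, and this is not actual BootstrapVue code.

```javascript
// Sketch of the proposed filter, not actual BootstrapVue code: ignore
// tooltip trigger events that bubble up from inside a rendered menu.
function isEventFromDropdownMenu(evt) {
  var el = evt.target;
  // Walk up from the event target; if a .dropdown-menu ancestor exists,
  // the event came from a menu item rather than the toggle button.
  return !!(el && el.closest && el.closest('.dropdown-menu'));
}

triggerEl.addEventListener('mouseover', function(evt) {
  if (isEventFromDropdownMenu(evt)) {
    return; // hovering a menu item should not show the tooltip
  }
  showTooltip(); // hypothetical helper
});
```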
Ironholds/jammr
106086888
Title: Modify `attach()` to randomly shuffle data?
Question: username_0: I was thinking something along the lines of:
```r
replace("attach", function(what, pos = 2L, name = deparse(substitute(what)), warn.conflicts = TRUE) {
  # Half of the time, shuffle each column independently so rows no longer line up.
  if (runif(1) < 0.5) {
    what[] <- lapply(what, sample)
  }
  # Call base::attach explicitly (a bare attach() here would hit this replacement
  # again and recurse), and forward the caller's arguments instead of re-hardcoding them.
  base::attach(what, pos = pos, name = name, warn.conflicts = warn.conflicts)
})
```
Note: I don't know if this would work.
Answers: username_1: Is "attach" the one people use instead of library(), or am I thinking of something else?
username_0: You're thinking of `require()`. Attach is the one that people use to make columns in data frames accessible without using `$` or `[[`. It is famously known to cause problems on its own: http://www.r-bloggers.com/to-attach-or-not-attach-that-is-the-question/
Example:
```r
data(iris)
attach(iris)
Species # equivalent to iris$Species
detach(iris)
Species # error
```
Status: Issue closed
adobe/S3Mock
562742040
Title: S3Mock overwrites file data with ACL XML on 'putObject'
Question: username_0: First of all - great code - it helps a lot with debugging S3 code locally.
While trying to use [S3Browser](https://s3browser.com/) with the mock as a server, I encountered a problem with uploading files (which translates to calling the `FileStoreController#putObject` endpoint).
After some debugging I found out the following: The application invokes `putObject` **twice** - once with the file contents and once again with an XML that contains ACL data. The code does not distinguish between the 2 calls and thus overwrites the original file data with the ACL XML content.
A quick analysis of the requests shows that there are some differences between the requests in the headers, which do not seem definitive enough to provide a distinction. However, there is one significant difference that seems most indicative: the call with the ACL contains an `acl` query parameter (i.e., `PUT /some/path/to/bucket?acl`).
For the time being I "patched" my code locally by checking if there is an `acl` parameter, and if so, ignoring the payload and returning the `S3Object` that was generated when the file data was uploaded. I do believe though that it should be handled somehow (perhaps the ACL is required somewhere else - I just did not encounter such a need).
As a side note - I looked for some documentation on this REST call but could not find any (I did not look very deep though, due to lack of time...)
Answers: username_1: Thanks for the thorough report! I think adding a method that has a query parameter "acl" defined should do the trick to route that call to that other method. Would you like to try adding that in a PR?
username_0: I would love to, but I am swamped - I barely had enough time to diagnose this problem. Furthermore, I am not sure how the new method that handles it should behave:
* Should it store the data?
* If so - where? how? when will it be read? (I did not check if S3Browser asks for it...)
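A sketch of the routing idea, assuming S3Mock's Spring MVC setup; the controller, mapping and path here are assumptions, not actual S3Mock code. Spring MVC can dispatch on the mere presence of the `acl` query parameter, so the ACL call never reaches the object-upload handler.

```java
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

// Sketch only; names and paths are assumed, not actual S3Mock code.
@RestController
public class AclAwareController {

  // Matches PUT requests whose query string contains "acl", e.g.
  // "PUT /bucket/key?acl", so they no longer hit the putObject handler.
  @RequestMapping(value = "/{bucketName}/**", params = "acl", method = RequestMethod.PUT)
  public ResponseEntity<String> putObjectAcl(@PathVariable String bucketName) {
    // Acknowledge (or persist) the ACL without touching the stored object data.
    return ResponseEntity.ok("");
  }
}
```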
Sweeper777/PocketDiary
171630941
Title: Diaries disappear after changing SIM card
Question: username_0: I just changed my SIM card and the calendar shows that I don't have any diaries! No green dots are visible, and tapping on the dates does not show any diaries. But the search still works...
Status: Issue closed
Answers: username_0: I found out that this is actually related to the time zone. It is now fixed!
PaddlePaddle/Paddle
257655094
Title: How to know whether it is the training phase or the inference phase in operators in the new framework?
Question: username_0: Some operators' calculations, such as `Dropout` and `BatchNorm`, differ between the training phase and the testing phase. Right now the `run` function of the operator only needs `core.Scope` and `core.DeviceContext`; how to run an operator through the Python API is shown below:
```python
net_op = core.Net.create()
# create fc_op
# net_op.append_op(fc_op)
# net_op.append_op(mse_op)

ctx = core.DeviceContext.create(core.CPUPlace())
scope = core.Scope()
# scope.new_var('X')
# ...

net_op.run(scope, ctx)
```
The op cannot determine whether it is in the training phase or the inference phase, and it is strange to put this flag into `DeviceContext`. Also, the `ExecutionContext` is not exposed to users in the Python API.
This problem has been discussed in the `Hi` discussion group. For now, the dropout operator does not handle this case. I am just recording the problem here.
Status: Issue closed
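To make the design question concrete, here is a toy, self-contained sketch of one option: carry the phase as an operator attribute rather than in `DeviceContext`. This is an illustration only, not actual Paddle API; the `is_test` attribute name and the class shape are assumptions.

```python
import numpy as np

# Toy illustration, not actual Paddle API: the phase travels with the op as
# an attribute, so run() needs nothing beyond scope/context equivalents.
class DropoutOp:
    def __init__(self, dropout_prob, is_test=False):
        self.dropout_prob = dropout_prob
        self.is_test = is_test  # hypothetical "phase" attribute

    def run(self, x):
        if self.is_test:
            # Inference: no random masking; rescale to match the expected output.
            return x * (1.0 - self.dropout_prob)
        # Training: randomly zero out elements with probability dropout_prob.
        mask = (np.random.rand(*x.shape) >= self.dropout_prob).astype(x.dtype)
        return x * mask

train_op = DropoutOp(dropout_prob=0.5, is_test=False)
infer_op = DropoutOp(dropout_prob=0.5, is_test=True)
print(infer_op.run(np.ones((2, 3))))  # deterministic at inference time
```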
rails-girls-summer-of-code/summer-of-code
199525175
Title: Review and update the coaches guide for 2017 Question: username_0: Based on feedback from coaches which we received last year. Answers: username_1: Some problems reported by the coaches of 2016: - I was not sure what my role as a coach would be / what were the deliverables at the end of the program - Wasn't sure how the program would be executed within a specific project - Not clear how many coaches are actually necessary - What is the role of a remote coach that is not coaching on a day-to-day basis, but instead for project-specific things - Wasn't clear about the benefits to be a coach - Expected level of the students was not clear - Mentor's role I think additionally to the guide we need to make a 1 page pdf with really short answers on the most popular questions like the ones above. Additionally, it might make sense to tell coaches a bit more about "what to expect" regarding: - communication with students: possible unresponsiveness; - communication with students: possible issues with students being not co-operative ("lack of authority"); - students' skills: coding level is different from what has been expected. username_1: One more thing: how to confirm your coach status in a team. Was asked 1000 times last year. username_2: When a coach is added to a team, they (should) get an email containing a link. Behind that link is a confirmation button username_1: @username_2 Yes, I actually made that note not to forget to include this into the Guide :) Sometimes coaches don't get the email (there is no public email in their Github profile or their email address is outdated, or the mail gets into spam, or whatever), and then they need to go to the team's page and press the confirmation button manually. This was not obvious to the coaches last year. username_0: I think there are three types of information we need to provide coaches (and other participants) with: - information about applying/basic info about the program (found in guides on the website) - information about using the teams app/troubleshooting (teams app help page) - more detailed information about roles, handling communication problems, etc.. (separate guide provided to coaches of selected teams). Information related to using the Teams app (e.g. troubleshooting) would be good to have on the teams app help page (see https://github.com/rails-girls-summer-of-code/rgsoc-teams/issues/614). We should be able to make the link to the help page more prominent if necessary. This could include screenshots and information about confirming the account. The coach guide on the website is meant as basic information for coaches who wish to apply and find a team. I think that any very specific information about the program/the deliverables/communication tools/etc... should be provided to coaches after their team has been accepted, in the same way that we provide selected teams and selected mentors with more information before the program starts... sort of like an onboarding document. This could definitely be provided as a pdf, or as a link to a document in google docs, or any other format which we think would make sense, I don't really know what would work best there. My preferred way would be to, in the future, have all these extra documents in the teams app itself, in a personal โ€œdashboardโ€. username_1: @username_0 I agree with everything. Does this ticket cover only the basic guide on the website? If so, there are very few changes to be made. Many feedback issues will be covered by the onboarding guide. 
Btw, it seems so strange that we didn't have it before :D username_0: @username_1 I feel the same way โ€” an onboarding guide is definitely something we've always needed.. Yes, the original ticket covered the changes on the website only (as it's in our website repo). I will create a separate issue in `organization` for the onboarding guide, so I'd suggest to stick in here to the changes that we need to make on the coaches guide page on our website. thank you! ๐Ÿ‘ Status: Issue closed
Geoportail-Luxembourg/geoportailv3
79488511
Title: build problem Question: username_0: there seems to be a build problem right now on master. I made an update-node-modules and now I get this when building: ``` ./node_modules/openlayers/node_modules/.bin/closure-util build build.json .build/build.js info closure-util Reading build config info closure-util Getting Closure dependencies ERR! closure-util Unsatisfied dependency "ol.DrawEvent" in script: /var/www/vhosts/geoportailv3/node_modules/ngeo/src/ol-ext/interaction/measure.js make: *** [.build/build.js] Error 1 ``` Answers: username_1: Yes, there's a PR opened in ngeo for that: https://github.com/camptocamp/ngeo/pull/224. I'll merge it. username_1: Should be fixed. Status: Issue closed
rossfuhrman/_why_the_lucky_markov
581651179
Title: darkly clotted purple at the hotel buffet tables today. I may translate, the regexp. Question: username_0: Toot: darkly clotted purple at the hotel buffet tables today. I may translate, the regexp. One comment = 1 upvote. Sometime after this gets 2 upvotes, it will be posted to the main account at https://mastodon.xyz/@_why_toots
KOTORCommunityPatches/K1_Community_Patch
520004895
Title: Manaan Hrakert Rift - Machinery destruction/shark poisoning movies can be accidentally skipped Question: username_0: The BIKs for blowing up the machinery or poisoning the shark are very easily skipped unintentionally. The relevant DLG nodes may need the addition of `NoClicksFor` to prevent it. Answers: username_0: Since this appears to be unresolvable, I'm closing this for now. Status: Issue closed
ABI-Tutorials/CellML-OpenCOR-PMR
240254517
Title: Error in sodium channel component text Question: username_0: As per https://groups.google.com/d/msgid/opencor-users/331a507d-3a77-4f6d-96a3-3ef3dde86436%40googlegroups.com?utm_medium=email&utm_source=footer There is an extra ``E_Na`` being defined which when copy-pasted into OpenCOR will result in an overconstrained model. Linked SED-ML/CellML works fine and correctly only has one ``E_Na`` definition. Answers: username_0: Need to check if this component is used elsewhere in the tutorial. Status: Issue closed
club-soda/club-soda-guide
402620710
Title: Link wholesalers/retailers with drinks/brands
Question: username_0: As an admin I want to be able to link wholesalers/retailers with the drinks/brands that they stock, so that customers know where they can buy those drinks.
Using the same dropdowns that are used to connect venues and drinks.
![image](https://user-images.githubusercontent.com/16775804/51668941-41839a80-1fbb-11e9-944a-9cf800c56330.png)
From a technical perspective it is far more complex to integrate this form with the 'new retailer' form:
Answers: username_1: I'm having issues testing this - see: https://github.com/club-soda/club-soda-guide/issues/401#issuecomment-457601558
Status: Issue closed
typelevel/cats
505740957
Title: Instances for functions beyond Function1
Question: username_0: Instances for functions beyond `Function1` are missing. Theoretically we could define at least `Monad` instances for all FunctionN. `Functor` is especially useful because we could use `map` as `andThen`, which the FunctionN types themselves are missing in Scala. I would say that if we want to implement this, then the instances would have to be code-generated. Another alternative is to add them to `kittens` by utilizing `shapeless`. Scalaz has instances up to `Function8`. Apologies if this was discussed before; I couldn't find any issue.
Answers: username_0: I actually tried adding them to `kittens` and it's not possible, because we can't abstract over `(A1, ... An) => R` as an `F[R]` where `type F[x] = (A1, ... An) => x` unless we add some additional functionality to shapeless itself.
username_0: On the other hand, that would add a **lot** of code if we generate all of them up to `Function22` for all possible typeclass instances.
Status: Issue closed
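To make the discussion concrete, here is a sketch (plain Scala 2 type-lambda syntax, no kind-projector) of what one code-generated instance could look like for a single arity; the object and method names are made up, and this is an illustration rather than proposed cats code.

```scala
import cats.Monad
import scala.annotation.tailrec

object Function2Instances {
  // Sketch of a generated Monad instance for a single arity (Function2).
  // With it, map(f)(g) behaves as the missing andThen: (a1, a2) => g(f(a1, a2)).
  implicit def function2Monad[A1, A2]: Monad[({ type F[R] = (A1, A2) => R })#F] =
    new Monad[({ type F[R] = (A1, A2) => R })#F] {
      def pure[R](r: R): (A1, A2) => R = (_, _) => r

      def flatMap[R, S](fa: (A1, A2) => R)(f: R => (A1, A2) => S): (A1, A2) => S =
        (a1, a2) => f(fa(a1, a2))(a1, a2)

      def tailRecM[R, S](r: R)(f: R => (A1, A2) => Either[R, S]): (A1, A2) => S =
        (a1, a2) => {
          @tailrec def loop(cur: R): S = f(cur)(a1, a2) match {
            case Left(next) => loop(next)
            case Right(s)   => s
          }
          loop(r)
        }
    }
}
```

Generating the analogous instance for each arity from Function2 through Function22 is then mechanical, which is what the code-generation remark above refers to.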
interconnectapp/discuss
135156064
Title: Todos Question: username_0: A recovered todo list for this project from Workflowy: ``` InterConnect #project - [COMPLETE] Get it ready for the Gittip Retreat (3-6 January) @anyone #missed - [COMPLETE] Find out when the Gittip retreat is @anyone #now - [COMPLETE] Get two-way calling going @programmer #soon - [COMPLETE] Use rtc.io @programmer #soon - [COMPLETE] Add pointers and miniview #dropped @programmer #soon "We ended up using polymer for good reasons, see the github issues" - Accomplishments - [COMPLETE] Identify the problem - [COMPLETE] Identify some solutions - [COMPLETE] Hone in on the solution that actually makes sense - [COMPLETE] Figure out the roadmap forward - [COMPLETE] Evaluate that it is actually possible to build - Current focus - Communicate the vision effectively #amorphous "Requires making the pitch more public, using the pitch as an actual website" - Communicate the progress effectively #amorphous "Requires making this list public, and implementation of a website" - Communicate the roadmap effectively #amorphous "Requires making this list public, and doing videos about the potential, and a website to showcase the vision" - Have a designer finish the mockups #amorphous - Build the project to a MVP level #amorphous - Build a landing page for marketing #amorphous - Actionable Tasks - Make the pitch more public by creating a website for it - Find a design for the website - Implement the design for the website - Complete the MVP - Recording of video calls - Find recording library - Either implement using Chrome APIs, or as a Desktop App - Sending of video calls - Video calls with yourself as a journal - Add people via email - Conversation view - Profile view - [COMPLETE] Create a hubot bot for InterConnect, rather than connecting to IRC directly #soon @programmer #dropped "We don't be adding text chat, see market thoughts issue for more details" ```
spring-projects/spring-data-mongodb
1076542227
Title: Migrating spring data mongo from 1.9.9 to 3.2.2, getting a runtime exception
Question: username_0:
```
getCredentials called.... Replica Set: <Mongo nodes ips>
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'currencyInfo' defined in URL [jar:file:/home/pg/pg-services/communication-exchange-service/lib/promotion-communication-adapter.jar!/META-INF/spring/promotion-exchange-beans.xml]: Invocation of init method failed; nested exception is com.mongodb.MongoTimeoutException: Timed out after 50000 ms while waiting for a server that matches com.mongodb.client.internal.MongoClientDelegate$1@4bc59b27. Client view of cluster state is {type=REPLICA_SET, servers=[]
```
I followed XML-based Mongo template creation.
Old:
```xml
<mongo:db-factory id="campaignsDBMongoDBFactory" mongo-ref="campaignsDBMongo" dbname="${campaignsDB.mongodb.db}"/>
<mongo:repositories base-package="com.gvc.campaignsdb.spring.nsfw.repositories" mongo-template-ref="campaignsDBMongoTemplate"/>
<bean id="campaignsDBMongoTemplate" class="org.springframework.data.mongodb.core.MongoTemplate">
    <constructor-arg name="mongoDbFactory" ref="campaignsDBMongoDBFactory"/>
</bean>
<mongo:mongo-client replica-set="#{campaignsDBMongoConfig.replicaSet}" id="campaignsDBMongo" credentials="#{campaignsDBMongoConfig.credentials}">
    <mongo:client-options min-connections-per-host="${campaignsDB.mongodb.minConnectionsPerHost}"
        connections-per-host="${campaignsDB.mongodb.connectionsPerHost}"
        connect-timeout="${campaignsDB.mongodb.connectTimeout}"
        heartbeat-connect-timeout="${campaignsDB.mongodb.heartbeatConnectTimeout}"
        heartbeat-frequency="${campaignsDB.mongodb.heartbeatFrequency}"
        heartbeat-socket-timeout="${campaignsDB.mongodb.heartbeatSocketTimeout}"
        max-connection-idle-time="${campaignsDB.mongodb.maxConnectionIdleTime}"
        max-connection-life-time="${campaignsDB.mongodb.maxConnectionLifeTime}"
        max-wait-time="${campaignsDB.mongodb.maxWaitTime}"
        min-heartbeat-frequency="${campaignsDB.mongodb.minHeartbeatFrequency}"
        socket-keep-alive="${campaignsDB.mongodb.socketKeepAlive}"
        socket-timeout="${campaignsDB.mongodb.socketTimeout}" />
</mongo:mongo-client>
```
New XML:
```xml
<mongo:repositories base-package="com.gvc.campaignsdb.spring.nsfw.repositories" mongo-template-ref="myOps" />
<mongo:mongo-client replica-set="rs" id="campaignsDBMongo" credential="#{campaignsDBMongoConfig.credentials}" >
    <mongo:client-settings cluster-hosts="#{campaignsDBMongoConfig.replicaSet}"
        cluster-server-selection-timeout="${campaignsDB.mongodb.connectTimeout}"
        connection-pool-min-size="${campaignsDB.mongodb.minConnectionsPerHost}"
        connection-pool-max-size="${campaignsDB.mongodb.connectionsPerHost}"
        connection-pool-max-wait-time="${campaignsDB.mongodb.maxWaitTime}"
        connection-pool-max-connection-idle-time="${campaignsDB.mongodb.maxConnectionIdleTime}"
        connection-pool-max-connection-life-time="${campaignsDB.mongodb.maxConnectionLifeTime}"
        server-heartbeat-frequency="${campaignsDB.mongodb.heartbeatFrequency}"
        server-min-heartbeat-frequency="${campaignsDB.mongodb.minHeartbeatFrequency}"
        socket-connect-timeout="${campaignsDB.mongodb.socketTimeout}" />
</mongo:mongo-client>
<mongo:db-factory id="campaignsDBMongoDBFactory" mongo-client-ref="campaignsDBMongo" dbname="${campaignsDB.mongodb.db}"/>
<bean id="campaignsDBMongoTemplate" class="org.springframework.data.mongodb.core.MongoTemplate">
    <constructor-arg name="mongoDbFactory" ref="campaignsDBMongoDBFactory"/>
</bean>
```
--------------
`#{campaignsDBMongoConfig.credentials}`:
old value: `this.credentials = this.userName + ":" + dbPwd + "@" + this.authDb;`
new value: `this.credentials = MongoCredential.createCredential(this.userName, dbPwd, ch);`
Answers: username_0: Can you please help me with the correct approach to the XML creation?
username_0: Hi Team, please help me with this. This is a very important change for my project. Can you please provide details?
Status: Issue closed
username_1: Moving from _1.x_ to _3.x_ is quite a change that involves config, client and other differences. With the information at hand we cannot tell why the client won't connect to the replica set. Maybe you could set up the client manually with the required settings to see which ones are not set the expected way by the XML config.
username_0: Hi Christophstobl, when I got the info from the Spring docs I changed to the new XML version. I googled a lot, but I didn't find any XML-based config for the 3.x version of Spring Data Mongo. Can you please provide a sample application that uses XML config with 3.x?
username_0: Hi Christophstobl, our company uses Mongo heavily. We are doing millions of transactions in milliseconds on Mongo, so we need the best way of connecting to multiple Mongo clusters from multiple applications, and we also have replica sets.
username_1: With the given information I was not able to reproduce the issue. The mentioned authentication changes should not lead to a server selection timeout but raise a `MongoSecurityException`. My best guess (based on the information you provided, `type=REPLICA_SET, servers=[]`) is something related to `clusterHosts`. Please try debugging your application. Breakpoints in `MongoParsingUtils` and `MongoClientFactoryBean` might help you.
username_2: If you would like us to look at this issue, please provide the requested information. If the information is not provided within the next 7 days this issue will be closed.
username_2: Closing due to lack of requested feedback. If you would like us to look at this issue, please provide the requested information and we will re-open the issue.
Status: Issue closed
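As a debugging aid for the "set up the client manually" suggestion above, a plain-Java equivalent of the intended XML config might look like the sketch below; host names, database name and credentials are placeholders, not values from this thread.

```java
import com.mongodb.MongoClientSettings;
import com.mongodb.MongoCredential;
import com.mongodb.ServerAddress;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.SimpleMongoClientDatabaseFactory;

import java.util.Arrays;

// Rough manual equivalent of the XML above, useful for pinning down which
// setting breaks server selection. All concrete values are placeholders.
public class ManualClientCheck {
    public static void main(String[] args) {
        MongoCredential credential =
                MongoCredential.createCredential("user", "authDb", "password".toCharArray());

        MongoClientSettings settings = MongoClientSettings.builder()
                .credential(credential)
                .applyToClusterSettings(b -> b
                        .hosts(Arrays.asList(
                                new ServerAddress("host1", 27017),
                                new ServerAddress("host2", 27017)))
                        .requiredReplicaSetName("rs"))
                .build();

        MongoClient client = MongoClients.create(settings);
        MongoTemplate template = new MongoTemplate(
                new SimpleMongoClientDatabaseFactory(client, "campaignsDB"));
        // If this prints, server selection works and the problem is in the XML wiring.
        System.out.println(template.getCollectionNames());
    }
}
```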
CoinAlpha/hummingbot
1054257040
Title: Design multi-wallet architecture
Question: username_0: As discussed in today's standup, we decided to allow users to interact with multiple blockchains simultaneously after issue #4263, because doing so now would be simpler than accommodating it later on. However, the standup raised the point that to enable users to interact with multiple blockchains, we need to allow users to maintain multiple wallets. We decided to accommodate multiple wallets in the client/Gateway architecture now. Below are the work items related to this effort:
1. Change nonceManager so that it uses different wallet instances: Victor mentioned that we should look at the old gateway-v1 nonceManager design.
2. Client should specify the wallet address and handle wallet-related errors in requests, since if the client maintains multiple wallets, it has to tell Gateway which one to use. See the diagram below that Paulo created:
![image (9).png](https://images.zenhubusercontent.com/545072957c6ccc6277a5c098/11d0d550-b49d-4b41-957c-54c70a5969dc)
3. Tasks related to supporting multiple wallets in the client user interface. I'm not sure we should tackle these now, but here are what I think are the main tasks:
   a. Add a `wallet` command to add, list and export wallets
   b. Change the `balance` command so that you can run `balance -wallet [wallet address]` or `balance -chain [chain] -exchange [exchange]` to query balances
Answers: username_1: The key to achieving this is to add network and wallet to Gate exchange configs: https://github.com/CoinAlpha/hummingbot/issues/4729
For the client, we'll need to
https://github.com/CoinAlpha/hummingbot/issues/4732
https://github.com/CoinAlpha/hummingbot/issues/4733
username_1: We'll also need to design how we're going to manage private keys; I'll investigate and create a story for this.
username_1: I added a Wallet Management ticket: https://github.com/CoinAlpha/hummingbot/issues/4798. If the design passes review, we can close off this ticket.
Status: Issue closed
unlock-protocol/unlock
367814317
Title: Server side rendering
Answers: username_1: Currently the unlock-app uses create-react-app. Even though it's a great tool to get bootstrapped on building React apps, I don't think it's a good idea to keep using it if you would like to support SSR, code-splitting, Redux and server-side Hot Module Reloading in the future. Adding those features one by one is a good idea for full control of the tools the app is using, but could lead to a messy codebase.
I recommend using other established tools like [Next.js](https://github.com/zeit/next.js) and [Razzle](https://github.com/jaredpalmer/razzle) that handle implementing the mentioned features in an elegant way.
Both CRA and Next.js are great tools to build client-side apps with the possibility of exporting static HTML/JS/CSS assets. Here is a quick comparison between both tools:
## Differences between CRA and Next.js
CRA:
* Great for building basic SPAs
* Agnostic on the backend
Next.js:
* Route SSR out-of-the-box, with flexibility to not SSR part of the app using [react-no-ssr](https://github.com/kadirahq/react-no-ssr) for example.
* Takes care of updating babel and webpack for you.
* Great support for other tools (Redux, styled-components, various web app frameworks...).
* Support for code-splitting by route or by feature using [dynamic-imports](https://github.com/zeit/next.js#dynamic-import).
* Great API to pass props to your components from the server via [getInitialProps](https://github.com/zeit/next.js#fetching-data-and-component-lifecycle).
unlock-app already makes use of the "pages" folder structure, which is very similar to Next.js, so I believe that would make the migration a bit easier.
There is a [thread](https://github.com/facebook/create-react-app/issues/2559) in CRA where people discuss this topic. Someone mentioned that Next.js has a limitation around persisting React state between routes; I didn't try this yet, but I believe this should be possible now using the [`App.js`](https://github.com/zeit/next.js#custom-app) component.
username_0: I think you are right that we will eventually need to eject, so I guess now is as good a time as ever. I'll work on a PR which does that and maybe ask you to review it. Any pitfalls/gotchas?
username_1: @username_0 Let me give it a shot first; I don't think you need to eject for the migration. At least if things go wrong we can always reset to the pre-ejected commit.
username_1: @username_0 I was able to remove cra and replace it with next.js for most of the static files. However, I have discovered some challenges with the rest of the app when it comes to removing most of the react-router stuff.
First, most of the `Link` components have to be changed to the Next.js `Link`.
Also, some of the files that represent a route have to be moved around, including the dashboard, creator and provider routes. Since these routes depend purely on the client, I had to get them all wrapped in some code that prevents them from rendering on the server, especially when it comes to the services that require connecting to the network.
I would like to get a better understanding of the structure of the routes; this is what I came up with under my new pages folder:
![image](https://user-images.githubusercontent.com/1974993/46908819-7a10d400-cef6-11e8-9b9f-bdfaafdacc97.png)
I have also noticed that `/lock/:lockaddress` is defined under the Switch statement:
https://github.com/unlock-protocol/unlock/blob/7ff1131f2f9d2590a02e0b4eb023ebddefc59f13/unlock-app/src/components/Unlock.js#L48
Would you like to have it as its own unique path as well, or should it be part of `creator/lock/:lockaddress`?
Also, I thought that moving the error state logic out of the Layout would be a good idea, since we would like to show those errors only on pages that depend on the network, and not on pages such as home, about and jobs. Let me know your thoughts.
username_0: Thanks a lot @username_1! The `/lock/:lockAddress` is actually not the same as `creator/lock/:lockaddress`. Basically, the first one is what is loaded as part of the iframe on pages which include a lock (where a consumer can actually buy a key to access/view the content), while the latter is the page that the creator of the lock can use to view data about the lock (this page may actually be deprecated based on the design that we currently have).
As for moving the error logic and `withConfig` away from the `Layout`, I think this is a fair point. We can work on that (would you be ok to open an issue for it?) It will take me a couple hours to review your PR. Please stay tuned ;)
username_1: Yeah, sorry that it got a little lengthy. However, most of it is building up the `pages` folder and moving files from `/public` to `/static`. I will be adding some comments; please feel free to hit me up if you need me to walk through any part of it.
Since I had to actually get rid of `Unlock.js`, I moved the error logic to `withConfig` for now. We can move it around wherever it makes sense if needed.
username_0: Also, I just added this issue to our Gitcoin so you can claim the bounty once it's merged ;) Please [apply for it!](https://gitcoin.co/issue/unlock-protocol/unlock/310/1488) There should be a comment right here ⬇️ in a couple minutes.
Status: Issue closed
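For the client-only routes mentioned above, one common pattern (besides react-no-ssr) is Next.js dynamic imports with SSR disabled; the component path in this sketch is assumed, not taken from the unlock-app tree.

```jsx
// pages/dashboard.js -- sketch; the component path is assumed.
import React from 'react'
import dynamic from 'next/dynamic'

// ssr: false keeps this component out of server rendering entirely, so code
// that needs window/web3/network services only ever runs in the browser.
const Dashboard = dynamic(() => import('../components/Dashboard'), {
  ssr: false,
})

export default () => <Dashboard />
```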
STEllAR-GROUP/hpx
307201823
Title: Rare tests.unit.host_.block_allocator test failure on 1.1.0-rc1 Question: username_0: I've built HPX 1.1.0-rc1 on my Ryzen machine with Clang 3.8.1 in a Debian container and am running the test suite repeatedly. Over 142 runs of the test suite on commit fee6ba2 `test.unit.host_.block_allocator` failed once with the following output: ``` test 561 Start 561: tests.unit.host_.block_allocator 561: Test command: /usr/bin/python "/tree/build-rwdi/bin/hpxrun.py" "/tree/build-rwdi/bin/block_allocator_test" "-e" "0" "-l" "1" "-t" "1" "-v" "--" 561: Test timeout computed to be: 70 561: using seed: 1820188137 561: {stack-trace}: 11 frames: 561: 0x7f6285403746 : hpx::util::stack_trace::trace(void**, unsigned long) + 0x6 in /tree/build-rwdi/lib/libhpx.so.1 561: 0x7f6284e0914b : hpx::util::backtrace::backtrace(unsigned long) + 0x5b in /tree/build-rwdi/lib/libhpx.so.1 561: 0x7f6284e49df9 : hpx::termination_handler(int) + 0x249 in /tree/build-rwdi/lib/libhpx.so.1 561: 0x7f6282f570c0 : ??? + 0x7f6282f570c0 in /lib/x86_64-linux-gnu/libpthread.so.0 561: 0x421694 : void hpx::parallel::util::detail::static_partitioner_with_cleanup<hpx::parallel::execution::parallel_policy_shim<hpx::compute::host::block_executor<hpx::threads::executors::local_priority_queue_attached_executor>, hpx::parallel::execution::static_chunk_size>, void, std::pair<boost::range_detail::integer_iterator<unsigned long>, boost::range_detail::integer_iterator<unsigned long> > >::call<hpx::parallel::execution::parallel_policy_shim<hpx::compute::host::block_executor<hpx::threads::executors::local_priority_queue_attached_executor>, hpx::parallel::execution::static_chunk_size>, boost::range_detail::integer_iterator<unsigned long>, void hpx::compute::host::block_allocator<int, hpx::threads::executors::local_priority_queue_attached_executor>::bulk_construct<int>(int*, unsigned long)::{lambda(boost::range_detail::integer_iterator<unsigned long>, unsigned long)#1}, void hpx::compute::host::block_allocator<int, hpx::threads::executors::local_priority_queue_attached_executor>::bulk_construct<int>(int*, unsigned long)::{lambda(std::vector<hpx::lcos::future<std::pair<boost::range_detail::integer_iterator<unsigned long>, boost::range_detail::integer_iterator<unsigned long> > >, std::allocator<hpx::lcos::future> >&&)#1}, void hpx::compute::host::block_allocator<int, hpx::threads::executors::local_priority_queue_attached_executor>::bulk_construct<int>(int*, unsigned long)::{lambda(std::pair<boost::range_detail::integer_iterator<unsigned long>, boost::range_detail::integer_iterator<unsigned long> >&&)#1}>(hpx::parallel::execution::parallel_policy_shim<hpx::compute::host::block_executor<hpx::threads::executors::local_priority_queue_attached_executor>, hpx::parallel::execution::static_chunk_size>&&, boost::range_detail::integer_iterator<unsigned long>, unsigned long, void hpx::compute::host::block_allocator<int, hpx::threads::executors::local_priority_queue_attached_executor>::bulk_construct<int>(int*, unsigned long)::{lambda(boost::range_detail::integer_iterator<unsigned long>, unsigned long)#1}&&, void hpx::compute::host::block_allocator<int, hpx::threads::executors::local_priority_queue_attached_executor>::bulk_construct<int>(int*, unsigned long)::{lambda(std::vector<hpx::lcos::future<std::pair<boost::range_detail::integer_iterator<unsigned long>, boost::range_detail::integer_iterator<unsigned long> > >, std::allocator<hpx::lcos::future> >&&)#1}&&, void hpx::compute::host::block_allocator<int, 
hpx::threads::executors::local_priority_queue_attached_executor>::bulk_construct<int>(int*, unsigned long)::{lambda(std::pair<boost::range_detail::integer_iterator<unsigned long>, boost::range_detail::integer_iterator<unsigned long> >&&)#1}&&) + 0x94 in /tree/build-rwdi/bin/block_allocator_test
561: 0x420faa : void hpx::compute::host::block_allocator<int, hpx::threads::executors::local_priority_queue_attached_executor>::bulk_construct<int>(int*, unsigned long) + 0x1aa in /tree/build-rwdi/bin/block_allocator_test
561: 0x41dd5e : void test_bulk_allocator<int>(unsigned long) + 0x3e in /tree/build-rwdi/bin/block_allocator_test
561: 0x41d58c : hpx_main(boost::program_options::variables_map&) + 0x25c in /tree/build-rwdi/bin/block_allocator_test
561: 0x7f6284e567c0 : hpx::runtime_impl::run_helper(hpx::util::function<int (), false> const&, int&) + 0x4f0 in /tree/build-rwdi/lib/libhpx.so.1
561: 0x7f628524e26d : hpx::threads::coroutines::detail::coroutine_impl::operator()() + 0x9d in /tree/build-rwdi/lib/libhpx.so.1
561: 0x7f62852b5236 : void hpx::threads::coroutines::detail::lx::trampoline<hpx::threads::coroutines::detail::coroutine_impl>(hpx::threads::coroutines::detail::coroutine_impl*) + 0x6 in /tree/build-rwdi/lib/libhpx.so.1
561: {what}: Floating point exception
561: {config}:
561: HPX_WITH_AGAS_DUMP_REFCNT_ENTRIES=OFF
561: HPX_WITH_APEX=OFF
561: HPX_WITH_ATTACH_DEBUGGER_ON_TEST_FAILURE=OFF
561: HPX_WITH_AUTOMATIC_SERIALIZATION_REGISTRATION=ON
561: HPX_WITH_AWAIT=OFF
561: HPX_WITH_CXX14_RETURN_TYPE_DEDUCTION=TRUE
561: HPX_WITH_GOOGLE_PERFTOOLS=OFF
561: HPX_WITH_INCLUSIVE_SCAN_COMPATIBILITY=ON
561: HPX_WITH_IO_COUNTERS=ON
561: HPX_WITH_IO_POOL=ON
561: HPX_WITH_ITTNOTIFY=OFF
561: HPX_WITH_LOGGING=ON
561: HPX_WITH_MORE_THAN_64_THREADS=OFF
561: HPX_WITH_NATIVE_TLS=ON
561: HPX_WITH_NETWORKING=ON
561: HPX_WITH_PAPI=OFF
561: HPX_WITH_PARCELPORT_ACTION_COUNTERS=OFF
561: HPX_WITH_PARCELPORT_LIBFABRIC=OFF
561: HPX_WITH_PARCELPORT_MPI=OFF
561: HPX_WITH_PARCELPORT_MPI_MULTITHREADED=OFF
561: HPX_WITH_PARCELPORT_TCP=ON
561: HPX_WITH_PARCELPORT_VERBS=OFF
561: HPX_WITH_PARCEL_COALESCING=ON
561: HPX_WITH_PARCEL_PROFILING=OFF
561: HPX_WITH_SCHEDULER_LOCAL_STORAGE=OFF
561: HPX_WITH_SPINLOCK_DEADLOCK_DETECTION=OFF
561: HPX_WITH_STACKTRACES=ON
561: HPX_WITH_SWAP_CONTEXT_EMULATION=OFF
561: HPX_WITH_THREAD_BACKTRACE_ON_SUSPENSION=OFF
561: HPX_WITH_THREAD_CREATION_AND_CLEANUP_RATES=OFF
561: HPX_WITH_THREAD_CUMULATIVE_COUNTS=ON
561: HPX_WITH_THREAD_DEBUG_INFO=OFF
561: HPX_WITH_THREAD_DESCRIPTION_FULL=OFF
561: HPX_WITH_THREAD_GUARD_PAGE=ON
561: HPX_WITH_THREAD_IDLE_RATES=OFF
561: HPX_WITH_THREAD_LOCAL_STORAGE=OFF
561: HPX_WITH_THREAD_MANAGER_IDLE_BACKOFF=ON
561: HPX_WITH_THREAD_QUEUE_WAITTIME=OFF
561: HPX_WITH_THREAD_STACK_MMAP=ON
561: HPX_WITH_THREAD_STEALING_COUNTS=ON
[Truncated]
Status: Issue closed
Answers: username_1: I missed this one when creating #3267 (duplicate). Since that one is closed, I'm closing this one as well.
FasterXML/jackson-databind
564143096
Title: Polymorphism of field - Problems
Question: username_0: Hello,
I need some help to resolve a problem. The client sends this JSON to the server:
`{"call":"sendMessage","data":{"type":"ShortcutBarAddRequestMessage","data":{"barType":1,"shortcut":{"_type":"ShortcutSpell","slot":4,"spellId":"0"}}}}`
My service deserializes the message by the type of the data field (in this case ShortcutBarAddRequestMessage). The class is:
```java
public class ShortcutBarAddRequestMessage extends Message {
    public short barType;
    public Shortcut shortcut;

    public ShortcutBarAddRequestMessage() {
    }

    public ShortcutBarAddRequestMessage(short barType, Shortcut shortcut) {
        this.barType = barType;
        this.shortcut = shortcut;
    }
}
```
The problem: Shortcut is a parent class, and the client specifies the concrete `_type` in the shortcut field. With a simple deserialization, Jackson fails because the field spellId is unknown (which is logical, because it checks against the Shortcut class, due to the declared attribute type in ShortcutBarAddRequestMessage).
```java
public Message GetMessage() throws ClassNotFoundException, InstantiationException, Exception {
    Class<?> c = Class.forName("elk.network.messages." + data.type);
    Message msgInstance = (Message) c.newInstance();
    if (data.data != null) {
        ObjectMapper objectMapper = new ObjectMapper();
        final String json = objectMapper.writeValueAsString(data.data); // -> throws due to unknown field
        return objectMapper.readValue(json, msgInstance.getClass());
    }
    return msgInstance;
}
```
So my question is: how can I specify that the shortcut field of ShortcutBarAddRequestMessage has the `_type` given by the shortcut field in the JSON message? (Many other Messages have this problem.)
Regards
Answers: username_1: The issue tracker is for reporting issues or requesting new features, not a support forum for usage questions. Please consider using
* https://groups.google.com/forum/#!forum/jackson-user (user mailing list)
* https://gitter.im/FasterXML/jackson-databind (online chat)
Status: Issue closed
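For anyone landing here from search: the standard Jackson answer to this exact shape is polymorphic type handling on the base class. The sketch below annotates classes matching the question; only `ShortcutSpell` appears in the original JSON, so any other subtypes would be assumptions registered the same way.

```java
import com.fasterxml.jackson.annotation.JsonSubTypes;
import com.fasterxml.jackson.annotation.JsonTypeInfo;

// Tell Jackson that the concrete subtype is carried in the "_type" property
// of the shortcut object itself, as in the JSON above.
@JsonTypeInfo(
        use = JsonTypeInfo.Id.NAME,
        include = JsonTypeInfo.As.PROPERTY,
        property = "_type")
@JsonSubTypes({
        // Only ShortcutSpell appears in the question; further subtypes would
        // be registered here the same way.
        @JsonSubTypes.Type(value = ShortcutSpell.class, name = "ShortcutSpell")
})
abstract class Shortcut { }

class ShortcutSpell extends Shortcut {
    public int slot;
    public String spellId;
}
```

With these annotations in place, a plain `objectMapper.readValue(json, ShortcutBarAddRequestMessage.class)` resolves the `shortcut` field to `ShortcutSpell` and `spellId` is no longer an unknown field.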
milvus-io/milvus
685215546
Title: when use the dockerfile to deploy k8s, I got an error Deadline Exceeded Question: username_0: I use this dockerfile: ![image](https://user-images.githubusercontent.com/20982126/91131090-4b153d00-e6df-11ea-9dcc-c9f837d2a489.png) I can build and run it locally, and the Python SDK can use it. But when I deploy it on K8s (I changed the cache_size in server_config.yaml from 4GB to 1GB, with 2 pods, each pod having 2 cores and 4GB), the service starts successfully; however, when I use the Python SDK to insert some data into it, I get this error: ![image](https://user-images.githubusercontent.com/20982126/91132207-b4954b80-e6df-11ea-9a8e-e000d6d1c446.png) Answers: username_1: @username_0 , I think you need to check whether your network is OK and whether your server is running well. Was there any proxy in your k8s environment? username_0: Yes, it is running well; k8s gives me a host and I can ping it. username_0: ![image](https://user-images.githubusercontent.com/20982126/91162439-a99de380-e6fe-11ea-9518-aabfa019b41e.png) Now I have found the error. Status: Issue closed
Grinnode-live/2020-grin-bug-bash-challenge
764894534
Title: grin-wallet uses .api_secret when talking to grin node /v2/foreign Question: username_0: start with clean ubuntu 20: --------------- `cd ~` ``` curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh source ~/.bashrc ``` `sudo bash -c "set -ex && apt-get update && apt-get --no-install-recommends --yes install clang libclang-dev llvm-dev libncurses5 libncursesw5 cmake git locales libssl-dev pkg-config" ` ``` git clone https://github.com/mimblewimble/grin.git cd grin git checkout v5.0.0-beta.2 cargo build ``` `./target/debug/grin` (Let that run, start a new shell) ``` cd ~ git clone https://github.com/mimblewimble/grin-wallet.git cd grin-wallet git checkout v5.0.0-beta.2 cargo build ``` (wait for node to finish syncing) `./target/debug/grin-wallet init` --- `cat ~/.grin/main/.api_secret` AU5zX2eBCSaBaXzJ7b3r `cat ~/.grin/main/.foreign_api_secret` 5G5stpjgh9E3vUDVJydc Verify that the nodes /v2/foreign API is working and that its checking the secret `curl -i -XPOST -ugrin:XXX -d "{ \"method\": \"get_tip\", \"params\": [], \"jsonrpc\": \"2.0\", \"id\": 11 }" http://localhost:3413/v2/foreign` That should fail with "**HTTP/1.1 401 Unauthorized**". Verify that the nodes /v2/foreign API is checking against the .foreign_api_secret `curl -i -XPOST -ugrin:$(cat ~/.grin/main/.foreign_api_secret) -d "{ \"method\": \"get_tip\", \"params\": [], \"jsonrpc\": \"2.0\", \"id\": 11 }" http://localhost:3413/v2/foreign` That should succeed with "**HTTP/1.1 200 OK**" and return some data. Make note: **the nodes /v2/foreign API is checking against the .foreign_api_secret** Check if the wallet works: in a 3rd shell - start a tcpdump watching for calls to the nodes port 3413 `sudo tcpdump -i lo -s 0 -A 'tcp dst port 3413 and tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x47455420 or tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x504F5354 or tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x48545450 or tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x3C21444F'` [Truncated] host: 127.0.0.1:3413 content-length: 61 03:42:08.894351 IP localhost.3413 > localhost.56448: Flags [P.], seq 1:132, ack 274, win 512, options [nop,nop,TS val 3707947292 ecr 3707947292], length 131 E...1j@.@. ..........U...o..9.l............ ........HTTP/1.1 **401 Unauthorized** www-authenticate: Basic realm=GrinForeignAPI content-length: 0 date: Sun, 13 Dec 2020 03:42:08 GMT ``` see the line with "authorization: Basic **Z3JpbjpBVTV6WDJlQkNTYUJhWHpKN2Izcg==**" decode the baicauth to see what password was used: `echo Z3JpbjpBVTV6WDJlQkNTYUJhWHpKN2Izcg== | base64 -d` grin:AU5zX2eBCSaBaXzJ7b3r this is the ~/.grin/main/.api_secret. The wallet is sending the .api_secret to the nodes /v2/foreign API. But as we saw above, the node is using ~/.grin/main/.foreign_api_secret Answers: username_1: made small formatting changes. username_0: This is confirmed fixed in rc1 username_2: *Note: As per @username_0's comment, this is confirmed fixed in `rc1`. I will test to see if I can reproduce the fix.* ## Environment **OS**: Debian 10\ **Grin Node**: `grin 5.0.0-rc.2` \ **Grin Wallet**: `grin-wallet 5.0.0-rc.1` \ **System Info**: `Linux debian3 4.19.0-13-amd64 #1 SMP Debian 4.19.160-2 (2020-11-28) x86_64 GNU/Linux` ## Steps ### 0: Building the node and wallet <details> <summary> <b><i>See here for the full steps for building GRIN-Node v5.0.0-rc.2.</b></i> </summary> 1. Download GRIN-Node v5.0.0-rc.2. ```shell $ wget https://github.com/mimblewimble/grin/archive/v5.0.0-rc.2.tar.gz ``` 1. Extract `v5.0.0-rc.2.tar.gz`. ```shell $ tar -xvf v5.0.0-rc.2.tar.gz ``` * Output should be as follows. 
``` grin-5.0.0-rc.2/ grin-5.0.0-rc.2/.cargo/ grin-5.0.0-rc.2/.cargo/config grin-5.0.0-rc.2/.ci/ grin-5.0.0-rc.2/.ci/general-jobs grin-5.0.0-rc.2/.ci/release.yml grin-5.0.0-rc.2/.ci/test.yml grin-5.0.0-rc.2/.ci/windows-release.yml grin-5.0.0-rc.2/.editorconfig grin-5.0.0-rc.2/.github/ ... ``` 1. Install Rust. ```shell $ curl https://sh.rustup.rs -sSf | sh; source $HOME/.cargo/env ``` * Proceed with installation with default profile. ``` default host triple: x86_64-unknown-linux-gnu default toolchain: stable (default) profile: default modify PATH variable: yes ``` * Output should be as follows. ``` stable-x86_64-unknown-linux-gnu installed - rustc 1.48.0 (7eac88abb 2020-11-16) ``` 1. Download dependencies, including `libcursesw5`. ```shell # apt install build-essential git tor cmake git libgit2-dev clang libncursesw5 libncurses5-dev libncursesw5-dev zlib1g-dev pkg-config libssl-dev llvm ``` 1. Build GRIN-Node v5.0.0-rc.2. ```shell $ cd grin-5.0.0-rc.2/ $ cargo build --release [Truncated] 1. The output is `grin:1AbKLuWgKhEp6hDUuVEV`. Compare with your `.api_secret` and `.foreign_api_secret`. ``` $ cat ~/.grin/main/.api_secret ``` *Output*: ``` z3KB0GMAEkCb7JTXMCyc ``` --- ``` $ cat ~/.grin/main/.foreign_api_secret ``` *Output*: ``` 1AbKLuWgKhEp6hDUuVEV ``` 1. Based on my findings with the new GRIN releases, the output matches the `.foreign_api_secret` now and not the `.api_secret` like before. ## Conclusion Following @username_0's steps, I was able to confirm that the bug has been fixed as of GRIN-Node `v5.0.0-rc.2` and GRIN-Wallet `v5.0.0-rc.1` version. username_3: Thanks for checking that, @username_2 ! Status: Issue closed
plentico/plenti
788599104
Title: Backtics in script break compile step in v8go Question: username_0: This [lifecycle example](https://svelte.dev/tutorial/onmount) uses backtics for the API url: ```js import { onMount } from 'svelte'; let photos = []; onMount(async () => { const res = await fetch(`https://jsonplaceholder.typicode.com/photos?_limit=20`); photos = await res.json(); }); ``` That should work, but it breaks the build: ``` V8go could not compile ssrJs.code: SyntaxError: missing ) after argument list Can't render htmlComponent: TypeError: Cannot read property 'length' of undefined ``` Answers: username_1: Yep. Here's a simpler test case using `index.svelte` from the 'default' site from `plenti new site`: ```<script> export let title, intro, components, allContent; import Grid from '../components/grid.svelte'; import Uses from '../components/template.svelte'; const foo = `bar`; </script> ``` The value of `foo` - which you may know - is now technically referred to as a JS template literal. username_2: hi @username_0 , Any idea on when the back tick bug will be resolved? I just encountered it when trying to use template literals and environment variables. username_0: Hey @username_2, this is a pesky bug! @joas8211 and I have been hitting it recently too on some of the git-cms work we're doing. I'll take another look at this to see if it's something I can resolve. Thanks for the ping! username_0: So the problem here is nested template literals. The [svelte.compile()](https://svelte.dev/docs#compile-time-svelte-compile) function takes a string as its first argument, since we're reading that directly from a component file, we wrap the `<script>`, the `<style>` and any HTML in a template string (with backticks) so it can be run by the Svelte compiler. The call to the Svelte compiler itself is also a string because we need to run that in v8go like this: ```go ctx.RunScript("var { js, css } = svelte.compile(`"+componentStr+"`, {css: false, hydratable: true});", "compile_svelte") ``` The problem is if the component we read has backticks, the `componentStr` will close early and throw syntax errors. I could escape the backticks we read in like so: ```go componentStr = strings.ReplaceAll(componentStr, "`", "\\`") ``` That would allow us to use simple template strings like: ```svelte <script> let test = `Hi. I'm Jim `; </script> <h1>{test}</h1> ``` But won't work with expressions like: ```svelte <script> let name = "Jim"; let test = `Hi. I'm ${name} `; </script> <h1>{test}</h1> ``` That'll throw: `javascript stack trace: ReferenceError: name is not defined` Originally I thought we might be able to [nest templates](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals#nesting_templates) by preprocessing the backticks and putting them inside a placeholder `${ }`. For example, this is valid JS: ```js let e = "expression"; console.log(`template within ${`template with ${e}`}`); // prints "template within template with expression" ``` This doesn't seem to work because the actual compiling of the component chokes on this syntax. Is there an easy solution here I'm missing? username_0: Release [v0.5.1](https://github.com/plentico/plenti/releases/tag/v0.5.1) should allow the use of simple backtics without blowing up the build, unfortunately it still won't be able to evaluate expressions in replacement patterns `${ }`.
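A small sketch of the escaping idea discussed above, assuming the component source is embedded in a JavaScript template literal before being handed to v8go. Escaping backslashes, backticks, and `${` keeps the compiler from closing the string or evaluating replacement patterns early (it does not restore the nested-expression behavior described above):
```go
package main

import "strings"

// escapeForTemplateLiteral makes raw component source safe to embed
// inside the template literal passed to svelte.compile().
func escapeForTemplateLiteral(componentStr string) string {
	componentStr = strings.ReplaceAll(componentStr, `\`, `\\`)    // escape existing escapes first
	componentStr = strings.ReplaceAll(componentStr, "`", "\\`")   // keep the literal from closing early
	componentStr = strings.ReplaceAll(componentStr, "${", "\\${") // stop early evaluation of expressions
	return componentStr
}
```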
kubernetes/kubernetes
392064224
Title: dig will stop and return empty when dnsmasq replied "REFUSED" with the dig tool builded in the e2e test Question: username_0: <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!--> **What happened**: dig stops and returns empty output when dnsmasq replies "REFUSED" **What you expected to happen**: dig eventually succeeds when handling replies from dnsmasq **How to reproduce it (as minimally and precisely as possible)**: 1. Start the kubernetes conformance test with the test case "should provide DNS for cluster ". 2. Log in to the dns test container k8s_jessie-querier_dns-test and use the dig command. **Anything else we need to know?**: 1. e2e test version: v1.12.1 2. dig version in the e2e test: DiG 9.9.5-9+deb8u15-Debian dig result: # dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A # 3. dig version installed manually on host: DiG 9.9.4-RedHat-9.9.4-61.el7_5.1 dig result: # dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A kubernetes.default.svc.cluster.local. 5 IN A 10.254.0.1 **Environment**: Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8<PASSWORD>3dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"<PASSWORD>", GitTreeState:"clean", BuildDate:"2018-10-24T06:43:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"} <!-- DO NOT EDIT BELOW THIS LINE --> /kind bug Answers: username_0: /sig e2e username_1: @kubernetes/sig-testing-bugs username_2: I am not able to reproduce this in the latest 1.12 e2e tests. If you can consistently reproduce it, please reopen. username_2: /remove-triage unresolved
NCEAS/z-test-issues
213560495
Title: check root element names for consistency Question: --- Author Name: **<NAME>** (<NAME>) Original Redmine Issue: 498, https://projects.ecoinformatics.org/ecoinfo/issues/498 Original Date: 2002-05-01 Original Assignee: <NAME> --- Changes as decided upon at the Sevilleta EML meeting, April 24-25, 2002: 1) check root element names for all modules to be sure they are consistent. Some now use an "eml-" prefix, others don't. My notes from the meeting indicate that we should make the root elements all camel caps, and get rid of the "eml-" prefix (although the file names would keep the prefix). I'm not positive that I got this right, though. Opinions? Answers: username_0: --- Original Redmine Comment Author Name: **<NAME>** (<NAME>) Original Date: 2002-05-20T20:40:47Z --- changed all roots to just the name of the file without eml- using camelCaps where appropriate. I'm unsure what to do with eml-literature because the root is currently called 'citation' instead of 'literature'. Should I change it? username_0: --- Original Redmine Comment Author Name: **<NAME>** (<NAME>) Original Date: 2002-05-10T20:43:37Z --- Unless anyone has objections, I'm going to change all of the roots to just use camel notation without the eml. For instance tableEntity, not emlTableEntity and not eml-tableEntity. If there are any objections, tell me by Monday. I'll make the changes on Tuesday to give people time to respond. username_0: --- Original Redmine Comment Author Name: **<NAME>** (<NAME>) Original Date: 2002-05-02T01:13:12Z --- Looking at Corinna's notes, she says we decided to keep the "eml" but remove the "-", so that root elements would be of the form: "emlLiterature". Does that mean that we have "emlDataTable", contrary to what the bug for eml-entity says we decided (dataTable)? username_0: --- Original Redmine Comment Author Name: **<NAME>** (<NAME>) Original Date: 2002-06-14T01:53:16Z --- ok, checked and fixed. In CVS. username_0: --- Original Redmine Comment Author Name: **Redmine Admin** (Redmine Admin) Original Date: 2013-03-27T21:14:28Z --- Original Bugzilla ID was 498 Status: Issue closed
sjasthi/abcd
628116159
Title: create_dress.php issue Question: username_0: This file has references to the quiz_master db and is actually making a query against that db. Since every one of us has that database on our machines, the issue goes unnoticed. However, if you don't have the quiz_master db, then create a dress will not work.
cuba-platform/cuba
590858167
Title: User Substitution Editor is slow when there are many users in database Question: ### Environment CUBA version: 7.2 ### Description of the bug or enhancement The User Substitution Editor uses a LookupField and loads all users in the system onto the screen. When the number of users in the system is very big (tens of thousands), this screen uses too much memory and works slowly. See forum topic: https://www.cuba-platform.com/discuss/t/problem-adding-substitute-user-when-large-number-of-users/11939/2 Suggestions: 1) Rework the screen to use a SuggestionField 2) OR look into the entity statistics mechanism and dynamically create one of two fields, LookupField or PickerField, depending on the sec$User instance count.
Adldap2/Adldap2
293062028
Title: Connection vs Authentication Question: username_0: Hi @username_1, Hope you're well! Just a little question. I have now updated Adldap2 to the latest version and rewritten my classes, and there is now something I misunderstand. If I want to authenticate my user, do I really need to connect before? I do auth like this:
```php
$this->config = new Configuration\DomainConfiguration(Config::get('ldap_connexion_options'));
$this->provider = new Connections\Provider($this->config);
$this->provider->auth()->attempt($username, $password, $bind_as_user = true);
```
It works correctly, but I'm not sure it's correct like that. Thanks Answers: username_0: Another important point for me: I really need to set the config as an object, because I need to set base_dn after authentication. username_0: So, to continue: is this config correct?
```php
public function __construct()
{
    // Construct new connection
    $this->options = Config::get('ldap_connexion_options');
    $this->connection = parent::__construct(array('p1' => $this->options));
    $this->config = new Configuration\DomainConfiguration($this->options);
    $this->provider = new Connections\Provider($this->config, $this->connection);
    $this->setDefaultProvider('p1');
}
```
username_1: Hi @username_0, If you're wanting the user to run LDAP operations instead of a user configured in your configuration object, then this is definitely the option you'd like to use. Passing in the third parameter performs a `ldap_bind()` to your LDAP server and **does not** perform a rebind to a user that you have configured inside the configuration object. Using this method also validates that the username and password given are not empty strings or `null` values by throwing exceptions: https://github.com/Adldap2/Adldap2/blob/master/src/Auth/Guard.php#L41 Using the `bind()` method alone would achieve the same effect, but it **does not** validate their username and password: https://github.com/Adldap2/Adldap2/blob/master/src/Auth/Guard.php#L73-L80 Status: Issue closed username_0: Hi @username_1, Thanks for your answer. I'll investigate what is best for me.
fabianfiorotto/quick-query
256342676
Title: MS SQL Domain Login support Question: username_0: MS SQL Server Windows 10 One Dark / Atom Dark 1.19.7 Logging in to MS SQL Server does not work when using "domain" authentication. Answers: username_1: Since SQL Server isn't free software is very hard for me to test these kinds of features. Could you help me with this? Please hardcode the domain in the constructor of the connection class on your local copy of the module and test if it connects to your server. https://github.com/username_1/quick-query-mssql/blob/3f7e06868bd374f702ba9fa61a31d5ea7cabd84f/lib/quick-query-mssql-connection.coffee#L82 ```coffeescipt constructor: (@info)-> @info.server = @info.host @info.database ?= "master" @info.domain = "your_domain_here" ``` If this works for you I'll figure out how to add the widget to fill the domain in the connection view. username_0: Sorry for the delayed response. That would be awesome to help get this working for MS SQL Server. I completely understand that MS SQL Server isn't free software and it becomes difficult to develop for that established connection. Actually, SSMS is free and I think there is a "free" / lite version that MS offers now, but the difficulty would still be this situation, where you have a "domain" account on the MS SQL Server. I think that would need Active Directory, which I don't think has a free version. Actually, connecting to MS SQL Server with a local ID that is created on SQL server works great...no issues, no complaints. Just as an FYI....so it seems specific to the domain accounts. I tried to hardcode my domain as you suggested. I left the "@emitter in there and tried to remove it and neither worked. Code Snippet below. ` constructor: (@info)-> @info.server = @info.host @info.database ?= "master" @info.domain = "my_domain" @emitter = new Emitter()` and ` constructor: (@info)-> @info.server = @info.host @info.database ?= "master" @info.domain = "my_domain"` username_1: Don't remove the emitter. I forgot to tell you that you need to restart atom each time you change the code. You can restart it pressing <kbd>ctrl</kbd>+<kbd>shift</kbd>+<kbd>f5</kbd> username_0: Thank you, that actually worked. I closed and reopened it each time, but that didn't seem to be enough. The ctrl + shift + f5 worked with the addition of the `@info.domain = "mydomain" ` ...and leaving in the emitter. However, as you know, because the domain is hardcoded, i can't add or create another connection that is not part of the domain. My case (which is annoying), I have a domain MS SQL Server that I use a domain account to connect to it, and I have another non-domain MS SQL Server (hosted by another company) that I have to connect to and I use a 'local' ID on that SQL server. So this works great, but its just obnoxious for me to go back and forth between each. Not your fault or anything, I am just venting. Thank you again. username_1: Hey! that's great! I'll add the widget to fill the domain son. Meanwhile, you can add a little hack to use it. ``` constructor: (@info)-> @info.server = @info.host @info.database ?= "master" @info.domain = "your_domain" if @info.host == "some_host" ``` username_0: Hey there, I really don't know CoffeeScript at all and haven't developed at all for Atom.io. I have done some JavaScript and jQuery development. With the code you gave me and some searches I was able to possible bring it to the next level....probably something similar to what you are going to do. 
` constructor: (@info)-> @info.server = @info.host @info.database ?= "master" @info.domain = if (@info.user.search /\\/)>-1 then @info.user.split("\\").slice(0,1).toString() else "" @info.user = @info.user.split("\\").slice(-1).toString() if @info.user.search /\\/ >= 0 @emitter = new Emitter()` username_1: I think you could achieve the same doing ``` constructor: (@info)-> @info.server = @info.host @info.database ?= "master" [@info.domain,@info.user] = @info.user.split(/\\/) if @info.user.search(/\\/) > -1 @emitter = new Emitter() ``` I already thought in doing something like that. I don't like the idea of adding that much options to the connect view, it could become confusing... but I don't know... username_0: I agree, too many options for that connection view could be too confusing. But based on this, we wouldn't be adding another form field for a user to enter, so it wouldn't be too confusing for user entry. Possibly just change the title of the user field from "user" to "domain*\user (*optional domain)" or something. For the code not to be too confusing....having 2 lines separated like what I have makes it less confusing, though I know my code is very verbose. What you have written, combining it into 2 lines I don't think is confusing at all. (I am slightly confused about the "IF" statement after the assignment in the same line, but that seems to be a caveat of the language that one would pick up). This is also the only place that the domain and user would be spliced together in general. username_0: got the update...i am not 100% if its working or not, I am having some general issues right now with connecting to MS SQL via domain. I will let you know, thank you! Status: Issue closed username_0: Corrected everything on my end for testing and it works great! Thank you for the update!
tleunen/eslint-import-resolver-babel-module
266555631
Title: npm update Question: username_0: Not sure I'm missing something but when I run `npm i --save eslint-import-resolver-babel-module` I get the version 4.0.0-beta.3 This version requires me to install peerDependency as per the warning `npm WARN [email protected] requires a peer of babel-plugin-module-resolver@>3.0.0-beta but none is installed. You must install peer dependencies yourself.` However running `npm i --save [email protected]` return an error ```bash npm ERR! code ETARGET npm ERR! notarget No matching version found for [email protected] npm ERR! notarget In most cases you or one of your dependencies are requesting npm ERR! notarget a package version that doesn't exist. ``` Am I missing something? Answers: username_1: I think it's a semver issue - the peerDependency should be `^3.0.0-beta.0` Though, even after installing `beta.0`, I'm getting `has incorrect peer dependency babel-core@^6.0.0` cause I have `^7.0.0-beta.2` but the constraint is set to `^6.0.0 || >7.0.0-alpha`... username_2: @username_3 the incorrect tag is causing some trouble for modules using automated dependency updating mechanisms. Would you mind correcting the `latest` label or is the v3 coming in the next days anyways? username_3: Yes, we need to fix #74 first, I'll investigate again tonight. We released officially the babel plugin, so an official release of the eslint plugin will come soon, once we fix the issue as I said. I didn't plan that one :( Ideally, both would have been updated at the same time username_4: Any further word on this? Status: Issue closed username_4: This shouldn't be closed until v4 is released...? "You can't simply use this package without locking its version" is an outstanding issue which seems to only be tracked here. username_3: Agreed. I plan to release v4 with a support for Babel 6 only, and a v5 beta for Babel 7. More info tomorrow. username_3: Not sure I'm missing something but when I run `npm i --save eslint-import-resolver-babel-module` I get the version 4.0.0-beta.3 This version requires me to install peerDependency as per the warning ```bash npm WARN [email protected] requires a peer of babel-plugin-module-resolver@>3.0.0-beta but none is installed. You must install peer dependencies yourself. ``` However running `npm i --save [email protected]` return an error ```bash npm ERR! code ETARGET npm ERR! notarget No matching version found for [email protected] npm ERR! notarget In most cases you or one of your dependencies are requesting npm ERR! notarget a package version that doesn't exist. ``` Am I missing something? Status: Issue closed username_3: v4 has been released with babel 6 only. v5.beta with babel 7 will be published very soon.
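Until the dist-tags were sorted out, the practical workaround was to pin an exact version in package.json rather than a semver range, so automated updates could not pull in a mismatched beta. For example (the version number shown is illustrative):
```json
{
  "devDependencies": {
    "eslint-import-resolver-babel-module": "4.0.0"
  }
}
```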
code-golf/code-golf
679943570
Title: Buggy C compiler Question: username_0: It is hard to be sure, but it appears that C solutions -- at least my best -- rely on a C compiler bug that allows writing `while(++v)` in place of `while(*++v)`, which for a correct compiler would mean something completely different. Maybe switch to a better compiler? Answers: username_1: The compiler isn't at fault here. See [clang](https://tio.run/##S9ZNzknMS///XzkzLzmnNCXVprgkJTNfL8OOKzOvRCE3MTNPA8RI1knOSCzS0irTrOYqz8jMSdXQ1i7TLCgCyqVpKKkWWCmoFsfkKemU6SgA1Vhz1f7//z@xKN0QRBiBCGMA), [gcc](https://tio.run/##S9ZNT07@/185My85pzQl1aa4JCUzXy/Djiszr0QhNzEzTwPESNZJzkgs0tIq06zmKs/IzEnV0NYu0ywoAsqlaSipFlgpqBbH5CnplOkoANVYc9X@//8/sSjdEEQYgQhjAA) and [tcc](https://tio.run/##S9YtSU7@/185My85pzQl1aa4JCUzXy/Djiszr0QhNzEzTwPESNZJzkgs0tIq06zmKs/IzEnV0NYu0ywoAsqlaSipFlgpqBbH5CnplOkoANVYc9X@//8/sSjdEEQYgQhjAA) (code.golf uses tcc). `++v` and `*++v` both increment v by the pointer size (8 on 64bit), the different is where the program stops. `*++v` stops when v points to a null (just after argv), but ++v keeps going past the end of argv, past the end of envp and tries to read invalid memory and raises a segfault. username_0: I understand what it is doing. The point is that when the program runs under code.golf, it does *not* segfault. Instead, it reports success. At first I thought this was because it encounters a null pointer before it finds one that points to unaddressable memory, but testing ruled that out. username_1: Error messages are hidden if the stdout is correct, could that be it? username_2: It does segfault. It's just that code.golf ignores that. Just like it ignores all output to stderr and exit codes. It was specifically updated to support this. Previously, it didn't read all of the output when a segfault occurred. username_0: I have seen it report a segfault. You seem to be saying that a segfault is ignored so long as all the correct output occurs first, but reported otherwise? If that is chosen behavior, then by definition it's not a bug. But maybe the docs should say so? Status: Issue closed
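For anyone skimming, a minimal sketch of the difference discussed above: `*++v` stops on the NULL pointer that terminates argv, while `++v` only tests the pointer itself and keeps walking until it reads unmapped memory:
```c
#include <stdio.h>

int main(int argc, char **argv) {
    (void)argc;
    while (*++argv)  /* stops at the NULL entry that terminates argv */
        puts(*argv);
    /* `while (++argv)` would instead walk past argv and envp into
       invalid memory and eventually segfault. */
    return 0;
}
```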
davidhalter/jedi
39570929
Title: linter crashes on PEP 3132 Extended Iterable Unpacking Question: username_0: When linting this code: first, *rest = stuff use(first) I get a scary traceback: Traceback (most recent call last): File "/home/petr/programs/Python-3.4.0/__installed__/lib/python3.4/runpy.py", line 171, in _run_module_as_main "__main__", mod_spec) File "/home/petr/programs/Python-3.4.0/__installed__/lib/python3.4/runpy.py", line 86, in _run_code exec(code, run_globals) File "/home/petr/dev/jedi/jedi/__main__.py", line 36, in <module> for error in jedi.Script(path=path)._analysis(): File "/home/petr/dev/jedi/jedi/api/__init__.py", line 602, in _analysis self._evaluator.eval_statement(stmt) File "/home/petr/dev/jedi/jedi/evaluate/cache.py", line 39, in wrapper rv = function(obj, *args, **kwargs) File "/home/petr/dev/jedi/jedi/evaluate/recursion.py", line 24, in run result = func(evaluator, stmt, *args, **kwargs) File "/home/petr/dev/jedi/jedi/debug.py", line 51, in wrapper result = func(*args, **kwargs) File "/home/petr/dev/jedi/jedi/evaluate/__init__.py", line 132, in eval_statement result = self.eval_expression_list(expression_list) File "/home/petr/dev/jedi/jedi/evaluate/__init__.py", line 170, in eval_expression_list return self.process_precedence_element(p) or [] File "/home/petr/dev/jedi/jedi/evaluate/__init__.py", line 180, in process_precedence_element return self.eval_statement_element(el) File "/home/petr/dev/jedi/jedi/evaluate/__init__.py", line 215, in eval_statement_element return self.eval_call(element) File "/home/petr/dev/jedi/jedi/evaluate/__init__.py", line 226, in eval_call return self.eval_call_path(path, par, s.start_pos) File "/home/petr/dev/jedi/jedi/evaluate/__init__.py", line 240, in eval_call_path search_global=True) File "/home/petr/dev/jedi/jedi/evaluate/__init__.py", line 113, in find_types return f.find(scopes, resolve_decorator, search_global) File "/home/petr/dev/jedi/jedi/debug.py", line 51, in wrapper result = func(*args, **kwargs) File "/home/petr/dev/jedi/jedi/evaluate/finder.py", line 47, in find types = self._names_to_types(names, resolve_decorator) File "/home/petr/dev/jedi/jedi/evaluate/finder.py", line 222, in _names_to_types types += self._remove_statements(typ, name) File "/home/petr/dev/jedi/jedi/evaluate/finder.py", line 260, in _remove_statements types += evaluator.eval_statement(stmt, seek_name=unicode(self.name_str)) File "/home/petr/dev/jedi/jedi/evaluate/cache.py", line 39, in wrapper rv = function(obj, *args, **kwargs) File "/home/petr/dev/jedi/jedi/evaluate/recursion.py", line 24, in run result = func(evaluator, stmt, *args, **kwargs) File "/home/petr/dev/jedi/jedi/debug.py", line 51, in wrapper result = func(*args, **kwargs) File "/home/petr/dev/jedi/jedi/evaluate/__init__.py", line 158, in eval_statement new_result += finder.find_assignments(ass_expression_list[0], result, seek_name) File "/home/petr/dev/jedi/jedi/evaluate/finder.py", line 563, in find_assignments return _assign_tuples(lhs, results, seek_name) File "/home/petr/dev/jedi/jedi/evaluate/finder.py", line 544, in _assign_tuples result += find_assignments(command, r, seek_name) File "/home/petr/dev/jedi/jedi/evaluate/finder.py", line 564, in find_assignments elif unicode(lhs.name.names[-1]) == seek_name: AttributeError: 'Operator' object has no attribute 'name' Answers: username_1: Doesn't crash anymore. But note that I'm currently not actively working on the linter (but this issue was fixed). Status: Issue closed
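For context, the construct being linted is standard Python 3 extended iterable unpacking; a tiny example of what the parser has to accept:
```python
stuff = ["a", "b", "c", "d"]
first, *rest = stuff  # PEP 3132: the starred target collects the remainder
assert first == "a"
assert rest == ["b", "c", "d"]
```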
denoland/deno
984121864
Title: Enable "exactOptionalPropertyTypes" for Typescript by default Question: username_0: In TypeScript 4.4, the option "exactOptionalPropertyTypes" was introduced https://devblogs.microsoft.com/typescript/announcing-typescript-4-4/#exact-optional-property-types. Now that https://github.com/denoland/deno/pull/11678 is merged, I petition that we add this compiler flag. It ensures that optional field values ```ts interface Foo { bar: string baz?: string } ``` cannot be assigned as `undefined`. E.g. ```ts // with exactOptionalPropertyTypes enabled this is invalid const foo1: { bar: '', baz: undefined } ``` This is a good change for catching subtle errors with Object.key, object spreading, etc. There is likely code that is broken with this change though. I would say if this feels like a valuable flag to add, the sooner its added the less code that will be broken. I can of course still be disabled with a tsconfig.json for projects that rely on this behavior, but I imagine that will be a select fe Answers: username_0: Fair enough. It sounds like theres already an established answer for "pedantic" flags. Should I close this issue? username_1: Maybe someone else has an argument otherwise. No problem keeping it open for a while. username_2: This goes beyond just this particular option, but it would be nice if Deno supported applying options to only "own" code. i.e. I tend to prefer using `noUncheckedIndexedAccess` as when I've added it to a number of my codebases, the majority of errors produced were actual unidentified bugs. But enabling these sort've options on library code tends to explode as these options aren't in super high usage, so libraries often don't consider them. It would be a significant change, but if Deno did actually go for the **most-strict** options on all of these, then library authors would be encouraged to write code that is as correct as possible. Users wanting a less restrictive configuration wouldn't have to worry about breaking libraries as code working under these strict flags I'm pretty sure is always meant to compatible with the option being off.
sebastianliu/etcd-adapter
379748297
Title: Add Travis CI and code coverage badge Question: username_0: Casbin's adapters traditionally use Travis CI and coveralls.io. Can you support them? Thanks. Status: Issue closed Answers: username_1: Thank you for your suggestion. I have added Travis CI and code coverage for this new adapter. Other suggestions are welcome.
gimli-rs/object
560109743
Title: Improve error handling Question: username_0: The `Object` traits currently use `Option` in places where they really should use `Result`. Answers: username_1: Further, the things that currently use Result use it with a bad choice of E type: - &amp;'static str: https://docs.rs/object/0.17.0/object/read/struct.File.html#method.parse - String: https://docs.rs/object/0.17.0/object/write/struct.Object.html#method.write - (): https://docs.rs/object/0.17.0/object/write/struct.Object.html#method.symbol_section_and_offset This causes problems downstream because there is no Error impl for any of these types. See https://github.com/probe-rs/probe-rs/pull/143#discussion_r376802742 for one example where this is a problem. username_0: What's bad about using `map_err` to convert to an error type you prefer? username_1: @username_0 it makes handling `object`'s errors noisy and annoying, whether the downstream crate is using a type erased error type *or* a custom one. ```rust // type erased error type use anyhow::Result; pub fn demo() -> Result<()> { let data = std::fs::read("...")?; // okay let value: T = serde_json::from_str(...)?; // okay let object = object::File::parse(data)?; // DOESN'T WORK } ``` ```rust // custom error type use thiserror::Error; #[derive(Error, Debug)] pub enum Error { Io(#[from] std::io::Error), Json(#[from] serde_json::Error), } pub fn demo() -> Result<(), Error> { let data = std::fs::read("...")?; // okay let value: T = serde_json::from_str(...)?; // okay let object = object::File::parse(data)?; // DOESN'T WORK // (doesn't make sense to have a From<&'static str> impl) } ``` username_0: @username_1 So wrapping the `String` or `&'static str` in a newtype that implements `Error` would be a sufficient fix? username_1: Yes that would work. Status: Issue closed
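A sketch of the direction agreed above: wrapping the message in a newtype that implements `std::error::Error`, so downstream `?` conversion works. Names here are illustrative, not the crate's final API:
```rust
use std::fmt;

/// Illustrative newtype: carries the old `&'static str` message but
/// satisfies `Error`, so `anyhow`/`thiserror` users can apply `?` to it.
#[derive(Debug)]
pub struct Error(&'static str);

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "object error: {}", self.0)
    }
}

impl std::error::Error for Error {}
```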
zuuuiko/docnote
165985550
Title: ะฏ, ัะบ ะปั–ะบะฐั€, ั…ะพั‡ัƒ ะผะฐั‚ะธ ะผะพะถะปะธะฒั–ัั‚ัŒ ะทะณะตะฝะตั€ัƒะฒะฐั‚ะธ ะดะพะบัƒะผะตะฝั‚ ะ”ะพะบัƒะผะตะฝั‚ ะฟั€ะธะบั€ะตะฟะปะตะฝะธะต Question: username_0: [ะŸั€ะธะบั€ะตะฟะปะตะฝะธะต.docx](https://github.com/username_1/docnote/files/368034/default.docx) Answers: username_0: [ะŸั€ะธะบั€ะตะฟะปะตะฝะธะต.docx](https://github.com/username_1/docnote/files/368034/default.docx) username_0: ะŸั€ะธัั‹ะปะฐัŽ ะฒะฐะผ ะ”ะพะบัƒะผะตะฝั‚ ะฟั€ะธะบั€ะตะฟะปะตะฝะธะต. ะ’ะพั‚ ั‚ะฐะบะพะน ะถัƒั€ะฝะฐะป ะผั‹ ะดะตะปะฐะตะผ 1 ั€ะฐะท ะฒ ะณะพะดัƒ. ะ’ัะต ะฟะตั€ะตะฟะธัั‹ะฒะฐะตะผ. ะะตะพะฑั…ะพะดะธะผะพ ัะดะตะปะฐั‚ัŒ ั‚ะฐะบ, ั‡ั‚ะพะฑั‹ ั ะฝะฐะถะฐะป ะบะฝะพะฟะบัƒ ัะฒะตัั‚ะธ ะฟั€ะธะบั€ะตะฟะปะตะฝะธะต ะธ ะพะฝะพ ะผะฝะต ะฒั‹ะดะฐะปะพ ะฒะตััŒ ัั‚ะพั‚ ะถัƒั€ะฝะฐะป. ะฏ ะตะณะพ ะฝะฐ ะบะพะผะฟ ัะพั…ั€ะฐะฝะธะป. ะ ะฐัะฟะตั‡ะฐั‚ะฐะป. ะ˜ ะทะฐะฒั‹ะป. ะ ะฒัะต ั‚ะต, ะบั‚ะพ ะฒั‹ะฑั‹ะป. ะฏ ะฒ ะบะฐั€ั‚ะพั‡ะบะต ะฟะฐั†ะธะตะฝั‚ะฐ ะพั‚ะผะตั‚ะธะป, ั‡ั‚ะพ ะพะฝ ะฒั‹ะฑั‹ะป. ะ˜ ะพะฝ ัƒะดะฐะปัะตั‚ัั ะฒ ะฐั€ั…ะธะฒ ะฟะฐั†ะธะตะฝั‚ะพะฒ. ะ˜ ะฝะต ะพั‚ัะฒะตั‡ะธะฒะฐะตั‚ ะผะฝะต โ€“ ั‡ั‚ะพ ะพะฝ ัƒ ะผะตะฝั ะตัั‚ัŒ. ะ ะดะฐั‚ัƒ ะบะพะณะดะฐ ะพะฝ ะฒั‹ะฑั‹ะป โ€“ ะผั‹ ะฒั€ัƒั‡ะฝัƒัŽ ะฟั€ะพะฟะธัั‹ะฒะฐะตะผ ะฒ ัั‚ะพะผ ะถัƒั€ะฝะฐะปะต. ะ“ะปะฐะฒะฝะพะต ั‡ั‚ะพะฑั‹ ะผั‹ ะ—ะฝะฐะปะธ ะบัƒะดะฐ ะพะฝ ะฒั‹ะฑั‹ะป. ะ ะผั‹ ัั‚ะพ ะทะฝะฐะตะผ ะธ ะฟะธัˆะตะผ ะฒ ะตะณะพ ะบะฐั€ั‚ะพั‡ะบะต. ะŸะพะบะฐ ะผั‹ ะฒัะต ัั‚ะพ ะฝะฐะฑะธั€ะฐะตะผ ะฒั€ัƒั‡ะฝัƒัŽ. ะ˜ ะบั‚ะพ ะฒั‹ะฑั‹ะป ะฒั‹ั‡ะตั€ะบะธะฒะฐะตะผ. ะŸะพั„ะฐะผะธะปัŒะฝะพ ะฟะพ ะถัƒั€ะฝะฐะปัƒ ะฟั€ะพั…ั€ะพะดะธะผัั. ะšั‚ะพ ะตัั‚ัŒ, ะฐ ะบั‚ะพ ะฒั‹ะฑั‹ะป. Status: Issue closed
accounts-js/accounts
387304563
Title: Need to change the current graphql User resolvers Question: username_0: On the pull request #513 this new resolver has been introduced: https://github.com/accounts-js/accounts/commit/e42e6c7ee12c2fe9b828c725c10eac1a49eb6343#diff-7c84961d16582309b79b1a9c44c7f1e3R2 We need to change the resolver to not use _id which will only work with mongo and fail with other databases<issue_closed> Status: Issue closed
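A sketch of the kind of change being asked for, assuming a resolver that currently reads the Mongo-specific `_id` directly (names are hypothetical; the real change belongs in the package's resolvers):
```js
// Hypothetical User resolver: prefer a database-agnostic id and only
// fall back to `_id`, which exists solely on MongoDB documents.
const User = {
  id: (user) => user.id ?? user._id,
};
```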
icsharpcode/ILSpy
756490
Title: Tabbed UI Question: username_0: The current version of ILSpy is able to display only one decompiled item at a time. Usually I need to see more methods or classes to find out how they work together, so I have added support for tabbed interface (based on AvalonDock). It's available in master branch of my username_0/ILSpy fork (username_0/ILSpy@a58dbcf108be591e6390) if you want to try it out. It's not a complete solution, rather a working example. What do you think about this idea? Should ILSpy stay SDI or could it be enhanced this way? Answers: username_1: Please add tabs, this issue has been open since 2011 when tabs where not widely implemented, they are now, and people are used to tabs, please provide the option to either have a new window or a new tab as at present you cannot open multiple tabs at the same time. The reason the .NET Reflector UI has problems with tabs is because you don't get the option to open in a new tab by middle-clicking the mouse. In order to provide tabs you should have a context menu to open in a new tab, middle click on function references and the option to turn tabs on or off and then there won't be any issues. username_2: +1 tabs would make it so much more useful username_3: +1 This is a very big thing. In general, some UI features could be incredibly helpful. Tabs, Favorite/bookmarks, Projects/Groups (to throw specific parts of assemblies together, like making a link to grouped buggy code). I know some of this is on the roadmap, but as someone who uses it almost every day to make obfuscation checks, IL reading, and version differentials, it would be incredibly helpful to have these features. username_4: @username_5 @Radon222 Is there any progress on this? Honestly, I'd settle for "bookmarks" - I finally found one method I need, in a list of 1000 methods, but now I need to check another method, and coming back here is going to be a pain :P Tabs would resolve this, as would bookmarks if a UI overhaul is imminent. username_5: Implementing bookmarks is on my list. However, I am currently very very busy, so no idea, when or if I ever will get around to it. :( Status: Issue closed username_5: superseded by #1724
everypolitician/everypolitician
110659040
Title: Summarise changes in everypolitician-data PR comments Question: username_0: When a pull request is opened on everypolitician-data it would be useful to get a summary of what the change affects. For example if the change renames a bunch of parties it could include a table which showed the previous and current names of the parties. Same for people, countries etc. This could be triggered by the viewer-sinatra pull request, which already includes a snapshot of before and after in the diff of the `DATASOURCE` file change. Once it's figured out what the changes are it could comment on the original everypolitician-data pull request with a summary. Answers: username_0: This now exists as [everypolitician-pull_request](https://github.com/everypolitician/everypolitician-pull_request), which runs as part of the everypolitician-data Travis build and posts a comment summarizing the pull request as part of the build. Status: Issue closed
ionic-team/capacitor
450985700
Title: Some plugins become undefined due to ordering of cordova plugin_list Question: username_0: **Description of the problem:** Like the title states, the new ordering of plugin_list causes some plugins to become undefined. If there is a merge sharing the same Cordova namespace as a clobber, then the merge gets clobbered. launchnavigator is one such plugin that has a merge (for platform specific JS code) with a clobber (its `common.js` code). Affected platform - [x] Android - [x] iOS - [ ] electron - [ ] web OS of the development machine - [x] Windows - [ ] macOS - [ ] linux **Capacitor version:** 1.0.0, but bug exists since beta-25 **Steps to reproduce:** ionic start foobar blank --capacitor ionic build npm i uk.co.workingedge.phonegap.plugin.launchnavigator npx cap add android npx cap open android Project sync, build and run. **Issue:** launchnavigator (or whatever plugin that relies on merges and clobbers) is undefined I have a PR to fix this. Prior to #1524, the merge into the launchnavigator namespace happens after the clobber. After #1524, the merge becomes clobbered instead because `clobberKey = ""` comes alphabetically before `clobberKey = "launchnavigator"`, and launchnavigator is thus missing everything in `plugins/uk.co.workingedge.phonegap.plugin.launchnavigator/www/android/launchnavigator.js`. Answers: username_1: Looks like the Fixes: didn't work. Closing as #1616 is merged. Status: Issue closed
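A rough sketch of the ordering constraint described above (entry shapes are assumed): full-module clobbers have to be installed before merges, so that a platform-specific merge lands on top of the clobbered namespace instead of being overwritten by it:
```js
// Hypothetical plugin_list entries: rank clobbers ahead of merges.
const ordered = [...moduleList].sort((a, b) => {
  const rank = (m) => (m.clobbers && m.clobbers.length ? 0 : 1);
  return rank(a) - rank(b);
});
ordered.forEach((m) => installModule(m)); // installModule is assumed
```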
nanoframework/Home
349083916
Title: VS crashes when starting debug Question: username_0: ## Details about Problem nanoFramework area (Visual Studio extension) VS version (if appropriate): 15.7.6 ## Detailed repro steps so we can see the same problem 1. Open a solution with several projects, some of which reference others 2. Hit F5 or start debug 3. The deployment occurs and sometimes the output from the device starts showing in the debug output window 4. VS crashes without any other error message Answers: username_0: Looking at Windows Event Viewer, there is a crash report there, but without information relevant to finding the root cause. Status: Issue closed
Lidemy/homeworks-3rd
487755463
Title: [Week13] Question: username_0: https://github.com/Lidemy/mentor-program-3rd-username_0/pull/26 Answers: username_0: Hi Huli~ right now the comment board produces an error message in Google Chrome: ```Unchecked runtime.lastError: Could not establish connection. Receiving end does not exist.``` ![image](https://user-images.githubusercontent.com/9928579/64062723-ea166e80-cc1c-11e9-83df-42691267dbd2.png) I also saw related questions on [stackoverflow](https://stackoverflow.com/questions/54619817/how-to-fix-unchecked-runtime-lasterror-could-not-establish-connection-receivi), but I don't know whether there's a fix, or should I just decisively move to Firefox? (? XDD username_1: It doesn't show up when I open it on my machine, so I can't reproduce it. Status: Issue closed username_2: Judging from Stack Overflow, it seems to be caused by some Chrome extension, and the fix should simply be to disable that extension. Up to you whether you want to slowly hunt down which one is the culprit (?)
fox-one/bugs-hunter
576061876
Title: ๆฐขๅฎšๆŠ•ๆทปๅŠ ๆ•ฐๆฎๅŠŸ่ƒฝ๏ผŒๅฆ‚ๆžœๅฏไปฅ้€š่ฟ‡่ดญไนฐๅคšๅฐ‘boxๅ’Œไปทๆ ผๆŽจ็ฎ—ๅ‡บ่Šฑ่ดนๅคšๅฐ‘USDTๅฐฑๆ›ดๅฅฝไบ† Question: username_0: **ๆ่ฟฐ้”™่ฏฏ** ๆฐขๅฎšๆŠ•ๆทปๅŠ ๆ•ฐๆฎๅŠŸ่ƒฝ๏ผŒๅฆ‚ๆžœๅฏไปฅ้€š่ฟ‡่ดญไนฐๅคšๅฐ‘boxๅ’Œไปทๆ ผๆŽจ็ฎ—ๅ‡บ่Šฑ่ดนๅคšๅฐ‘USDTๅฐฑๆ›ดๅฅฝไบ†ใ€‚ **้‡็Žฐ** ้‡็Žฐ่กŒไธบ็š„ๆญฅ้ชค๏ผš 1.ๆ‰“ๅผ€fox.oneไธญ็š„ๆฐขๅฎšๆŠ• 2.็‚นๅ‡ปๅผ€ๅง‹ 3.็‚นๅ‡ปๅ…ทไฝ“็š„ๅฎšๆŠ•่ฎกๅˆ’ 4.ๅœจๅކๅฒ่ฎฐๅฝ•็‚นๅ‡ปๆทปๅŠ /ไฟฎๆ”น 5.็‚นๅ‡ปๆทปๅŠ ่ฎฐๅฝ• 6.ๅกซๅ†™่ดญไนฐไบ†ๅคšๅฐ‘boxๅ’Œๅ•ไปท๏ผŒๆ€ปไปทๆ ผไธ่‡ชๅŠจ็”Ÿๆˆใ€‚ **้ข„ๆœŸ่กŒไธบ** ๅฏนๆ‚จๆœŸๆœ›ๅ‘็”Ÿ็š„ไบ‹ๆƒ…็š„็ฎ€ๆดๆ˜Žไบ†็š„ๆ่ฟฐใ€‚ **ๆˆชๅ›พ** ![Screenshot_20200305-154709](https://user-images.githubusercontent.com/29365811/75959998-432bcb80-5efa-11ea-99ba-075e29366104.jpg) ็Žฏๅขƒไฟกๆฏ๏ผˆ่ฏทๅกซๅ†™ไปฅไธ‹ไฟกๆฏ๏ผ‰๏ผš ๆ“ไฝœ็ณป็ปŸ๏ผšๅฎ‰ๅ“ ็‰ˆๆœฌ8.1.0 ่ฎพๅค‡๏ผš่ฃ่€€8c ๆ“ไฝœ็ณป็ปŸ๏ผšEMUI ็‰ˆๆœฌ 8.2.0 ๆ‰‹ๆœบ็ณป็ปŸ่ฏญ่จ€ [ไธญๆ–‡] Fox.ONE ็‰ˆๆœฌ 2.11.0๏ผˆ215๏ผ‰ Fox.ONE ้‚€่ฏท็ YWXRMM Answers: username_1: ๆ˜ฏๅฏไปฅๅšๅˆฐ็š„๏ผŒๅทฒ็กฎ่ฎค๏ผŒๅฅ–ๅŠฑๅฐ†ไบŽ3ไธชๅทฅไฝœๆ—ฅๅ†…ๅ‘ๆ”พ๏ผ
cloudfour/cloudfour.com-patterns
1185494925
Title: Component code doc example should show "embed"/"include" examples Question: username_0: There seems to have been a regression. Before code example docs looked like this (image pulled from #1607): ![image](https://user-images.githubusercontent.com/459757/160716631-74fd23eb-d595-4318-ac1a-9a6436613761.png) Instead, they now look like this: <img width="1277" alt="Screen Shot 2022-03-29 at 3 21 00 PM" src="https://user-images.githubusercontent.com/459757/160716658-59e8bde6-a1af-467d-acc6-2323e55eb739.png">
Azure/azure-sdk-tools
808081278
Title: ApiView: Include the package's dependencies Question: username_0: I always have to ask "what other packages are you depending on" during reviews, and it would be nice if we could add that information automatically from the NuGet/Maven/etc. package. We're very strict about 3rd party dependencies and don't want anything sneaking in accidentally. Answers: username_0: This isn't just for .NET - I'd like every language to include dependencies. I think this should be a P0. username_1: I've added support for C#, but as far as all of the other languages go it is still an open issue.
gravitational/teleport
1019079639
Title: libfido2 support for tsh Question: username_0: ### What Add libfido2 support for `tsh`. libfido2 implements both CTAP1 and CTAP2 protocols, allowing us to better leverage the server-side Webauthn implementation. It seems to be the outstanding (only?) client-side implementation for CTAP2. It supports Linux, macOS and Windows, among others. This doesn't include Touch ID support - that is a different can of worms. ### How libfido2, unfortunately, is only available as a native library, which causes a few complications for `tsh`. Go bindings are available via the [github.com/keys-pub/go-libfido2](https://github.com/keys-pub/go-libfido2) package. A draft implementation would work as follows: - If libfido2 is available in the system, we use it and open support for CTAP2/Windows - If libfido2 is not available we fallback to the current CTAP1 implementation, based on [github.com/flynn/u2f](https://github.com/flynn/u2f) Also note that libfido2 has its own set of dependencies: libcbor, OpenSSL 1.1+, zlib and libudev (Linux only). Our audience is fairly technical, in particular for `tsh`, so maybe installing a few packages is not much of an issue, but that remains as a discussion point. - https://github.com/Yubico/libfido2 - https://github.com/keys-pub/go-libfido2 ### Why This gives us: - CTAP2 support, allowing us to use more modern authenticator APIs - Windows CTAP1/2 support, removing the present limitations of `tsh` in the platform ### Workaround We don't necessarily need libfido2, what we actually want is CTAP2 support for `tsh` (it's just that the list of options seem dim). Some research in this area might do us good, there may be something I missed in my initial combing for libraries. CTAP1 works perfectly fine for the moment - I'm not aware of any CTAP2-exclusive authenticators. It does limit our options in terms of Webauthn features, though, and might become a limitation in the future. Answers: username_0: Related work: https://github.com/gravitational/teleport/issues/9160 username_0: I've now landed enough PRs in master that I think we can call this done. I expect that there will be refinements to be done when we get people trying it out, but the overall implementation is there. :tada: Status: Issue closed
MicrosoftDocs/azure-docs
418714497
Title: Power BI desktop does not work well with this method Question: username_0: The ability to load balance and specifically retain a permanent server address is sorely missed in AAS, so I was really happy to find this workaround. However, if you attempt to open a pbix from the Windows File Explorer that uses such a connection, you will be presented with an error "Authentication failed: User ID and Password are required when user interface is not available." If you start PBI desktop first and open the file from there, you will be prompted for sign-on to Azure as expected, but for some reason - twice - before you can access the report. Also worth noting for Win 7 users is that the default setting of an Azure website is TLS 1.2, which is not supported by Win 7 by default and will cause PBI desktop to throw exceptions and crash. As a final thought, why does the link:// method in PBI not support a fully qualified server/model string? When passing a link from the website there is no way to point the client to the correct database, only the server. --- #### Document information ⚠ *Do not edit this section. It is required for the docs.microsoft.com ➟ GitHub issue link.* * ID: 0db9d14e-64b9-5f7c-b861-824be6344db8 * Version Independent ID: 50f012fe-4eb0-6cde-f737-87ddc82a9617 * Content: [Azure Analysis Services alias server names](https://docs.microsoft.com/nb-no/azure/analysis-services/analysis-services-server-alias#feedback) * Content Source: [articles/analysis-services/analysis-services-server-alias.md](https://github.com/Microsoft/azure-docs/blob/master/articles/analysis-services/analysis-services-server-alias.md) * Service: **azure-analysis-services** * GitHub Login: @Minewiskan * Microsoft Alias: **owend** Answers: username_1: @username_0 Thanks for the question! We are investigating and will update you soon. username_1: @username_0 Could you ask your question in the forums? The audience there would be better suited to answer your question. [https://social.msdn.microsoft.com/Forums/en-US/home?forum=AzureAnalysisServices](https://social.msdn.microsoft.com/Forums/en-US/home?forum=AzureAnalysisServices) Status: Issue closed username_1: @username_0 We will now proceed to close this thread. If there are further questions regarding this matter, please comment and we will gladly continue the discussion.
vercel/styled-jsx
1137701205
Title: Using `ncc` to build `dist/index` causes `__dirname` to end up in "browser bundle" Question: username_0: #### Do you want to request a _feature_ or report a _bug_? A bug. #### What is the current behavior? `styled-jsx/dist/index/index.js` includes reference to `__dirname`. As far as I can tell it's there because `ncc` is used to compile `dist/index` and `ncc` [is designed to support Node.js runtime](https://github.com/vercel/ncc/issues/836#issuecomment-1002545817). ```js /************************************************************************/ /******/ /* webpack/runtime/compat */ /******/ /******/ if (typeof __nccwpck_require__ !== 'undefined') __nccwpck_require__.ab = __dirname + "/"; /******/ /************************************************************************/ ``` Maybe it's expected that every JS bundler removes `__dirname` while bundling for the browser? #### Reproduce steps Download https://registry.npmjs.org/styled-jsx/-/styled-jsx-5.0.0.tgz and open `package/dist/index/index.js`. Search for `__dirname` (it's in the line 733). I can also provide a full project a bundler configured etc., but since it has a couple of moving pieces (ESBuild, Babel) I decided to leave it off. Let me know if it would help! #### What is the expected behavior? Bundle to be used in the browser doesn't use `__dirname` since it's not available there? #### Environment (include versions) - Version of styled-jsx (or next.js if it's being used): 5.0.0, no `next.js` - Browser: n/a - OS: n/a (macOS) #### Did this work in previous versions? Until https://github.com/vercel/styled-jsx/pull/770 `index.js` was being built with `babel`. Answers: username_1: Hi @username_0 thanks for noticing this, I'm curious how are you using styled-jsx in your project. Are you playing it without webpack so that it causes an error? username_0: Ah, that's a story on its own! ๐Ÿ˜ƒ At [Glow](https://glow.app) we're bundling a React project with esbuild while using `styled-jsx` through a custom esbuild+babel plugin (similar to [`esbuild-babel-plugin`](https://github.com/nativew/esbuild-plugin-babel)). I decided to support `styled-jsx` (despite this custom builder setup) since we have a lot of reusable components already written with `styled-jsx` in other, regular Next.js projects. I actually succeeded to configure everything (I slapped another esbuild plugin on top of everything which replaces `__dirname` with paths, similar to [this one](https://github.com/evanw/esbuild/issues/859#issuecomment-829154955)), but in the end wanted to post this issue in case: - this has some side effects I don't know about which may be a reason to look into removing this in `styled-jsx` (instead of relying on us-consumers to deal with it) - someone else encounters a similar problem in the future (for reference). 
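For other no-webpack setups hitting this, one common workaround until the reference is removed upstream is to substitute `__dirname` at build time; a minimal esbuild sketch (paths are illustrative):
```js
// build.js: shim the Node-only global before it reaches the browser.
// The compat line in the bundle only computes `__dirname + "/"`,
// so a placeholder path is safe here.
require('esbuild').build({
  entryPoints: ['src/index.jsx'],
  bundle: true,
  outfile: 'dist/bundle.js',
  define: { __dirname: '"/"' },
}).catch(() => process.exit(1));
```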
username_2: @username_1 what this error looks like for us with Webpack 4 running in a Chrome extension is ``` index.js?7164:733 Uncaught (in promise) ReferenceError: __dirname is not defined at eval (index.js?7164:733:1) at Object.eval (index.js?7164:758:11) at eval (index.js:760:30) at Object../node_modules/styled-jsx/dist/index/index.js (chunkname.ec7e4660aa5bb1463783.js:20548:1) at __webpack_require__ (app.js:85:30) at eval (style.js?317d:1:18) at Object../node_modules/styled-jsx/style.js (chunkName.ec7e4660aa5bb1463783.js:20555:1) at __webpack_require__ (app.js:85:30) at eval (textField.tsx:3:74) at Module../pathToComponent.tsx (chunkname2.ec7073e772c145515caa.js:1971:1) ``` username_2: After bumping webpack@4 to [email protected], this issue is no longer present.
CityOfZion/NeoLink
282703078
Title: Implement config pane Question: username_0: To be done in React in the development branch Answers: username_1: Is there a design which describes what is required for this? username_0: no there's not. I'm added private net support currently which implements something here. A start at least. username_0: Here's what I have currently: <img width="310" alt="screen shot 2018-01-15 at 9 26 15 am" src="https://user-images.githubusercontent.com/327999/34946791-359c9f98-f9d6-11e7-896c-f15cc98d51e0.png"> <img width="313" alt="screen shot 2018-01-15 at 9 26 09 am" src="https://user-images.githubusercontent.com/327999/34946792-35b69bdc-f9d6-11e7-82fa-ac71797e7f1c.png"> username_0: This has been merged ... for now I think this is what we want in the config pane. Closing. Status: Issue closed
xabre/xamarin-forms-tab-badge
397425808
Title: [iOS Bug] IndexOutOfRange when setting the Text property Question: username_0: My implementation is like so: ``` var tab = new AccountPage(); var navigation = new NavigationPage(tab) { Title = "Profil", Icon = "Profile", }; tab.SetBinding(TabBadge.BadgeTextProperty, nameof(BottomMenuViewModel.Count)); tab.SetBinding(TabBadge.BadgeColorProperty, nameof(BottomMenuViewModel.Color)); tab.SetBinding(TabBadge.BadgeTextColorProperty, nameof(BottomMenuViewModel.TextColor)); tab.BindingContext = viewModel; Children.Add(navigation); ``` When changing the BadgeTextProperty I got a IndexOutOfRange on iOS Only. The StackTrace take me to the Plugin.Badge.iOS.BadgedTabbedPageRenderer.OnTabbedPagePropertyChanged Answers: username_0: I think is about the CheckValidTabIndex function in [the iOS Renderer](https://github.com/username_1/xamarin-forms-tab-badge/blob/master/Source/Plugin.Badge.iOS/BadgedTabbedPageRenderer.cs) The function is searching for the IndexOf(tab) and need to be the IndexOf(navigation) regarding my implementation. #58 username_0: For a workaround we could do : ``` public bool CheckValidTabIndex(Page page, out int tabIndex) { tabIndex = Tabbed.Children.IndexOf(page); if(tabIndex == -1 && page.Parent != null) { tabIndex = Tabbed.Children.IndexOf(page.Parent); } return tabIndex < TabBar.Items.Length && tabIndex > 0;; } ``` username_1: PR #63 was merged and released with v2.1.1 Status: Issue closed
rsdn/CodeJam
1027360749
Title: QueryableExtensions for ranges generates wrong Expression. Question: username_0: This line https://github.com/rsdn/CodeJam/blob/master/CodeJam.Main/Collections/QueryableExtensions.Ranges.cs#L192 generates `And` but should be `AndAlso`. It should be not not bit operation but predicate. Answers: username_1: Can you think about failing test case with the current implementation ? Thanks. https://github.com/rsdn/CodeJam/blob/master/CodeJam.Main.Tests/Collections/QueryableExtensionsTests.cs username_0: Well in bit operations it works, but when generated SQL, it creates also working but ineffective query. Status: Issue closed username_0: No time, otherwise I'll just propose PR. I've seen this bug accidentally when analysed generated expression tree.
alexander-akhmetov/python-telegram
854376801
Title: queue.Full is raised after a while Question: username_0: After running for a while, queue.Full is raised, leaving the client in a state where it won't process any more updates.
Answers: username_1: Do you have any other errors in the logs? The default implementation uses one single-threaded worker to process the requests, and the client doesn't restart in case of errors.
username_0: Yes, might that be the problem? The thread with the handler crashes, so the program stops handling updates, resulting in queue.Full?
Status: Issue closed
username_0: Alright, handling the error properly seems to have solved the issue. Thank you!
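The resolution above, catching exceptions inside the handler so the single worker thread survives and keeps draining the queue, can be sketched like this; the handler body and function names are illustrative, not taken from this thread or the library:
```python
import logging

logger = logging.getLogger(__name__)

def update_handler(update):
    # Wrap the real work in try/except so an exception on one update
    # cannot kill the worker thread that drains the update queue.
    try:
        process_update(update)  # hypothetical application logic
    except Exception:
        logger.exception("update handler failed; continuing with next update")

def process_update(update):
    ...  # application-specific handling goes here
```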
klen/graphite-beacon
155147639
Title: Using multiple config files Question: username_0: How does one use the multiple configs? The version I have is graphite-beacon==0.25.4.
The main config file in /etc/graphite-beacon/graphite-beacon.json:
```
{
    "graphite_url": "http://localhost",
    "public_graphite_url": null,
    "auth_username": null,
    "auth_password": <PASSWORD>,
    "pidfile": null,
    "format": "short",
    "interval": "10minute",
    "time_window": "10minute",
    "repeat_interval": "2hour",
    "until": "0second",
    "logging": "info",
    "method": "average",
    "no_data": "critical",
    "loading_error": "critical",
    "prefix": "[BEACON]",
    "critical_handlers": ["log"],
    "warning_handlers": ["log"],
    "normal_handlers": ["log"],
    "send_initial": true,
    "default_nan_value": -1,
    "ignore_nan": false,
    "alerts": [],
    "include": ["alerts.json"]
}
```
The alerts file in /etc/graphite-beacon/alerts.json:
```
{
    "alerts": [
        {
            "name": "Memory",
            "query": "aliasByNode(collectd.*.memory.memory-free, 1)",
            "interval": "10minute",
            "format": "bytes",
            "rules": ["warning: < 300MB", "critical: > 200MB"]
        },
        {
            "name": "Site",
            "source": "url",
            "query": "http://google.com",
            "interval": "20second",
            "rules": ["critical: != 200"]
        }
    ]
}
```
My issue is that after restarting graphite-beacon and checking the logs, alerts=[] instead of what has been specified in the include directive to pick up the other configurations. I cannot have `"include": ["/etc/graphite-beacon/alerts.json"]` as the application will not restart. To run graphite-beacon I use `graphite-beacon --config=/etc/graphite-beacon/graphite-beacon.json`.
Any ideas what I am doing wrong? I have checked previous issues but I am not able to get my settings to work.
https://github.com/klen/graphite-beacon/issues/65
https://github.com/klen/graphite-beacon/issues/88
Answers: username_0: Fixed this. With graphite-beacon==0.25.4, changed from
```
{
    ...
    "include": ["alerts.json"]
}
```
To
```
{
    ...
    "include": ["/etc/graphite-beacon/alerts1.json", "/etc/graphite-beacon/alerts2.json"]
}
```
Format of alerts was as earlier shown. I guess I was too tired last night to notice this. So this is a non-issue. I was doing the wrong stuff.
Status: Issue closed
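JSON configs like these fail in confusing ways when a single comma goes missing or a value is malformed, and the daemon then quietly falls back to empty alerts or refuses to restart. One quick sanity check before restarting is Python's built-in validator (note the file must be strict JSON, so placeholders like <PASSWORD> need real values first):
```
$ python -m json.tool /etc/graphite-beacon/graphite-beacon.json > /dev/null && echo valid
```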
crate/crate
743312609
Title: Meta: address all current `content: correction` issues Question: username_0: This meta issue addresses all current `content: correction` issues.
planned commits:
- Correction: Add a note to `single-node` admonition and copy to shared-nothing concepts document (fixes https://github.com/crate/crate/issues/10058)
- Correction: Add note about `doc` only being available after the first table is created (fixes https://github.com/crate/tech-writing-domain/issues/206)
- Correction: Add clarification and cross-reference for object literals syntax (fixes https://github.com/crate/tech-writing-domain/issues/201)
- Correction: Add missing link to the Enterprise Features document (fixes https://github.com/crate/crate/issues/10253)
- Correction: Update "stock host Java VM JIT compilers" link (fixes https://github.com/crate/tech-writing-domain/issues/323)
- Correction: Cross-reference `blocks.read_only_allow_delete` (fixes https://github.com/crate/crate/issues/10394)
- Correction: Fix link to `text` data type (fixes https://github.com/crate/crate/issues/10445)
- Correction: Fix default authentication config documentation and add a note about Linux distro package-specific deviations (fixes https://github.com/crate/crate/issues/10119)
- Correction: Clarify object array access syntax (fixes https://github.com/crate/crate/issues/10110)
- Correction: Fix geospatial doc tests to catch reported errors and update corresponding example SQL (fixes https://github.com/crate/tech-writing-domain/issues/208)
- Correction: Minor copyedit and some link fixes for release notes (fixes https://github.com/crate/tech-writing-domain/issues/155)
- Correction: Add note about the unavailability of `KILL` on CrateDB Cloud (fixes https://github.com/crate/crate/issues/10209)
- Correction: Use HTTPS instead of HTTP for links (fixes https://github.com/crate/tech-writing-domain/issues/316)
Answers: username_0: this is done
Status: Issue closed
quarkusio/quarkus
833631995
Title: 1.12.x failed to build native package with io.quarkus:quarkus-amazon-lambda-xray Question: username_0:
## Describe the bug
`1.12.1.Final` and `1.12.2.Final` failed to build a native package with the `io.quarkus:quarkus-amazon-lambda-xray` dependency.
### Actual behavior
I got this error while building the native package:
```
Error: No instances of sun.security.provider.NativePRNG are allowed in the image heap as this class should be initialized at image runtime. To see how this object got instantiated use --trace-object-instantiation=sun.security.provider.NativePRNG.
Detailed message:
Trace: Object was reached by
	reading field java.security.SecureRandom.secureRandomSpi of
	constant java.security.SecureRandom@511394c6 reached by
	scanning method com.amazonaws.xray.ThreadLocalStorage.getRandom(ThreadLocalStorage.java:64)
Call path from entry point to com.amazonaws.xray.ThreadLocalStorage.getRandom():
	at com.amazonaws.xray.ThreadLocalStorage.getRandom(ThreadLocalStorage.java:64)
	at com.amazonaws.xray.internal.SecureIdGenerator.newTraceId(SecureIdGenerator.java:38)
	at com.amazonaws.xray.entities.TraceID.<init>(TraceID.java:123)
	at com.oracle.svm.reflect.TraceID_constructor_2bbc23dee901af749a2b73bb6c92ad7e96d74f97_766.newInstance(Unknown Source)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:490)
	at java.util.ServiceLoader$ProviderImpl.newInstance(ServiceLoader.java:780)
	at java.util.ServiceLoader$ProviderImpl.get(ServiceLoader.java:722)
	at com.oracle.svm.core.jdk.SystemPropertiesSupport.initializeLazyValue(SystemPropertiesSupport.java:211)
	at com.oracle.svm.core.jdk.SystemPropertiesSupport.setProperty(SystemPropertiesSupport.java:196)
	at com.oracle.svm.core.jdk.Target_java_lang_System.setProperty(JavaLangSubstitutions.java:287)
	at com.oracle.svm.jni.JNIJavaCallWrappers.jniInvoke_ARRAY:Ljava_lang_System_2_0002esetProperty_00028Ljava_lang_String_2Ljava_lang_String_2_00029Ljava_lang_String_2(generated:0)
```
## To Reproduce
I'm building a native package for AWS Lambda, and it seems that the upgraded version of the Xray dependency in `1.12.1.Final` causes this problem. I can still build the package if I remove `io.quarkus:quarkus-amazon-lambda-xray` from the dependency list.
## Environment (please complete the following information):
### Output of `uname -a` or `ver`
```
Darwin local 19.6.0 Darwin Kernel Version 19.6.0: Thu Oct 29 22:56:45 PDT 2020; root:xnu-6153.141.2.2~1/RELEASE_X86_64 x86_64
```
### Output of `java -version`
```
openjdk version "11.0.10" 2021-01-19
OpenJDK Runtime Environment (build 11.0.10+9)
OpenJDK 64-Bit Server VM (build 11.0.10+9, mixed mode)
```
### GraalVM version (if different from Java)
```
quay.io/quarkus/ubi-quarkus-native-image:21.0.0-java11
```
### Quarkus version or git rev
`1.12.2.Final`
### Build tool (i.e. output of `mvnw --version` or `gradlew --version`)
```
------------------------------------------------------------
Gradle 6.8.3
------------------------------------------------------------

Build time: 2021-02-22 16:13:28 UTC
Revision:   9e26b4a9ebb910eaa1b8da8ff8575e514bc61c78

Kotlin:     1.4.20
Groovy:     2.5.12
Ant:        Apache Ant(TM) version 1.10.9 compiled on September 27 2020
JVM:        11.0.10 (Oracle Corporation 11.0.10+9)
OS:         Mac OS X 10.15.7 x86_64
```
Answers: username_1: Apparently something got broken by an XRay upgrade and we didn't notice it as there are no native tests for it. https://github.com/quarkusio/quarkus/pull/15810 should fix it.
It would be nice if you could test it by following the instructions in https://github.com/quarkusio/quarkus/blob/main/CONTRIBUTING.md#building-main (except you need to build the PR branch instead of main). Thanks!
username_1: Oh and thanks for the very detailed bug report btw. It helps a lot!
Status: Issue closed
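For this general class of GraalVM error ("No instances of X are allowed in the image heap"), the usual escape hatch while waiting for an upstream fix is to force runtime initialization of the class that holds the offending object. A hedged sketch only; the class name is inferred from the trace above and this is not the official fix that shipped in the PR:
```
# application.properties — possible interim workaround, not the official fix
quarkus.native.additional-build-args=--initialize-at-run-time=com.amazonaws.xray.ThreadLocalStorage
```
`quarkus.native.additional-build-args` passes flags through to `native-image`, and `--initialize-at-run-time` defers the class's static initializer (and its SecureRandom field) until the image actually runs.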
appwrite/appwrite
1000370985
Title: Realtime throws disconnect error each second Question: username_0:
## 🐛 Bug Report
Realtime appears to disconnect from the server all the time, leading to the following error being thrown each second:
```
app-all.js?v=192.168.3.11:12 Realtime got disconnected. Reconnect will be attempted in 1 second.
```
This issue seems to happen not only in testing or production environments but on the Appwrite console itself.
## Have you spent some time to check if this issue has been raised before?
Yes. I talked with two users (including @username_1) on Discord.
## To Reproduce
Set up a basic realtime example. I used the Web SDK. There shouldn't be any other requirements to reproduce this error.
## Expected behaviour
I expected the realtime API to connect to my server without throwing any errors.
## Actual Behavior
The realtime API throws the above error each second.
![image](https://user-images.githubusercontent.com/66096031/133936907-6876e092-d642-49f4-8f33-9c71ed5de4c9.png)
_Taken from the Appwrite console_
## Your Environment
I used the Web SDK, with Vite. My SDK version is 4.0.1. Additionally, I was able to reproduce this bug in both Chrome and Edge.
Answers: username_1: I can confirm I have seen this issue with WebSockets disconnecting on my Appwrite 0.10 features demo app https://cookie-clicker-appwrite.netlify.app/
![image](https://user-images.githubusercontent.com/19310830/133936969-a0e861f0-5b08-4b54-a0b5-d6d751d75b39.png)
username_2: This should be fixed now with the new Web SDK version `4.0.2` 👍🏻
lidiawu/TSDF-Fusion
423526480
Title: Missing verb in sentence Question: username_0: The sentence "Input to our project color map Ci and depth map Di at resolution of 640*480 pixels." does not have a verb, which makes it hard to read. It should be revised as "Input to our project color map Ci and depth map Di are at resolution of 640*480 pixels."
clay/claycli
441753262
Title: Add docs to clayplatform.com Question: username_0: The claycli website / documentation should be added to docs.clayplatform.com
Answers: username_1: Hello @username_0 @PedroAlbR Sure, we can add it. The reason that `claycli` has its own page is that at the beginning of the docs task, I was told that every repo should have its own docs page. Can we add a ticket to discuss this issue?
username_0: Hi @username_1 where are the claycli docs currently?
username_1: At this url => https://docs.clayplatform.com/claycli/docs/cli.html @username_0
username_0: In this case, they're already on docs.clayplatform.com, but we could just add a link to the docs. If I go to the main page, there's no obvious way to get to the claycli docs.
username_1: Agreed; we can put it on the [Getting Started](https://docs.clayplatform.com/docs/getting-started) page or on the [Glossary](https://docs.clayplatform.com/docs/glossary). What do you think?
username_0: Definitely in the Glossary, but there should also be something on the front page--either a section that describes how Clay is built of many parts with links to the docs, or a link to another "clay overview" page. Would also be interested in what @username_2 thinks.
username_2: Sorry for the delay guys! All of this sounds right. I think it makes sense to have a Glossary @username_1 and I agree with @username_0 that a "Clay Overview" would be a good idea.
username_1: Thanks for your time @username_0 @username_2 😄 I'm gonna close this issue and create a ticket for the new task
Status: Issue closed
username_1: Here are the tickets
- https://app.zenhub.com/workspaces/clay-repos-596524da4a8a0769f2f03175/issues/clay/clay.github.io/9
- https://app.zenhub.com/workspaces/clay-repos-596524da4a8a0769f2f03175/issues/clay/clay.github.io/10
Feel free to edit the tickets 😅 @username_0 @username_2
unoplatform/Uno.Samples
1186313078
Title: Operation is not valid due to the current state of the object Question: username_0: Hi. I am using Visual Studio 2022 17.2.0 (Preview 2) and upon creating a fresh project, I am getting this.
![image](https://user-images.githubusercontent.com/16351038/160823459-7fedd9f3-ff37-4fac-be0b-842bae58579c.png)
When I went into my repos, I can see this, but it does not have a solution file.
![image](https://user-images.githubusercontent.com/16351038/160823945-55cd1bd1-31dd-400e-a66c-5472791747d3.png)
mjbondra/koa-session-mongoose
284672302
Title: const updatedAt = { ...schema.updatedAt, expires }; Question: username_0:
```
C:\Users\Administrator\Desktop\koa-shop\Koa-onlineShop-api\node_modules\[email protected]@koa-session-mongoose\lib\index.js:13
const updatedAt = { ...schema.updatedAt, expires };
                    ^^^

SyntaxError: Unexpected token ...
```
Answers: username_1: Which version of Node.js are you using? Basic support for `const` has existed without flags since at least v4.
**See: https://github.com/username_1/koa-session-mongoose#prerequisites**
This store requires [node@>=8.0.0](https://nodejs.org), [koa@>=2.0.0](http://koajs.com) and [koa-session@>=5.0.0](https://github.com/koajs/session). If you are using older dependencies, consider using [koa-session-mongoose@\^1.0.0](https://gitlab.com/wondermonger/koa-session-mongoose/tree/v1.0.0).
username_1: This looks like an issue with the use of the spread property `...schema.updatedAt` in the object literal. This requires `node@>=8.3.0`; I will update the readme to reflect that.
http://node.green/#ESNEXT-candidate--stage-3--object-rest-spread-properties-object-spread-properties
Status: Issue closed
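For anyone stuck on a Node.js version older than 8.3 that lacks object spread, the same merge can be written with `Object.assign`, available since well before Node 4; this is a generic equivalent, not a patch that shipped in the library:
```js
// Equivalent to: const updatedAt = { ...schema.updatedAt, expires };
// Object.assign copies enumerable own properties left to right into the
// target {}, so `expires` overrides any same-named key from schema.updatedAt.
const updatedAt = Object.assign({}, schema.updatedAt, { expires });
```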
qfish/XAlign
298862623
Title: Support for Object Mapper mapping alignment Question: username_0: How to add support to format this?
```
mutating func mapping(map: Map) {
    teamId <- map["team_id"]
    channelId <- map["channel_id"]
    msgCount <- map["msg_count"]
    mentionCount <- map["mention_count"]
}
```
to
```
mutating func mapping(map: Map) {
    teamId       <- map["team_id"]
    channelId    <- map["channel_id"]
    msgCount     <- map["msg_count"]
    mentionCount <- map["mention_count"]
}
```
pythonindia/wye
113145698
Title: Not able to create region lead Question: username_0: In the region page (http://127.0.0.1:8000/region/), create region lead (http://127.0.0.1:8000/region/lead/create/) is not working.
![wye](https://cloud.githubusercontent.com/assets/4463796/10709763/f1885626-7a57-11e5-8be0-f7b107989155.png)
Answers: username_1: Fixed it. Latest code check-in will work.
Status: Issue closed
username_0: :+1:
dueapp/Due-macOS
675835824
Title: Alert volume control when "Stop alert sounds when clicking on a notification" option is enabled Question: username_0: Related: #1
When this option is enabled, the volume of alerts cannot be separately controlled, and can be quite loud relative to other system alerts and sounds. Think it makes sense to provide a separate volume control.
This volume control slider should only appear, or be enabled, when the option "Stop alert sounds when clicking on a notification" is enabled. It should look and work like System Preferences > Sound > Alert volume:
![CleanShot 2020-08-10 at 11 03 43@2x](https://user-images.githubusercontent.com/524420/89749150-363e8400-daf9-11ea-9123-cdb1b166e883.png)
Thinking aloud about where this setting could go:
1 - **General**
![CleanShot 2020-08-10 at 11 01 10@2x](https://user-images.githubusercontent.com/524420/89749070-dea01880-daf8-11ea-902c-3e5559ba9e44.png)
Pros:
1. Near to Reminders/Timers alert
Cons:
1. Since the volume control only makes sense for "Stop alert sounds when clicking on a notification", it should actually appear near that, which is currently under Preferences > Notifications.
2. General setting getting quite long.
2 - **Notifications**
![CleanShot 2020-08-10 at 11 01 11@2x](https://user-images.githubusercontent.com/524420/89749250-84538780-daf9-11ea-88b3-22091fe0f24d.png)
Pros:
1. Near "Stop alert sounds when clicking on a notification", which makes the most sense given that the slider should only be visible/enabled when the option is enabled
Cons:
1. I imagine it's difficult to align the label and the slider together with the checkbox in a way that looks pleasing, especially if the slider is hidden until the checkbox is enabled.
3 - New Pane, **Sounds**
![CleanShot 2020-08-10 at 11 08 24@2x](https://user-images.githubusercontent.com/524420/89749338-db595c80-daf9-11ea-82ac-365430f9c292.png)
We could introduce another preference pane and consolidate sound-related settings into it. These would include the current settings:
1. Preferences > General > Reminder alert:
2. Preferences > General > Timer alert:
3. Preferences > General > Play interface sound effects
4. Preferences > Notifications > Stop alert sounds when clicking on a notification
and finally the newly-introduced slider for alert volumes.
I'm thinking 3 is better since these options feel like they're better consolidated together. In future we could also add another slider for interface sound effects if necessary.
brinckmann/montepython_public
801245245
Title: Problems in get_cl - No lensed Cl computed Question: username_0: Hello,
I am running Monte Python v3.3 and CLASS v2.9 with Python 3 (although I have tried using Python 2 and other CLASS versions and the error persists). I run just LCDM with no changes to CLASS. If I use just the baseTTTEEE2018 param file, I keep getting the following error right after the chains initialise:
```
File "montepython/MontePython.py", line 40, in <module>
    sys.exit(run())
File "/home/username_0/Simulations/montepython_public/montepython/run.py", line 45, in run
    sampler.run(cosmo, data, command_line)
File "/home/username_0/Simulations/montepython_public/montepython/sampler.py", line 46, in run
    mcmc.chain(cosmo, data, command_line)
File "/home/username_0/Simulations/montepython_public/montepython/mcmc.py", line 787, in chain
    newloglike = sampler.compute_lkl(cosmo, data)
File "/home/username_0/Simulations/montepython_public/montepython/sampler.py", line 792, in compute_lkl
    value = likelihood.loglkl(cosmo, data)
File "/home/username_0/Simulations/montepython_public/montepython/likelihood_class.py", line 953, in loglkl
    cl = self.get_cl(cosmo)
File "/home/username_0/Simulations/montepython_public/montepython/likelihood_class.py", line 192, in get_cl
    cl = cosmo.lensed_cl(int(l_max))
File "classy.pyx", line 567, in classy.Class.lensed_cl (/home/username_0/Simulations/class/python/../python/classy.c:9149)
classy.CosmoSevereError: Error in Class: No lensed Cl computed
```
Do you have any idea where the problem might be coming from? It seems that if I try several times, the chains eventually run without a problem, but the fact that I get this error so often makes me suspicious that there might be some bug. I have been trying to understand where this comes from for some months now but I just don't understand what the problem might be...
Thank you!
Answers: username_1: Hi,
This is surprising. Something might have changed on the CLASS end, I'll look into it. Can you try to add `data.cosmo_arguments['lensing'] = 'yes'` to your param file?
Best,
Thejs
username_0: Hi,
Thank you for your prompt reply! Yes, I have tried doing so and the error persists. I have also tried adding `data.cosmo_arguments['output'] = 'tCl,pCl,lCl,mPk'` with no success...
It seems like the error comes up around line 567 of classy.pyx, where indeed we have
```
if not spectra:
    raise CosmoSevereError("No lensed Cl computed")
lmaxR = self.le.l_lensed_max
```
But I am not sure what could be causing the error... I have also tried running on a cluster and on my laptop separately and every time the error comes up...
Thank you again!
username_1: Hi,
It looks like you might need to raise a ticket on the CLASS github page (of course look through open and closed issues first) as this seems to be a problem on that end. I suspect some default settings have changed in a recent version and that's what is causing a problem, but I'm not quite sure what's going on and I don't have time to investigate right now. Please get back to me if you figure out the problem, otherwise I can try to see if I can replicate your problem (and then hopefully fix it!)
Best,
Thejs
username_0: Right, I will do so, thank you!
Status: Issue closed
LiskHQ/lisk-docs
463140805
Title: Add Post-installation section to lisk-core/setup/docker page Question: username_0:
## Install service
suggested from [thread](https://stackoverflow.com/questions/43671482/how-to-run-docker-compose-up-d-at-system-start-up)
```
# /etc/systemd/system/docker-compose-lisk.service
[Unit]
Description=Docker Compose Application Service
Requires=docker.service
After=docker.service

[Service]
WorkingDirectory=/home/lisk/git/lisk-core/docker/testnet/
ExecStart=/usr/local/bin/docker-compose up
TimeoutStartSec=0
Restart=on-failure
StartLimitIntervalSec=60
StartLimitBurst=3

[Install]
WantedBy=multi-user.target
```
## Useful commands
```
# Tip
systemctl status docker-compose-lisk.service
# Tip
sudo journalctl -u docker-compose-lisk.service
```
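One step the snippets above leave implicit: a unit file only starts at boot once it has been enabled. Assuming the unit is saved under the name shown, the standard sequence would be:
```
sudo systemctl daemon-reload
sudo systemctl enable docker-compose-lisk.service
sudo systemctl start docker-compose-lisk.service
```
`daemon-reload` makes systemd pick up the new unit file, `enable` installs the `WantedBy=multi-user.target` symlink so it runs at boot, and `start` brings it up immediately.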
tunapanda/wikonnect
781853189
Title: [FEATURE] [FRONTEND] Filter by multiple tags front end Question: username_0: ### Describe the user story:
<!-- eg Admin should be able to delete single or multiple users so that it is easier to manage users. -->
### Tasks (where applicable)
<!-- Please describe front-end tasks needed to accomplish this -->
- [ ] Filter by multiple tags
- [ ] Clear filters
- [ ] Match design spec
Status: Issue closed
tsenart/vegeta
442384736
Title: Treat timeouts as errors Question: username_0: But when creating an HTML plot with `cat results.bin | vegeta plot > plot.htm`, those errors are not displayed, making it seem that the test completed all requests successfully. Is there a way to make those errors stand out?
Answers: username_1: Would you be able to provide me with your `results.bin` for debugging please?
username_0: Thanks for looking into it. [Here's](https://mega.nz/#!XtJXHSrA!FrvGu-U65NqfVG92nvQqXTMNVjUlA8CpVm5xdrEtjk8) the `results.bin` containing a couple of reported timeouts.
Status: Issue closed
username_1: Fix released: https://github.com/username_1/vegeta/releases/tag/cli%2Fv12.5.1
codevise/pageflow
233147970
Title: Set maxlength on editor text inputs Question: username_0: People often end up typing more text into input fields than can be stored in the respective database column. The save request fails without a clear indication of what needs to be fixed. To improve the situation we would need to:
* add support for a `maxLength` option to the relevant [input views](https://github.com/codevise/pageflow/tree/master/app/assets/javascripts/pageflow/ui/views/inputs) (a sketch follows below)
* set sensible defaults or configure certain inputs specifically
Status: Issue closed
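A minimal sketch of what the first bullet could look like, assuming a Backbone/Marionette-style text input view like the ones in pageflow's UI assets; the view name, option plumbing, and template here are hypothetical, not the actual pageflow API:
```js
// Hypothetical sketch only — names do not come from the pageflow codebase.
pageflow.TextInputView = Backbone.Marionette.ItemView.extend({
  template: function() {
    return '<label></label><input type="text" />';
  },

  ui: {
    input: 'input'
  },

  onRender: function() {
    // Forward a maxLength option to the native HTML attribute so the
    // browser enforces the limit before the save request is ever made.
    if (this.options.maxLength) {
      this.ui.input.attr('maxlength', this.options.maxLength);
    }
  }
});
```
The native `maxlength` attribute gives immediate client-side enforcement; pairing it with defaults derived from the database column sizes would cover the second bullet.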
scp-fs2open/fs2open.github.com
205123833
Title: SEXP warnings when combining "and" SEXP with "true" argument and setting event log flags Question: username_0: I know, the title is very confusing and I have no idea what those things have in common, but I'll try to explain the bug as best as I can.
When debugging the capship command mission "The Great Escape" there are a lot of "ERROR: op_num function returned -1, this should not happen.." messages. I traced that to the "Warp done!" SEXP and then moved that to a separate mission to reproduce the bug. I can reliably reproduce the bug, but as soon as I remove the line "+Event Log Flags: ( "true" )" the debug message goes away.
Here is the mission: [SEXPBug.fs2](https://github.com/scp-fs2open/fs2open.github.com/files/750178/SEXPBug.fs2.txt)
Also, the bug has something to do with combining an "and" SEXP and a "true" constant. When I remove that and simply use the parameter of the SEXP that is relevant, the message disappears again.
@username_2 Maybe you know what may be going on here? I think you were the one who implemented the SEXP log code.
Answers: username_1: I believe the log was coded by @Karajorma.
username_0: Ah, ok. I guess anyone who has experience with the SEXP code could be helpful here.
username_0: :man_facepalming: I just saw that there is a condition before the call that fails that checks if the event should be logged. I guess that explains why it doesn't happen if SEXP logging is disabled.
username_0: It looks like there is some kind of issue where the `text` of a SEXP node is not being set to the operator name, which means that the lookup fails. The value of that SEXP node has the `SEXP_KNOWN_TRUE` value so maybe there is some kind of interaction with that.
username_2: Taking a look at this.
username_2: Ah, crap. I *can't* take a look at this, due to #1051. When the game hits the breakpoint, focus is returned to the debugger, but the window still shows FreeSpace. I can interact with dialogs but I can't actually see what I'm doing. I had to use the keyboard shortcut to exit the debugger in order to get back to normal.
username_2: With #1051 out of the way, I think I know what's going on. The sexp node structure is a little weird in that, when dealing with a list, the CAR of the list refers to the operator and the CDR refers to the arguments. This comes from Lisp, and CAR represents Sexp_nodes[n].first while CDR represents Sexp_nodes[n].rest. I'll make a new PR.
username_2: I figured it out and have a solution ready to go. Though I need to check with karajorma as to how to actually activate logging. I PMmed him on HLP.
username_0: The test mission I uploaded uses event logging so you could use that to test your solution.
username_2: I did. Although I see text written to the buffer when I trace through, nothing gets printed to the log file.
username_1: Are you looking at the right log file? The one in `%APPDATA%\HardLightProductions\FreeSpaceOpen\<mod>\data\`?
username_2: Well, not at first; I was looking under FreeSpace2\data. But I just looked under the %APPDATA% folder and there wasn't much more:
```
02/11 23:48:03~ FS2_Open Mission Log - Opened
Mission Untitled loaded.
```
Still none of the event logging I was expecting.
username_2: The event log might actually be broken currently. I traced through deeply enough to find some mask values that weren't overlapping with each other. The ~Q didn't work either. I PMmed @Karajorma. I'm pretty sure my solution should fix it, assuming I can get the event logging system in a state to test with.
username_2: PR opened: #1215
username_2: Okay, thanks to some troubleshooting with @karajorma, this looks like it's working now. The event log wasn't refreshing while FSO was running, which confused things.
Status: Issue closed
username_0: This was fixed by #1215.
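To make the CAR/CDR mapping described in this thread concrete, here is a rough C sketch of the node layout as the comments describe it; only `text`, `first`, and `rest` are named in the thread, so everything else is illustrative rather than lifted from the FSO source:
```c
/* Rough sketch of the sexp node structure as described in the thread.   */
struct sexp_node {
    char text[32];  /* operator name or literal token ("and", "true", ...) */
    int  first;     /* CAR: index of the head of a list (the operator)     */
    int  rest;      /* CDR: index of the tail (the argument list)          */
};

/* Walking a list (and op arg1 arg2 ...) then looks roughly like:
 *   int op   = Sexp_nodes[n].first;        -- the operator node
 *   int args = Sexp_nodes[n].rest;         -- first argument
 *   int next = Sexp_nodes[args].rest;      -- and so on down the CDR chain
 * The bug fits this picture: code that reads .first where it should read
 * .rest (or vice versa) ends up looking up an operator name in a node
 * whose text was never set to one. */
```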
elastic/ecctl
582917796
Title: Fix sample command in ecctl init wizard Question: username_0: Currently when using the `ecctl init` command there is a suggested command `ecctl deployment elasticsearch list` at the end of the wizard to check everything is working as expected. The command doesn't work and returns
```
{
  "message": "The requested resource could not be found.",
  "ok": false
}
```
The command should be updated to list deployments using `ecctl deployment list`.
// cc @karencfv @username_1 @Kushmaro
Status: Issue closed
vime-js/vime
776850443
Title: support: HLS player not loading manifest and video on iPhone Question: username_0: Hi team, I am using the HLS component and for some reason the video is loading everywhere **except** Safari on iPhone. All I see is the poster image but the video never loads. I noticed in the `Network` tab that the request to the manifest file is never made on the iPhone, but there are no errors in the console, yet the rendered HTML on iPhone matches that on the MacBook. Wondering if anybody has thoughts as to what's going on or what else I can do to debug?
### Devices tested
- ✅ Nokia Lumia 521 on Chrome
- ✅ MacBook Pro on Safari 14
- ✅ MacBook Pro on Chrome
- ✅ MacBook Pro on Firefox
- ✅ iPad Pro on Safari 14
- ❌ iPhone 7 on Safari 13
- ❌ iPhone X on Safari 14
- ❌ iPhone 12 Mini on Safari 14
### Code
```js
<Player autoplay muted playsinline aspectRatio="1388:1080" loop>
  <Hls version="latest" poster={posterUrl}>
    <source data-src={manifestUrl} type="application/x-mpegURL" />
  </Hls>
</Player>
```
### Version
- Vime: `5.0.19`
- Gatsby: `2.29.2`
Answers: username_0:
```
Your browser does not support the <code class="sc-vm-file">video</code> element.
</video>
```
@username_1 any idea what would cause Vime to switch and render the `File` provider even though `HLS` is specified?
username_1: Hey @username_0 I'm sorry I didn't get back to you earlier, currently working on Vime 6 with the new team and it's hard getting to these issues at the moment as we redesign the player from the ground up. I tried to think through a bunch of reasons why but nothing comes to mind off the surface. I'd need to debug it in iOS Safari and walk through the code. Probably something obscure with how that browser renders elements in the Shadow DOM? Unfortunately not sure.
username_0: I appreciate the response, will keep digging. Thanks Rahim
username_0: @username_1 figured out why the video wasn't loading but not the root cause. On iPhone, the `src` attribute was added to the `source` but missing on the `video` element. Couldn't pinpoint why, but wondering if maybe there is some weird race condition with [`refresh`](https://github.com/vime-js/vime/blob/master/core/src/components/providers/file/file.tsx#L263) and/or [`didSrcSetChange`](https://github.com/vime-js/vime/blob/master/core/src/components/providers/file/file.tsx#L280). As a workaround, I am now setting `src` explicitly, but now wondering if setting both `data-src` and `src` overrides lazy-loading?
```js
<Hls version="latest" poster={posterUrl}>
  <source data-src={manifestUrl} src={manifestUrl} type="application/x-mpegURL" />
</Hls>
```
username_1: Hey @username_0 ye you're on the right track... I think the issue is that the DOM hasn't been updated at the time of the [media load call](https://github.com/vime-js/vime/blob/master/core/src/components/providers/file/file.tsx#L306) so it's a hit or miss after initial load. In other words, if the `src` attribute hasn't updated in the DOM yet from the most recent `refresh` call, media load will do nothing. This can be fixed by making the load request at the appropriate time by using `window.requestAnimationFrame(...)`. I'll release a patch for it as soon as I can 😄
username_0: Ahhh nice catch. I appreciate your help on this and thank you for all the work you're doing here! ❤️
username_0: @username_1 any chance you've been able to start tackling this?
username_1: Hey @username_0 unfortunately not, because I'm completely consumed by the new version of the player for work (have to hit some milestones). I'm really sorry I didn't get around to this but I just don't have the bandwidth. Hopefully once the new player is up and running I'll be able to start squashing bugs and focusing on helping everyone.
username_0: Ah totally understandable. Thanks for your contributions on Vime and excited for the next version!
username_0: Just wanted to update this ticket, the bug still persists after updating to the latest versions.
### Version
- Vime: `5.3.1`
- Gatsby: `4.4.0`
google/oss-fuzz
182628693
Title: Various comments on oss-fuzz documentation and process Question: username_0: Let me use this issue as a laundry list for small improvements.
* Change the scripts and the docs to use `/work/libFuzzer.a` instead of `/work/libfuzzer/*.o`. Not that it will have any different result, but this is the way we document it in libFuzzer docs.
Answers: username_1: Created issues: #31 #32 #33
Added link to GitHub forking guide & removed sudo documentation. As for the shorter header, I don't know. Our guidelines specifically ask for this header.
Status: Issue closed
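For readers unfamiliar with the convention being discussed: linking against one prebuilt archive rather than a directory of object files would make a typical fuzz-target build line look something like the following; this is an illustrative command in the spirit of the libFuzzer docs, not one taken verbatim from the oss-fuzz scripts:
```
clang++ -fsanitize=address -std=c++11 my_fuzzer.cc /work/libFuzzer.a -o /out/my_fuzzer
```
As the issue notes, the result is identical either way; the archive form is simply what the libFuzzer documentation shows, so matching it keeps the two sets of docs consistent.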
lettuce-io/lettuce-core
363599639
Title: BraveTracing should support custom names for the service name Question: username_0:
## Feature Request
#### Is your feature request related to a problem? Please describe
It is likely for multiple Redis installations to coexist in a production environment. Currently the service name for spans created by Lettuce is hardcoded to "redis", which means that every service (using the Lettuce client) will report spans with the service name "redis", regardless of whether they are using different Redis installations.
#### Describe the solution you'd like
There should be a builder for BraveTracing that would allow users to provide a custom name to use as the service name for spans.
#### Describe alternatives you've considered
Didn't consider any alternatives.
#### Teachability, Documentation, Adoption, Migration Strategy
Currently a BraveTracing is created like this:
`BraveTracing.create(tracing)`
This could be changed to something like this (no name override):
`BraveTracing.builder(tracing).build()`
Or like this (with name override):
`BraveTracing.builder(tracing).serviceName("customNameHere").build()`
Answers: username_0: Draft implementation here: https://github.com/lettuce-io/lettuce-core/pull/866 if it's easier to explain and discuss the proposed changes.
username_1: Awesome improvement. Thanks a lot for submitting a pull request. Tests are sort of flaky on Travis CI so don't get distracted by that.
username_1: I enhanced your PR with customization hooks for `Span` and `Endpoint` and applied a round of polishing. That's now on `master`.
Status: Issue closed
toggl/toggldesktop
514232845
Title: Toggl Stops Each Task at 30 Minutes Question: username_0: <!-- Before submitting a new issue, please make sure that the same issue has not been created already -->
### 💻 Environment
Platform: Windows PC, Desktop
OS Version: Windows 10
Toggl Version: 7.4.1023 64-bit
### 🐞 Actual behavior
1. I am tracking a task.
2. The task stops tracking at exactly 30 minutes.
3. I don't realize it has stopped tracking the time for that task because this has never been an issue before. I am super frustrated by the time Toggl reminds me to start tracking my tasks again because I assumed I had been all that time (usually about 20 min lost each time, since that's my reminder notification interval).
### 💯 Expected behavior
1. I begin tracking a task.
2. It tracks the task until I tell it to stop or I log off for the night.
### 🔨 Steps to reproduce
1. Start tracking a task.
2. Task stops tracking at 30 minutes.
### 📦 Additional info
No error logs, no error occurs. Attached screenshot of the 30-minute increments.
![Toggl_StopsEvery30Min](https://user-images.githubusercontent.com/44852729/67810264-84493600-fa57-11e9-85af-77215bb7df3f.png)
Answers: username_1: Hi there, do you have Pomodoro turned on in the app settings?
username_2: The only feature that stops time automatically is the Pomodoro tracker. Please double-check if you have it enabled in the settings. Also, if you use Toggl Button, please also check that Pomodoro is not turned on in the settings of Toggl Button. We've had similar reports before, and the fact that Toggl Button and Desktop have similar features has caused some confusion.
username_2: Please get back to us if you have done so and we can dig deeper if needed.
username_0: I have checked and confirmed that neither the Chrome extension nor the desktop app has the Pomodoro timer enabled.
Chrome extension:
![Toggl_PomoOffChrome](https://user-images.githubusercontent.com/44852729/67872101-b22d8980-faee-11e9-96c1-4242316fbd14.png)
Desktop app:
![Toggl_PomoOffDesktop](https://user-images.githubusercontent.com/44852729/67872193-d9845680-faee-11e9-9e32-aca8dd77453f.png)
username_3: Same here, using mac.
username_4: I've got the same issue as @username_3. On Catalina 10.15.7. Very weird as I haven't got this issue on my other MacBooks.
username_0: I never did get this to work, so now I use one or the other (extension or desktop app). That is the only way it functions properly.
username_5: Since this is still open, I will add that the function of the Pomodoro timer auto-stopping the current task is incredibly frustrating. On the Chrome plugin there is an option for the timer to be purely a reminder, whereas on desktop no such option exists.
![image](https://user-images.githubusercontent.com/25844278/100593982-c4858280-3301-11eb-97de-31ef3b143074.png)
username_6: @username_5 Same in the Mac app. I agree, this is pretty strange behaviour, especially given the fact that the Chrome plugin does it right.
username_7: Same issue. Pomodoro settings are off. But stops at 55 minutes
swaywm/sway
847487175
Title: Overlay layer layer-shell clients cause damage/rendering issues with fullscreen mpv Question: username_0:
```
$ swaymsg -pt get_version
sway version 1.6-rc2-1d62d6bf (Mar 28 2021, branch 'makepkg')
$ pacman -Q mpv mesa
mpv 1:0.33.0-4
mesa 21.0.1-1
```
The best clients I know of to reproduce this with are mpv and mako, but any overlay client should do.
Steps to reproduce:
```
# maybe relevant
$ cat mpv.conf
gpu-context=wayland
profile=gpu-hq
fs=yes

$ cat ~/.config/mako/config
layer=overlay
anchor=bottom-right
default-timeout=4000
```
1. Start video playback in mpv fullscreen and pause the video at any time.
2. Switch to another workspace and arrange for a notification to trigger mako, e.g. `sleep 5 && notify-send testing testing`
3. Switch back to mpv and wait for the overlay notification to appear.
When the notification appears, the fullscreen mpv window is replaced by whatever image was on the screen before switching to the fullscreen mpv workspace. With the steps above that would be the workspace with the terminal that sent the notify-send, but you could also switch to another workspace before switching to mpv to show that instead. It is a past image of the workspace, not a current render. If there is a bar with a clock on that workspace you can observe that the time does not change.
The entire overlay surface is displayed correctly, including the transparent border around the notification which reveals a small corner of the mpv window that should be covering the whole screen.
I would attach a picture, but a screenshot doesn't seem to show the issue and in fact immediately corrects it. Here's my shitty drawing instead:
grey: image of term on other workspace
red: mpv seen through the transparent part of mako's overlay surface
blue: opaque mako notification
![g43](https://user-images.githubusercontent.com/12577529/113219019-6cba4200-9235-11eb-87dd-6ee2553d97ae.png)
If you play the mpv video, the issue is corrected on the next frame, so you only see a quick flicker when the notification appears and disappears while mpv is playing.
You can also replace mako with `wlroots/examples/layer-shell`, and observe rapid flickering of the mpv window <-> previous workspace on every committed frame of the layer-shell client, but _only_ if you switch to fullscreen mpv _before_ the overlay client is mapped. If you start layer-shell and then switch workspaces it works fine.
The obvious thing to try is `sway -D damage=rerender`, which does fix it for me. I also tried shuffling some of the logic in output_render but no luck. Really not sure why I can only reproduce with mpv.
This is not a regression – sway has had this behavior for me since as long as I can remember, but I only just tried looking into it recently.
Answers: username_1: Maybe that is a problem with direct scanout bypassing the output buffer and somehow not rendering to the buffer correctly when direct scanout has to be disabled for the overlay.
username_2: Okay, we need to damage the whole output when switching out of direct scanout.
username_0: Yep, that was it. "scan out" and "Scanning out" dodged my grep for "scanout" in the log, but it's there. :sweat_smile:
Status: Issue closed
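A rough sketch of the shape of the fix username_2 describes: force a full-output damage event at the moment direct scanout is abandoned, since the compositor's back buffer is stale after frames were scanned out directly. The function and field names here are illustrative of wlroots-era compositor code, not the actual patch that landed in sway:
```c
/* Hypothetical sketch only — not the real sway code. */
static void output_update_scanout_state(struct sway_output *output,
                                        bool can_scanout_now) {
    if (output->last_frame_was_scanout && !can_scanout_now) {
        /* Falling back from direct scanout to composited rendering:
         * the output's own buffer was bypassed while scanning out, so
         * its contents are stale. Damage the whole output instead of
         * trusting per-surface damage regions. */
        output_damage_whole(output);
    }
    output->last_frame_was_scanout = can_scanout_now;
}
```
This also explains the reported symptoms: `sway -D damage=rerender` hides the bug because it repaints everything every frame, and a screenshot "fixes" the glitch because capturing forces a composited render that refreshes the stale buffer.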