repo_name (string, 4-136 chars) | issue_id (string, 5-10 chars) | text (string, 37-4.84M chars)
---|---|---|
graphql-dotnet/graphql-dotnet | 259329733 | Title: Performance when using large lists.
Question:
username_0: My company has several data sets that have a lot of properties and have lists of lists returned. I've noticed this library could have a few improvements in this area.
I was going to submit a pull request, but it looks like this project is moving towards 2.0 and a lot of the files like DocumentExecutor already have some major changes. I'd be happy to re-evaluate with 2.0 but want to make sure these fit in with any 2.0 objectives. So here is a summary of the changes.
The changes address the scenario of running tests with a list of 10k items and measuring performance. These changes result in a 120% speed increase in this setup.
https://github.com/username_0/graphql-dotnet/tree/performance
1. The document executor assumes all fields should be run in a new thread. Most of the time this is not true: the data is usually already available and only needs the NameFieldResolver to resolve the current value. Resolving these fields without spawning a new thread results in big performance gains.
Simple example:
https://github.com/username_0/graphql-dotnet/commit/e82565c74f4cd49ac8d005ae069e2f5b3bb4120d#diff-9ee075dfc3429298a3c5de0c1b53bef2
We definitely still want the ability to resolve fields in a thread whenever they connect to a database or external source. The current pipeline of always using a thread and then recursively calling CompleteValueAsync -> ExecuteFieldsAsync can be optimized. I propose adding an opt-in policy on the IFieldResolver to signify whether it should run threaded. I also removed the FieldMiddlewareBuilder's default behavior of wrapping every field with a resolver; allowing the resolver to be null and using the NameFieldResolver as the default helped performance. Most of the changes seen in the performance branch of my fork revolve around this idea.
2. Typically, in production environments this library would often be re-running the same query over and over. Adding caching to the document validator improves performance by 500% when just running simple queries repeatedly against the Star Wars schema. Should this be a built-in feature?
Example:
https://github.com/username_0/graphql-dotnet/blob/performance/src/GraphQL/Validation/CachedDocumentValidator.cs
3. Reflection is slow(er), especially when running against lists; caching property info can improve performance (a short sketch of the idea is included at the end of this post). This gave about a 2% speed jump.
https://github.com/username_0/graphql-dotnet/blob/performance/src/GraphQL/ObjectExtensions.cs
4. When running a performance analysis, TrimGraphQLTypes showed up on the list; caching its results shaved another 2% off the tests.
https://github.com/username_0/graphql-dotnet/blob/performance/src/GraphQL/GraphQLExtensions.cs
Again I get that not all users have lists or large data sets. These improvements would also help when running queries a second time which I would anticipate being very common for almost all users of this library.
Let me know how I can help with these ideas in 2.0. So far the changes are looking great.
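For reference on point 3 above, the property-info caching boils down to memoizing reflection lookups. A minimal sketch of the idea, not the actual `ObjectExtensions` code (the type and member names here are made up for illustration):

```csharp
using System;
using System.Collections.Concurrent;
using System.Reflection;

// Memoizes PropertyInfo lookups per (type, property name), so that converting
// thousands of list items does not repeat the reflection scan for every item.
public static class PropertyLookupCache
{
    private static readonly ConcurrentDictionary<(Type, string), PropertyInfo> Cache =
        new ConcurrentDictionary<(Type, string), PropertyInfo>();

    public static PropertyInfo Get(Type type, string name) =>
        Cache.GetOrAdd((type, name), key =>
            key.Item1.GetProperty(key.Item2,
                BindingFlags.Instance | BindingFlags.Public | BindingFlags.IgnoreCase));
}
```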
Answers:
username_1: This was specifically implemented for metrics. So if you want to opt-out of that pipeline I can see that being beneficial as that does add some overhead.
2. > cache query validation
I do think this could be a good idea, though caching the validation result would only work if the context of the validation does not change. For instance, I have created some authentication/authorization rules which would need to get evaluated on each request since the authentication or authorization context may have changed between requests and between users.
Related to this - you could cache the actual parsing of the `Document` itself based on the `Query`. The `ExecutionOptions` allows you to pass in an already populated `Document` for that purpose. One thing with this sort of caching is making sure that your cache doesn't get overly bloated. So if we were going to provide some sort of caching mechanism for queries, I think we would want to make sure that it could be limited to, say, the top 10/50/100 queries or whatever fits your domain and your servers can handle. There's a specific caching pattern for that, though I'm not remembering the exact name of it at present (a sketch of one likely candidate follows this comment).
3. 👍 to this one
4. 👍 to this one
So I think it would be great to get 3 & 4 in, with a maybe on 2. I think the changes to 1 could go in a 2.1 or 3.0 release, along with finally fixing the serial mutation issue.
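As an aside, the size-limited caching pattern described above (keep only the top N queries) is most likely a least-recently-used (LRU) cache. Below is a minimal, non-thread-safe sketch of the idea; it is illustrative only and not part of graphql-dotnet. Keys would be the raw `Query` text and values the parsed `Document`, with a capacity of 10/50/100 as suggested above.

```csharp
using System.Collections.Generic;

// Minimal LRU cache: the most recently used entries sit at the front of the
// linked list, and the least recently used entry is evicted once the capacity
// is exceeded. A production version would need locking or a concurrent design.
public class LruCache<TKey, TValue>
{
    private readonly int _capacity;
    private readonly Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>> _map =
        new Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>>();
    private readonly LinkedList<KeyValuePair<TKey, TValue>> _order =
        new LinkedList<KeyValuePair<TKey, TValue>>();

    public LruCache(int capacity) { _capacity = capacity; }

    public bool TryGet(TKey key, out TValue value)
    {
        if (_map.TryGetValue(key, out var node))
        {
            _order.Remove(node);
            _order.AddFirst(node);      // mark as most recently used
            value = node.Value.Value;
            return true;
        }
        value = default(TValue);
        return false;
    }

    public void Put(TKey key, TValue value)
    {
        if (_map.TryGetValue(key, out var existing))
        {
            _order.Remove(existing);    // refresh an existing entry
            _map.Remove(key);
        }
        else if (_map.Count >= _capacity)
        {
            var oldest = _order.Last;   // least recently used entry
            _order.RemoveLast();
            _map.Remove(oldest.Value.Key);
        }
        var node = new LinkedListNode<KeyValuePair<TKey, TValue>>(
            new KeyValuePair<TKey, TValue>(key, value));
        _order.AddFirst(node);
        _map[key] = node;
    }
}
```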
username_2: For the validation caching, what if a `bool Cacheable` property was added to the `IValidationRule` interface, and the validation result was cached only if none of the validation rules were uncacheable? That would provide a way for must-run rules to be executed while still allowing static rule results to be cached.
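A rough sketch of that suggestion, purely illustrative (the real `IValidationRule` interface has other members, and the flag shown here is the hypothetical addition):

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical shape of the proposed flag: the validation result is cached only
// when every rule in play opts in, so must-run rules (e.g. auth checks) always execute.
public interface ICacheableValidationRule
{
    bool Cacheable { get; }
}

public static class ValidationCachePolicy
{
    public static bool CanCache(IEnumerable<ICacheableValidationRule> rules) =>
        rules.All(rule => rule.Cacheable);
}
```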
username_3: @username_0 Is this issue still relevant?
username_4: closing this due to no response. @username_0 please ping back if it still affects you
Status: Issue closed
username_3: May be related to #942 |
the-refinery/workflow | 359284452 | Title: Missing template error when loading dashboard
Question:
username_0: We are using another Craft plugin called Fractal.
I don't think it was installed on the version you had but now that it is there is some sort of conflict occurring. It very well may be the fault of Fractal, I'm not sure.
I'm going to disable Workflow for now until we can figure it out.
Feel free though to go to 3.staging.lynn.edu/lynnedu_admin and turn Workflow back on, then head to the dashboard to see the error occur.
craft\web\twig\TemplateLoaderException: Unable to find the template “lynn-workflow/_widget/settings”. in /chroot/home/staglynn/3.staging.lynn.edu/vendor/ournameismud/fractal/src/Fractal.php:214
Stack trace:
#0 /chroot/home/staglynn/3.staging.lynn.edu/vendor/ournameismud/fractal/src/Fractal.php(159): ournameismud\fractal\FractalTemplateLoader->_findTemplate('lynn-workflow/_...')
#1 /chroot/home/staglynn/3.staging.lynn.edu/vendor/twig/twig/lib/Twig/Environment.php(270): ournameismud\fractal\FractalTemplateLoader->getCacheKey('lynn-workflow/_...')
#2 /chroot/home/staglynn/3.staging.lynn.edu/vendor/twig/twig/lib/Twig/Environment.php(350): Twig_Environment->getTemplateClass('lynn-workflow/_...')
#3 /chroot/home/staglynn/3.staging.lynn.edu/vendor/craftcms/cms/src/web/twig/Environment.php(40): Twig_Environment->loadTemplate('lynn-workflow/_...', NULL)
#4 /chroot/home/staglynn/3.staging.lynn.edu/vendor/twig/twig/lib/Twig/Environment.php(289): craft\web\twig\Environment->loadTemplate('lynn-workflow/_...')
#5 /chroot/home/staglynn/3.staging.lynn.edu/vendor/craftcms/cms/src/web/View.php(331): Twig_Environment->render('lynn-workflow/_...', Array)
#6 /chroot/home/staglynn/3.staging.lynn.edu/vendor/therefinery/lynnworkflow/src/widgets/Submissions.php(44): craft\web\View->renderTemplate('lynn-workflow/_...', Array)
#7 /chroot/home/staglynn/3.staging.lynn.edu/vendor/craftcms/cms/src/controllers/DashboardController.php(70): therefinery\lynnworkflow\widgets\Submissions->getSettingsHtml()
#8 [internal function]: craft\controllers\DashboardController->actionIndex()
#9 /chroot/home/staglynn/3.staging.lynn.edu/vendor/yiisoft/yii2/base/InlineAction.php(57): call_user_func_array(Array, Array)
#10 /chroot/home/staglynn/3.staging.lynn.edu/vendor/yiisoft/yii2/base/Controller.php(157): yii\base\InlineAction->runWithParams(Array)
#11 /chroot/home/staglynn/3.staging.lynn.edu/vendor/craftcms/cms/src/web/Controller.php(103): yii\base\Controller->runAction('index', Array)
#12 /chroot/home/staglynn/3.staging.lynn.edu/vendor/yiisoft/yii2/base/Module.php(528): craft\web\Controller->runAction('index', Array)
#13 /chroot/home/staglynn/3.staging.lynn.edu/vendor/craftcms/cms/src/web/Application.php(282): yii\base\Module->runAction('dashboard/index', Array)
#14 /chroot/home/staglynn/3.staging.lynn.edu/vendor/yiisoft/yii2/web/Application.php(103): craft\web\Application->runAction('dashboard/index', Array)
#15 /chroot/home/staglynn/3.staging.lynn.edu/vendor/craftcms/cms/src/web/Application.php(271): yii\web\Application->handleRequest(Object(craft\web\Request))
#16 /chroot/home/staglynn/3.staging.lynn.edu/vendor/yiisoft/yii2/base/Application.php(386): craft\web\Application->handleRequest(Object(craft\web\Request))
#17 /chroot/home/staglynn/3.staging.lynn.edu/html/index.php(21): yii\base\Application->run()
#18 {main}
Twig Template Loading Error – craft\web\twig\TemplateLoaderException
Unable to find the template “lynn-workflow/_widget/settings”.
1. in /chroot/home/staglynn/3.staging.lynn.edu/vendor/ournameismud/fractal/src/Fractal.php at line 214
}
}
else
{
$template = Craft::$app->getView()->resolveTemplate($name);
}
if (!$template)
{
//throw new TemplateLoaderException($name);
throw new TemplateLoaderException($name, Craft::t('app', 'Unable to find the template “{template}”.', ['template' => $name]));
}
return $template;
}
}
Answers:
username_0: Finding similar errors on other pages as well:
On Craft's plugin page (Settings > Plugins), the workflow plugin includes links for documentation and settings. The documentation link reloads the plugin page. The settings link gives the following error:
Twig Template Loading Error – craft\web\twig\TemplateLoaderException
Unable to find the template “lynn-workflow/_layouts” in "lynnworkflow/settings" at line 1.
Clicking the workflow link in the control panel's sidebar produces the following error:
Twig Template Loading Error – craft\web\twig\TemplateLoaderException
Unable to find the template “lynn-workflow/_layouts” in "lynnworkflow" at line 2.
username_1: Emailed an update, but this error looks to be the fault of Fractal. I am calling my templateFiles in the way specified by the Craft CMS documentation here (https://docs.craftcms.com/v3/extend/updating-plugins.html#rendering-templates) but Fractal appears to be overriding this somehow.
username_1: er, nope. Nevermind. I was reading the error messages incorrectly. Working to fix this now. |
thewca/wca-documents | 436251896 | Title: Update Dues System Policy to include scope for Regional Organisations
Question:
username_0: Seeing the updates to the Dues System Policy (v1.2) made me think of additional detail that could be added to the policy to reflect how WCA Dues are handled for regional organisations.
2.1 makes it clear that the paying of the dues can be done by an organisation, but it is ultimately the responsibility of the WCA Delegate(s). In practice, Speedcubing Australia Inc. is invoiced for all competitions that take place in Australia - this works extremely well as a system and suits us!
Personally, I think it would be good if the role regional organisations can play is recognised in a later version of the policy. Responsibility for payment could ultimately rest on the regional organisation itself, so long as it is recognised by the WCA and has approval by all the region's delegates to act on their behalf in financial matters.
Answers:
username_1: I think this is reasonable.
username_2: I personally prefer that the ultimate responsibility still relies on the WCA Delegate(s) of the competition.
username_3: Just to point out a different situation: in Madrid, WCA Dues are usually paid by the Madrid association instead of by the Spanish association, and the officially recognized one is the latter.
username_4: Resolved in latest update
Status: Issue closed
|
ostroproject/ostro-os-xt | 175179525 | Title: OpenSSL vulnerabilities CVE-2016-2177, CVE-2016-2178, and CVE-2016-2180
Question:
username_0: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-2177 , https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-2178 , and https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-2180 reported by cve-checker.
Answers:
username_0: For CVE-2016-2177, the CVSS base score is 5.9 -- Medium. For CVE-2016-2178, the CVSS base score is 5.5 -- Medium. These probably apply to Ostro XT use cases too.
https://nvd.nist.gov/cvss/v3-calculator?name=CVE-2016-2177&vector=AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H
https://nvd.nist.gov/cvss/v3-calculator?name=CVE-2016-2178&vector=AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N
For CVE-2016-2180, the CVSS base score is 7.5 -- High. After modifiers are applied, the CVSS rating is 7.2 -- High.
https://nvd.nist.gov/cvss/v3-calculator?name=CVE-2016-2180&vector=AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
Status: Issue closed
|
jupyterhub/dockerspawner | 226496981 | Title: Question: singleuser write permission error
Question:
username_0: Hi,
This is more a Docker question but affects the singleuser notebook I think.
I have Jupyterhub with the DockerSpawner launch a new notebook using Swarm. I have an NFS folder mounted on the host like:
```
/appdata/jupyter-notebooks
```
So when a user logs in the notebook is launched and the folder for the user is created by docker like:
```
/appdata/jupyter-notebooks/grodriguez
```
So this folder is mounted as /appdata/jupyter-notebooks/grodriguez:/home/jovyan/work:rw, so all OK up to now.
But then when I try to write a file, the notebook says it doesn't have enough permissions to write!
Then I check the /appdata/jupyter-notebooks/grodriguez folder
```
drwxr-xr-x 2 root root 4096 May 5 16:49 grodriguez
```
So this folder is restricted and will only allow root to write in there so the container can't write to that folder. If I change the permissions like:
```
chmod 777 /appdata/jupyter-notebooks/grodriguez
```
Then the notebook can write just fine.
So then I decided to test this, I went into the host and manually run my notebook image and write a file just to test the permissions:
```
docker run -v /home/core/test:/test --entrypoint /bin/bash docker-registry.aws.cmcrc.com:5000/cmcrc/uptick-notebook -c "echo 123 > /test/test.log"
/bin/bash: /test/test.log: Permission denied
```
Again, the test folder was created with rwxr-xr-x permissions.
If I change and run as root, all goes ok:
```
docker run -v /home/core/test:/test --user root --entrypoint /bin/bash docker-registry.aws.cmcrc.com:5000/cmcrc/uptick-notebook -c "echo 123 > /test/test.log"
```
So I guess the jovyan user is not able to write to the folder created by docker. But I don't want to use the root user as this image will be used on a public server.
Any ideas on how to solve this?
Thanks!
Answers:
username_1: Yeah, if docker mounts a volume that doesn't exist, it will be created as owned by root with 755 permissions. I'm not sure if there's a way to change this. What you'll need to do is make sure the directory is created with the right owner and/or permissions prior to the container mounting it.
username_0: I see... well... I will have to add a 'USER root' line at the end while we find a way to solve this.
Status: Issue closed
|
GirugaCode/CS-1.3 | 464025926 | Title: Feedback: Trees & Tree Traversals
Question:
username_0: - Your binary tree runs well for the most part, but your `_find_parent_node_recursive` method fails some tests. If you try and run it in your `insert` method (replace its iterative counterpart), you will see it fails tests.
- The bug occurs when you handle the first base case. You return `None`, but you should return `parent`. The goal of this first `if` statement is not only to handle the case that the root does not exist; it also handles the case when you are trying to insert a node that doesn't exist yet. You can imagine that this function traverses to the spot where a node should be inserted (it does this by traversing until it hits a `None`), and then returns the parent of this empty spot (the empty spot is None). Returning `parent` handles the no-root case because parent is first set to None as a default argument (and it will return when node, or self.root, is None), and it handles this new case when a new node needs to be inserted.
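To make that concrete, here is the same base-case idea sketched in C# (the course code itself is in another language, and the names are adapted for illustration): returning `parent` covers both the missing-root case and the insert case, because the traversal stops at the empty spot and hands back that spot's parent.

```csharp
// Simplified binary search tree node for the sketch.
public class Node
{
    public int Data;
    public Node Left, Right;
}

public static class TreeHelpers
{
    // Returns the parent of the node containing `item`, or the parent of the
    // empty spot where `item` would be inserted. With no root, parent stays null.
    public static Node FindParentRecursive(int item, Node node, Node parent = null)
    {
        if (node == null)
            return parent;              // empty spot reached: return its parent, not null
        if (item == node.Data)
            return parent;              // node found: return its parent
        return item < node.Data
            ? FindParentRecursive(item, node.Left, node)
            : FindParentRecursive(item, node.Right, node);
    }
}
```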
Answers:
username_1: Thank you for the feedback! I fixed it and got it!
Status: Issue closed
|
jenkinsci/junit-plugin | 703829878 | Title: Integrate with checks-api plugin
Question:
username_0: This will allow publishing test results to GitHub checks
See https://github.com/jenkinsci/checks-api-plugin/blob/master/docs/consumers-guide.md
and https://github.com/jenkinsci/warnings-ng-plugin/pull/550
This should be updated in the JunitStep https://github.com/jenkinsci/junit-plugin/blob/master/src/main/java/hudson/tasks/junit/pipeline/JUnitResultsStep.java
Answers:
username_0: cc @username_1 in case you want to add anything
username_1: LGTM now. I can help with it after this hiring season if nobody is still interested in it by then.
Status: Issue closed
username_2: FYI, I filed [JENKINS-64714](https://issues.jenkins.io/browse/JENKINS-64714) requesting a similar feature in the [xUnit plugin](https://plugins.jenkins.io/xunit/). |
moov-io/wire | 922997445 | Title: api: empty response when file ID not found for FEDWireMessage endpoint
Question:
username_0: <!-- Please fill out the following questions, thanks! -->
Wire Version: `0.7.2`
**What were you trying to do?**
Replace a file's existing message using `/FEDWireMessage`, where the file ID does not exist.
**What did you expect to see?**
A 404 error response.
**What did you see?**
Empty reply from server.
**How can we reproduce the problem?**
`curl -X POST -d "@fedWireMessage-FEDFundsSold.json" localhost:8088/files/invalid-id/FEDWireMessage -i`
Status: Issue closed |
da2x/EdgeDeflector | 268457905 | Title: not working after fall creators update
Question:
username_0: edge-deflector stopped working after upgrading windows 10 with the fall creators update to v.1709.
Manually setting edge-deflector as the default app in System Settings > Apps > Default Apps > By protocol, so that edge-deflector is used for the MICROSOFT-EDGE protocol, doesn't work either.
Status: Issue closed
Answers:
username_1: Duplicate of issue #14. Please upgrade to the [latest version](https://github.com/username_1/EdgeDeflector/releases).
jesus2099/konami-command | 384865210 | Title: Upvoted tags are not seen if not new. Downvoted tags are still seen if not removed.
Question:
username_0: Seen with [any of the others tags or genres](https://musicbrainz.org/artist/5441c29d-3602-4898-b1a1-b77fa23b8e50).
Answers:
username_0: It’s because I only scan for the `DOMNodeInserted` event; I should also intercept `DOMAttributeModified` or something like that, if such a thing exists.
username_0: - I added `tag1, 2, 3` they were all 3 ownified (now without change). :+1:
- If I upvote existing tag, it is not ownified. :-1:
- I add `british, 1, 2, 3` (where british is existing), only 1, 2 and 3 are ownified. :-1:
- I downvote `british` (after having upvoted it and reload page), it remains ownified. :-1:
Status: Issue closed
username_1: Should this already have been solved by https://github.com/username_0/konami-command/commit/4fa74ddc55ec51927562f6e9d7215e2b43b1120b? Currently I can not upvote/downvote/withdraw any tag in Firefox 83.0 with TAG_TOOLS (from version 2020.11.27) enabled. It seems that the MBS code is somehow blocked by the userscript because even after a page reload the tag sidebar has not changed.
username_0: Oh yes, I will try.
I think I know why this bug, I had to stop the event propagation for some reason but it seems I find another way to avoid that.
Thanks very much, @username_1!
username_0: Seen with [any of the other tags or genres](https://musicbrainz.org/artist/5441c29d-3602-4898-b1a1-b77fa23b8e50/details).
---
- I added `tag1, 2, 3` they were all 3 ownified. :green_circle:
- If I upvote existing tag, it is not ownified. :negative_squared_cross_mark:
- I add `british, 1, 2, 3` (where british is existing), only 1, 2 and 3 are ownified. :negative_squared_cross_mark:
- I downvote `british` (after having upvoted it and reload page), it remains ownified. :negative_squared_cross_mark:
username_0: Hmm, I checked 4fa74ddc55ec51927562f6e9d7215e2b43b1120b, there is no kind of `stop()`.
I will debug as soon as possible.
username_0: I found it: it's my small text, inserted at the last minute into my script after all my tests, between MBS’s React (I suppose) JavaScript and the HTML it handles.
I moved it away and it is not blocked any more.
username_0: That's it @username_1, tell me if it works or not, now. Thanks.
Status: Issue closed
username_1: Thank you for the quick fix, it is working again 👍
username_0: Thanks for the confirm! |
xkbcommon/libxkbcommon | 1182657561 | Title: Wrong compose table prefix detection
Question:
username_0: test program:
```
#include <xkbcommon/xkbcommon-compose.h>
#include <locale.h>
#include <assert.h>
#include <iostream>
int main() {
const char *locale;
locale = setlocale(LC_CTYPE, NULL);
auto context = xkb_context_new(XKB_CONTEXT_NO_FLAGS);
auto table = xkb_compose_table_new_from_locale(context, locale, XKB_COMPOSE_COMPILE_NO_FLAGS);
auto state = xkb_compose_state_new(table, XKB_COMPOSE_STATE_NO_FLAGS);
auto result = xkb_compose_state_feed(state, XKB_KEY_dead_circumflex);
assert(result == XKB_COMPOSE_FEED_ACCEPTED);
auto status =
xkb_compose_state_get_status(state);
std::cout << status << std::endl;
assert(status == XKB_COMPOSE_COMPOSING);
result = xkb_compose_state_feed(state, XKB_KEY_e);
status =
xkb_compose_state_get_status(state);
assert(status == XKB_COMPOSE_COMPOSED);
return 0;
}
```
test ~/.XCompose
```
include "%L"
<dead_circumflex> <dead_circumflex> <e> : U1E19
```
XKB_LOG_LEVEL=debug
```
xkbcommon: DEBUG: Include path failed: /home/csslayer/.config/xkb (No such file or directory)
xkbcommon: DEBUG: Include path failed: /home/csslayer/.xkb (No such file or directory)
xkbcommon: DEBUG: Include path failed: /etc/xkb (No such file or directory)
xkbcommon: DEBUG: Include path added: /usr/share/X11/xkb
xkbcommon: WARNING: /usr/share/X11/locale/en_US.UTF-8/Compose:5089:46: this compose sequence is a duplicate of another; skipping line
xkbcommon: WARNING: /usr/share/X11/locale/en_US.UTF-8/Compose:5091:48: this compose sequence is a duplicate of another; skipping line
xkbcommon: WARNING: /usr/share/X11/locale/en_US.UTF-8/Compose:5093:48: this compose sequence is a duplicate of another; skipping line
xkbcommon: WARNING: /usr/share/X11/locale/en_US.UTF-8/Compose:5097:47: this compose sequence is a duplicate of another; skipping line
xkbcommon: WARNING: /usr/share/X11/locale/en_US.UTF-8/Compose:5099:46: this compose sequence is a duplicate of another; skipping line
xkbcommon: WARNING: /usr/share/X11/locale/en_US.UTF-8/Compose:5107:48: this compose sequence is a duplicate of another; skipping line
xkbcommon: WARNING: /usr/share/X11/locale/en_US.UTF-8/Compose:5111:46: this compose sequence is a duplicate of another; skipping line
xkbcommon: WARNING: /usr/share/X11/locale/en_US.UTF-8/Compose:5113:46: this compose sequence is a duplicate of another; skipping line
xkbcommon: WARNING: /usr/share/X11/locale/en_US.UTF-8/Compose:5117:45: this compose sequence is a duplicate of another; skipping line
xkbcommon: WARNING: /usr/share/X11/locale/en_US.UTF-8/Compose:5120:46: this compose sequence is a duplicate of another; skipping line
xkbcommon: WARNING: /home/csslayer/.XCompose:2:43: a sequence already exists which is a prefix of this sequence; overriding
xkbcommon: DEBUG: created compose table from locale C with path /home/csslayer/.XCompose
```
Expected result:
<dead_circumflex> <e> should produce ê, which is defined in /usr/share/X11/locale/en_US.UTF-8/Compose
```
<dead_circumflex> <e> : "ê" ecircumflex # LATIN SMALL LETTER E WITH CIRCUMFLEX
```
instead of being overridden by the entry in
```
<dead_circumflex> <dead_circumflex> <e> : U1E19
```
Xlib based composing works as expected.
Original report in fcitx: https://github.com/fcitx/fcitx5/issues/472 |
Vaalyn/Lucy-Light | 253160506 | Title: The currently playing song doesn't get updated
Question:
username_0: Due to a change in the Discord API the setGame method doesn't work in the current master branch of Discord.js.
- [ ] Update to current 11.1-dev branch
Status: Issue closed
Answers:
username_0: Fixed in 7f2a541e8a8ea3fbad92b3a0d066f61026ef91c9 |
aritchie/httptransfertasks | 388963550 | Title: File downloaded is encrypted
Question:
username_0: ## Please check all of the platforms you are having the issue on (if platform is not listed, it is not supported)
- [X] iOS
- [ ] Android
- [ ] UWP 16299+
- [X] .NET Standard 2.0
## Version of OS(s) listed above with issue
iOS 12
## Version of Library
2.0.1
## Expected Behaviour
I'm downloading a sqlite file from a blob storage. The file that I posted on the blob is a valid sqlite file, with a few tables created in it.
Once the download is finished, I get a file called CFNetworkDownload_XXXXXXX.tmp, which is fine; I then move and rename the file to the proper folder to be consumed by sqlite-net and the Azure offline SDK.
## Actual Behavior
I tried to open it with DB Browser for Sqlite, but there seems to be some kind of encryption on the file:
<img width="758" alt="screen shot 2018-12-08 at 4 58 15 pm" src="https://user-images.githubusercontent.com/7211075/49691167-8dad7500-fb0a-11e8-89f9-38e0503c4b07.png"> |
containers/buildah | 426547694 | Title: BUG: tagging is broken for multistage builds
Question:
username_0: [Now that I have access to the nightly ppa's (cf. containers/libpod#2250), I'm starting to file some issues against `podman` - and since the issue template there says that `podman build` issues should be filed for buildah, I'm starting here.]
Assume I have a toy dockerfile, say `podman_test.dockrf`:
```
FROM docker.io/library/ubuntu:bionic
RUN apt-get update && apt-get install curl -y
CMD ["bash"]
```
Then `sudo podman build -t podman_test -f podman_test.dockrf .` produces

So far so good. If I change the dockerfile slightly:
```diff
-FROM docker.io/library/ubuntu:bionic
+FROM docker.io/library/ubuntu:bionic as base
RUN apt-get update && apt-get install curl -y
CMD ["bash"]
```
then I get an untagged image (even though it successfully uses the cache):

Same for `sudo podman build -t podman_test --target=base -f podman_test.dockrf .`
This is quite basic functionality IMO - without tagging, many automation tasks are not possible.
Answers:
username_1: @username_0 thanks for the issue. This is definitely not acting correctly, as you surmised. I think the <none><none> is ok for image 21fe, but what I think should be happening is that we should have another image there named podman_test with an id of d6ca..., the last line from the build output there.
There appears to be something going on with the --layers option that's not letting that complete successfully; if I add `--layers=false` to the podman build command, then I see that image.
I'll have to dig more in the next day or two, and may need to check in with @username_2
username_0: Ideally (IMO) both dockerfiles should create exactly the same image, so that 21fe... is not untagged by the build, but stays the same. Builds not being idempotent is another issue I'm encountering (pending further tests).
username_2: I think #1473 will fix this.
username_0: @username_2 @username_1
I just tested the newest nightly after I saw that #1473 had been merged - unfortunately, the issue persists (assuming `http://ppa.launchpad.net/projectatomic/ppa/ubuntu` is up-to-date).
username_1: @username_0 I'm not sure that's up to date. @username_3 can you confirm or deny please?
username_3: @username_0 what version of ubuntu are you on? I see that 16.04 hasn't been updated in a long time. I'll fix that now. 18.04 was last built on 5th april. Travis auto-builds break often for buildah quite likely due to .travis.yml changes.
username_0: @username_3
I'm on Ubuntu 18.04. Is the ppa-update not scheduled regularly (e.g. daily)? Can you say if https://github.com/containers/buildah/commit/29a6c812d924d04402b87c4487ed2d2e1c55a70b was already part of the last build?
@username_2 @username_1
Seems that the last build I tested above (and which fails the scenario laid out in the OP) includes at least #1473, but possibly not #1487.
username_2: I think what you're after would conventionally be accomplished by building only through that stage with the `--target` flag, and by using the `-t` flag to specify a tag to apply to the final image.
The name specified in a `FROM ... AS ...` instruction is not treated as a tag for the image built at that stage, as its main use is to avoid requiring that subsequent `COPY --from=` instructions specify stages by number, which can get hard to maintain if you find you need to add a stage to the middle of a multi-stage Dockerfile.
username_0: This is exactly what is *not* working, and what this issue is about, see self-quote from OP. Neither does it work to build the full image (no `--target`) - the final image currently ignores the `-t` tag.
username_2: Aargh, I missed that. This appears to work correctly with buildah at least as of version ffd8d6a5de26e76f4446feadc837d55f9139dad8, though, so I expect we're looking at https://github.com/containers/libpod/pull/2931/ to incorporate those changes into podman.
username_0: @username_2
I've been trying to monitor the PRs and regularly tested with the most current nightly builds for podman/buildah. Seems that this specific commit hasn't made it to the PPA yet, so I can't verify. But thanks for fixing! :)
Yeah, would definitely need to update the version vendored by podman, as I'm planning to use this through podman.
username_0: @username_2
Never mind, I hadn't completely thought through the effects of the vendoring...
Using the latest PPA, it works correctly to do either of:
* `sudo buildah bud -t podman_test -f podman_test.dockrf .`
* `sudo buildah bud -t podman_test --target=base -f podman_test.dockrf .`
Status: Issue closed
username_0: Thanks!
username_0: Just checked with the new PPA after containers/libpod#2931, and now it works with `podman build` too! :) |
PaddlePaddle/Paddle | 945056897 | Title: Error reported when installing Paddle
Question:
username_0: To get your issue resolved quickly, before opening an issue please first search for similar problems via: [search issue keywords] [filter with labels] [official documentation]
When opening an issue, to get the problem resolved quickly, please provide the following information about your setup:
- Title: please include the keywords "installation error" / "compilation error", for example "Mac compilation error"
- Version and environment information:
1) PaddlePaddle version: 2.0.0
2) CPU: ---
3) GPU: 3090 graphics card, GPU version 11.2
4) System environment: CentOS, Python version 3.8
- Installation method information:
1) pip install: pip install paddlepaddle-gpu==2.0.0 -i https://mirror.baidu.com/pypi/simple
After installing the Paddle GPU version, running the paddle.utils.run_check() command reports an error.
Error log:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-3-afc005eaa74e> in <module>
----> 1 paddle.utils.run_check()
~/.jupyter/Paddlepaddle-2.0.0/lib/python3.8/site-packages/paddle/utils/install_check.py in run_check()
162 device_count = len(device_list)
163
--> 164 _run_static_single(use_cuda)
165 print("PaddlePaddle works well on 1 {}.".format(device_str))
166
~/.jupyter/Paddlepaddle-2.0.0/lib/python3.8/site-packages/paddle/utils/install_check.py in _run_static_single(use_cuda)
94 exe = paddle.static.Executor(
95 paddle.CUDAPlace(0) if use_cuda else paddle.CPUPlace())
---> 96 exe.run(startup_prog)
97 exe.run(train_prog,
98 feed={input.name: _prepare_data(1)},
~/.jupyter/Paddlepaddle-2.0.0/lib/python3.8/site-packages/paddle/fluid/executor.py in run(self, program, feed, fetch_list, feed_var_name, fetch_var_name, scope, return_numpy, use_program_cache, return_merged, use_prune)
1108 return_merged=return_merged)
1109 except Exception as e:
-> 1110 six.reraise(*sys.exc_info())
1111
1112 def _run_impl(self, program, feed, fetch_list, feed_var_name,
~/.jupyter/Paddlepaddle-2.0.0/lib/python3.8/site-packages/six.py in reraise(tp, value, tb)
701 if value.__traceback__ is not tb:
702 raise value.with_traceback(tb)
--> 703 raise value
704 finally:
705 value = None
~/.jupyter/Paddlepaddle-2.0.0/lib/python3.8/site-packages/paddle/fluid/executor.py in run(self, program, feed, fetch_list, feed_var_name, fetch_var_name, scope, return_numpy, use_program_cache, return_merged, use_prune)
1096 """
1097 try:
-> 1098 return self._run_impl(
1099 program=program,
1100 feed=feed,
~/.jupyter/Paddlepaddle-2.0.0/lib/python3.8/site-packages/paddle/fluid/executor.py in _run_impl(self, program, feed, fetch_list, feed_var_name, fetch_var_name, scope, return_numpy, use_program_cache, return_merged, use_prune)
1228 return_merged=return_merged)
1229
-> 1230 return self._run_program(
1231 program,
1232 feed=feed,
~/.jupyter/Paddlepaddle-2.0.0/lib/python3.8/site-packages/paddle/fluid/executor.py in _run_program(self, program, feed, fetch_list, feed_var_name, fetch_var_name, scope, return_numpy, use_program_cache)
1325
1326 if not use_program_cache:
-> 1327 self._default_executor.run(program.desc, scope, 0, True, True,
1328 [fetch_var_name])
1329 else:
RuntimeError: parallel_for failed: cudaErrorNoKernelImageForDevice: no kernel image is available for execution on the device
Answers:
username_1: Could you try upgrading paddle? pip install -U paddlepaddle-gpu==2.0.2
username_0: That works. By changing the version I was able to install and test successfully.
username_1: OK, great~
Status: Issue closed
|
larskrantz/logger_papertrail_backend | 216001955 | Title: Cannot config with `host`
Question:
username_0: After upgrading to 0.2.0, the following config fails, although it is still stated in the readme as a valid config:
```
config :logger, :logger_papertrail_backend,
host: "somehost:someport",
level: :info,
system_name: "my_app"
```
Changing to `url` fixes it:
```
config :logger, :logger_papertrail_backend,
url: "papertrail://somehost:someport/my_app",
level: :info
```
Answers:
username_0: This is the error I get when trying with the first config:
```
** (Mix) Could not start application logger: Logger.App.start(:normal, []) returned an error: shutdown: failed to start child: Logger.Watcher
** (EXIT) an exception was raised:
** (MatchError) no match of right hand side value: {:error, {{{:case_clause, {:EXIT, {%LoggerPapertrailBackend.ConfigurationError{message: "Unknown configuration: [host: \"<HOST>:<PORT>\", level: :info, system_name: \"legolas development\", format: \"$metadata $message\"]"}, [{LoggerPapertrailBackend.Configurator, :configure_papertrail_target, 1, [file: 'lib/configurator.ex', line: 56]}, {LoggerPapertrailBackend.Logger, :configure, 2, [file: 'lib/logger_papertrail_backend/logger.ex', line: 63]}, {LoggerPapertrailBackend.Logger, :init, 1, [file: 'lib/logger_papertrail_backend/logger.ex', line: 19]}, {:gen_event, :server_add_handler, 4, [file: 'gen_event.erl', line: 429]}, {:gen_event, :handle_msg, 5, [file: 'gen_event.erl', line: 274]}, {:proc_lib, :init_p_do_apply, 3, [file: 'proc_lib.erl', line: 247]}]}}}, [{Logger.Watcher, :do_init, 3, [file: 'lib/logger/watcher.ex', line: 78]}, {:gen_server, :init_it, 6, [file: 'gen_server.erl', line: 328]}, {:proc_lib, :init_p_do_apply, 3, [file: 'proc_lib.erl', line: 247]}]}, {:child, :undefined, {Logger.Watcher, {Logger, LoggerPapertrailBackend.Logger}}, {Logger.Watcher, :watcher, [Logger, LoggerPapertrailBackend.Logger, LoggerPapertrailBackend.Logger]}, :transient, 5000, :worker, [Logger.Watcher]}}}
(logger) lib/logger/watcher.ex:16: anonymous fn/2 in Logger.Watcher.start_link/3
(elixir) lib/enum.ex:1755: Enum."-reduce/3-lists^foldl/2-0-"/3
(logger) lib/logger/watcher.ex:15: Logger.Watcher.start_link/3
(stdlib) supervisor.erl:365: :supervisor.do_start_child/2
(stdlib) supervisor.erl:348: :supervisor.start_children/3
(stdlib) supervisor.erl:314: :supervisor.init_children/2
(stdlib) gen_server.erl:328: :gen_server.init_it/6
(stdlib) proc_lib.erl:247: :proc_lib.init_p_do_apply/3
```
username_1: Thanks for reporting, I'm on it, I can already see some brainfart-code in Configurator..
username_1: Published 0.2.1, closing #9
Status: Issue closed
username_0: Very nice. Thanks for the quick fix :+1: |
fxxkit/questor | 78380692 | Title: Hamburger design
Question:
username_0: <img src="https://cloud.githubusercontent.com/assets/1659204/7719508/f0378f4c-fef3-11e4-95db-b908bbcc6680.png" width=400px />
<img src="https://cloud.githubusercontent.com/assets/1659204/7719509/f05a5874-fef3-11e4-8240-e2495973017d.png" width=400px />
<img src="https://cloud.githubusercontent.com/assets/1659204/7719510/f0734ed8-fef3-11e4-8ab2-9efbf017beb0.png" width=400px /> |
bdrum/cern-grid | 842755173 | Title: Move a routine of script running to the GRID from aliases to make file
Question:
username_0: I have an idea that using cmake to generate make targets which allow me to run scripts in the GRID via commands such as:
~~~bash
make local
make test
make full
~~~
is better than using aliases.
The main advantage is that I can assemble the macro folder in the build directory and avoid copying the code files by hand.
zeckson/html-nerds | 538691428 | Title: B10. Alternative font variants and the family type are specified at the end of the font-family list.
Question:
username_0: `font.css` ( указан только основной шрифт и тип семейства, альтернативный веб-безопасный шрифт отсутствует.):
css/font.css:97
css/font.css:105
css/font.css:115
css/font.css:123
css/font.css:130
css/font.css:137
css/style-header.css:35 - an unknown font is specified
css/style-header.css:89 - only the main font and the family type are specified; the alternative web-safe font is missing
Answers:
username_1: check all the font declarations for 2 fonts plus the family type:
roboto.arial.sans
username_0: css/style-index.css:228 - only the main font and the family type are specified; the alternative web-safe font is missing
css/style-catalog.css:105 - an unknown font `Roboto` is specified (capitalized); only the main font and the family type are specified, and the alternative web-safe font is missing
LGoodDatePicker/LGoodDatePicker | 154158326 | Title: Feature request: Add optional "week of year" numbers to the left side of the calendar panel.
Question:
username_0: Add optional "week of year" numbers to the left side of the calendar panel.
Note: These would need to be locale sensitive. Each locale specifies the definition of the first week of the year, including the first day of the week, and the minimum number of days that constitutes the first week of the year.
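LGoodDatePicker itself is a Java library, but just to illustrate the locale rules described above, here is the same idea expressed with .NET's globalization API, where each culture carries its own first day of week and first-week-of-year rule:

```csharp
using System;
using System.Globalization;

// Prints the week-of-year for the same date under different cultures; the
// result differs because each culture defines its own week rules.
class WeekOfYearDemo
{
    static void Main()
    {
        var date = new DateTime(2016, 1, 1);
        foreach (var name in new[] { "en-US", "de-DE" })
        {
            var dtf = new CultureInfo(name).DateTimeFormat;
            int week = dtf.Calendar.GetWeekOfYear(date, dtf.CalendarWeekRule, dtf.FirstDayOfWeek);
            Console.WriteLine($"{name}: week {week} (rule: {dtf.CalendarWeekRule}, first day: {dtf.FirstDayOfWeek})");
        }
    }
}
```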
Status: Issue closed
Answers:
username_0: This feature was implemented as of [LGoodDatePicker 7.2.1](https://github.com/LGoodDatePicker/LGoodDatePicker/releases). |
mduu/tauchbolde | 379515811 | Title: Add a date-time-picker
Question:
username_0: Currently the from / to date-time field for events is a plain text input field. For more comfort it would be great if it offers date-time picker functionality.
**Ideas:**
- Use HTML5 date-time type inputs so browsers that support this can render a native picker
- Use some 3rd party date-time picker library which will be supported on all supported browsers<issue_closed>
Status: Issue closed |
coenjacobs/mozart | 725166333 | Title: Sub-dependencies are not processed correctly.
Question:
username_0: I'm trying to change the namespace for `Twig` and here is my `composer.json`:
```
"mozart": {
"dep_namespace": "MBViews\\Dependencies\\",
"dep_directory": "/src/Dependencies/",
"classmap_directory": "/classes/dependencies/",
"classmap_prefix": "MBV_",
"packages": [
"twig/twig"
],
"override_autoload": {},
"delete_vendor_directories": true
}
```
And here is a screenshot of the `Dependencies` folder, which looks quite right. The Twig and Symfony packages are moved correctly.

However, inside the `Dependencies/Symfony/Polyfill/Mbstring/bootstrap.php` file, there are lines like this:
```php
use Symfony\Polyfill\Mbstring as p;
if (!function_exists('mb_convert_encoding')) {
function mb_convert_encoding($s, $to, $from = null) { return p\Mbstring::mb_convert_encoding($s, $to, $from); }
}
// More lines
```
Note the `use` statement here. It should be prepended with our configuration in the `composer.json`, making it `MBViews\Dependencies\Symfony\Polyfill\Mbstring`.
Currently I change it manually, because only 2 packages from Symfony need the change. But if there were many packages, the work would be huge.
Answers:
username_1: @username_0 Can you perhaps try again with the current `master` branch, now that some of the more mature recursive package fixes have been merged? I expect this to be fixed. If not, could you please provide your entire `composer.json` file here?
username_0: @username_1 I just got a chance to try mozart again and looks like this issue is fixed!
Thanks for your library.
Status: Issue closed
username_1: Glad to hear that @username_0 - happy it's serving you well! |
cdnjs/cdnjs | 235086034 | Title: [Request] Add covervid.min.js
Question:
username_0: **Library name:** covervid
**Git repository url:** https://github.com/stefanerickson/covervid
**npm package name or url** (if there is one):
**License (List them all if it's multiple):** The MIT License (MIT)
**Official homepage:** http://stefanerickson.github.io/covervid/
**Wanna say something? Leave message here:**
=====================
Notes from cdnjs maintainer:
Please read the [README.md](https://github.com/cdnjs/cdnjs#cdnjs-library-repository) and [CONTRIBUTING.md](https://github.com/cdnjs/cdnjs/blob/master/CONTRIBUTING.md) document first.
We encourage you to add a library via sending pull request,
it'll be faster than just opening a request issue,
since there are tons of issues, please wait with patience,
and please don't forget to read the guidelines for contributing, thanks!!
Answers:
username_1: duplicated with #5427
username_2: Let's move to #5427
Status: Issue closed
|
seqan/seqan | 149476779 | Title: indexRequire() doesn't work for FMIndex
Question:
username_0: I thought the usual interface for creating an index quickly was:
```c++
Index<StringSet<String<Iupac>, Owner<ConcatDirect<>>>, IndexSa<> > ix(seqs);
indexRequire(ix, EsaSa());
```
This works. But when I do:
```c++
Index<StringSet<String<Iupac>, Owner<ConcatDirect<>>>, FMIndex<> > ix(seqs);
indexRequire(ix, FibreLF());
```
I get all sorts of errors. Instead I have to use `indexCreate(ix)` (no further arguments!) for creating the FMIndex. OTOH when doing the same for an SA-Index you need to specify more arguments. Is there a reason for this inconsistency?
Do you know anything about this @esiragusa ? Could you discuss with @username_1 how to improve this for the future?
Answers:
username_1: I cannot test it right now but I think for the FMIndex the following should work:
indexRequire(ix, FibreSALF());
username_1: @username_0 ``indexRequire(ix)`` does not work for me, instead you should use ``FibreSALF()`` as suggested above.
``indexRequire(ix, FibreLF());`` doesn't make sense for the FM index since you need to create the SA before you can create the LF table. That's probably why they introduced the ``FibreSALF`` tag.
username_0: Ah, ok. I think we should look for some unified interface that makes sure the index is instantiated, but I'll move it to 3.0.
username_1: I close this issue now, it doesn't make sense to keep this issue open for 3.0. @username_0 Please reopen if you feel differently about it.
Status: Issue closed
|
prescottprue/react-redux-firebase | 325035485 | Title: Bug: updateProfileOnLogin not honored for signInWithEmailAndPassword authentication
Question:
username_0: **What is the current behavior?**
It does not create the user profile.
see: https://github.com/username_1/react-redux-firebase/blob/master/src/actions/auth.js#L447
```javascript
if (method === 'signInWithEmailAndPassword') {
return { user: userData };
}
```
**If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem via [codesandbox](https://codesandbox.io/) or similar.**
To reproduce, remove the user profile of someone who has already signed up.
Case where this can happens:
- activating userProfile with already created users
- switching between firebase and firestore for storing profiles
- bugs..
**What is the expected behavior?**
I would expect the userProfile to be updated on login of the user, it ease the migration.
Feel free to tell me how to help. I can author a patch if this is recognised as a bug.
Answers:
username_1: @username_0 This is intentional, as `createUser` is supposed to be used to create new users when logging in with email/password [as outlined in the auth section of the docs](http://react-redux-firebase.com/docs/auth.html#createusercredentials-profile). Does that not work for your use case? Or was it just unclear since it is different than other login types?
username_0: I was expecting the profile to be created when the user logs in, as they are not new users, just users without profiles.
username_0: Would you mind explaining the rationale for not updating the profile on login, as the option suggests? It seems to remove some corner cases. What do you think?
username_1: @username_0 The original intention was to keep usage of `createUser` for email signup - my thought was that it would be unclear to mention that `login` creates a profile in RTDB, but not within the authentication tab.
I think that this would remove some corner cases, so going to look into making the change. Open to a PR if you get a chance.
username_1: Started work on this on the `v2.2.0` branch. Did this since we will want a minor version bump for the functionality change. |
swaywm/sway | 400376167 | Title: Thunar FileManager Segfaults
Question:
username_0: I've been trying to use thunar file manager with sway but each time it fails with a segfault.
I tried running thunar with gdb and here is the log:
```
(gdb) run
Starting program: /usr/bin/thunar
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/libthread_db.so.1".
[New Thread 0x7ffff352e700 (LWP 2577)]
[New Thread 0x7ffff2d2d700 (LWP 2578)]
[New Thread 0x7ffff24ea700 (LWP 2579)]
[New Thread 0x7ffff1ce9700 (LWP 2580)]
Thread 1 "thunar" received signal SIGSEGV, Segmentation fault.
0x00007fff00000033 in ?? ()
```
I'm not sure what is the problem and this occurs every single time.
This is the sway version I'm running:
```
sway version 1.0-beta.2-103-g728e5700 (Jan 6 2019, branch 'master')
```
What file manager works well with swaywm?
Answers:
username_1: Please don't work around the issue, instead help fix it! :)
username_2: I have thunar also crashing when launching with "GDK_BACKEND=wayland" in .bashrc, as soon as I interact with it... no problems with the x11 backend.
Status: Issue closed
username_1: Please report this to the Thunar devs. |
amcharts/amcharts4 | 855025185 | Title: add search option to chart
Question:
username_0: Hi, I have a bubble chart that shows more than 600 bullets. Is there any way that users can search for a bullet in the chart?
Like when you search for its name, its tooltip comes up?
Thanks!
Answers:
username_1: I'm afraid, there's no such functionality built-in or planned for the future. Sorry.
Status: Issue closed
|
cascadiajs/2015.cascadiajs.com | 49040938 | Title: Talk: Asynchronous js without promises
Question:
username_0: I would like to see a talk about async programming in js without using promises
Answers:
username_1: :+1:
Status: Issue closed
username_2: Dibs. Created talk proposal #76.
If anyone else wants it though, I'm happy to let someone else have it.
username_3: I should have submitted #80 this last night instead of re-writing it 100 times :).
*Furiously re-reading Lean Startup*
I DO want to talk about Promises but I don't want to focus on a specific technology I want to talk patterns so I'm not sure if this talk fits or not. @username_2 Wanna :moyai: :page_facing_up: :scissors: for it?
username_2: I throw :bomb:! I win!
Kidding. :)
What I was planning to talk about was when different systems are useful, since they all have a place. I was going to address callbacks and (dom) events as different systems, since while they are similar there are different practical applications. Then there's also event emitters which have subtly different applications. I wanted to introduce a somewhat novel idea of state machines for async control since they have been helpful to me in my work, and I could see them helping others also. Moreover, my focus was going to be on what we can do *today* rather than tomorrow/someday.
Looking at your proposal, I hadn't considered web workers as part of this, and while I'm not sure what place they could hold, I'm curious to see what you'd use them for. I have yet to see a useful example of generators besides "look I can calculate Fibonacci numbers" so I'd also love to hear how that would be practical.
So, given all that, do you think that our proposals are different enough? Is there overlap, or are we approaching the problem from different directions, with different solutions?
But bottom line, I'd prefer you getting your talk over me getting it, so if you want it I'll close mine.
username_4: With #74 I'd like to focus on state machines / statecharts, so 'async-without-promises' could potentially become a series of talks. I'd be down to collaborate.
username_2: Oh cool, I didn't quite grok that from the proposal. :) I'll close my proposal.
username_3: @username_2 - Thanks for the great response I'd love to get to chat at Cascadia about some of that stuff - especially where you're using state machines for async. I'm pretty sure that Promises are built with state machines btw. I'll be implementing a mini Promise lib to do some research and I'll find out more on that.
@username_4 - Since @username_2 closed his that leaves the two of us so far with this async stuff. I wasn't really going to do much with state machines but we should definitely collaborate a bit so we don't overlap too much.
username_3: @username_0 - What's wrong with Promises anyway? I quite like them. |
botkalista/discord.js.userbot | 895423144 | Title: (node:1873) UnhandledPromiseRejectionWarning: Error: Not supported
Question:
username_0: got this error while starting my bot :
`(node:1873) UnhandledPromiseRejectionWarning: Error: Not supported
at WebSocketManager.client.ws.connect (/home/debian/Discord/Music-bot/node_modules/discord.js.userbot/index.js:28:68)
at Client.login (/home/debian/Discord/Music-bot/node_modules/discord.js/src/client/Client.js:223:21)
at process._tickCallback (internal/process/next_tick.js:68:7)
at Function.Module.runMain (internal/modules/cjs/loader.js:834:11)
at startup (internal/bootstrap/node.js:283:19)
at bootstrapNodeJSCore (internal/bootstrap/node.js:623:3)
(node:1873) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 2)
(node:1873) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.` |
tinymce/tinymce | 530783709 | Title: Invalid css values in skin.css
Question:
username_0: There are a few CSS errors and warnings in skin.min.css for TinyMCE 5
https://jigsaw.w3.org/css-validator/validator?uri=http%3A%2F%2Ffiddle.tinymce.com%2Ftinymce%2F5.1.2%2Fskins%2Fui%2Foxide%2Fskin.min.css&profile=css3svg&usermedium=all&warning=1&vextwarning=&lang=en
Answers:
username_1: Ouch, definitely need to look over those errors. Thx for calling us out on it.
Status: Issue closed
username_1: This has been fixed and will be released with an upcoming version |
ibm-openbmc/dev | 697435311 | Title: Update forked HB scripts into pdata
Question:
username_0: Update the MRW XML processing scripts in pdata because the processing logic for a few targets available in the MRW XML has changed based on the recent MRW system XML. Those changes are available in hostboot and are applicable to the pdata scripts (which were forked from hostboot).
Answers:
username_0: Patch: https://github.ibm.com/phal/pdata/pull/33
Status: Issue closed
|
django-parler/django-parler | 263056651 | Title: Hiding languages in LANGUAGE columns in admin list_display
Question:
username_0: Say I had three languages - English, Korean and Chinese - and now I have just two - English and Korean - set as such in settings.py as well.
The challenge is I still see 'Chinese' in the languages column of the admin list_display. Of course, when I click the Chinese language, I get a '502 Bad Gateway' error. Obviously, Chinese is NOT in settings.py. Yes, the language tabs are all fine - they just show the languages defined in settings.py
It looks confusing and certainly clients will be tempted to click the language. How can I hide the languages not in settings.py?
Status: Issue closed
Answers:
username_1: Do you have `PARLER_SHOW_EXCLUDED_LANGUAGE_TABS` enabled by accident? |
vincentbernat/dashkiosk | 314969514 | Title: Choosing the name of a display in dashkiosk
Question:
username_0: Hello.
I'm currently working on a project where I have to use dashkiosk to display websites on different displays depending on their role. Each display has their own name (serial number) given by the server at their first connection, and it is stored locally in the display as a cookie.
The thing is that the name is given randomly. I'd like to be able to name them properly to be able to identify them in the admin board of dashkiosk, maybe even include it in the URL, like "https://example.com/receiver?name=XXXXX".
If you have any idea on how to proceed, please let me know.
Answers:
username_1: Receivers cannot choose their name for security reasons (you don't want them to hijack an existing display and get its configuration). However, this can be patched out and the name could only be a suggestion to server.
However, did you notice you can give each display a description? Maybe this would be good enough for you?
username_0: Yes, we already gave a description to the displays.
Maybe if we cannot give a name to a display, we can suggest to the server that it wants to be in a group if it's not assigned yet? So each display could be redirected into its 'default' group without having to do it manually on the admin board.
zarusz/SlimMessageBus | 786164379 | Title: NET 5 Support - Event Hubs
Question:
username_0: Are there any plans to support .NET 5 applications?
I have an application that I am upgrading from a netcoreapp3.1 to net5 and it seems like the EventHub code no longer works after I make the upgrade.
I get the following error when trying to publish a message:
Operation is not valid due to the current state of the object.
at Microsoft.Azure.Amqp.Transport.TransportStream.Flush()
at System.IO.Stream.<>c.<FlushAsync>b__39_0(Object state)
at System.Threading.Tasks.Task.InnerInvoke()
at System.Threading.Tasks.Task.<>c.<.cctor>b__277_0(Object obj)
at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop(Thread threadPoolThread, ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop(Thread threadPoolThread, ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(Task& currentTaskSlot, Thread threadPoolThread)
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at System.Net.Security.SslStream.<ForceAuthenticationAsync>d__171`1.MoveNext()
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.GetResult()
at Microsoft.Azure.Amqp.TaskHelpers.EndAsyncResult(IAsyncResult asyncResult)
at Microsoft.Azure.Amqp.StreamExtensions.EndAuthenticateAsClient(SslStream sslStream, IAsyncResult asyncResult)
at Microsoft.Azure.Amqp.Transport.TlsTransport.HandleOpenComplete(IAsyncResult result, Boolean syncComplete)
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at Microsoft.Azure.Amqp.ExceptionDispatcher.Throw(Exception exception)
at Microsoft.Azure.Amqp.AsyncResult.End[TAsyncResult](IAsyncResult result)
at Microsoft.Azure.Amqp.AmqpObject.OpenAsyncResult.End(IAsyncResult result)
at Microsoft.Azure.Amqp.AmqpObject.EndOpen(IAsyncResult result)
at Microsoft.Azure.Amqp.Transport.TlsTransportInitiator.HandleTransportOpened(IAsyncResult result)
at Microsoft.Azure.Amqp.Transport.TlsTransportInitiator.OnTransportOpened(IAsyncResult result)
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable`1.ConfiguredTaskAwaiter.GetResult()
at Microsoft.Azure.EventHubs.Amqp.AmqpEventHubClient.<CreateConnectionAsync>d__32.MoveNext()
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable`1.ConfiguredTaskAwaiter.GetResult()
at Microsoft.Azure.Amqp.FaultTolerantAmqpObject`1.<OnCreateAsync>d__6.MoveNext()
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at Microsoft.Azure.Amqp.Singleton`1.<GetOrCreateAsync>d__13.MoveNext()
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at Microsoft.Azure.Amqp.Singleton`1.<GetOrCreateAsync>d__13.MoveNext()
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable`1.ConfiguredTaskAwaiter.GetResult()
at Microsoft.Azure.EventHubs.Amqp.AmqpEventDataSender.<CreateLinkAsync>d__12.MoveNext()
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable`1.ConfiguredTaskAwaiter.GetResult()
[Truncated]
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at Microsoft.Azure.EventHubs.Amqp.AmqpEventDataSender.<OnSendAsync>d__10.MoveNext()
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at Microsoft.Azure.EventHubs.Amqp.AmqpEventDataSender.<OnSendAsync>d__10.MoveNext()
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.Azure.EventHubs.EventHubClient.<SendAsync>d__25.MoveNext()
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at SlimMessageBus.Host.AzureEventHub.EventHubMessageBus.<ProduceToTransport>d__9.MoveNext()
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.GetResult()
at SlimMessageBusTest.Program.<AddLoop>d__4.MoveNext() in
\SlimMessageBusTest\Program.cs:line 88
Answers:
username_1: Yes, the plan is to support .NET 5.
Thanks for the issue report. I will try to reproduce in the next days and see what is wrong. It might just require an upgrade of the underlying event hubs client.
username_1: Hey @username_0 , can you please give this one a try?
https://www.nuget.org/packages/SlimMessageBus.Host.AzureEventHub/1.10.1-rc1
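Note for anyone else following along: since this is a prerelease version it won't be offered by a normal package update unless you opt in to prereleases; pinning the version explicitly works, e.g. with the dotnet CLI run from the project directory:
```
dotnet add package SlimMessageBus.Host.AzureEventHub --version 1.10.1-rc1
```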
username_0: Thank you for the pre-release package. This update has fixed the issue I was having.
Status: Issue closed
username_0: @username_1 Hate to be a bother but I seem to be getting a similar error when trying to use the ServiceBus implementation. Could it also be as simple as updating NuGet packages?
username_1: I will check a bit later and upgrade to latest compatible Service Bus client....
username_1: @username_0 can you check this one?
https://www.nuget.org/packages/SlimMessageBus.Host.AzureServiceBus/1.10.1-rc2
Status: Issue closed
username_0: Sorry I forgot to reply here. This rc2 package is working for me.
username_1: The stable release is available:
https://www.nuget.org/packages/SlimMessageBus.Host.AzureEventHub/1.10.1
https://www.nuget.org/packages/SlimMessageBus.Host.AzureServiceBus/1.10.1 |
Noordsestern/robotframework-dwdweather | 489072309 | Title: Create map with stations
Question:
username_0: # Story
```gherkin
Given a list of stations
When a map is created from list of stations
Then a visual map is created
And all stations are marked at that map
And map is provided in a commong file format for pictures
``` |
nov/json-jwt | 164041368 | Title: Activesupport requires ruby >= 2.2.2
Question:
username_0: Ruby support for 2.1 is broken by activesupport 5.0.x.
Please pin activesupport to 4.2.6 if ruby < 2.2
Answers:
username_1: Or we just get rid of it, see #29
username_2: can't bundler solve it?
username_3: 👍 I just ran into this as well. Seems like a good short term fix is to update the dependency to < 5
Status: Issue closed
|
angular-material-extensions/contacts | 421088299 | Title: Installation error
Question:
username_0: ## Bug Report or Feature Request (mark with an `x`)
```
- [ x ] bug report -> please search issues before submitting
- [ ] feature request
```
### OS and Version?
MacOS Mojave
### Versions
Package Version
-------------------------------------------------------------
@angular-devkit/architect 0.13.0
@angular-devkit/build-angular 0.13.0
@angular-devkit/build-optimizer 0.13.0
@angular-devkit/build-webpack 0.13.0
@angular-devkit/core 7.3.0
@angular-devkit/schematics 7.3.0
@angular/animations 7.2.9
@angular/cdk 7.3.4
@angular/cli 7.3.0
@angular/common 7.2.3
@angular/compiler 7.2.3
@angular/compiler-cli 7.2.9
@angular/fire 5.1.1
@angular/flex-layout 7.0.0-beta.23
@angular/forms 7.2.9
@angular/language-service 7.2.3
@angular/material 7.3.4
@angular/material-moment-adapter 7.2.2
@angular/platform-browser 7.2.3
@angular/platform-browser-dynamic 7.2.3
@angular/pwa 0.13.0
@angular/router 7.2.3
@angular/service-worker 7.2.2
@ngtools/webpack 7.3.0
@schematics/angular 7.3.0
@schematics/update 0.13.0
rxjs 6.4.0
typescript 3.2.4
webpack 4.29.0
### Repro steps
registering the module in the app.module.ts:
import { NgxMaterialPagesModule } from '@angular-material-extensions/contacts';
Already indicates the error.
### The log given by the failure
ERROR in src/app/app.module.ts(24,10): error TS2305: Module '"../../node_modules/@angular-material-extensions/contacts/contacts"' has no exported member 'NgxMaterialPagesModule'.
### Desired functionality
Would like to use this extension in my app "SelfieTheGame"
### Mention any other details that might be useful
[Selfiegame hosted on Firebase](https://selfiespel-250de.firebaseapp.com/)
[Github repo](https://github.com/username_0/selfiespel)
Answers:
username_1: I'm sorry about that, there was a typo issue in the readme... the module of this library is `MatContactsModule`
Import the module like the following:
`import { MatContactsModule } from '@angular-material-extensions/contacts';`
I will update the documentation tonight!
Thank you 👍
Note: I can see that you are using [ngx-auth-firebaseui](https://github.com/username_1/ngx-auth-firebaseui) in your selfie app!! This is really awesome 💪
Status: Issue closed
|
SignalR/SignalR | 5383084 | Title: Support CamelCasePropertyNamesContractResolver when serialize
Question:
username_0: For issue 138:
https://github.com/SignalR/SignalR/issues/138
Can we use JsonProperty in the built-in objects to serialize to {"Url":"/signalr","ClientId":"194d19cf-219b-405e-9ed2-d5309d098022"} so that we can choose whether to use CamelCasePropertyNamesContractResolver or not.
Answers:
username_1: +1 for native support.
username_2: +1 for native support
username_3: When I try going the same route of creating a manual resolver, it's not working for me with the latest versions of SignalR and JSON.NET. The type JsonNetSerializer cannot be found, amongst other things; have these types moved namespaces or assemblies, or been renamed?
username_4: Using SignalR v2.2.0 and Autofac 3.5.2, if I use the code snippets above and register a binding with Autofac like this
```csharp
var settings = new JsonSerializerSettings {ContractResolver = new SignalRContractResolver()};
var serializer = JsonSerializer.Create(settings);
builder.Register(c => serializer).As<JsonSerializer>();
```
then the camel case resolver is being used.
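For later readers who don't have the earlier snippets handy, here is a minimal sketch of what such a resolver usually looks like. It assumes SignalR 2.x and Json.NET, and it is an illustration rather than the exact `SignalRContractResolver` referenced above:
```csharp
using System;
using System.Reflection;
using Microsoft.AspNet.SignalR;
using Newtonsoft.Json.Serialization;

public class SignalRContractResolver : IContractResolver
{
    // Types from the SignalR core assembly (hub protocol messages etc.) keep the default
    // PascalCase contract so the client/server handshake keeps working; everything else
    // (your own payload types) is camel-cased.
    private readonly Assembly _signalRAssembly = typeof(Hub).Assembly;
    private readonly IContractResolver _camelCase = new CamelCasePropertyNamesContractResolver();
    private readonly IContractResolver _default = new DefaultContractResolver();

    public JsonContract ResolveContract(Type type)
    {
        return type.Assembly == _signalRAssembly
            ? _default.ResolveContract(type)
            : _camelCase.ResolveContract(type);
    }
}
```
The point of the assembly check is that SignalR's own protocol types must stay PascalCase or the handshake breaks, so only user payload types get the camel-case treatment. If you're not using Autofac, registering the resulting `JsonSerializer` with SignalR's own dependency resolver achieves the same effect.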
username_5: +1 for native support
username_6: +1 for native support |
karlg100/homebridge-frigidaire | 429372527 | Title: config.pollingInterval is being halved
Question:
username_0: In the Homebridge console window, I observed the “[Frigidaire AC Units]” status messages were polling the unit every 5 seconds (based on the timestamps)
The provided default / sample config does not display the config argument “pollingInterval”, which when not provided defaults to 10000 (ms).
I added pollingInterval (case sensitive) to my platform config and set the value to 30000 (ms) to observe the change in the Homebridge console output and noted the polling interval changed from the observed 5 seconds to 15 seconds.
I then modified the value to 20000 and observed the pollingInterval became 10 seconds.
With this in mind, I believe the interval is being halved (/2) somewhere in code.
Answers:
username_1: I wonder if there are two timers being defined somewhere. how many devices do you have?
username_1: nope. found the bug. not even sure why I put that in there. was dividing this.pollInterval by 2.
Status: Issue closed
|
openjournals/joss-reviews | 537145005 | Title: [PRE REVIEW]: LFSpy: A Python Implementation of Local Feature Selection for Data Classification with scikit-learn Compatibility
Question:
username_0: **Submitting author:** @username_3 (<a href="http://orcid.org/0000-0003-4849-732X"><NAME></a>)
**Repository:** <a href="https://github.com/McMasterRS/LFSpy" target ="_blank">https://github.com/McMasterRS/LFSpy</a>
**Version:** 1.0.0
**Editor:** Pending
**Reviewer:** Pending
**Author instructions**
Thanks for submitting your paper to JOSS @username_3. **Currently, there isn't an JOSS editor assigned** to your paper.
@username_3 if you have any suggestions for potential reviewers then please mention them here in this thread (without tagging them with an @). In addition, [this list of people](https://bit.ly/joss-reviewers) have already agreed to review for JOSS and may be suitable for this submission.
**Editor instructions**
The JOSS submission bot @username_0 is here to help you find and assign reviewers and start the main review. To find out what @username_0 can do for you type:
```
@username_0 commands
```
Answers:
username_0: Hello human, I'm @username_0, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
```
@username_0 commands
```
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
```
@username_0 generate pdf
```
**What happens now?**
This submission is currently in a `pre-review` state which means we are waiting for an editor to be assigned and for them to find some reviewers for your submission. This may take anything between a few hours to a couple of weeks. Thanks for your patience :smile_cat:
You can help the editor by looking at [this list of potential reviewers](https://bit.ly/joss-reviewers) to identify individuals who might be able to review your submission (please start at the bottom of the list). Also, feel free to suggest individuals who are not on this list by mentioning their GitHub handles here.
username_0: ```
Attempting to check references...
```
username_0: ```
Attempting PDF compilation. Reticulating splines etc...
```
username_0: ```
Reference check summary:
OK DOIs
- None
MISSING DOIs
- https://doi.org/10.1109/tnnls.2017.2676101 may be missing for title: Logistic localized modeling of the sample space for feature selection and classification
- https://doi.org/10.1109/jbhi.2018.2877738 may be missing for title: A Machine Learning Framework for Automatic and Continuous MMN Detection with Preliminary Results for Coma Outcome Prediction
INVALID DOIs
- None
```
username_0: [ :point_right: Check article proof :page_facing_up: :point_left: ](https://github.com/openjournals/joss-papers/blob/joss.01954/joss.01954/10.21105.joss.01954.pdf)
username_1: Hi @username_3, could you review the references and add the missing DOIs? (Including other references that username_0 may have missed)
username_1: Hi @username_2, could you edit this submission for us?
username_2: @username_1 sure!
username_2: @username_0 assign @username_2 as editor
username_0: OK, the editor is @username_2
username_2: @username_3: as well as fixing the DOIs, please take a look at [this list of potential reviewers](https://bit.ly/joss-reviewers) and let me know if you see anyone who would be an especially good fit (don't @ them!). I'll start pinging reviewers in the next few days one way or another. Thanks!
username_3: Hi @username_2 and @username_1, thanks for such a quick turnaround!
I've added the DOIs for all references.
Going through the list of reviewers, here are some candidates that look like a really good fit to me:
desilinguist, rougier, sealhuang, krother, ahurriyetoglu, nirum, arokem, username_4, oesteban, jsgalan, tupi, username_5, tianxzhu, GalenStocking, ryEllison
username_2: :wave: Hey @username_5! Any chance you're available and willing to review this JOSS submission? Thanks!
username_2: :wave: Hey @username_4! Any chance you're available and willing to review this JOSS submission? Thanks!
username_4: Sure.
username_5: Sure
username_2: @username_0 assign @username_4 as reviewer
username_0: OK, the reviewer is @username_4
username_2: @username_0 add @username_5 as reviewer
username_0: OK, @username_5 is now a reviewer
username_2: @username_0 start review
username_0: OK, I've started the review over in https://github.com/openjournals/joss-reviews/issues/1958. Feel free to close this issue now!
username_2: @username_4, @username_5: thanks! Please head over to #1958 to find the main review.
Status: Issue closed
|
DenisCarriere/turf-jsts | 291523769 | Title: Properly implement `java/lang/IllegalArgumentException`
Question:
username_0: [src/java/lang/IllegalArgumentException.js](https://github.com/username_1/turf-jsts/blob/master/src/java/lang/IllegalArgumentException.js) is not properly implemented, and this threw me off while using the `@turf/intersect` package badly 😉
Answers:
username_0: Submitted a PR: https://github.com/username_1/turf-jsts/pull/2
username_1: 👍 Thanks, I'll patch both `turf-jsts` & `@turf/intersect`, this change won't be reflected in the main `@turf/turf` repo just yet until the next minor release.
Status: Issue closed
|
ManageIQ/manageiq | 151640523 | Title: Miq server doesn't start due to "undefined method 'filter' for Entitlement"
Question:
username_0: missing': undefined method `filters=' for #<Entitlement:0x00000009df2330> (NoMethodError)
from manageiq/app/models/miq_group.rb:71:in `block in seed'
potentially due to https://github.com/ManageIQ/manageiq/pull/8102
cc @username_1
Answers:
username_1: Do you have the commit from #8291? Your database is in an old state and needs to be remigrated.
username_0: will recheck, thanks for the pointer.
username_0: works like a charm. sorry! thanks, closing.
Status: Issue closed
username_1: Great! Never apologize for asking about something I broke ;) |
ziglang/zig | 473817485 | Title: mem.copy should assert that dest doesn't overlap
Question:
username_0: `mem.copy` is missing an assert that, if the destination and source overlap, the destination is before the source.
Also the comment above it `dest.ptr must be <= src.ptr.` needs correcting.
Similar for `mem.copyBackwards`.
I'm not sure if this is a dupe of #382.
Answers:
username_1: This came up from me asking on IRC about why this code wasn't working as expected:
```zig
const std = @import("std");
const Buffer = std.Buffer;
pub fn main() !void {
var buf = try Buffer.init(std.debug.global_allocator, " foo\n ");
defer buf.deinit();
// will produce "fo\x00" - length of 3
var val = std.mem.trim(u8, buf.toSliceConst(), " \n");
// Uncomment trim below, comment out trim above, and it works as expected
// will produce "foo"
// var val = std.mem.trim(u8, " foo\n ", " \n");
std.debug.warn("trimmed val: {}\n", val);
try buf.replaceContents(val);
std.debug.warn("buf: {}\n", buf.toSliceConst());
}
```
It's similar to this test case, so I expected `buf` to have a slice of `foo` instead of `fo` with a null byte at the end: https://github.com/ziglang/zig/blob/master/std/mem.zig#L388
I didn't recognize that the slices would overlap, hence the output.
username_2: #382 is a prerequisite of this issue
username_0: Could we not have something like:
```zig
assert((@ptrToInt(&dest[source.len - 1]) < @ptrToInt(&source[0])) or
(@ptrToInt(&dest[0]) > @ptrToInt(&source[source.len - 1])));
``` |
NEU-Libraries/scholar-onesearch | 219382569 | Title: Report a problem captured URL
Question:
username_0: When you are in the beta and you click "Report a problem" the form captures a URL that looks like this
http://northeastern-primostaging.hosted.exlibrisgroup.com/primo-explore/fulldisplay?docid=TN_mla2010421069&context=L&vid=NUdev&search_scope=new_everything_scope&tab=default_tab&lang=en_US
But it's not actually primo staging or vid=NUdev
Not urgent--maybe fix before going live this summer, however.
Answers:
username_1: @gam5 can you fix this? New UI Production report a problem needs to send production links not staging links.
Status: Issue closed
username_2: This should be fixed with 70b6db99ca9c9afd96683ea8e5f35d2468268e76 and 2747dd7101798df1a6ca26d04ee408df2db70057
if vid=NU then it goes to prod library website form with prod url
if vid = anything but NU it goes to dev library website form with staging url |
apache/dubbo | 551292378 | Title: Circular injection problem when getting Adaptive extensions
Question:
username_0: When Dubbo gets an Adaptive extension, it first calls the injectExtension method, injects dependencies through the set methods, and only then returns the instance for external use. The injected objects come from SpiExtensionFactory, but SpiExtensionFactory also returns Adaptive extensions. If there are two classes A and B which each have a set method for the other, the application does not report an error at startup (the stack overflow thrown during injection is caught and the injection exits), but in fact many instances of A and B are created. Crucially, the objects held inside the A and B you obtain are not each other, which causes problems when they are used.
Answers:
username_1: When there is a circular dependency, the StackOverflowError thrown by the code is caught and printed by the logger, then the error is ignored and Dubbo keeps running. Personally I think Dubbo should not continue executing in this situation? @chickenlj
username_2: The relevant logic has changed; try it with the latest version. If you still have problems, you can reopen the issue
Status: Issue closed
|
wix/react-native-navigation | 191705071 | Title: How to detect when app is ready?
Question:
username_0: I'm using the firestack library (firebase) to implement an authentication listener on the app root component. Based on the auth status I start a single page app or tab based app.
This auth state listener works when using the normal navigator, but with react-native-navigation I only see the splash screen when starting the app and nothing else. So the auth state listener is either never triggered at all or triggered too early (before the app is ready). However, when I reload (Cmd + R), my auth listener works and the correct screen is displayed.
Is there some sort of event that tells me navigation is ready to start single page app or tab based app?
My root component looks like this
```
import React, { Component } from 'react';
import { Navigation } from 'react-native-navigation';
import { registerScreens } from './screens.js';
import firestack from './components/firebase';
registerScreens();
/**
* handle authentication
*/
firestack.auth().onAuthStateChanged((user) => {
if (user) {
// user is signed in
authenticatedRoutes();
}
else {
// use is not signed in
unauthenticatedRoutes();
}
});
function authenticatedRoutes() {
Navigation.startTabBasedApp({
tabs: [
{
label: 'Search',
screen: 'roomary.SearchPage', // this is a registered name for a screen
icon: require('./assets/images/one.png'),
selectedIcon: require('./assets/images/one_selected.png'), // iOS only
title: 'Search'
},
// left out for brevity
]
});
}
function unauthenticatedRoutes() {
Navigation.startSingleScreenApp({
screen: {
screen: 'roomary.LoginPage',
title: 'Anmelden'
}
});
}
```
Answers:
username_1: I don't see anything wrong with your approach - it should work.
When your app is "ready" - it loads the React context and static JS code is executed. Can you add some logs, and see which parts are executed and which aren't?
username_0: @username_1 thank you. Not 100 percent sure yet, but there seem to be some bugs (not related to this repo, but to RN's remote debug function). If I disable remote debug it works… but turning off debugging was not really the first thing that came to my mind when trying to debug a problem :D
anyway, thank you for this great repo. I'm closing this now.
Status: Issue closed
|
trailofbits/polytracker | 766418295 | Title: 噶尔县妹子真实找上门服务_bq
Question:
username_0: 噶尔县妹子真实找上门服务〖╋V:⒈O⒎⒎乄⒈⒐O⒐】 作为今年中国电视界最高规格最受瞩目的一场赛事,《中央广播电视总台主持人大赛》首播就引发热烈关注与讨论,城收视率破,节目相关话题阅读量超过亿,视频播放量破亿,同时收获各大新媒体平台热搜余个,《人民日报》《光明日报》《中国青年报》《南方日报》等数百家媒体给予高度评价,成为今年最具口碑和影响力的电视节目之一。网友纷纷表示“总台出手,必属精品!”“全程惊叹,果然是神仙打架!”“都是实力派!高手过招,快、准、稳。”“所有的等待都是值得的,必须守着收看每一期。”而“催更”已经成了主持人大赛观众的日常。今晚点档,《中央广播电视总台主持人大赛》第二期即将在央视综合频道播出。本场比赛的六名新闻类选手分别为高嵩、周瑜、张楚雪、田靖华、商亮和白影;六名文艺类选手分别为齐岱泽、刘熙烨、孔皓、蔡紫、沙海波和刘洋。主持人撒贝宁,点评嘉宾康辉、董卿,将继续与位专业评审及位在线大众评审一起见证选手的精彩表现。(主持人撒贝宁)(点评嘉宾康辉、董卿)(第二期位选手) 主持老将坚守新闻人初心引康辉共鸣,夫妻档选手谈“双倍梦想”背后的苦与乐 有着十年从业经验的“主持老将”周瑜,在三分钟自我展示中讲述自己初为记者时,前往甘肃舟曲报道泥石流灾害的经历。在当时食物短缺的窘迫情况下,陌生大姐送她三块饼的暖心举动对她而言意义重大。“希望当年的大姐能看见今天的节目”,周瑜表示从大姐手中接过饼的那瞬间,明白了新闻人的职责是关注新闻背后每个人的境遇。周瑜的真情流露,引起康辉的共鸣:“我也希望那些平常可能会被大家忽略的人们,能够通过我们的关注,让他们的生活发生一点一点的变化。”(新闻类选手周瑜) 与妻子一同前来参赛的田靖华,在大赛中率先登场,分享了夫妻俩身为电视人的苦与乐。两人从大学同学到同事,再到同一个节目的搭档,虽然曾因工作时间不一致而见不到面,也遇到除夕夜要工作而不得不“寄养”两岁儿子的情况,却依然怀揣着“双倍”梦想坚守岗位。他们携手同行的特殊经历,让鲁健也感同身受:“我是晚间节目,我爱人是早间新闻,所以我们也经常有这种‘白天不懂夜的黑’的感觉。”(新闻类选手田靖华) 采访过不少工匠和企业家的白影,在自我展示环节谈起最打动她的一个少年——年仅岁的世界技能大赛冠军蒋应成。比起他一举夺冠的事迹,白影更关心这位“笨小孩”荣誉背后一千八百多个日夜的追梦历程。听完白影细腻温情的表述,董卿同样深有感触:“你总结了一句话叫‘命运给过这个少年哪一样好东西?那就是梦想吧’,我觉得可能是本场我记忆最深刻的一句话。”(新闻类选手白影) 岁妈妈获赞“有天真气质”,“芝麻哥哥”从业年实现“逆生长” 今年岁的蔡紫,同时是一名六岁孩子的妈妈。三分钟的自我展示环节,她从自身对孩子的引导和教育出发,以深入浅出的讲述,赋予传统文化更具时代特色的灵动内涵。蔡紫身上与生俱来的亲和力如春风拂面,让人印象深刻,获董卿称赞:“我认为一个好的文艺节目主持人,应该有天然、天真的气质,我觉得这一点你赢了。”作为第四位上场的“赛点”选手,她的登台同样意味着将有一位文艺类选手止步赛场,令“暂时”排名第一的选手也倍感压力:“蔡紫很强。”(文艺类选手蔡紫) 本场比赛年龄最大的选手刘洋顶着熟悉的爆炸头,生动有趣的主持风格感染全场。作为有着超过年少儿节目主持经验的“芝麻哥哥”,刘洋纯真的表现让董卿感慨:“这是一种真正的逆生长,是出于一种对职业的敬畏和热爱,才能让自己这么多年如一日地保持一种相对纯粹的状态。”(文艺类选手刘洋) 选手挑战“不可能”终与偶像同台,撒贝宁再唱年前比赛歌曲 身上洋溢着青春气息的齐岱泽,结合《挑战不可能》分享以自己为主人公的故事,并透露正是因为从小收看撒贝宁主持的节目,促使他走上主持人的道路。齐岱泽挑战人生中数次“不可能”,终于来到《主持人大赛》的舞台与偶像并肩而立。听完齐岱泽讲述这段“不解之缘”,撒贝宁笑着问道:“你一会儿《挑战不可能》,一会儿《今日说法》的,你要干什么?我还没打算退休。”(文艺类选手齐岱泽、主持人撒贝宁) 还是在校大学生的非科班选手孔皓身高,刚上场就吸引不少人的目光。虽是以“新手”的身份首次参赛,孔皓依然通过精准的时间把控、流畅的语言表达,带全场感受姥姥王希玲对于国粹戏曲的匠心,以及手中纸扇的重要意义。值得一提的是,孔皓还邀请年前曾在《主持人大赛》唱过《曲苑杂坛》主题曲的撒贝宁,现场共同合唱这首歌,获得阵阵掌声。(文艺类选手孔皓、主持人撒贝宁) 除此之外,其他选手也通过敏锐的观察力、洞察力和对生活的感知力,带来各具特色的精彩表现,让人耳目一新。担任夜间新闻主播工作的商亮,常在播报新闻时遇到一些诸如火灾、地震、泥石流的突发事件。为了致敬和平时代的守护者和逆行者,他选择从另一个角度叙述,将热血英雄私下有笑有泪的真实一面娓娓道来;年仅岁的张楚雪,在自我展示环节轻松切换粤语、英语,通过讲述掌握国语言的柬埔寨网红男孩沙利,聚焦到“一带一路”普惠世界的初心上;来自陕西广播电视台的沙海波,则带来了一场与众不同的“颁奖典礼”,轻松幽默的风格引得现场众人忍俊不禁。(新闻类选手商亮)(新闻类选手张楚雪)(文艺类选手沙海波) 康辉再谈主持人成长之路,董卿专业点评温和有力 本着对优秀新闻人的高期待与高要求,康辉和董卿以独到的见解输出专业“干货”,既有妙语连珠的精准点评,也有温和有力的价值传递。 来自上海广播电视台的高嵩十分善于捕捉新闻中的细节,发掘日常生活中有价值的信息。某次工作中,记者同事分享的案例让高嵩触动不已,他通过搜寻资料、前往相关地点等实际行动,感受新闻中的“故事”。康辉表示:“高嵩的从业经历虽然不长,但是他真的是在做一线的记者。我们在上一场的比赛过程中曾经提到过,从记者到好记者,再到主持人,接着变成好主持人这条路是走得最踏实和最扎实的。”(新闻类选手高嵩)(点评嘉宾康辉) 来自湖北卫视的刘熙烨在自我展示环节,把曾侯乙编钟原件的复制件“搬”来现场为她“助力”。她以自豪的口吻再现曾侯乙编钟磅礴的历史,讲到动情处,还上前奏响了编钟。在如此丰富的形式设计下,她稳定的发挥得到董卿认可:“刘熙烨真的是借了一件我们民族文化五千年文明的彩衣来助力,整个设计也非常巧妙。”白影通过三分钟的讲述,带全场了解世界冠军蒋应成的人生故事,董卿说:“这个少年赢了,赢在他曾经的梦想,更赢在他一直在不懈地奋斗和努力。”(文艺类选手刘熙烨)(点评嘉宾董卿) 十二位选手才思敏捷,高招频出,他们讲述动人故事,分享奋斗人生,以点滴积累绽放青春光彩。本期节目中,谁能顺利晋级,谁又将遗憾止步?本周六晚点档,《中央广播电视总台主持人大赛》,看他们手执话筒,用声音传递梦想。士曝郎堤负湛悼儋擞阶党偶谜次拥柿辽肥煽厍季永率苍渤位牌屏守菜翰迅侥仝栏https://github.com/trailofbits/polytracker/issues/4685?16039 <br />https://github.com/trailofbits/polytracker/issues/4665?16688 <br />https://github.com/trailofbits/polytracker/issues/4645?06071 <br />https://github.com/trailofbits/polytracker/issues/4658 <br />https://github.com/trailofbits/polytracker/issues/4650?43498 <br />uqevmnjwzteedpvbhnhirkqterlchljsats |
ncortines/morgan-stanley-pl | 581800819 | Title: Wrong sale date with limited order
Question:
username_0: When a sale originates from a limit order, a wrong date is used for the sale. The date when the order was placed is used. The actual sale date should be used instead.
Answers:
username_1: @username_0, unfortunately I don't have access to this sort of data. I'd need you to help me by providing some JSON files.
username_0:
```
//...
"order" : {
"exchangeFriendlyTzShortName" : "EST",
"fillsIfNotTooMany" : [ {
**"fillDateTime" : "2019-11-25T11:11:09",**
"fillPrice" : {
"amount" : "CENSORED",
"currency" : "USD"
},
"fillQuantity" : "CENSORED"
}, {
**"fillDateTime" : "2019-11-25T11:11:09",**
"fillPrice" : {
"amount" : "CENSORED",
"currency" : "USD"
},
"fillQuantity" : "CENSORED"
}, {
**"fillDateTime" : "2019-11-25T11:11:09",**
"fillPrice" : {
"amount" : "CENSORED",
"currency" : "USD"
},
"fillQuantity" : "CENSORED"
} ],
"formattedSymbolAndExchange" : "CENSORED",
"goodTillCancel" : true,
"limitOrder" : true,
"limitPrice" : {
"amount" : "CENSORED",
"currency" : "USD"
},
**"orderDate" : "2019-11-12",**
"orderQuantity" : "CENSORED",
"orderStatusDescription" : "Order is done",
"totalFillQuantity" : "CENSORED",
**"uniqueFillDate" : "2019-11-25",**
"weightedAverageFillPrice" : {
"amount" : "CENSORED",
"currency" : "USD"
}
},
//...
}
```
The order was placed on 2019-11-12. Then it waited until 2019-11-25 when the stock price finally grew high enough to match the ordered price. Then, in this particular case, it was sold in three parts (because there were not immediately enough buyers to buy all the stock units at the price I ordered). It's possible that a sale like this could extend over a few days (fortunately in my case all parts were sold on the same day), and that's where I **think** uniqueFillDate becomes helpful.
Status: Issue closed
username_0: It works fine for me now. Thanks @username_1 . |
JasperFx/jasper | 255984025 | Title: Make the default retry channel selection be configurable.
Question:
username_0: Today I'm going to hard code it to the loopback, but later we might wanna opt into making it possible to use local, durable queues or some other kind of queues.
I'm calling this pretty low priority
Answers:
username_0: The only place this matters is for the loopback provider that's meant to be fire and forget or the inline callbacks, and that could be controlled by the default queue
Status: Issue closed
|
FXyz/FXyz | 538017941 | Title: OrbitControls sample
Question:
username_0: I am back to implementing a JavaFX back-end for 3D visualization in physics. I already have a working three.js back-end and I want the experience to be similar. Right now I need something similar to https://threejs.org/docs/index.html#examples/en/controls/OrbitControls. Meaning that the right mouse button moves the center of view around the central plane of the 3D scene and the left mouse button rotates around this point in two directions.
Are there any samples or tools to do so in FXyz, or do I need to build it from scratch?
Answers:
username_1: Let me make sure I understand clearly the effect you want...
1. When a user right clicks and drags you would like the position of the camera to change... ie "pan"?
2. When panning the direction... ie the "rotation" of the camera would remain the same?
3. When a user left clicks and drags you would like the position of the camera to change in an arc?
4. The camera arc would be governed by a distance... "zoom"... and the direction of the drag?
5. Direction of arc would be some combination of changing Pitch and Yaw?
6. Finally while left clicking... the rotation of the camera would update to always be aimed at a fix point?
username_0: Something like that. The idea is that the "look at" point is always in the same zero x-y plane. The right mouse button shifts the point in this plane and moves the camera alongside the point, so the camera angle is not changed. The left mouse button changes the camera angle without changing the camera target, so the camera is rotated around that point. The wheel changes the distance.
Here is an online example: https://threejs.org/examples/#webgl_animation_cloth
I tried to do it myself, but JavaFX rotation composition is a little bit complicated. Also there is a problem with centering the view. In JavaFX the zero coordinate is located in the upper left point of the screen and I need the "look at" point to be in the center. I did not manage to keep it in the center when the panel is resized.
username_1: What you want is totally doable. I don't think we have that exact demo but all the parts have been implemented here and there. I'll work with you to get it working.
But first let's first address this coordinate problem you mentioned. That doesn't sound correct for a JavaFX 3D subscene. 0,0 is upper left for a 2D scene but for a 3D subscene the 0,0,0 is in the center relatively speaking.
How are you defining your subscene and camera?
Do you have this code or a simplified version of it posted online somewhere where I can check it?
Are you performing mouse coordinate conversions from screen to 3D manually? (that shouldn't be necessary.)
username_0: Here you are: https://github.com/mipt-npm/dataforge-vis/blob/dev/dataforge-vis-spatial/src/jvmMain/kotlin/hep/dataforge/vis/spatial/fx/FXCanvas3D.kt.
The whole application is fully open-source, so you can try it yourself. While our primary target is JS, it would be really nice to finally fix FX version, because it looks nice and much easier to debug.
username_0: My attempt to implement controls is [here](https://github.com/mipt-npm/dataforge-vis/blob/dev/dataforge-vis-spatial/src/jvmMain/kotlin/hep/dataforge/vis/spatial/fx/OrbitControls.kt), but it does not work in a way, I want it to.
username_1: I'm not currently set up to run Kotlin on this machine, but looking at your code it appears you are instantiating your subscene and camera in FXCanvas3D.kt correctly. Even though you call it a canvas, it's really not a JavaFX Canvas object per se, but a SubScene object, right?
username_0: Yes, I use subscene. I call it canvas to make internal naming more sensible and to use similar naming across platforms.
Also, you do not need to set anything up for Kotlin; Gradle downloads the plug-in as part of the build process. I am not sure that the dev branch is buildable though.
username_1: @username_0 How did this go? Did you resolve your issues?
username_0: @username_1 Not yet. We are focused on the JS target and the general release is planned for the end of the year. I do not have time to work on the controls problem, but it is more or less the main blocker for the JavaFX target (not a lot of work, but I just do not have time for it right now).
username_0: Implemented in VisionForge dev branch: https://github.com/mipt-npm/visionforge/blob/dev/visionforge-fx/src/main/kotlin/space/kscience/visionforge/solid/OrbitControls.kt |
horsicq/x64dbg-Plugin-Manager | 569287029 | Title: The “Visual Studio 2013 community” is not free + bugs
Question:
username_0: Dear username_1,
I have installed "**Qt 5.6.3 for VS2013**" and the "**7-Zip**" as instructed. But I can't download the “**Visual Studio 2013 community**” !? In fact, after I have signed-in with my GitHub account (same result with my Microsoft account), I get the following page stating at:
[https://my.visualstudio.com/Downloads?q=visual%20studio%202013&wt.mc_id=o~msft~vscom~older-downloads](https://my.visualstudio.com/Downloads?q=visual%20studio%202013&wt.mc_id=o~msft~vscom~older-downloads)
_"Sorry, we couldn't find any downloads for you."_
_"To continue, please join 'Visual Studio Dev Essentials' or_
_PURCHASE a 'Visual Studio Subscription'."_
The "**Visual Studio Dev Essentials**" requires a monthly payment. Therefore, I suggest github.com to host the download.
So, I've searched for the “_Visual Studio 2013 community_” on the web and I've installed an ".iso" version. Then, once I've edited the paths in the "build_win32.bat" and added a 'pause' at the end to copy and paste the execution before it exits, I've launched it. The results showed the following errors:
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
From the "**Qt5.6.3**" (default options):
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
`E:\x64dbg-Plugin-Manager-master\gui_source>"C:\Qt\Qt5.6.3\5.6.3\msvc2013"\bin\qmake.exe gui_source.pro -r -spec win32-msvc2013 "CONFIG+=release"`
"Cannot read E:/x64dbg-Plugin-Manager-master/XArchive/xarchive.pri: No such file or directory"
— The XArchive folder is still empty in your latests zip files.
Same error later in "console_source"
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
From **utils.h(32)** : (twice: \gui_source> & \console_source> folders)
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
5 "fatal error C1083: Cannot open include file: 'xzip.h': No such file or directory."
…after each process :
createmoduleprocess.cpp, getfilefromserverprocess.cpp, installmoduleprocess.cpp, removemoduleprocess.cpp and utils.cpp
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
(twice: \gui_source> & \console_source> folders)
`NMAKE : fatal error U1077: '"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\BIN\cl.EXE"' : return code '0x2'`
Stop.
`NMAKE : fatal error U1077: '"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\BIN\nmake.exe"' : return code '0x2'`
Stop.
— Both files are there.
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
From "**build_win32.bat**":
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
Other errors are not a big deal, but quite puzzling...
"Impossible to find..." a lot of files to delete, even though they DO exist in their respective paths.
—————————————————————————————————————————
My CMD version is [10.0.18363.449] on Microsoft Windows 10 build 1909.
Version of the system : 10.0.18363.657
According to the command>CMD /? , the registry is set to the following default :
_HKEY_LOCAL_MACHINE\Software\Microsoft\Command Processor\EnableExtensions_ REG_DWORD "0x1"
_HKEY_LOCAL_MACHINE\Software\Microsoft\Command Processor\CompletionChar_ REG_DWORD "0x9"
_HKEY_LOCAL_MACHINE\Software\Microsoft\Command Processor\PathCompletionChar_ REG_DWORD "0x9"
I can send you the execution copy of your "build_win32.bat" with my comments to locate quickly those errors.
Sincerely,
username_0
Answers:
username_1: Thanks a lot for the testing!
Just install GIT if you did not got it https://git-scm.com/download/win
Write in terminal: **git clone --recursive https://github.com/username_1/x64dbg-Plugin-Manager.git** It will create a folder with the last sources
Option **--recursive** does download all modules
Edit build_win32.bat ( check VS_PATH, SEVENZIP_PATH, QT_PATH variables)
Run build_win32.bat
It is sad to hear that VS2013 is not free anymore. |
kimitashoichi/kimis-app | 703978355 | Title: [Post list view] Implement search functionality
Question:
username_0: Implement the search feature
[Functional requirements]
- [ ] Make the title string searchable
- [ ] Show a result whenever the string partially matches
Answers:
username_0: Because of how Firestore works, it may not be possible to run string searches directly against Firestore
There is probably a way if I search for it, but it doesn't look like it will make the first release, so I'll implement it some other time
[Idea-based search feature (イケ
konimex/kiss-llvm | 832904240 | Title: [Request] Install guide
Question:
username_0: This looks very interesting. Is there any chance we could get a guide for a LLVM based kiss install?
Best wishes,
Answers:
username_1: First of all, apologies for the late reply.
Step by step, from a KISS installation from the ground up, up to changing the base to LLVM, it should be identical to KISS. However, I'll see what I can do (modifying KISS' instructions and saving them (an `INSTALL` file, perhaps?), or adding a wiki page for KISS, I'm not sure); for now I'm keeping this issue open as a reminder for myself too.
Status: Issue closed
username_1: Added in https://github.com/username_1/kiss-llvm/blob/35b5a005c200defdbb6ead5461ec769c761c5244/INSTALL |
DefinitelyTyped/DefinitelyTyped | 27202095 | Title: node.d.ts gives error "node.d.ts (98,9): Expected ';'"
Question:
username_0: Hi Guys,
I am using node.d.ts in my project and I get an error when I referenced node.d.ts.
Here is how I compiled it:
tsc app.ts
and there are errors that I got:
node.d.ts (98,9): Expected ';'
node.d.ts (1,8): modifiers can not appear before an expression statement or label
Please give me pointer on this issue.
Regards,
<NAME>
Status: Issue closed
Answers:
username_1: Hi thread, we're moving DefinitelyTyped to use [GitHub Discussions](https://github.com/DefinitelyTyped/DefinitelyTyped/issues/53377) for conversations about the `@types` modules in DefinitelyTyped.
To help with the transition, we're closing all issues which haven't had activity in the last 6 months, which includes this issue. If you think closing this issue is a mistake, please pop into the [TypeScript Community Discord](https://discord.gg/typescript) and mention the issue in the `definitely-typed` channel. |
steven-king/660-storytelling-vr | 174424403 | Title: Apple and Wireless VR?
Question:
username_0: Things change fast! During our lecture today, Apple was awarded a patent for wireless VR glasses. Interesting the diagrams don't show full immersion. This could be Apple not tipping their hand or it could be more of an Augmented Reality solution. Not sure at the moment but things are looking good!
[Apple Wireless VR](http://uploadvr.com/apple-recieves-patent-vr-headset/) |
docker-library/php | 233376287 | Title: No errors being reported in error_log?
Question:
username_0: I can't seem to get errors to get sent to the error log.
I've added a php.ini file, and verified that error_reporting is set to E_ALL and error_log is set to `/dev/stderr`.
I've also tried another file with 777 permissions with no luck.
If I exec into the container and run `php -a` and cause an error there, an error get is successfully sent to the error log.
Answers:
username_0: No it's just me being a complete idiot by setting error_reporting(0) within the scripts...
Status: Issue closed
|
M4t1ss/SoftAlignments | 245643362 | Title: Remember which navigation bars are open and which are closed
Question:
username_0: Clicking on "Translation", "Confidence", "CDP", "APout", "APin" shows the navigation bars. But clicking on an item in a navigation bar shows the corresponding sentence and disables all navigation bars (they should be kept visible).
Status: Issue closed |
Project-OSRM/osrm-backend | 306535193 | Title: [Question] Is instance of the libosrm thread safe ?
Question:
username_0: For example, with the same instance of OSRM, we invoke the Route method from 2 different threads simultaneously.
Answers:
username_1: @username_0 Yes, it should be thread safe (the `osrm-routed` server that ships with OSRM is multi-threaded).
Status: Issue closed
|
shenwei356/seqkit | 859714078 | Title: Fails: undefined: opentype.Font AND ft.GlyphBounds undefined (type *sfnt.Font has no field or method GlyphBounds)
Question:
username_0: seqkit-0.16.0 fails:
```
github.com/go-latex/latex/internal/tex2unicode
# gonum.org/v1/plot/font
vendor/gonum.org/v1/plot/font/font.go:93:8: undefined: opentype.Font
vendor/gonum.org/v1/plot/font/font.go:184:18: undefined: opentype.Font
vendor/gonum.org/v1/plot/font/font.go:200:28: undefined: opentype.Font
vendor/gonum.org/v1/plot/font/font.go:210:25: undefined: opentype.Font
vendor/gonum.org/v1/plot/font/font.go:221:28: undefined: opentype.Font
vendor/gonum.org/v1/plot/font/font.go:263:35: undefined: opentype.Font
golang.org/x/image/font/gofont/gobold
golang.org/x/image/font/gofont/gobolditalic
golang.org/x/image/font/gofont/goitalic
golang.org/x/image/font/gofont/goregular
github.com/go-latex/latex/token
github.com/go-latex/latex/ast
github.com/go-latex/latex/mtex/symbols
github.com/go-latex/latex/tex
github.com/go-latex/latex
github.com/golang/freetype/raster
golang.org/x/image/math/f64
gonum.org/v1/plot/vg/fonts
golang.org/x/image/draw
golang.org/x/image/font/basicfont
golang.org/x/image/ccitt
golang.org/x/image/tiff/lzw
github.com/go-latex/latex/font/ttf
github.com/phpdave11/gofpdf
github.com/golang/freetype/truetype
# github.com/go-latex/latex/font/ttf
vendor/github.com/go-latex/latex/font/ttf/ttf.go:131:20: ft.GlyphBounds undefined (type *sfnt.Font has no field or method GlyphBounds)
github.com/ajstarks/svgo
golang.org/x/image/tiff
gonum.org/v1/plot/palette
github.com/cespare/xxhash
github.com/cznic/sortutil
github.com/dustin/go-humanize
github.com/fsnotify/fsnotify
github.com/iafan/cwalk
github.com/logrusorgru/aurora
github.com/mitchellh/go-homedir
github.com/pkg/errors
crypto/x509
github.com/klauspost/compress/flate
net/textproto
os/user
github.com/fogleman/gg
vendor/golang.org/x/net/http/httpproxy
github.com/twotwotwo/sorts
vendor/golang.org/x/net/http/httpguts
github.com/username_1/bpool
crypto/tls
github.com/klauspost/pgzip
github.com/username_1/util/byteutil
github.com/edsrzf/mmap-go
github.com/twotwotwo/sorts/sortutil
github.com/username_1/bio/util
github.com/username_1/bio/seq
github.com/username_1/bio/seqio/fai
github.com/username_1/bwt
[Truncated]
github.com/username_1/natsort
github.com/username_1/bwt/fmi
gopkg.in/yaml.v2
github.com/username_1/util/stringutil
github.com/mattn/go-runewidth
github.com/spf13/pflag
github.com/tatsushid/go-prettytable
github.com/smallfish/simpleyaml
github.com/spf13/cobra
net/http/httptrace
net/http
github.com/username_1/xopen
github.com/username_1/breader
github.com/username_1/bio/seqio/fastx
github.com/username_1/bio/featio/gtf
*** Error code 2
```
OS: FreeBSD 12.2
go-1.16.2,1
Answers:
username_1: Sorry to hear that. what's the command?
username_1: Oh, I see, you're using FreeBSD. I can cross-compile for it:
[seqkit_freebsd_amd64.zip](https://github.com/username_1/seqkit/files/6324571/seqkit_freebsd_amd64.zip)
username_0: But how to make it build on FreeBSD?
I am trying to create a port and it has to build from sources. The framework pre-downloads all dependencies at the same versions that the project needs. But this fails for seqkit.
```
GH_TUPLE= \
ajstarks:svgo:644b8db467af:ajstarks_svgo/vendor/github.com/ajstarks/svgo \
biogo:biogo:v1.0.3:biogo_biogo/vendor/github.com/biogo/biogo \
biogo:hts:v1.2.2:biogo_hts/vendor/github.com/biogo/hts \
bsipos:thist:v1.0.0:bsipos_thist/vendor/github.com/bsipos/thist \
cespare:xxhash:v1.1.0:cespare_xxhash/vendor/github.com/cespare/xxhash \
cznic:sortutil:f5f958428db8:cznic_sortutil/vendor/github.com/cznic/sortutil \
dustin:go-humanize:v1.0.0:dustin_go_humanize/vendor/github.com/dustin/go-humanize \
edsrzf:mmap-go:v1.0.0:edsrzf_mmap_go/vendor/github.com/edsrzf/mmap-go \
fogleman:gg:0403632d5b90:fogleman_gg/vendor/github.com/fogleman/gg \
fsnotify:fsnotify:v1.4.9:fsnotify_fsnotify/vendor/github.com/fsnotify/fsnotify \
go-yaml:yaml:v2.4.0:go_yaml_yaml/vendor/gopkg.in/yaml.v2 \
golang:crypto:f99c8df09eb5:golang_crypto/vendor/golang.org/x/crypto \
golang:freetype:e2365dfdc4a0:golang_freetype/vendor/github.com/golang/freetype \
golang:image:cff245a6509b:golang_image/vendor/golang.org/x/image \
golang:sys:c6e025ad8005:golang_sys/vendor/golang.org/x/sys \
iafan:cwalk:dd7f505d2f66:iafan_cwalk/vendor/github.com/iafan/cwalk \
inconshreveable:mousetrap:v1.0.0:inconshreveable_mousetrap/vendor/github.com/inconshreveable/mousetrap \
jung-kurt:gofpdf:24315acbbda5:jung_kurt_gofpdf/vendor/github.com/jung-kurt/gofpdf \
klauspost:compress:v1.11.4:klauspost_compress/vendor/github.com/klauspost/compress \
klauspost:pgzip:v1.2.5:klauspost_pgzip/vendor/github.com/klauspost/pgzip \
logrusorgru:aurora:v2.0.3:logrusorgru_aurora/vendor/github.com/logrusorgru/aurora \
mattn:go-colorable:v0.1.8:mattn_go_colorable/vendor/github.com/mattn/go-colorable \
mattn:go-isatty:v0.0.12:mattn_go_isatty/vendor/github.com/mattn/go-isatty \
mattn:go-runewidth:v0.0.9:mattn_go_runewidth/vendor/github.com/mattn/go-runewidth \
mitchellh:go-homedir:v1.1.0:mitchellh_go_homedir/vendor/github.com/mitchellh/go-homedir \
pkg:errors:v0.9.1:pkg_errors/vendor/github.com/pkg/errors \
username_1:bio:v0.1.0:username_1_bio/vendor/github.com/username_1/bio \
username_1:bpool:f9e0ee4d0403:username_1_bpool/vendor/github.com/username_1/bpool \
username_1:breader:v0.1.0:username_1_breader/vendor/github.com/username_1/breader \
username_1:bwt:v0.5.1:username_1_bwt/vendor/github.com/username_1/bwt \
username_1:go-logging:c6b9702d88ba:username_1_go_logging/vendor/github.com/username_1/go-logging \
username_1:natsort:600d539c017d:username_1_natsort/vendor/github.com/username_1/natsort \
username_1:util:v0.3.0:username_1_util/vendor/github.com/username_1/util \
username_1:xopen:f4f16ddd3992:username_1_xopen/vendor/github.com/username_1/xopen \
smallfish:simpleyaml:a32031077861:smallfish_simpleyaml/vendor/github.com/smallfish/simpleyaml \
spf13:cobra:v1.1.3:spf13_cobra/vendor/github.com/spf13/cobra \
spf13:pflag:v1.0.5:spf13_pflag/vendor/github.com/spf13/pflag \
tatsushid:go-prettytable:ed2d14c29939:tatsushid_go_prettytable/vendor/github.com/tatsushid/go-prettytable \
twotwotwo:sorts:bf5c1f2b8553:twotwotwo_sorts/vendor/github.com/twotwotwo/sorts \
gonum:plot:v0.9.0:plot/vendor/gonum.org/v1/plot \
go-fonts:liberation:v0.1.1:go_fonts_liberation/vendor/github.com/go-fonts/liberation \
go-latex:latex:b3d85cf34e07:go_latex_latex/vendor/github.com/go-latex/latex \
phpdave11:gofpdf:v1.4.2:phpdave11_gofpdf/vendor/github.com/phpdave11/gofpdf \
golang:text:v0.3.4:golang_text/vendor/golang.org/x/text
```
username_0: I was able to build the port with the other method.
Thank you for your help.
Status: Issue closed
|
algorithmiaio/api-docs | 252993394 | Title: Fix links for algorithm documentation
Question:
username_0: In the http://docs.algorithmia.com/#algorithm-development section, there are a bunch of links that are supposed to be for algorithm development. But the links are going to client guides (where the focus is calling algorithms outside of algorithmia). I think the dev-center needs a new section on algorithm creation for each language |
paritytech/substrate | 536313280 | Title: decl_storage: storage the key alongside the value to be able to iterate on key/value
Question:
username_0: as follow up of https://github.com/paritytech/substrate/pull/4185#issuecomment-562122935
It would be interesting to introduce a way in decl_storage to tell to store the key alongside the value.
at the end the storage will have the data: `(value, key)`.
This way when iterating on prefix we could actually iterate on `(key, value)` instead of just `value`.
This could be just a new attribute.
Some difficulties are:
* for encode_append, we can't append on the raw data anymore as encode_append support only having the data as input.
we could extend encode_append to be able to append on some value while keeping the data in suffix.
or we could also add the size of the key (or value, whatever) at the very end of the data.
* for swap method we can't swap the raw data anymore, either we decode the stuff and swap the value only, or same as above we add the size of the key at the very end of the data.
Answers:
username_1: we could extend encode_append to be able to append on some value while keeping the data in suffix.
I don't understand this. Could you explain it better? Why can we not append `(key, value)`?
username_0: indeed I updated the message for more clarity, the initial idea was to store `(value, key)` so we don't have to decode the key on operation such as `get` and `mutate`.
If we store indeed `(key, value)` then append operation can work easily just decoding key and working the raw value.
username_1: Ahh, yeah, now I understand :) We could maybe thing of a `skip()` method for skipping data in the encoded blob. So, we could use `(key, data)`, skip the `key` and use `data` to append the stuff. Or we just prepend the length of the key.
Status: Issue closed
username_0: I think this should be close as we are now storing keys in hash |
botmakers-net/bot-templates | 641865086 | Title: Booking chats for recording studios
Question:
username_0: Take advantage of the most strategically important social media channel to get your name out there and spread the word about your recording studio
#pryznachennya-bronyuvannya
https://botmakers.net/chatbot-templates/bronyuvannya-chativ-dlya-studij-zvukozapysu |
openiddict/openiddict-core | 216745902 | Title: How to get token id from database
Question:
username_0: Hello! I see [#136](https://github.com/openiddict/openiddict-core/issues/136#issuecomment-225068490) about revoking tokens, and I do not understand how to find the id of the current token which I want to revoke.
```
var token = await _tokenManager.FindByIdAsync(id);
```
Can you help me please? Thank you!
Answers:
username_1: You can use `OpenIddictTokenManager.FindBySubjectAsync("user identifier")` to retrieve the tokens associated with a specific user.
username_0: Yes, I know this. But if I need a specific token, how do I get its id? For example, one user can log in from multiple locations and I want to revoke one of his tokens.
username_1: The token identifier associated with a specific token can be retrieved via the introspection endpoint (`jti`).
Alternatively, you can also directly use ASP.NET Core's `TicketDataFormat` to unprotect a refresh token and extract the corresponding `AuthenticationTicket`, whose identifier can be retrieved using `ticket.GetTicketId()`. Here's how we instantiate it in ASOS: https://github.com/aspnet-contrib/AspNet.Security.OpenIdConnect.Server/blob/dev/src/AspNet.Security.OpenIdConnect.Server/OpenIdConnectServerMiddleware.cs#L122-L126
username_0: Thank you for the answer! As I understand it, by `jti` you mean the `Logout` endpoint? That is, I have to implement something like:
`public async Task<IActionResult> Logout(OpenIdConnectRequest request)`
If so, what parameters should be passed there?
username_1: No no, I mean the introspection endpoint (which is transparently implemented for you by OpenIddict, so no need to provide an action for this endpoint).
For more info, don't miss: https://tools.ietf.org/html/rfc7662#section-2.1.
username_0: Ok, but I do not understand how this helps me get the token id? I did not find a token id param there.
I'm interested in the token id

For example, to revoke tokens by subject I do this:
```
[HttpPost]
public async Task<IActionResult> RevokeToken()
{
var id = await GetCurrentUserId();
var tokens = await _tokenManager.FindBySubjectAsync(id, CancellationToken.None);
foreach (var token in tokens)
{
await _tokenManager.RevokeAsync(token, CancellationToken.None);
}
return Ok();
}
```
I need to do the same only with the token ID.
username_1: I don't understand. Are you trying to retrieve the token identifier associated with a token, or remove a token corresponding to a given identifier?
For the latter, use `OpenIddictTokenManager.FindByIdAsync("8895")`.
Status: Issue closed
username_0: I want that when a user logs out of the system the token is removed from the database, but only a specific token, and not all tokens associated with this user. To do this, I need to find the identifier of the token to remove it from the database. But I do not know how to get this identifier and whether this is possible in principle. That is, I want to get the token ID associated with the user who will log off the system.
username_1: You can't do that, as there's no direct relationship between the user session (= represented by an authentication cookie managed by Identity) and the refresh tokens (that are usually not revoked after an interactive logout, BTW).
username_0: Thank you! I understood.
username_1: @username_0 note: if you want to manually implement such a scenario, you can use the new `OpenIddictAuthorizationManager<TAuthorization>` class, create a new ad-hoc "authorization" per login using `CreateAsync()`, store the authorization identifier as a claim in the Identity authentication cookie and bind the `AuthenticationTicket` you return from `CreateTicketAsync()` to the authorization you created:
```
ticket.SetProperty(OpenIddictConstants.Properties.AuthorizationId,
value: await _authorizationManager.GetIdAsync(authorization));
```
During the signout process, retrieve the authorization using the identifier stored in the Identity cookie and revoke it. All the associated tokens will be revoked.
Unfortunately, there's no sample that demonstrates how to do that, so you're pretty much on your own.
username_0: Ok. Thanks again. I will try to do this. |
osmlab/name-suggestion-index | 1089170514 | Title: Please add German union "GEW" in brands/office/union
Question:
username_0: brand=GEW
brand:wikidata=Q325778
brand:wikipedia=de:GEW
office=union
official_name=Gewerkschaft Erziehung und Wissenschaft
matchNames:
geh, erziehung und wissenschaft
Status: Issue closed
Answers:
username_1: You don't need to create so many issues; you can add them as tasks in a single issue |
riscv/riscv-isa-manual | 332367973 | Title: Clearing of load-reservation during context switch is not specified
Question:
username_0: The user-level ISA manual states "The privileged architecture specifies the mechanism by which the implementation yields a load reservation during a preemptive context switch." I can find no such description in the privileged manual when searching for "reservation", "load-reserved", "context switch" and similar terms.
This mechanism is surely important for correctness. e.g. if a context switch occurs from thread A to thread B, both of which are executing instructions inside LR/SC pairs. If the same address range is reserved by both threads and the reservation isn't cleared you could still see the SC in thread B succeed, even if its original reservation had been cleared.
Answers:
username_1: This is an editing error from when the memory model was merged in. SC *is* the mechanism for forcibly yielding a load reservation: "an SC must fail if there is another SC between the LR and the SC in program order."
So, the privileged architecture doesn't need to specify anything here.
Status: Issue closed
|
dotnet/docs | 383491723 | Title: No documentation about what the data types represent
Question:
username_0: How can I choose the data type with no information about what each one of them represents? What is UG? What is R4?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: fbe7af6f-d0db-6da8-5e2c-301750a3695d
* Version Independent ID: b67dfcf5-5d21-d81f-37d1-76d8d7111111
* Content: [DataKind Enum (Microsoft.ML.Runtime.Data)](https://docs.microsoft.com/en-us/dotnet/api/microsoft.ml.runtime.data.datakind?view=ml-dotnet)
* Content Source: [dotnet/xml/Microsoft.ML.Runtime.Data/DataKind.xml](https://github.com/dotnet/ml-api-docs/blob/live/dotnet/xml/Microsoft.ML.Runtime.Data/DataKind.xml)
* Product: **dotnet-ml-api**
* GitHub Login: @sfilipi
* Microsoft Alias: **johalex**
Answers:
username_1: You can see this related issue: [https://github.com/dotnet/docs/issues/9278](https://github.com/dotnet/docs/issues/9278)
username_2: Thanks for your feedback @username_0.
@sfilipi @JRAlexander the duplicate item of this issue had quite a few thumbs-up. Is that something you can add /// comments for?
username_3: Some quick info on what the prefixes mean:
- U means unsigned => in C#: U1: byte, U2: ushort, U4: uint, U8: ulong, UG or U16: Microsoft.ML.Data.RowId
- I means signed integer => in C#: I1: sbyte, I2: short, I4: int, I8: long
- R means real numbers => in C#: R4 or Num: float, R8: double
- TX, TXT, Text => in C#: string or ReadOnlyMemory<char>
- BL, Bool => in C#: bool
- TS, TimeSpan => in C#: TimeSpan
- DT => in C#: DateTime
- DZ => in C#: DateTimeOffset
username_2: Closing this one since issue opened by @sfilipi also got closed.
API docs is now at https://docs.microsoft.com/en-us/dotnet/api/microsoft.ml.data.datakind?view=ml-dotnet.
Status: Issue closed
|
zorko-io/zorko-designer | 542460498 | Title: config.yml:2-3: Integrate with CI with trigger for push...
Question:
username_0: The puzzle `52-42548d9b` from #52 has to be resolved:
https://github.com/zorko-io/zorko-designer/blob/71509f3b6d005a0dea79784a8bf61036b22fcba3/.circleci/config.yml#L2-L3
The puzzle was created by nesterone on 26-Dec-19.
Estimate: 30 minutes, role: DEV.
If you have any technical questions, don't ask me, submit new tickets instead. The task will be "done" when the problem is fixed and the text of the puzzle is _removed_ from the source code. Here is more about [PDD](http://www.yegor256.com/2009/03/04/pdd.html) and [about me](http://www.yegor256.com/2017/04/05/pdd-in-action.html).
Status: Issue closed
Answers:
username_0: The puzzle `52-42548d9b` has disappeared from the source code, that's why I closed this issue.
username_0: @username_0 the puzzle [#133](https://github.com/zorko-io/zorko-designer/issues/133) is still not solved. |
arista-eosplus/puppet-eos | 457555837 | Title: Default value for staging_file not used for eos_switchconfig
Question:
username_0: Using this example puppet config `example1.pp`:
```
$running_config = ... # Config is a string
eos_switchconfig { 'running-config':
content => $running_config,
}
```
```
-bash-4.3# puppet apply example1.pp --trace
...
Error: /Stage[main]/Main/Eos_switchconfig[running-config]: Could not evaluate: no implicit conversion of nil into String
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/errors.rb:106:in `fail'
/opt/puppetlabs/puppet/modules/eos/lib/puppet/provider/eos_switchconfig/default.rb:102:in `rescue in flush'
/opt/puppetlabs/puppet/modules/eos/lib/puppet/provider/eos_switchconfig/default.rb:88:in `flush'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/type.rb:1006:in `flush'
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/transaction/resource_harness.rb:25:in `evaluate'
...
```
Error disappears when `staging_file` is set to `'puppet-config'`, which should be the default value. |
gchq/gaffer-experimental | 856732467 | Title: UI Add URL to federated store manually
Question:
username_0: When creating a federated store, the user should be able to enter the URL of another graph and add it to the list of graphs they want to proxy within the federated store.
Answers:
username_0: In terms of UI, this could be a text entry field above the table described in #68. When the URL is added manually it can be added to the list of graphs in the table.
username_1: @username_0 When the user is manually adding a URL do they also need to enter a graphID and description or should they just be some autogenerated values?
username_0: The cleanest solution is probably just to make the graphId and description of proxies the same as the graphId and description of what they are proxying to.
username_2: Added a URL text field, which adds a graph to the graph table in the Add Graphs page.
Right now, it stubs a graph id and description, not sure if we'll be able to retrieve the actual name and description of the graph from the cluster.
This stubbed graph, with the entered url is sent as part of the ProxyStores in the request.
username_0: @username_2 Graph description can be obtained from Gaffer using the `/graph/config/description` REST endpoint. I am working on making a `/graph/config/graphId` endpoint as described in the mentioned issue above. This should mean that when adding a graph to a federated store via URL, the proxy's graphId and description will be able to be set from the graph it is proxying to.
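For illustration, here is a rough sketch (in Python, outside this codebase) of what fetching those two values from the entered URL could look like. The base URL and the use of plain `urllib` are assumptions, and the `/graph/config/graphId` endpoint is the one still being added upstream:
```python
# Hypothetical sketch: fetch the proxied graph's description, and eventually its
# graphId, from the URL the user entered. Base URL and client choice are assumptions.
import urllib.request

def fetch_graph_info(base_url):
    def get(path):
        with urllib.request.urlopen(base_url.rstrip("/") + path) as resp:
            return resp.read().decode()
    description = get("/graph/config/description")
    graph_id = get("/graph/config/graphId")  # endpoint still being added upstream
    return graph_id, description

# fetch_graph_info("http://my-gaffer-host:8080/rest")
```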
username_2: Added the work for this ticket so far to #71, will update the graph id and description for the proxy store once the second endpoint is done, Right now it uses the URL entered to create a stub graph id and description. |
wso2/analytics-apim | 611786207 | Title: Make widget header/title configurable
Question:
username_0: **Description:**
Currently the widget header is defined by the property _name_ in the _widget.config_ file as follows.
```
{
"name": "APIM Api Availability",
"id": "APIMApiAvailability",
"thumbnailURL": "",
"configs": {
...,
"options": [
{
"id": "header",
"title": "Header",
"type": {
"name": "BOOLEAN",
"possibleValues": [
true,
false
]
},
"defaultValue": true
},
}
}
```
Displaying the widget header for all the widgets in a dashboard will take up unnecessary space. However, we are unable to remove the header using the _options_ config for _header_ because the widget export feature is embedded in the header, and removing the header will result in removing the widget export feature for that widget. A solution to this is to make the widget header/title configurable and to provide an empty string for the widget header when it does not need to be displayed.
Answers:
username_0: The _options_ config for providing the widget title is as follows:
```
{
"id": "headerTitle",
"title": "Overall Api Info",
"type": {
"name": "TEXT",
},
"defaultValue": "Title1"
}
```
username_1: By default golden layout doesn't allow giving an empty string in the widget configuration for the widget title and if we provide an empty string it uses the default react component name "lm-react-component".
Hence we need to do a fix in carbon-dashboards side [1] to support empty strings for titles.
[1] https://github.com/wso2/carbon-dashboards/blob/master/components/dashboards-web-component/src/viewer/components/DashboardRenderer.jsx#L138-L147
Status: Issue closed
username_0: Reopening since the carbon-dashboard fix has not been implemented yet.
username_2: The same can be achieved by providing a space char (' ') instead of an empty string in the dashboard JSON files. Since we can achieve this from our side, better not to change the empty string check unless really needed.
username_0: The widget header has been removed from the dashboards with https://github.com/wso2/analytics-apim/pull/1408. Hence closing this issue.
Status: Issue closed
|
Swiss-Polar-Institute/project-application | 1002086801 | Title: Allow to save proposals as draft with missing required fields
Question:
username_0: From the beginning of Nestor, when an applicant is filling in a proposal they need to fill in all the required fields in order to save it (even as a draft).
Every now and then someone comments that it would be good to be able to save proposals as drafts with missing fields.
In Nestor this means:
* Decouple the form validation from the database structure (stop using ModelForm?)
* Save the proposal as JSON in a field when it is a draft
* Load a proposal from JSON (draft) or database (submitted) |
NervJS/taro | 341533731 | Title: Unable to use MovableView
Question:
username_0: **Problem description**
The project needs a pinch-to-zoom gesture, and the WeChat mini program component movable-view can do this fairly easily and efficiently. The Taro docs also say movable-view is supported, but actually using it throws an error.
```js
// code can be pasted here
<MovableArea>
<MovableView></MovableView>
</MovableArea>
```
**Error message**
In the editor, ESLint reports: nesting of custom components is not supported.
After saving, it fails to run.

Status: Issue closed
Answers:
username_1: The cause is that `MovableArea` and `MovableView` were not added to the custom component list. A new version released a bit later should fix it.
username_0: Roughly when will the new version be released? The project requirement is quite urgent right now. Or could you tell me if there is any way to work around it myself?
username_1: 0.0.70 has been released
username_2: @taro/cli is 0.0.7 but components is still 0.0.69
username_0: How can I update to 0.0.70 conveniently? I haven't updated before.
username_1: `taro update self` and `taro update project`
username_0: Updated, thanks, the update experience is really good! I tried it and ESLint still reports the error, but no error occurs at compile time. Is it that the ESLint rules weren't updated? Also, the navigateBack function on H5 is broken and unusable; I tried history.go(-1) and that doesn't work either, the page stack keeps piling up. |
blackbaud/skyux2-docs | 457587081 | Title: Document "New" button for select field component
Question:
username_0: Document dev work under way in https://blackbaud.visualstudio.com/Products/_workitems/edit/1141641. Update the UX guidelines; see https://blackbaud.invisionapp.com/share/4GR2C4QSRUE#/screens.
Answers:
username_1: Closing in favor of https://github.com/blackbaud/skyux2-docs/issues/595 where the PR for contribution is referenced.
Status: Issue closed
|
JoepVanlier/Hackey-Patterns | 379476498 | Title: Rightmost column for patterns could get much smaller
Question:
username_0: Right now this column with pattern names takes up too much width, because the title is so long ("Track patterns/items"). I think the meaning is clear anyway, so it could simply be named 'patterns' to make it thinner; I would even prefer no title at all, with the patterns shown only as numbers 00..99. Or offer two options:
(wide) as it is now
(thin) showing no title, names as 00..99
Just another visual improvement idea, trying to make everything more elegant and remove distracting elements as much as possible.
Answers:
username_1: In the latest commit, you can set tinyPatterns in the config to 1 to get this behaviour.
Also, you can drag the line to move the pattern position further to the right.
username_0: If you want you can close this, as the rightmost column has been very thin for some time now; thanks a lot.
Status: Issue closed
|
Jkim-Hack/cse3902 | 844328981 | Title: compass/minimap reset
Question:
username_0: Currently the minimap/compass resets by going away when the reset command is executed - this should not happen. After the compass and map is collected, the minimap displays the rooms and triforce forever, even after any kind of reset.
Status: Issue closed
Answers:
username_0: I fixed this in my PR |
angered-ghandi/OpenAOE | 164997078 | Title: Implement player color handling when loading SLP shapes
Question:
username_0: Currently, the code that reads SLP files skips over the player color pixels (so that they're always black):
https://github.com/username_0/OpenAOE/blob/master/crates/file_formats/slp/src/slp.rs#L300
The SLP interface will need to be modified to take in a player color palette index as a parameter. AOE had several player colors spaced apart by powers of two, so only the index of the first player color really matters. The rest of the colors are made by doing a binary OR against the first color index.
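As a rough illustration of that remap in Python (not the actual Rust implementation; the function name and example numbers are made up):
```python
def remap_player_pixel(pixel_offset, first_player_color_index):
    # Player colour blocks start at power-of-two aligned palette indices, so
    # OR-ing the per-pixel offset into the first colour index selects the right
    # entry inside that player's block (equivalent to adding the offset here).
    return first_player_color_index | pixel_offset

# e.g. a first colour index of 16 (0b10000) and a pixel offset of 5 gives palette index 21
```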
The easiest way to test this (visually) is probably to add a command-line parameter to the [slp_viewer tool](https://github.com/username_0/OpenAOE/blob/master/tools/slp_viewer/src/main.rs) to take the palette index (optionally), and pass that to the SLP crate.
Answers:
username_1: I have [a branch for parsing player color](https://github.com/username_1/OpenAOE/tree/player-color).

Discussion regarding my opinion on how we should approach player color:
```
username_1: Xinayder: just read your bit about turtlepack and it made me remember what we're missing
Xinayder: what?!
username_1: the aoe engine overlayed player-color-only sprites onto other sprites
username_1: so none of the base sprites contain player color
Xinayder: so, how do we fix that?
username_1: We figure out which sprites should be used together and write some metadata so we can render multiple slps
username_1: structures do the overlay I mentioned, units don't, the color is embed directly in units
username_1: test 683
username_1: Xinayder: http://i.imgur.com/sbljsxz.png
username_1: cargo run --release -- 683 --drs "/Users/thill/Desktop/GAME/DATA/GRAPHICS.DRS" --interfac "/Users/thill/Desktop/GAME/DATA/INTERFAC.DRS" --player 6
username_1: we should read as player 1 and apply remaps a single time on load (and apply the color ramp as necessary)
Xinayder: I mean, how to color the structures?
username_1: We'd have metadata about which graphic overlays the structure. we parse the overlay as player 1 and apply the real player remap ramp later when creating the spritesheet
username_1: (we can merge the structure and overlay when creating the spritesheet)
```
Applying `player-color` on top of [`makefiles`](https://github.com/username_1/OpenAOE/commit/055189e430c3da36762096f9bbfbeb69ae3c3e6a) this can easily be tested by:
`cargo run --release -- 683 --drs GAME/DATA/GRAPHICS.DRS --interfac GAME/DATA/INTERFAC.DRS`
Let me know what you think of the approach I've taken.
If you're happy with it I'll open a PR.
Status: Issue closed
|
gradle/gradle | 761243910 | Title: Javadoc uses the same output directory for different source sets
Question:
username_0: When adding a feature which uses its own source set with Javadoc like this:
```
plugins {
id('java-library')
}
sourceSets {
mongodbSupport {
java {
srcDir 'src/mongodb/java'
}
}
}
java {
registerFeature('mongodbSupport') {
usingSourceSet(sourceSets.mongodbSupport)
withJavadocJar()
}
}
```
then the destination directory of the Javadoc tasks `javadoc` (main source set) and `mongodbSupportJavadoc` are the same (`build/docs/javadoc`). If you publish the library with the feature variants, then the contents of the Javadoc jars may contain the wrong things, depending on the order you run the javadoc and jar tasks.
---
cc: @gradle/execution
Answers:
username_0: Fixed via #15396.
Status: Issue closed
|
WebDeveloperAndrew/quasimodallogger | 279000901 | Title: Returns curDir on error
Question:
username_0: There could be other errors besides EEXIST. For example, a permissions error. If this happens, it fails silently. I think you should always `throw err` and never `return curDir` when any `err` occurs.
```
try {
fs.mkdirSync(curDir);
} catch (err) {
if (err.code !== 'EEXIST') {
throw err;
}
return curDir;
}
return curDir;
```
Status: Issue closed
Answers:
username_0: Oh I misread that. `!==` ... |
datalad/datalad | 728374490 | Title: ENH: python -m datalad.install [COMPONENT[=version] [--adjust-bashrc] [-u|--upgrade] [-s|--scheme SCHEME]] [COMPONENT...]
Question:
username_0: We have https://github.com/datalad/datalad/blob/HEAD/tools/ci/install-annex.sh which has already come in handy on many occasions and is used in our CI. But it is a bash script, so CLI parsing is too basic, it would not work on Windows natively, it is harder to unit-test, etc. As its name says, it is only about `annex`.
For all intents and purposes, all systems we would care about would have `python` (installed as part of some distribution, or "natively"), so I think it would be beneficial to redo it in "pure" (no 3rd-party libs outside of the standard library) Python
First stage TODOs:
- [ ] redo in Python using only standard libs (so argparse, no click)
- [ ] place it as `datalad.install` submodule which would also support
- a script (thus `if __name__ == "__main__"`) mode
- has a `__main__` with callout to the same as script mode `main`
- [ ] support only `git-annex` as `COMPONENT`
Use case: I could just `git clone git://github.com/datalad/datalad; python datalad/datalad/install.py git-annex` (adjust paths `/` to `\` if on windows), and have appropriate to the platform most recent (compatible) release of git-annex installed
Additional features to expand with:
- [ ] add `--upgrade` -- detect how `COMPONENT` is installed given environment (conda, native, apt) and upgrade accordingly
- [ ] add `miniconda` component: I keep copy pasting from the handbook... If with this helper I could automate -- that would be awesome! (Having `--env NAME` to mint a new/enter conda environment would also be a nice feature)
- [ ] add `datalad` component (for `--upgrade` would use `apt-get` or `pip` depending on how was installed)
- [ ] `--deps` (`conda install --only-deps`, `apt-get build-dep` -- not the same, but AFAIK closest we can do ATM) -- install build dependencies for the component (if conda it is, `conda install --only-deps -c conda-forge datalad` to get myself all dependencies installed)
- [ ] support of [future](https://github.com/datalad/datalad-extensions/issues/46) "datalad-extensions-artifacts distribution" (in particular for Windows ATM)
- [ ] add `venv` component (may be, as an alternative to `conda`)
Use case: so ultimately `python -m datalad.install git-annex miniconda --env datalad-dev datalad --deps` would install git-annex (as appropriate for OS, before considering miniconda), produce miniconda environment with datalad dependencies installed so I would be ready to "hack". Order dependent, so `miniconda --env datalad-dev git-annex datalad --deps` would first establish miniconda env, and then install `git-annex` via `conda`, ...
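To make that more concrete, a rough, standard-library-only sketch of how such a component-plus-options command line could be parsed (purely illustrative: the option names are copied from this proposal, the splitting strategy is just one possibility, and value-taking options are assumed to use the `--opt=value` form):
```python
"""Illustrative sketch only: parse "COMPONENT[=version] [--options] ..." groups."""
import argparse
import sys

def split_into_groups(argv):
    # A new group starts at every token that is not an option; options seen
    # before the first component are treated as applying to all components.
    common, groups = [], []
    for token in argv:
        if token.startswith("-"):
            (groups[-1] if groups else common).append(token)
        else:
            groups.append([token])
    return common, groups

def parse(argv):
    parser = argparse.ArgumentParser(prog="python -m datalad.install")
    parser.add_argument("component")                # e.g. "git-annex" or "git-annex=8.20201103"
    parser.add_argument("-u", "--upgrade", action="store_true")
    parser.add_argument("-b", "--batch", action="store_true")
    parser.add_argument("--env")                    # e.g. --env=datalad-dev
    parser.add_argument("--deps", action="store_true")
    common, groups = split_into_groups(argv)
    specs = []
    for group in groups:
        opts = parser.parse_args(group + common)
        name, _, version = opts.component.partition("=")
        specs.append((name, version or None, opts))
    return specs

if __name__ == "__main__":
    for name, version, opts in parse(sys.argv[1:]):
        print(name, version or "latest", vars(opts))
```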
Notes:
- may be argparsing should be `[--options-applied-to-all] COMPONENT1[=version] [--options] COMPONENT2[=version] [--options] ...` so `--options` applied per component to remove ambiguity, make more flexible etc
- having it as a part of `datalad` package would allow us to use it internally, so we could offer automatic installation and/or upgrade of git-annex or datalad itself!
- neurodocker (and reproman somewhat to do `conda activate ...` and propagate env changes to the next call ) sound like a big brother to this ;) here we should aim for "standard library only script", but CLI interface ideas/implementation could be borrowed
WDYT @datalad/developers ?
Answers:
username_1: @username_0 The script currently requires that it be `source`d in order for environment variables to be updated. How should that be handled when converted to Python? One option would be to run the script via `eval $(python3 install.py)` and have the script print out the updated environment variable assignments, but this would require suppressing (or redirecting to stderr) all output from subcommands.
username_0: What about an option that takes a path to a script to populate, which the external tool would then source after executing the Python script?
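A tiny sketch of that idea (the `--env-output` option name and the paths are made up):
```python
# Purely illustrative: rather than requiring the caller to eval our stdout, take a
# hypothetical --env-output PATH option and write the environment updates there,
# so the calling shell or tool can source that file afterwards.
import os

def write_env_file(path, env_updates):
    with open(path, "w") as f:
        for name, value in env_updates.items():
            # minimal quoting for the sketch; a real version should shell-quote values
            f.write('export {}="{}"\n'.format(name, value))

write_env_file(
    "/tmp/datalad-install.env",
    {"PATH": "/opt/miniconda3/bin:" + os.environ.get("PATH", "")},
)
# caller side (bash):  . /tmp/datalad-install.env
```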
username_0: @username_1 would you mind implementing the next step, `add miniconda`? By default it should just install into some (announced to the user) temporary directory, but it should have a command line option (e.g. `--path-miniconda`) to specify the location explicitly. On Linux it should be something as simple as
https://github.com/ReproNim/reproman/blob/26ed2f6eba361f579662c519d8d667b5efaa0378/Dockerfile-redhat
```
wget https://repo.continuum.io/miniconda/Miniconda2-latest-Linux-x86_64.sh \
&& bash Miniconda2-latest-Linux-x86_64.sh -b \
&& rm Miniconda2-latest-Linux-x86_64.sh
```
but maybe (I don't remember whether it does so by default or not) it shouldn't add itself to the `.bashrc` by default, but should provide instructions to the user on how to "enable" it. In my case I typically just extract the portion it adds to `.bashrc` into a separate file that I then source on demand, but most likely it would be enough to instruct the user to
```
source "$path_miniconda/etc/profile.d/conda.sh"
```
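For illustration, a rough standard-library-only sketch of that flow (download the installer and run it in batch mode; the URL follows the snippet above but swapped to Miniconda3, and the default path handling is an assumption):
```python
# Illustrative only: download the Miniconda installer and run it non-interactively.
import os
import subprocess
import tempfile
import urllib.request

MINICONDA_URL = "https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh"

def install_miniconda(path=None, batch=True):
    path = path or os.path.join(tempfile.mkdtemp(prefix="datalad-miniconda-"), "miniconda3")
    installer = os.path.join(tempfile.gettempdir(), "miniconda-installer.sh")
    urllib.request.urlretrieve(MINICONDA_URL, installer)
    cmd = ["bash", installer, "-p", path]
    if batch:
        cmd.append("-b")  # accept license non-interactively and do not touch .bashrc
    subprocess.run(cmd, check=True)
    print("To enable conda in your shell, run:")
    print('  source "{}/etc/profile.d/conda.sh"'.format(path))
    return path
```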
username_1: @username_0 Should the script be run interactively, in which the user has to agree to a license and go through some prompts, or in batch/noninteractive mode, in which case no instructions are emitted on how to configure `.bashrc`?
username_0: Let's add `-b|--batch` option to install.py script itself. If specified -- would pass `-b` to miniconda installer (so we could use it on CI etc).
Re conda-forge -- indeed, like that but probably it is time now to decouple conda installer from git-annex installer, so we could install miniconda but then test working with system wide install of git-annex etc.(like in those "ultimate" examples in the description, just without datalad unless you just want to have it done right away). And yeap, please add `--path-miniconda` option.
username_1: @username_0 So, to be clear:
* You want a `miniconda` schema that just installs miniconda and nothing else; it should take `-b` and `--path-miniconda` options
* The existing conda schemata should acquire `-b` and `--path-miniconda` options.
Is that what you're requesting?
username_0: @username_1 let's `add datalad component (for --upgrade would use apt-get or pip depending on how was installed)` so the script could be used to install datalad (with miniconda) to have one command invocation of datalad.
username_1: @username_0 Should the `datalad` component install git-annex as well? Also, I assume the `--upgrade` feature is for sometime later.
username_0: I elaborated a bit more in #5181 on larger RF and thought to not bother with establishing "dependencies" and just rely on distributions to figure it all out. if I request to install datalad in venv, it would not care/be able to decide how to install git-annex. If I request to install in conda or via apt -- those have dependencies setup so they will install git-annex (where they can).
May be later we could add some `--if-absent` option per each component, so e.g. `install.py datalad git-annex --if-absent` will install git-annex "native to current environment" if neither it has `git-annex` in the PATH available prior, nor got it installed by installing `datalad`. |
node-red/node-red-nodes | 317545574 | Title: [node-red-node-xmpp] xmpp in node isn't working
Question:
username_0: ### Which node are you reporting an issue on?
node-red-node-xmpp
### What are the steps to reproduce?
I've added xmpp-in node, and linked it to debug.
If I send a message to the node-red XMPP user nothing happens. There are no messages on the debug tab.
Also, if only xmpp-in node is present (without xmpp-out), node-red XMPP user is in offline state.
### Please tell us about your environment:
Node-RED version: v0.18.4
node.js version: v8.11.1
npm version: 6.0.0
Platform/OS: Raspberry Pi 3 model B
Browser: Google Chrome 66.0.3359.117
Answers:
username_1: the **out node** is a flawless replacement for my *hack* _( an exec node with `sendxmpp`)_
the **in node** shows connected, but outputs nothing on either port?
nodeRed: _0.18.1_
node.js: _8.9.4_
npm: _6.0.0_
node-red-node-xmpp: _0.1.7_
platform: _Ubuntu 16.04.4 LTS_
xmpp server: _talk.google.com_
username_0: I've uncommented this line (42) `xmpp.setPresence('online', node.nick+' online');` in `.node-red/node_modules/node-red-node-xmpp/92-xmpp.js` and it seems the in node is now working fine.
username_2: Currently re-working this node - so will update soon
Status: Issue closed
username_3: Hello, I need to create an XMPP server via Node-RED and I am not finding any tutorial or example. |
actions/checkout | 532145218 | Title: workflows that relied on a tag for versioning no longer work
Question:
username_0: Yesterday I had an anothrNick/github-tag-action based workflow which incremented the last tag on master upon PR merge. (I use that tag during the terraform apply to tag AWS resources with whatever version of my repo.) This morning's changes for V2 appear to always check out with --no-tags, which breaks that workflow. From the V2 PR changes, I don't see how I can use the V2 actions/checkout and retain the last tag. In the meantime I'll pin actions/checkout to yesterday's version so my workflows work again
Answers:
username_1: I too am using tags, pulling them in manually with `git fetch --tags`
username_2: @username_0 rather than yesterday's version, consider binding to `actions/checkout@v1` (basically yesterday's version, but it will get future updates for the v1 channel)
username_0: Thank you @username_2, I'd bound to v1.2.0 but will change to v1 as per your suggestion.
username_2: @username_0 regarding the tags... we made perf improvements for the v2 version so that only a single commit is fetched. However another advantage of the v2 version is that it makes it easier to script authenticated git commands (auth token persisted in the git config). So as @username_1 mentioned you can `git fetch --tags` to get all of the tags. hth and sorry for the inconvenience.
username_0: Thanks for the followup (and work!).
Status: Issue closed
|
mattermost/mattermost-plugin-gitlab | 496145745 | Title: Private hosted GitLab path is wrong
Question:
username_0: Thanks for the nice plugin.
I have integrated it with my self-hosted GitLab; it worked and gave me issues etc.
New icons are added on the bottom left with the proper count. But when I click on them it opens a gitlab.com link instead of my self-hosted URL.
Answers:
username_1: Would you please test if reloading solves your problem?
username_0: Oh right, it worked. Thanks
Status: Issue closed
|
irazasyed/telegram-bot-sdk | 602270307 | Title: getwebhookupdate work with postman just ()
Question:
username_0: Hello,
I'm using irazasyed:dev-master
working on a local XAMPP server and using ngrok to get an https URL.
I have a problem getting updates with the getWebhookUpdate method: it works successfully when I test with Postman, whereas when I send text from Telegram I don't get a response from the bot.
Also, an important note: the commandsHandler is working successfully and the bot responds automatically.
And this is my route code:
`Route::post('/webhook', 'BotController@webhookHandler');`
and this is my webhookHandler code:
```php
public function webhookHandler()
{
    $update = $this->telegram->commandsHandler(true);
    try {
        $update = $this->telegram->getWebhookUpdate();
        $message = $update[0]['message']['text'];
        $chat_id = $update[0]['message']['chat']['id'];
        $this->telegram->sendMessage(['chat_id' => $chat_id, 'text' => "Hello"]);
        return 'ok';
    } catch (\Exception $e) {
        var_dump($e->getMessage());
    }
}
```
and this is the result of setting webhook:
{"ok":true,"result":{"url":"https://d7eae21b.ngrok.io/SawaBot/public/webhook","has_custom_certificate":false,"pending_update_count":0,"max_connections":40}}
I hope you can help solve this problem.
Best Regards
Answers:
username_1: You are right. I have the same problem. |
GoogleChrome/lighthouse | 290859116 | Title: DevTools Error: Cannot read property 'innerWidth' of null
Question:
username_0: **Initial URL**: http://www.t-online.de/nachrichten/ausland/krisen/id_83097508/bundesregierung-wegen-tuerkischer-offensive-gegen-kurden-in-der-kritik.html
**Chrome Version**: 62.0.3202.94
**Error Message**: Cannot read property 'innerWidth' of null
**Stack Trace**:
```
TypeError: Cannot read property 'innerWidth' of null
at Emulation.AdvancedApp._onSetInspectedPageBounds (chrome-devtools://devtools/bundled/inspector.js:7758:78)
at Emulation.InspectedPagePlaceholder.dispatchEventToListeners (chrome-devtools://devtools/bundled/inspector.js:477:23)
at Emulation.InspectedPagePlaceholder.update (chrome-devtools://devtools/bundled/inspector.js:7855:237)
at Audits2.Audits2Panel._stopAndReattach (chrome-devtools://devtools/bundled/audits2/audits2_module.js:167:372)
at <anonymous>
```
Answers:
username_1: from #4939 by @FlippingKnight
**Initial URL**: https://tobiasdokken.com/
**Chrome Version**: 65.0.3325.181
**Error Message**: Cannot read property 'innerWidth' of null
**Stack Trace**:
```
TypeError: Cannot read property 'innerWidth' of null
at Emulation.AdvancedApp._onSetInspectedPageBounds (chrome-devtools://devtools/bundled/inspector.js:8057:78)
at Emulation.InspectedPagePlaceholder.dispatchEventToListeners (chrome-devtools://devtools/bundled/inspector.js:482:23)
at Emulation.InspectedPagePlaceholder.update (chrome-devtools://devtools/bundled/inspector.js:8154:237)
at Audits2.Audits2Panel._stopAndReattach (chrome-devtools://devtools/bundled/audits2/audits2_module.js:177:372)
```
username_2: FWIW, here's the source:

username_3: I've tested these urls on chrome 69.0.3497.100 and did not get any error.
http://www.t-online.de/nachrichten/ausland/krisen/id_83097508/bundesregierung-wegen-tuerkischer-offensive-gegen-kurden-in-der-kritik.html
https://tobiasdokken.com/
@username_2 perhaps did got fixed in chromium source?
username_3: I'm marking this as pending-closed. Looks like it's not happening anymore
username_4: Last saw this in May / Chrome 73, so it's probably still an issue.
username_2: in #8122 this was repro'd in m73 which is pretty recent.. so likely still open.
username_5: I'm getting this error
username_4: Doesn't happen for Canary w/ any of the reported urls. And we haven't seen this error in awhile. So let's close.
Status: Issue closed
|
aueangpanit/jsdoc-proptypes | 639492915 | Title: Some documentation and examples?
Question:
username_0: No JSDoc comments show up in VS Code:

Thanks
Answers:
username_1: Hi,
Yes unfortunately, it doesn't work with interfaces yet. For now, it just generates jsdoc comments based on .propTypes of a component
username_0: Ok thanks  |
defeo/hhs-keyex | 356436678 | Title: Better explain GHS algorithm
Question:
username_0: The purpose is to represent the walk in subexponential space.
Status: Issue closed
Answers:
username_2: I think I may have clobbered your fix to this a few commits ago. I'd rather not risk collateral damage in unwinding it all---would you mind checking/re-fixing?
username_2: The purpose is to represent the walk in polynomial space.
username_1: I think I didn't get the problem... the text is fine.
username_0: Both of your edits are in the HEAD, and they complement each other well.
Status: Issue closed
|
summernote/summernote | 1167258604 | Title: Summernote.org still Supports Bootstrap 3.x.x to 4.x.x
Question:
username_0: I was about to give up on using this when I saw this on `Summernote.org`, because I am using `Bootstrap 5` on my homepage.
Answers:
username_1: You know, as this is an Open Source Project, we do accept Pull Requests to update, and fix things, and for things like this it's a good way to learn how the process works, and contribute to a project.
Anyway, that said, I'll have a look at updating those when I can.
username_1: Resolved.
Status: Issue closed
|
devynspencer/puppet-workstation | 957419029 | Title: Refactor module entrypoints to follow best practices
Question:
username_0: Meaning parameters, naturally.
- [ ] https://puppet.com/docs/puppet/7/style_guide.html#style_guide_classes-param-defaults
Answers:
username_0: Use data binding pattern, `params.pp` is way outdated, ugh.
puppet.com/docs/puppet/7/hiera_automatic.html
Status: Issue closed
username_0: Resolved by 3c829a96444a182ba5bf7c0661c5f47997f3be95 |
beerjs/abq | 230459970 | Title: May 31st
Question:
username_0: Taking requests for a location, but am thinking Red Door on Candelaria. 6:30pm. All in favor?
Answers:
username_1: That sounds good to me.
username_2: I'll be there. I mentioned it to the RubiABQ group last night. A few others said they'd go.
username_3: Count a few of us in!
username_4: In for 2.
username_0: See y'all tonight at Red Door: https://goo.gl/maps/JsDAcVGeCbz
username_5: GREAT SUCCESS
MUCH MEETING
Status: Issue closed
|
Amstelland-Software-Development/FRONTEND-ESSENTIALS | 782641835 | Title: Suggestions
Question:
username_0: FE – Readme
There is no text under ##Inhoud. Explain here what students are going to do in this module.
0101
It confuses me a bit that you start with an explanation of nested tags, while the task is about clickable images.
0102
-
0103
-
0104
An example is promised, but it isn't there.
0105
"To be able to navigate on a website, another developer has built a navbar for you." I can't quite place that sentence.
0106
Remind students of the <nav> tag in the assignment?
0107
Also mention that the reference to the CSS file goes in the <head> and the JS file at the bottom of the <body>?
0201
What wasn't clear to me at the start here is which code goes where. What goes in style.css and what in index.html?
Extend the assignment with:
Create 4 paragraphs of text on a web page with a reference to style.css
Create a file style.css
0202
It is called box-model, but is that what it is actually about?
Here the <div> element comes out of nowhere for me (in FE Basics?)
What wasn't clear to me at the start here is which code goes where. What goes in style.css and what in index.html?
Extend this or the next assignment with?
Create a web page with a reference to style.css
Create a file style.css
Create 4 paragraphs of text on the page
Style the paragraphs in 4 different ways by adjusting your style.css
1. Create an index.html and copy the code above in its entirety. I don't see that code.
Isn't it missing here that a <span> and a <div> are inline or block by default?
This video <NAME>
0203
First have students change block into inline and vice versa before making things invisible?
What is the point of making it invisible? Why not remove it instead?
0204
"This just works the way it should." This remark comes out of nowhere for me (in FE Basics?)
#child-2 starts with a # and not with a dot. Why is that?
Has the parent-child relationship come up before? Dedicate a separate task to it?
0301
-
[Truncated]
-
0504
-
0505
-
0506
-
0507
You write:
"That entire film HTML page is of the data type Object. Every HTML document is a Document Object Model (DOM).
As soon as an HTML document is loaded into the browser, it becomes a Document Object. This document, just like any other Object, has properties."
I can't quite follow this. The concept of the DOM doesn't become clear to me.
General remark
• Give the Readmes the number of the task, for example readme01. Then you can see better what you have open in VS Code |
sachaos/todoist | 689173643 | Title: Using completed parameters?
Question:
username_0: I'm sure I'm missing something completely obvious, but is it possible to make use of these API parameters when pulling the list of completed tasks?
https://developer.todoist.com/sync/v8/#get-all-completed-items
Ideally, I'd like to be able to run something like `completed/get_all?limit=100&since=2020-8-15T00:00&project=PROJECT_ID`
Thanks! 😊 |
galaniprojects/liveblog | 225642476 | Title: Provide a way to override liveblog pusher credentials
Question:
username_0: Currently the following is not possible:
```
$config['liveblog.notification_channel.liveblog_pusher'] = [
'app_id' => $overriden_pusher_app_id,
'key' => $overriden_pusher_key,
'secret' => $overriden_pusher_secret,
];
```
It's very important for us not to store sensitive data in the repository, so we would like to use config overrides in settings.php instead. So the content of liveblog.notification_channel.liveblog_pusher.yml would be something symbolic like:
```
app_id: 'app_id_placeholder'
key: 'key_placeholder'
secret: 'secret_placeholder'
```
And then we override the pusher credentials with variables which are injected into settings.php and are not part of the project repository. For this we need the pusher to provide a way to override config. With the current setup it's not possible, since config variables are loaded from YAML only, ignoring config overrides in settings.php
Answers:
username_0: Currently I am using the following quick patch to get config overrides working. https://www.drupal.org/files/issues/2874796-2-liveblog-let-config-override.patch |
DonJayamanne/pythonVSCode | 245867261 | Title: Auto Indent by indentation rules
Question:
username_0: ## Environment data
VS Code version: 1.14
Python Extension version: Latest
Python Version: Any
OS and version: macOS
We implemented auto indent by bringing in indentation rules (which were originally invented by TextMate: https://manual.macromates.com/en/appendix). It would be great if we add this feature to the Python extension. A good starting point is Atom's [Python Indentation Rules](https://github.com/atom/language-python/blob/master/settings/language-python.cson) but note it has some [potential issues](https://github.com/atom/language-python/issues?q=is%3Aopen+is%3Aissue+label%3Aauto-indent)
Answers:
username_1: I believe https://github.com/username_3/pythonVSCode/issues/1087 might be related
username_2: Don tried this before if I recall, but I don't remember what the issues he encountered were. Will see if I can dig up the issue number
Status: Issue closed
|
GTi-Jr/sistema-de-assinatura | 193048617 | Title: Iugu::AuthenticationException: Chave de API não configurada. Utilize Iugu.api_key = ... para configurar.
Question:
username_0: View details in Rollbar: [https://rollbar.com/gtijr/Caixa_cegonha/items/70/](https://rollbar.com/gtijr/Caixa_cegonha/items/70/)
```
Iugu::AuthenticationException: Chave de API não configurada. Utilize Iugu.api_key = ... para configurar.
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/iugu-1.0.8/lib/iugu/api_request.rb", line 12, in request
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/iugu-1.0.8/lib/iugu/plan.rb", line 9, in fetch_by_identifier
File "/home/deploy/caixa_cegonha/releases/20161202070003/app/models/plan.rb", line 22, in iugu_object
File "/home/deploy/caixa_cegonha/releases/20161202070003/app/models/plan.rb", line 57, in save_in_iugu
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/activesupport-4.2.6/lib/active_support/callbacks.rb", line 432, in block in make_lambda
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/activesupport-4.2.6/lib/active_support/callbacks.rb", line 164, in block in halting
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/activesupport-4.2.6/lib/active_support/callbacks.rb", line 504, in block in call
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/activesupport-4.2.6/lib/active_support/callbacks.rb", line 504, in each
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/activesupport-4.2.6/lib/active_support/callbacks.rb", line 504, in call
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/activesupport-4.2.6/lib/active_support/callbacks.rb", line 92, in __run_callbacks__
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/activesupport-4.2.6/lib/active_support/callbacks.rb", line 778, in _run_save_callbacks
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/activerecord-4.2.6/lib/active_record/callbacks.rb", line 302, in create_or_update
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/activerecord-4.2.6/lib/active_record/persistence.rb", line 120, in save
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/activerecord-4.2.6/lib/active_record/validations.rb", line 37, in save
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/activerecord-4.2.6/lib/active_record/attribute_methods/dirty.rb", line 21, in save
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/activerecord-4.2.6/lib/active_record/transactions.rb", line 286, in block (2 levels) in save
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/activerecord-4.2.6/lib/active_record/transactions.rb", line 351, in block in with_transaction_returning_status
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/activerecord-4.2.6/lib/active_record/connection_adapters/abstract/database_statements.rb", line 213, in block in transaction
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/activerecord-4.2.6/lib/active_record/connection_adapters/abstract/transaction.rb", line 184, in within_new_transaction
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/activerecord-4.2.6/lib/active_record/connection_adapters/abstract/database_statements.rb", line 213, in transaction
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/activerecord-4.2.6/lib/active_record/transactions.rb", line 220, in transaction
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/activerecord-4.2.6/lib/active_record/transactions.rb", line 348, in with_transaction_returning_status
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/activerecord-4.2.6/lib/active_record/transactions.rb", line 286, in block in save
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/activerecord-4.2.6/lib/active_record/transactions.rb", line 301, in rollback_active_record_state!
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/activerecord-4.2.6/lib/active_record/transactions.rb", line 285, in save
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/bundler/gems/rails_admin-9e1c66ee036c/lib/rails_admin/adapters/active_record/abstract_object.rb", line 23, in save
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/bundler/gems/rails_admin-9e1c66ee036c/lib/rails_admin/config/actions/edit.rb", line 32, in block (2 levels) in <class:Edit>
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/bundler/gems/rails_admin-9e1c66ee036c/app/controllers/rails_admin/main_controller.rb", line 22, in instance_eval
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/bundler/gems/rails_admin-9e1c66ee036c/app/controllers/rails_admin/main_controller.rb", line 22, in edit
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/actionpack-4.2.6/lib/action_controller/metal/implicit_render.rb", line 4, in send_action
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/actionpack-4.2.6/lib/abstract_controller/base.rb", line 198, in process_action
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/actionpack-4.2.6/lib/action_controller/metal/rendering.rb", line 10, in process_action
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/actionpack-4.2.6/lib/abstract_controller/callbacks.rb", line 20, in block in process_action
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/activesupport-4.2.6/lib/active_support/callbacks.rb", line 117, in call
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/activesupport-4.2.6/lib/active_support/callbacks.rb", line 555, in block (2 levels) in compile
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/activesupport-4.2.6/lib/active_support/callbacks.rb", line 505, in call
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/activesupport-4.2.6/lib/active_support/callbacks.rb", line 92, in __run_callbacks__
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/activesupport-4.2.6/lib/active_support/callbacks.rb", line 778, in _run_process_action_callbacks
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/activesupport-4.2.6/lib/active_support/callbacks.rb", line 81, in run_callbacks
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/actionpack-4.2.6/lib/abstract_controller/callbacks.rb", line 19, in process_action
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/actionpack-4.2.6/lib/action_controller/metal/rescue.rb", line 29, in process_action
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/actionpack-4.2.6/lib/action_controller/metal/instrumentation.rb", line 32, in block in process_action
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/activesupport-4.2.6/lib/active_support/notifications.rb", line 164, in block in instrument
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/activesupport-4.2.6/lib/active_support/notifications/instrumenter.rb", line 20, in instrument
File "/home/deploy/caixa_cegonha/shared/bundle/ruby/2.3.0/gems/activesupport-4.2.6/lib/act<issue_closed>
Status: Issue closed |
gehring/fax | 625179174 | Title: Provide basic example of how to use
Question:
username_0: Example: the lighthouse example of <NAME> and <NAME> with Newton's method. Add to README.md
Answers:
username_0: @irenezhang30 has an implementation which would be nice to merge
username_0: See #18
username_0: Addressed in 2d492247a103f1b83afc05ac0efb32e3e62ff658
Status: Issue closed
|
CCob/Jboss-Wilfly-Hashes-to-Hashcat | 607241493 | Title: Repo name / Filename all over the place
Question:
username_0: ✓ JBoss / Wildfly
✗ Jboss-Wilfly-Hashes-to-Hashcat
✗ jbosswidlyfly_to_hashcat.py
✗ `help="JBoss/Wilfly user properties file containing hashes"`
Answers:
username_0: edited; feel free to take the changes, if you like:
```
#!/usr/bin/env python3 -B -u
# -*- coding: utf-8 -*-
"""convert JBoss/Wildfly user properties list to hashcat mode 20
USAGE:
python code.py --userlist mgmt-users.properties >jbosswildfly.hashes
hashcat -O -m 20 --username --hex-salt jbosswildfly.hashes rockyou.txt"""
import re
import sys
import argparse
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--userlist", help="JBoss/Wildfly user properties file containing hashes")
parser.add_argument("--realm", default="ManagementRealm", nargs='?')
args = parser.parse_args()
hash_format = r'^(.*)=([0-9a-f]{32})\s*$'
with open(args.userlist,"r") as f:
for line in f:
line = line.strip()
if not line: continue
res = re.search(hash_format,line)
if res is not None:
enc = ("%s:%s:"%(res.group(1),args.realm)).encode().hex()
print("%s:%s:%s" %(*res.groups(),enc))
``` |
minishift/minishift | 369000599 | Title: When preflight checks are skipped hyperv virtual switch (by default) is not set to Default Switch
Question:
username_0: ### Steps to reproduce
1. Do not set the config variable `hyperv-virtual-switch` and env `MINISHIFT_HYPERV_VIRTUAL_SWITCH`
2. run `minishift start --skip-startup-checks`
### Expected
Since no value for the virtual switch was provided, _minishift_ should choose the **`Default Switch`**
### Actual
An external switch is chosen instead by libmachine
Since we set the value for `hyperv-virtual-switch` in the func `checkHypervDirver` and it gets skipped when the preflight checks are skipped...
https://github.com/minishift/minishift/blob/58aff7e80b83b291e06b35b423511c1eab50ce69/cmd/minishift/cmd/start_preflight.go#L412-L432
Answers:
username_0: resolved via #2642
Status: Issue closed
|
gaearon/react-hot-loader | 57680844 | Title: How to quiet the console.log outputs?
Question:
username_0: This kind of output in the browser console is really hard to work with. Can I disable these outputs?
```
[HMR] App is up to date.
[WDS] App updated. Recompiling...
[WDS] App hot update...
[HMR] Checking for updates on the server...
```
Status: Issue closed
Answers:
username_1: You can copy paste `webpack/hot/only-dev-server` to your project, strip logs from it and use your version instead.
Closing, let me know if I missed something!
username_2: Silent option in cofiguratiion would be nice.
username_1: It's not really something we can control in RHL. You can file an issue with Webpack.
username_3: It's a bit hacky, but I currently have this and it helps:
```
if (process.env.NODE_ENV === 'development') {
console.clear();
}
```
The `NODE_ENV` variable is set by WebPack in the config.
username_1: I'd use `if ('production' !== process.env.NODE_ENV)` in this case.
You'd need to set it to `true` using `DefinePlugin` for production anyway so React is properly compressed.
username_4: How do I disable console.log for webpack/hot/only-dev-server?
```js
const server = new WebpackDevServer(compiler, {
  stats: { colors: true },
  hot: true,
  clientLogLevel: 'none',
  //quiet: true,
  watchOptions: {
    ignored: /node_modules/
  }
  /*
  historyApiFallback: {
    index: '/'
  }*/
});
```
quiet: true not working!
username_5: Add this to your apps `index.js`:
```
// Clear after module reload
window.addEventListener('message', e => {
if ('production' !== process.env.NODE_ENV) {
console.clear();
}
});
```
username_6: This is really annoying, could anyone tell me where these logs are located in the `react-hot-reload` folder? Did not see them.
username_5: @username_6 They're sprinkled liberally throughout the code.
Go to your `/node_modules/webpack` folder and run a search and replace:
```
find . -type f -name "*.js" -exec sed -i '' 's/console.log.*$//g' {} \;
```
username_6: @username_5 ah thanks! I was looking in the react-hot-reload folder.
username_5: There's also this:
https://github.com/webpack/webpack-dev-server/pull/579#issuecomment-244406936
username_7: A quick way to silence it could be filtering in devtools with a regex, e.g. ~ `/^((?!(\[HMR|\[WDS)).)*$/`
username_8: Add this to your apps index.js or index.ts:
```js
var _log = console.log;
console.log = function () {
  if (arguments[0].indexOf('[HMR]') == -1) // remove hmr logging
    return _log.apply(console, arguments);
};
```
username_9: Import `log` module directly in your bundle, and change logLevel:
```
import { setLogLevel } from 'webpack/hot/log';
setLogLevel('none');
```
username_10: @username_9 set `var logLevel = "none";` by default OMG
ALL PEOPLE HATE IT ((((
username_11: For those who can't make it work with `import { setLogLevel } from 'webpack/hot/log';`, see this comment: https://github.com/webpack/webpack/pull/6265#issuecomment-430932442
username_12: None of these suggestions worked, except for the CLI option!!
```
webpack --client-log-level warning
// or
webpack --client-log-level error
```
It defaults to `info` which is where all the [HMR] AND [WDS] spam comes from.
username_13: Hey guys, none of these suggestions worked for me, except the CLI option does!!
```
webpack --client-log-level warning
// or
webpack --client-log-level error
```
It defaults to `info` which is where all the [HMR] AND [WDS] spam comes from.
username_14: You can also just set `clientLogLevel: none` in your webpack `devServer` config:
```
devServer: {
clientLogLevel: 'none'
}
```
https://webpack.js.org/configuration/dev-server/#devserverclientloglevel
username_15: If you're using `webpack-hot-middleware`, you can [configure it](https://github.com/webpack-contrib/webpack-hot-middleware#client) like this: `webpack-hot-middleware/client?reload=true&noInfo=true`
username_16: Webpack config
```
devServer: {
clientLogLevel: "silent",
```

username_17: I love you guys. Finally found this thread after raging hard for like two years |
Meituan-Dianping/Logan | 423223198 | Title: Question about log retrieval
Question:
username_0: We're currently using version 1.2.1. From the code, the log files are retrieved by a date in the xxxx-xx-xx format, which corresponds to the 0:00 timestamp, right? So here's the problem: even within the same day the timestamps differ, so wouldn't the logs fail to be retrieved?
Answers:
username_0: Or rather, under what circumstances would the log file name not use the 0:00 timestamp?
username_1: This is accurate to the day, not to a specific hour.
username_0: @username_1 Here's the situation: the project also operates overseas, and we found the logs could not be pulled back. Looking into it, the timestamp was for 1:00 instead of 0:00, which is why they can't be retrieved.
username_1: Could the time zone difference have caused the day to change?
username_0: We suspected that; the time difference happens to be exactly one hour. How should this case be handled, implement the log lookup part ourselves?
username_1: I suggest that when passing the date to Logan.send, you account for the time zone difference and whether it crosses into the next day.
username_0: Tried that, but the logs still can't be pulled back no matter what. After retrieving them manually, we found the log file name uses the 1:00 timestamp. Another problem: sometimes there are two log files for the same day in the log folder.
username_1: Let me look into this; it may well be an issue caused by this kind of time zone difference.
username_0: OK, thanks for your trouble.
username_1: Quick question: when you pass the timestamp to the send function, how precise is it? When we format, we use new SimpleDateFormat("yyyy-MM-dd"), which is only accurate to the day, and this method uses Locale.getDefault() as the Locale parameter by default, so in theory there shouldn't be a time zone problem.
username_0: We send it directly in the xxxx-xx-xx format, so precision beyond the day shouldn't matter, right? From the source code it formats straight to the day. But at the moment we can't pull back any of the overseas logs, and the ones retrieved manually don't use the 0:00 timestamp. Would it be better to name the log files with the yyyyMMdd format instead?
username_0: Hi, I tried changing the device time zone to Bangkok time, which is one hour behind Beijing time. The log file name indeed does not use the 0:00 timestamp but the 1:00 one.
username_1: Somehow I have a feeling Locale.getDefault() is the trap here...
username_0: Then how do we fix it... sigh. But what about naming with the 20190410 format?
username_1: Let me think about it.
username_2: When an upload is triggered, a copy of the log is made; if an exception occurs at that point, two copies of the log will exist.
username_3: May I ask whether the cause of this problem has been found?
username_1: Only the current day's log is copied.
username_1: Not yet.
username_4: It's not a bug in the system.
long getTime ()
Returns the number of milliseconds since January 1, 1970, 00:00:00 GMT represented by this Date object
This method returns the time from January 1, 1970 GMT up to now.
After setting the device to Bangkok time (as above) but still converting with Beijing time, there is a one-hour difference; you need to subtract one hour.
Status: Issue closed
|
github1586/nuxt-bnhcp | 397668453 | Title: Running locally, the SQL file is missing the course table
Question:
username_0: SELECT i.institutionsName,e.gradeId,w.gradeTwoId,t.teacherName,t.evalPerson, p.campusesName, c.*,
date_format(open_date,'%Y-%m-%d') open_date1,
date_format(end_date,'%Y-%m-%d') end_date1
from course c
left join teacher t on c.teacher_id = t.thacherId
left join campuses p on t.schoolId = p.campusesId
left join institutions i on p.campusesParentId = i.institutionsId
left join gradeThree h on c.course_id = h.gradeThreeId
left join gradeTwo w on h.pid = w.gradeTwoId
left join gradeOne e on w.pid = e.gradeId
where e.gradeId=15963591
ORDER BY RAND()
limit 0,15
=> `Table 'bnhcp.course' doesn't exist`
Answers:
username_1: @username_0 I'll find some time and re-upload a copy
username_2: Right, how soon can you upload a copy? Just that one table is missing
username_2: @username_1 the user table is also missing
username_1: All the tables have been uploaded
Status: Issue closed
|