repo_name (stringlengths 4–136) | issue_id (stringlengths 5–10) | text (stringlengths 37–4.84M) |
---|---|---|
paritytech/substrate | 453487000 | Title: state_queryStorage is too verbose
Question:
username_0: The `state_queryStorage` RPC method enumerates storage entries that were changed between the given blocks. However, the resulting list size is proportional to the number of blocks, not the number of changes. It would be great if we could omit the storage entries that were unchanged.
cc @svyatonik
Answers:
username_1: @username_0 Wouldn't it be better to introduce a different RPC method for that then? like `state_diffStorage(blockA, blockB)`?
AFAIR the point of `queryStorage` was to be able to trace all changes that happened since a given block.
username_0: What I was trying to convey is that it encodes no changes as an empty array, whereas it could just be omitted.
username_1: Oh, is it? Then yeah, we could simplify that. I think we don't do that if the changes trie is disabled, so probably the behaviour differs there.
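For illustration, here is a minimal sketch of the compaction username_0 is asking for, assuming the usual result shape of one `{block, changes}` set per block: entries whose `changes` list is empty are simply dropped.
```python
# Hypothetical response data; real values are hex-encoded block hashes,
# storage keys, and storage values.
response = [
    {"block": "0xaaa...", "changes": [["0xkey...", "0x01"]]},
    {"block": "0xbbb...", "changes": []},          # nothing changed in this block
    {"block": "0xccc...", "changes": [["0xkey...", "0x02"]]},
]

# Omit the per-block entries that carry no changes.
compact = [entry for entry in response if entry["changes"]]
```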
Status: Issue closed
|
PyCQA/isort | 718503704 | Title: tests/unit/test_pylama_isort.py::TestLinter::test_run fails from the PyPI tarball
Question:
username_0: E AssertionError: assert not [{'col': 0, 'lnum': 0, 'text': 'Incorrectly sorted imports.', 'type': 'ISORT'}]
E + where [{'col': 0, 'lnum': 0, 'text': 'Incorrectly sorted imports.', 'type': 'ISORT'}] = <bound method Linter.run of <isort.pylama_isort.Linter object at 0x7fbcd56e1d30>>('/isort-5.6.1/isort/api.py')
E + where <bound method Linter.run of <isort.pylama_isort.Linter object at 0x7fbcd56e1d30>> = <isort.pylama_isort.Linter object at 0x7fbcd56e1d30>.run
E + where <isort.pylama_isort.Linter object at 0x7fbcd56e1d30> = <tests.unit.test_pylama_isort.TestLinter object at 0x7fbcd57c85e0>.instance
E + and '/isort-5.6.1/isort/api.py' = <function join at 0x7fbcd6656550>('/isort-5.6.1/isort', 'api.py')
E + where <function join at 0x7fbcd6656550> = <module 'posixpath' from '/usr/local/lib/python3.8/posixpath.py'>.join
E + where <module 'posixpath' from '/usr/local/lib/python3.8/posixpath.py'> = os.path
tests/unit/test_pylama_isort.py:15: AssertionError
=========================== short test summary info ============================
FAILED tests/unit/test_pylama_isort.py::TestLinter::test_run - AssertionError...
============================== 1 failed in 0.09s ===============================
ERROR: /isort-5.6.1/isort/api.py Imports are incorrectly sorted and/or formatted.
```
Reproducer Dockerfile:
```
FROM python:3.8
RUN wget https://files.pythonhosted.org/packages/source/i/isort/isort-5.6.1.tar.gz && \
tar xf isort-5.6.1.tar.gz && \
pip install pytest pylama
WORKDIR isort-5.6.1
RUN pytest -s -vv tests/unit/test_pylama_isort.py::TestLinter::test_run
```
Status: Issue closed
Answers:
username_1: I believe this is due to the test, as it was written, being too reliant on living within the exact source tree. If that's the cause, the release of 5.6.2 should resolve this.
Thanks!
~Timothy
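As an illustration of that diagnosis, here is a minimal sketch (a hypothetical test, not the actual 5.6.2 fix) of how such a linter test can avoid depending on the source checkout: lint a file the test generates itself instead of a file shipped in the repository.
```python
def test_run_on_generated_file(tmp_path):
    from isort.pylama_isort import Linter

    # A file whose imports are already sorted, created by the test itself,
    # so the result does not depend on where the package lives on disk.
    path = tmp_path / "sorted.py"
    path.write_text("import os\nimport sys\n")

    # Linter.run returns a list of issue dicts; an empty list means no complaints.
    assert not Linter().run(str(path))
```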
username_0: Works fine with 5.6.2, thanks! |
canjs/can-connect | 173294753 | Title: Syntax error being swallowed by CanJS
Question:
username_0: I have the following code.
```
connect([
connectConstructor,
connectMap,
connectDataParse,
connectStore
], {
Map: Player,
getData: function (params) {
var playerStats = PlayerStats.get({
options: params.options,
id: params.id,
year: params.year
});
var playerStat186 = PlayerStat186.get({
options: params.options,
id: params.id
});
return Promise.all([playerStats, playerStat186]);
},
parseInstanceData: function (data) {
const personalStats = data[1];
const personalStat186 = data[2];
return {
**playerId: xxx.playerId,**
name: personalStats.name
};
}
});
```
The bold line above is invalid, since xxx is not defined. When I run the code, JS stops at the error, but no message appears in the console.
Answers:
username_1: What does this have to do with can-connect? can-connect isn't preventing errors surely.
username_0: parseInstance is being called by can-connect, is it not?
username_1: Ah, so you are saying that errors in parseInstanceData do not bubble to the Promise when calling .get() or .getList()?
Status: Issue closed
username_0: Yes, the error is being captured and returned in the higher level catch. Issue closed.
username_1: So is this not a bug? Did you just forget to handle the .catch() in the promise?
username_0: That is correct. |
lexbor/lexbor | 991971912 | Title: Regression: head and body selection broken on Git master
Question:
username_0: Head and body selection is broken on Git master, but works in 2.1.0. This is a regression introduced in 0c4010b10a922a45513d67beec0ddd29c2951f85.
Reproduction example:
```c
#include <lexbor/html/html.h>
#include <stdio.h>
void serialize(lxb_dom_element_t* node, lxb_dom_document_t* document) {
lexbor_str_t* html_str = lexbor_str_create();
lxb_html_serialize_tree_str(lxb_dom_interface_node(node), html_str);
printf("%s\n", html_str->data);
lexbor_str_destroy(html_str, document->text, true);
}
int main(void) {
static const lxb_char_t html[] = "<!doctype html><html><head></head><body></body></html>";
lxb_html_document_t* html_document = lxb_html_document_create();
lxb_html_document_parse(html_document, html, sizeof(html) - 1);
lxb_dom_document_t* document = &html_document->dom_document;
printf("HEAD:\n");
lxb_dom_element_t* head = lxb_dom_interface_element(lxb_html_document_head_element(html_document));
serialize(head, document);
printf("\nBODY:\n");
lxb_dom_element_t* body = lxb_dom_interface_element(lxb_html_document_body_element(html_document));
serialize(body, document);
return 0;
}
```
Expected result:
```
HEAD:
<head></head>
BODY:
<body></body>
```
Result after 0c4010b10a922a45513d67beec0ddd29c2951f85:
```
HEAD:
[1] 368728 segmentation fault (core dumped) ./test
```
Head is empty and body crashes. If you comment out the lines where the head is selected and serialized, it won't crash anymore, but you will get the head instead of the body:
```
BODY:
<head></head>
```
Answers:
username_1: @username_0
Maybe I am doing something wrong; I get:
```HTML
HEAD:
<head></head>
BODY:
<body></body>
```
Current GitHub code.
username_0: Make sure you are using the correct version of the library at runtime as well. When I point `LIBRARY_PATH` and `LD_LIBRARY_PATH` to the CMake build dir, I can reproduce the bug. As soon as I unset `LD_LIBRARY_PATH`, the bug is gone, because the 2.1.0 release installed via apt is used.
username_0: I figured out what's going on here: Turns out, this happens if you have the old headers in your `CPATH` during compilation, but are linking against the new version. Are there some ABI incompatibilities between the versions?
username_2: Strictly speaking, this library was released with a major version too early. The conventional naming scheme is:
* first release: 0.0.0 (in the form major_version.minor_version.micro_version)
* before a release:
- if only bugs were fixed: micro_version++
- if API was added: minor_version++ and micro_version set to 0
- if API or ABI was broken: major_version++ and micro_version and minor_version set to 0
Note that setting a major version is an important step. It means that the API must be stable during development. Imho, the current lexbor version should have been something like 0.*.*, e.g. 0.2.4.
I don't know if it is possible to restart the versioning from the beginning, but I think it is too late.
username_0: Sorry to nag, but would it be possible to make a new release soon? The develop branch has various fixes for issues with the current stable release for which no workaround exists, as well as some issues for which workarounds exist that I would like to get rid of.
username_1: @username_0
If within a week I do not publish the parser of the CSS properties, then there will be a release.
If I publish a property parser, then there will be releases too :)
Anyway, give me a week.
username_0: How's the release coming along? :see_no_evil:
username_1: @username_0
Unfortunately, everything is dragging on. I rewrote a sizable part of the CSS parser, and this also affected the selectors.
I would love to make a release with these changes.
username_0: What are the implications of the rewrite? |
mdgriffith/style-elements | 235450091 | Title: How to compose across modules?
Question:
username_0: I have two modules, Header and Dashboard, and I can compose their `Html msg` values in my Main by using `Html.map DashboardMsg` etc.
What I want is to change the signature of my Dashboard.view/Header.view etc. to return Element.Element instead of Html.Html, but how do I compose them in my Main.view?
What works so far is to return Html from Dashboard.view by calling Element.render, and then to call Element.html to re-wrap it into an Element so my overall layout works.
Answers:
username_0: Another alternative is to send a tagger from Main.view to Dashboard.view.
username_1: This is definitely an oversight. I've put this on the [map for `v3.1`](https://github.com/username_1/style-elements/issues/14)
Status: Issue closed
username_0: Cool, will close this one. The tagger approach works for me. |
private-octopus/picoquic | 375279138 | Title: Bug on server-side logging of version negotiation
Question:
username_0: <NAME> [11:11 PM]
What’s with this version negotiation packet being logged by picoquic?
```
7b5f578f641c1b8: Sending packet type: 1 (version negotiation), S0,
7b5f578f641c1b8: <1067263059ae891d>, <07b5f578f641c1b8>
7b5f578f641c1b8: versions: 551067, 263059ae, 891d07b5, f578f641, c1b85043, 51315043, 5130ff00,
```
This may be partially a logging error, but our client (f5_test) is parsing the version list as:
```
versions: 0x50435131, 0x50435130, 0xff00000f,
```
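For context, the supported-version list in a QUIC version negotiation packet is just a sequence of 32-bit big-endian words, so the client's parse is easy to sanity-check; a sketch with the values from the log follows (0x50435131/0x50435130 spell "PCQ1"/"PCQ0" in ASCII, and 0xff00000f is the draft-15 version number). The server's logged list, by contrast, appears to start a few bytes too early, inside the connection-ID bytes, which fits the "partially a logging error" suspicion.
```python
import struct

# The version list the client sees, as raw bytes (adjacent literals concatenate).
payload = bytes.fromhex("50435131" "50435130" "ff00000f")

# Decode back-to-back 32-bit big-endian words.
versions = [struct.unpack_from(">I", payload, off)[0]
            for off in range(0, len(payload), 4)]
print([hex(v) for v in versions])
# ['0x50435131', '0x50435130', '0xff00000f']
```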
Status: Issue closed
Answers:
username_0: Fixed by PR #382 |
JamitLabs/ProjLint | 346573877 | Title: Write Getting Started Documentation for Contributors
Question:
username_0: Similar to [this](https://github.com/realm/SwiftLint/blob/master/CONTRIBUTING.md) document on SwiftLint, ProjLint should have a starting point for contributors with explanations of what to conform to in order to introduce a new rule. Also it could include a checklist of actions to take before a new rule can be merged:
- [ ] Make sure SwiftLint and ProjLint are both installed and passing.
- [ ] Make sure to write extensive tests for your new rule.
- [ ] Document rule in Rules.md including options and example config(s).
- [ ] Add Changelog entry under section "Added".<issue_closed>
Status: Issue closed |
AugurProject/augur | 419680049 | Title: Error when user has one pending open order
Question:
username_0: 
to repro:
user has no open orders, then user creates pending order, then goes to portfolio page.
order confirmation comes in and UI gets error.
After refresh the user's order shows up in `Open Orders` section.
The issue is building userOpenOrders in the market selector when the pending order gets removed, before the open order is actually retrieved from augur-node.
Answers:
username_0: @username_1 let's chat on a solution to this issue. It might be an issue with all. I put in a PR with a quick fix, we might need to think about it more.
Status: Issue closed
username_0: 
to repro:
user has one open order on a market, then goes to portfolio page to cancel it. Once the cancel transaction is confirmed and the pending order is removed the UI explodes.
The issue is that the filtered data doesn't get updated when the market collection has been updated. In the screenshot below, the markets and marketObj have been updated by the container.
[screenshot omitted]
`marketObj` is an empty object
Status: Issue closed
|
godotengine/godot | 298105887 | Title: ProjectSettings.load_resource_pack odd behavior (breaks virtual filesystem a bit)
Question:
username_0: <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:** 3.0 Release
**OS/device including version:** Windows
**Issue description:**
When loading resource packs using ProjectSettings.load_resource_pack, one would assume the content from the resource pack is added to the virtual filesystem. This does work, but it also breaks other things; I now present three cases.
In each of the following cases the pck has a folder structure of Content/ModTest and two files in that folder, modtest.png and project.json; modtest is a picture of Earth-Chan and project.json is some text data.
The game itself has two folders inside the Content folder, BaseContent and TestContent.
**1st case: Directly loading resource using load():** This works exactly as expected, no issues about it.

**2nd case: Iterating a directory using Directory.list_dir**: It doesn't work; only the files and folders from the newly loaded pck file seem to be found. This is my method:
```gd
func test_pck_issue():
    var dir = Directory.new()
    if dir.open("res://Content") == OK:
        dir.list_dir_begin(true)
        var directory_name = dir.get_next()
        while (directory_name != ""):
            if dir.current_is_dir():
                print(directory_name)
            directory_name = dir.get_next()
```
In this case, only ModTest is found; this doesn't seem right to me.
Output when ProjectSettings.load_resource_pack is used:
```
ModTest
```
Output when it's not used:
```
BaseContent
TestContent
```
**3rd case: trying to find files using the file browser**: This only seems to find the game's base content, but none of the newly loaded pck content.

I have also tried removing the project configuration from the package; it doesn't change anything.
Answers:
username_0: username_1 said on Discord that, while load_resource_pack is undocumented, it uses PackedData::add_path, which is supposed to be additive, so it seems like it is indeed a bug.
username_0: I figured out what's causing this: the problem is that the ProjectSettings singleton sets `DirAccess::make_default<DirAccessPack>(DirAccess::ACCESS_RESOURCES);`
Because it makes all newly created DirAccess instances use DirAccessPack, it only loads from resource packs, and since we are running in the editor we don't really have a resource pack.
load() doesn't use DirAccess; that's why it has no issues loading from both.
I am not sure how this can be tackled, or if it even can; if I knew, I would fix it myself.
username_1: Might be useful to add that it works in export mode, where all access to files is through Packs; where it breaks is if you try to use both Packs and the filesystem, ie. in the editor.
username_0: I originally proposed to use a hybrid of all three if we are using a mix, but I am not sure if there's a better way.
username_2: If `load` works with pck files in debug mode, it can be used for testing while the other approach is used in release with conditionals; or `load_resource_pack` could _optionally_ behave differently in debug mode.
username_0: @username_2 `load` actually works with multiple sources regardless of being in debug or in release mode, because `load` does not use `DirAccess`; it does it directly instead. So even if you mix filesystem resources and packages it still works, as long as you supply it the proper path.
The problem comes when mixing them both: once a package is loaded, `Directory` can't find any resource that was outside the pck, because `Directory` stops using `DirAccess` and starts using `DirAccessPack`, which only looks at files in pcks.
username_0: I have changed the title to a more descriptive one
username_3: @username_0 Can you still reproduce this bug in Godot 3.2.1 or [3.2.2beta4](https://godotengine.org/article/dev-snapshot-godot-3-2-2-beta-4)?
username_0: Yes, this is still an issue; the engine's design still has this problem.
username_4: Still a problem with Godot 3.2.3 and 3.2.2 from what I found. I made a minimal test case to report it as a bug, before I found this issue.
```gd
func _ready():
var packer = PCKPacker.new();
packer.pck_start("res://pack.pck");
packer.flush(true);
ProjectSettings.load_resource_pack("pack.pck")
var dir = Directory.new()
assert(dir.open("res://dir") == OK, "Could not open the folder 'dir'.")
``` |
kefniark/Fatina | 312421940 | Title: Timing accuracy
Question:
username_0: ## Description
For the moment, the library is based on the deltatime provided by `requestAnimationFrame`.
Some posts seem to say that this dt can be slightly inaccurate, and they recommend using:
* Browser: `performance.now()`
* Node: `process.hrtime()`
https://www.stefanjudis.com/today-i-learned/measuring-execution-time-more-precisely-in-the-browser-and-node-js/
Need to test in different browsers how big the difference is and whether it's still relevant.
Answers:
username_0: Based on first tests,
Chrome 65:
```
update [perf: 250 ms. | raf: 0 ms.]
update [perf: 238.401 ms. | raf: 666.694 ms.]
update [perf: 19.2 ms. | raf: 33.334 ms.]
update [perf: 15.401 ms. | raf: 16.667 ms.]
update [perf: 14.401 ms. | raf: 16.672 ms.]
update [perf: 13.601 ms. | raf: 16.653 ms.]
update [perf: 19.6 ms. | raf: 16.672 ms.]
update [perf: 15.301 ms. | raf: 16.659 ms.]
update [perf: 13.501 ms. | raf: 16.655 ms.]
update [perf: 17 ms. | raf: 16.67 ms.]
update [perf: 16.901 ms. | raf: 16.67 ms.]
update [perf: 18.8 ms. | raf: 16.69 ms.]
update [perf: 14.2 ms. | raf: 16.677 ms.]
```
edge:
```
update [perf: 63.901 ms. | raf: 63.89 ms.]
update [perf: 41.4 ms. | raf: 41.342 ms.]
update [perf: 668.5 ms. | raf: 667.666 ms.]
update [perf: 355.101 ms. | raf: 355.317 ms.]
update [perf: 446 ms. | raf: 446.066 ms.]
update [perf: 213.101 ms. | raf: 213.179 ms.]
update [perf: 249.4 ms. | raf: 249.221 ms.]
update [perf: 167.7 ms. | raf: 167.87 ms.]
update [perf: 149.401 ms. | raf: 149.367 ms.]
update [perf: 154.701 ms. | raf: 154.665 ms.]
update [perf: 127.8 ms. | raf: 127.976 ms.]
update [perf: 109.8 ms. | raf: 109.992 ms.]
update [perf: 116.5 ms. | raf: 116.524 ms.]
update [perf: 90.801 ms. | raf: 90.585 ms.]
update [perf: 109.1 ms. | raf: 109.204 ms.]
```
Status: Issue closed
|
etcd-io/etcd | 397379418 | Title: Multiple endpoints in ETCDCTL_ENDPOINTS lead to rejected connection from "172.16.58.3:5678" (error "EOF", ServerName "") spam in server log
Question:
username_0: We have a cluster of 3 etcd nodes and use it from various shell scripts running in systemd services, which all use etcdctl. Two nodes run on Debian stretch, the third on CentOS 7. All nodes use etcd downloaded directly from GitHub.
etcdctl version: 3.3.10
API version: 3.3
etcd Version: 3.3.10
Git SHA: 27fc7e2
Go Version: go1.10.4
Go OS/Arch: linux/amd64
A lot of the systemd services check some system parameter (e.g. an IP or whether a service is running) and write data about it to etcd. All of the scripts use leases to ensure no outdated information remains in etcd. Some use put, some use txn with a check whether the key exists followed by put.
etcdctl is configured via environment variables (IPs changed for privacy)
```
ETCDCTL_API=3
ETCDCTL_CACERT=/etc/etcd/client_certs/ca.crt
ETCDCTL_CERT=/etc/etcd/client_certs/client.crt
ETCDCTL_DIAL_TIMEOUT=3s
ETCDCTL_ENDPOINTS=https://172.16.58.3:2379,https://172.16.17.32:2379,https://1.2.3.6:2379
ETCDCTL_KEY=/etc/etcd/client_keys/client.key
```
etcd is started by a systemd unit we wrote (adjusted heavily)
```
[Unit]
Description=etcd
Documentation=https://github.com/coreos/etcd
Conflicts=etcd2.service
[Service]
Type=notify
Restart=always
RestartSec=5s
LimitNOFILE=40000
TimeoutStartSec=60s
User=etcd
Group=etcd
ExecStart=/opt/etcd/etcd \
--name node1 \
--data-dir /var/lib/etcd \
--listen-client-urls https://172.16.58.3:2379,https://127.0.0.1:2379 \
--cert-file=/etc/etcd/server_certs/server_client.crt \
--key-file=/etc/etcd/server_keys/server_client.key \
--client-cert-auth \
--trusted-ca-file=/etc/etcd/server_certs/ca.crt \
--advertise-client-urls https://172.16.58.3:2379 \
--listen-peer-urls https://172.16.58.3:2380 \
--peer-cert-file=/etc/etcd/server_certs/server_peer.crt \
--peer-key-file=/etc/etcd/server_keys/server_peer.key \
--peer-client-cert-auth \
--peer-trusted-ca-file=/etc/etcd/server_certs/ca.crt \
--initial-advertise-peer-urls https://172.16.58.3:2380 \
--initial-cluster node1=https://1.2.3.4:2380,node2=https://1.2.3.5:2380,node3=https://1.2.3.6:2380 \
--initial-cluster-token "<PASSWORD>" \
--initial-cluster-state new \
--auto-compaction-mode=revision \
--auto-compaction-retention=1000
[Truncated]
INFO: 2019/01/09 14:42:15 ccBalancerWrapper: updating state and picker called by balancer: CONNECTING, 0xc4202d2a20
INFO: 2019/01/09 14:42:15 balancerWrapper: handle subconn state change: 0xc420198750, CONNECTING
INFO: 2019/01/09 14:42:15 ccBalancerWrapper: updating state and picker called by balancer: CONNECTING, 0xc4202d2a20
INFO: 2019/01/09 14:42:15 balancerWrapper: handle subconn state change: 0xc420198700, READY
INFO: 2019/01/09 14:42:15 clientv3/balancer: pin "1.2.3.5:2379"
INFO: 2019/01/09 14:42:15 ccBalancerWrapper: updating state and picker called by balancer: READY, 0xc4202d2a20
INFO: 2019/01/09 14:42:15 balancerWrapper: got update addr from Notify: [{1.2.3.5:2379 <nil>}]
INFO: 2019/01/09 14:42:15 ccBalancerWrapper: removing subconn
INFO: 2019/01/09 14:42:15 ccBalancerWrapper: removing subconn
INFO: 2019/01/09 14:42:15 balancerWrapper: handle subconn state change: 0xc4201986b0, SHUTDOWN
INFO: 2019/01/09 14:42:15 ccBalancerWrapper: updating state and picker called by balancer: READY, 0xc4202d2a20
INFO: 2019/01/09 14:42:15 balancerWrapper: handle subconn state change: 0xc420198750, SHUTDOWN
INFO: 2019/01/09 14:42:15 ccBalancerWrapper: updating state and picker called by balancer: READY, 0xc4202d2a20
/internet/elected_router
node1
```
It looks like etcdctl connects to all three endpoints and only uses one connection; presumably it races them and uses the first one that responds. I suspect the EOF errors in the logs are caused by those two other connection attempts.
Sadly there seems to be no way to suppress the logging of TLS handshake failures like this.
Answers:
username_0: The closed issue (for lack of information from submitter) #10040 seems to show a similar problem.
username_1: This is a duplicate of https://github.com/etcd-io/etcd/issues/9949
Status: Issue closed
username_2: Yeah, closing; let's track #9949.
flutter/flutter | 571902612 | Title: Can't get sign in with Google
Question:
username_0: D/EGL_emulation(20914): eglMakeCurrent: 0xe4748060: ver 2 0 (tinfo 0xd9145fb0)
D/eglCodecCommon(20914): setVertexArrayObject: set vao to 0 (0) 1 0
I/OpenGLRenderer(20914): Davey! duration=1158ms; Flags=1, IntendedVsync=48083836367320, Vsync=48083869700652, OldestInputEvent=9223372036854775807, NewestInputEvent=0, HandleInputStart=48083883689500, AnimationStart=48083883739290, PerformTraversalsStart=48083883763590, DrawStart=48084959298150, SyncQueued=48084960766380, SyncStart=48084962160370, IssueDrawCommandsStart=48084962515300, SwapBuffers=48084976209690, FrameCompleted=48084996475790, DequeueBufferDuration=17378000, QueueBufferDuration=216000,
I/Choreographer(20914): Skipped 65 frames! The application may be doing too much work on its main thread.
D/EGL_emulation(20914): eglMakeCurrent: 0xe4749f80: ver 2 0 (tinfo 0xd915e180)
D/eglCodecCommon(20914): setVertexArrayObject: set vao to 0 (0) 1 0
W/ActivityThread(20914): handleWindowVisibility: no activity for token android.os.BinderProxy@<PASSWORD>
D/EGL_emulation(20914): eglMakeCurrent: 0xe4748060: ver 2 0 (tinfo 0xd9145fb0)
D/EGL_emulation(20914): eglMakeCurrent: 0xe4748060: ver 2 0 (tinfo 0xd9145fb0)
E/flutter (20914): [ERROR:flutter/lib/ui/ui_dart_state.cc(157)] Unhandled Exception: PlatformException(sign_in_failed, com.google.android.gms.common.api.ApiException: 10: , null)
E/flutter (20914): #0 StandardMethodCodec.decodeEnvelope (package:flutter/src/services/message_codecs.dart:569:7)
E/flutter (20914): #1 MethodChannel._invokeMethod (package:flutter/src/services/platform_channel.dart:156:18)
E/flutter (20914): <asynchronous suspension>
E/flutter (20914): #2 MethodChannel.invokeMethod (package:flutter/src/services/platform_channel.dart:329:12)
E/flutter (20914): #3 MethodChannel.invokeMapMethod (package:flutter/src/services/platform_channel.dart:356:48)
E/flutter (20914): #4 MethodChannelGoogleSignIn.signIn (package:google_sign_in_platform_interface/src/method_channel_google_sign_in.dart:45:10)
E/flutter (20914): #5 GoogleSignIn._callMethod (package:google_sign_in/google_sign_in.dart:230:42)
E/flutter (20914): <asynchronous suspension>
E/flutter (20914): #6 GoogleSignIn._addMethodCall (package:google_sign_in/google_sign_in.dart:285:18)
E/flutter (20914): #7 GoogleSignIn.signIn (package:google_sign_in/google_sign_in.dart:356:9)
E/flutter (20914): #8 _LoginPageState._signInWithGoogle (package:the_hunter_app/login/login.dart:314:64)
E/flutter (20914): #9 _LoginPageState.build.<anonymous closure> (package:the_hunter_app/login/login.dart:237:27)
E/flutter (20914): #10 _InkResponseState._handleTap (package:flutter/src/material/ink_well.dart:705:14)
E/flutter (20914): #11 _InkResponseState.build.<anonymous closure> (package:flutter/src/material/ink_well.dart:788:36)
E/flutter (20914): #12 GestureRecognizer.invokeCallback (package:flutter/src/gestures/recognizer.dart:182:24)
E/flutter (20914): #13 TapGestureRecognizer.handleTapUp (package:flutter/src/gestures/tap.dart:486:11)
E/flutter (20914): #14 BaseTapGestureRecognizer._checkUp (package:flutter/src/gestures/tap.dart:264:5)
E/flutter (20914): #15 BaseTapGestureRecognizer.handlePrimaryPointer (package:flutter/src/gestures/tap.dart:199:7)
E/flutter (20914): #16 PrimaryPointerGestureRecognizer.handleEvent (package:flutter/src/gestures/recognizer.dart:470:9)
E/flutter (20914): #17 PointerRouter._dispatch (package:flutter/src/gestures/pointer_router.dart:76:12)
E/flutter (20914): #18 PointerRouter._dispatchEventToRoutes.<anonymous closure> (package:flutter/src/gestures/pointer_router.dart:117:9)
E/flutter (20914): #19 _LinkedHashMapMixin.forEach (dart:collection-patch/compact_hash.dart:379:8)
E/flutter (20914): #20 PointerRouter._dispatchEventToRoutes (package:flutter/src/gestures/pointer_router.dart:115:18)
E/flutter (20914): #21 PointerRouter.route (package:flutter/src/gestures/pointer_router.dart:101:7)
E/flutter (20914): #22 GestureBinding.handleEvent (package:flutter/src/gestures/binding.dart:218:19)
E/flutter (20914): #23 GestureBinding.dispatchEvent (package:flutter/src/gestures/binding.dart:198:22)
E/flutter (20914): #24 GestureBinding._handlePointerEvent (package:flutter/src/gestures/binding.dart:156:7)
E/flutter (20914): #25 GestureBinding._flushPointerEventQueue (package:flutter/src/gestures/binding.dart:102:7)
E/flutter (20914): #26 GestureBinding._handlePointerDataPacket (package:flutter/src/gestures/binding.dart:86:7)
E/flutter (20914): #27 _rootRunUnary (dart:async/zone.dart:1138:13)
E/flutter (20914): #28 _CustomZone.runUnary (dart:async/zone.dart:1031:19)
E/flutter (20914): #29 _CustomZone.runUnaryGuarded (dart:async/zone.dart:933:7)
E/flutter (20914): #30 _invoke1 (dart:ui/hooks.dart:274:10)
E/flutter (20914): #31 _dispatchPointerDataPacket (dart:ui/hooks.dart:183:5)
E/flutter (20914):
D/EGL_emulation(20914): eglMakeCurrent: 0xe4748060: ver 2 0 (tinfo 0xd9145fb0)
Here is the error message that comes out when I run the function in my app.
May I know how to fix it?
Answers:
username_1: Hi @username_0
You need to add the `SHA` key in your Firebase console [Stackoverflow Solution](https://stackoverflow.com/a/54696963)
username_1: Hi @username_0
Without additional information, we are unfortunately not sure how to resolve this issue. We are therefore
reluctantly going to close this bug for now. Please don't hesitate to comment on the bug if you have any
more information for us; we will reopen it right away!
Thanks for your contribution.
Status: Issue closed
|
TurboPack/Abbrevia | 292507694 | Title: Error creating new cab using TAbCabArchive
Question:
username_0: When creating a new cab file with TAbCabArchive, an EAbFCIAddFileError is thrown with the message "FCI cannot add file". This is due to FCICreate not having been called. Here is simple code to reproduce the issue:
```
fn := 'somefile.cab'; // use whatever filename you want
cab := TAbCabArchive.Create(fn, fmCreate);
cab.AddFiles('*.*', faAnyFile); // add whatever files you want (exception here)
cab.Save;
```
To work around the issue I can add a call to Load, which will make a call to FCICreate, but this deviates from how other archives are created (such as TAbGzipArchive).
```
fn := 'somefile.cab';
cab := TAbCabArchive.Create(fn, fmCreate);
cab.Load; // forces call to FCICreate
cab.AddFiles('*.*', faAnyFile);
cab.Save;
```
Answers:
username_0: One proposed solution is to add a call to CreateCabFile in TAbCabArchive.Add. I don't know if this is the correct solution but it works.
```
if FFCIContext = nil then
CreateCabFile;
```
username_1: In which line should I add this?
Status: Issue closed
username_1: Fixed.
|
github-vet/rangeloop-pointer-findings | 771546862 | Title: mattbostock/opentsdb-promql-frontend: vendor/k8s.io/client-go/1.5/pkg/apis/extensions/types.generated.go; 5 LoC
Question:
username_0: [Click here to see the code in its original context.](https://github.com/mattbostock/opentsdb-promql-frontend/blob/41074fdf1db049ea45636df02fa7cf2913bbbf10/vendor/k8s.io/client-go/1.5/pkg/apis/extensions/types.generated.go#L16618-L16622)
<details>
<summary>Click here to show the 5 line(s) of Go which triggered the analyzer.</summary>
```go
for _, yyv1346 := range v {
z.EncSendContainerState(codecSelfer_containerArrayElem1234)
yy1347 := &yyv1346
yy1347.CodecEncodeSelf(e)
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: <PASSWORD>bbbf10<issue_closed>
Status: Issue closed |
jdbi/jdbi | 711685168 | Title: KotlinMapper: Handling of @PropagateNull on the class level does not use the registered mapper prefix
Question:
username_0: for example, if I have a class:
@PropagateNull("id")
data class TestClass(val id: Long, ...)
if I register a KotlinMapper instance with some prefix, e.g. "aa", then it is not used when testing for column null... So I need to use @PropagateNull("aa_id") - which is not great, as one class can be mapped in different contexts with different prefixes.
Answers:
username_1: This isn't specific to the KotlinMapper; I'm getting this with a ConstructorMapper as well.
username_2: I just encountered this problem as well, which is unfortunate since I feel like `@PropagateNull` would be the perfect solution to our use case. |
stevekrouse/WoofJS | 231829935 | Title: More explicitly tell users not to use their names or identifying information
Question:
username_0: In addition to "Pick a username", can we tell users not to use any identifying information, like Scratch does? I wasn't able to see where in pull request https://github.com/username_0/WoofJS/pull/286 we did that. Can you take this on @joebeachjoebeach?<issue_closed>
Status: Issue closed |
aws/aws-cli | 542316991 | Title: --cli-input-json doesn't work with "http://" and "https://" protocol
Question:
username_0: From what I learned, `aws ... --cli-input-json <value>` requires a protocol on \<value\> (usually `file://`), but when I try to use `http://` it doesn't work. I find it useful to fetch JSON from a remote server. When I enter an incorrect URL it fails with the correct HTTP error (e.g. `received non 200 status code of 403`), but when a correct URL is filled in, this happens:
- AWS CLI version: `aws-cli/1.16.300 Python/3.7.5 Darwin/19.2.0 botocore/1.13.36`
- Command executed: `aws ecs --region eu-west-1 register-task-definition --cli-input-json http://localhost:8080/taskdef --debug`
- Output:
```
2019-12-25 11:17:10,655 - MainThread - awscli.clidriver - DEBUG - CLI version: aws-cli/1.16.300 Python/3.7.5 Darwin/19.2.0 botocore/1.13.36
2019-12-25 11:17:10,655 - MainThread - awscli.clidriver - DEBUG - Arguments entered to CLI: ['ecs', '--region', 'eu-west-1', 'register-task-definition', '--cli-input-json', 'localhost:8080/taskdef', '--debug']
2019-12-25 11:17:10,656 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function add_scalar_parsers at 0x1034dca70>
2019-12-25 11:17:10,656 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function register_uri_param_handler at 0x102ef0050>
2019-12-25 11:17:10,656 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function inject_assume_role_provider_cache at 0x102f1c440>
2019-12-25 11:17:10,664 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function attach_history_handler at 0x103384950>
2019-12-25 11:17:10,665 - MainThread - botocore.loaders - DEBUG - Loading JSON file: /usr/local/Cellar/awscli/1.16.300/libexec/lib/python3.7/site-packages/botocore/data/ecs/2014-11-13/service-2.json
2019-12-25 11:17:10,672 - MainThread - botocore.hooks - DEBUG - Event building-command-table.ecs: calling handler <function inject_commands at 0x1034254d0>
2019-12-25 11:17:10,672 - MainThread - botocore.hooks - DEBUG - Event building-command-table.ecs: calling handler <function add_waiters at 0x1034ec680>
2019-12-25 11:17:10,685 - MainThread - botocore.loaders - DEBUG - Loading JSON file: /usr/local/Cellar/awscli/1.16.300/libexec/lib/python3.7/site-packages/botocore/data/ecs/2014-11-13/waiters-2.json
2019-12-25 11:17:10,686 - MainThread - awscli.clidriver - DEBUG - OrderedDict([('family', <awscli.arguments.CLIArgument object at 0x1047a4b90>), ('task-role-arn', <awscli.arguments.CLIArgument object at 0x1047a4d50>), ('execution-role-arn', <awscli.arguments.CLIArgument object at 0x1047a4d90>), ('network-mode', <awscli.arguments.CLIArgument object at 0x1047a4dd0>), ('container-definitions', <awscli.arguments.ListArgument object at 0x1047a4c90>), ('volumes', <awscli.arguments.ListArgument object at 0x1047a4e10>), ('placement-constraints', <awscli.arguments.ListArgument object at 0x1047a4c10>), ('requires-compatibilities', <awscli.arguments.ListArgument object at 0x1047a4e50>), ('cpu', <awscli.arguments.CLIArgument object at 0x1047a4f50>), ('memory', <awscli.arguments.CLIArgument object at 0x1047a4fd0>), ('tags', <awscli.arguments.ListArgument object at 0x1047aa810>), ('pid-mode', <awscli.arguments.CLIArgument object at 0x1047aa850>), ('ipc-mode', <awscli.arguments.CLIArgument object at 0x1047aa950>), ('proxy-configuration', <awscli.arguments.CLIArgument object at 0x1047aa690>), ('inference-accelerators', <awscli.arguments.ListArgument object at 0x1047aa890>)])
2019-12-25 11:17:10,686 - MainThread - botocore.hooks - DEBUG - Event building-argument-table.ecs.register-task-definition: calling handler <function add_streaming_output_arg at 0x1034dce60>
2019-12-25 11:17:10,686 - MainThread - botocore.hooks - DEBUG - Event building-argument-table.ecs.register-task-definition: calling handler <function add_cli_input_json at 0x102f25b00>
2019-12-25 11:17:10,687 - MainThread - botocore.hooks - DEBUG - Event building-argument-table.ecs.register-task-definition: calling handler <function unify_paging_params at 0x10345bef0>
2019-12-25 11:17:10,704 - MainThread - botocore.loaders - DEBUG - Loading JSON file: /usr/local/Cellar/awscli/1.16.300/libexec/lib/python3.7/site-packages/botocore/data/ecs/2014-11-13/paginators-1.json
2019-12-25 11:17:10,704 - MainThread - botocore.hooks - DEBUG - Event building-argument-table.ecs.register-task-definition: calling handler <function add_generate_skeleton at 0x10343b710>
2019-12-25 11:17:10,705 - MainThread - botocore.hooks - DEBUG - Event before-building-argument-table-parser.ecs.register-task-definition: calling handler <bound method OverrideRequiredArgsArgument.override_required_args of <awscli.customizations.cliinputjson.CliInputJSONArgument object at 0x1047aa9d0>>
2019-12-25 11:17:10,705 - MainThread - botocore.hooks - DEBUG - Event before-building-argument-table-parser.ecs.register-task-definition: calling handler <bound method GenerateCliSkeletonArgument.override_required_args of <awscli.customizations.generatecliskeleton.GenerateCliSkeletonArgument object at 0x1047aaf50>>
2019-12-25 11:17:10,706 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.ecs.register-task-definition.family: calling handler <awscli.paramfile.URIArgumentHandler object at 0x102ee3850>
2019-12-25 11:17:10,706 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.ecs.register-task-definition.task-role-arn: calling handler <awscli.paramfile.URIArgumentHandler object at 0x102ee3850>
2019-12-25 11:17:10,707 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.ecs.register-task-definition.execution-role-arn: calling handler <awscli.paramfile.URIArgumentHandler object at 0x102ee3850>
2019-12-25 11:17:10,707 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.ecs.register-task-definition.network-mode: calling handler <awscli.paramfile.URIArgumentHandler object at 0x102ee3850>
2019-12-25 11:17:10,707 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.ecs.register-task-definition.container-definitions: calling handler <awscli.paramfile.URIArgumentHandler object at 0x102ee3850>
2019-12-25 11:17:10,707 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.ecs.register-task-definition.volumes: calling handler <awscli.paramfile.URIArgumentHandler object at 0x102ee3850>
2019-12-25 11:17:10,707 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.ecs.register-task-definition.placement-constraints: calling handler <awscli.paramfile.URIArgumentHandler object at 0x102ee3850>
2019-12-25 11:17:10,708 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.ecs.register-task-definition.requires-compatibilities: calling handler <awscli.paramfile.URIArgumentHandler object at 0x102ee3850>
2019-12-25 11:17:10,708 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.ecs.register-task-definition.cpu: calling handler <awscli.paramfile.URIArgumentHandler object at 0x102ee3850>
2019-12-25 11:17:10,708 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.ecs.register-task-definition.memory: calling handler <awscli.paramfile.URIArgumentHandler object at 0x102ee3850>
2019-12-25 11:17:10,708 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.ecs.register-task-definition.tags: calling handler <awscli.paramfile.URIArgumentHandler object at 0x102ee3850>
2019-12-25 11:17:10,708 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.ecs.register-task-definition.pid-mode: calling handler <awscli.paramfile.URIArgumentHandler object at 0x102ee3850>
2019-12-25 11:17:10,708 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.ecs.register-task-definition.ipc-mode: calling handler <awscli.paramfile.URIArgumentHandler object at 0x102ee3850>
2019-12-25 11:17:10,709 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.ecs.register-task-definition.proxy-configuration: calling handler <awscli.paramfile.URIArgumentHandler object at 0x102ee3850>
2019-12-25 11:17:10,709 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.ecs.register-task-definition.inference-accelerators: calling handler <awscli.paramfile.URIArgumentHandler object at 0x102ee3850>
2019-12-25 11:17:10,709 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.ecs.register-task-definition.cli-input-json: calling handler <awscli.paramfile.URIArgumentHandler object at 0x102ee3850>
2019-12-25 11:17:10,710 - MainThread - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): localhost:8080
2019-12-25 11:17:11,033 - MainThread - urllib3.connectionpool - DEBUG - http:localhost:8080 "GET /taskdef HTTP/1.1" 200 1068
2019-12-25 11:17:11,038 - MainThread - botocore.hooks - DEBUG - Event load-cli-arg.ecs.register-task-definition.generate-cli-skeleton: calling handler <awscli.paramfile.URIArgumentHandler object at 0x102ee3850>
2019-12-25 11:17:11,039 - MainThread - botocore.hooks - DEBUG - Event calling-command.ecs.register-task-definition: calling handler <bound method CliInputJSONArgument.add_to_call_parameters of <awscli.customizations.cliinputjson.CliInputJSONArgument object at 0x1047aa9d0>>
2019-12-25 11:17:11,039 - MainThread - awscli.clidriver - DEBUG - Exception caught in main()
Traceback (most recent call last):
File "/usr/local/Cellar/awscli/1.16.300/libexec/lib/python3.7/site-packages/awscli/customizations/cliinputjson.py", line 72, in add_to_call_parameters
input_data = json.loads(retrieved_json)
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/__init__.py", line 348, in loads
return _default_decoder.decode(s)
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
[Truncated]
parsed_globals=parsed_globals
File "/usr/local/Cellar/awscli/1.16.300/libexec/lib/python3.7/site-packages/awscli/clidriver.py", line 607, in _emit_first_non_none_response
name, **kwargs)
File "/usr/local/Cellar/awscli/1.16.300/libexec/lib/python3.7/site-packages/botocore/session.py", line 677, in emit_first_non_none_response
responses = self._events.emit(event_name, **kwargs)
File "/usr/local/Cellar/awscli/1.16.300/libexec/lib/python3.7/site-packages/botocore/hooks.py", line 356, in emit
return self._emitter.emit(aliased_event_name, **kwargs)
File "/usr/local/Cellar/awscli/1.16.300/libexec/lib/python3.7/site-packages/botocore/hooks.py", line 228, in emit
return self._emit(event_name, kwargs)
File "/usr/local/Cellar/awscli/1.16.300/libexec/lib/python3.7/site-packages/botocore/hooks.py", line 211, in _emit
response = handler(**kwargs)
File "/usr/local/Cellar/awscli/1.16.300/libexec/lib/python3.7/site-packages/awscli/customizations/cliinputjson.py", line 76, in add_to_call_parameters
% (e, retrieved_json))
awscli.argprocess.ParamError: Error parsing parameter 'cli-input-json': Invalid JSON: Expecting value: line 1 column 1 (char 0)
JSON received: http://localhost:8080/taskdef
2019-12-25 11:17:11,046 - MainThread - awscli.clidriver - DEBUG - Exiting with rc 255
Error parsing parameter 'cli-input-json': Invalid JSON: Expecting value: line 1 column 1 (char 0)
JSON received: http://localhost:8080/taskdef
```
Answers:
username_0: My guess is a wrong call at `cliinputjson.py:65` - `retrieved_json = get_paramfile(input_json, LOCAL_PREFIX_MAP)`.
username_1: It looks like we used to support `http` and `https://` as protocols for `--cli-input-json`, but around two years ago, we removed that ability in this PR: https://github.com/aws/aws-cli/pull/3402. I'm still looking as to whether this was intentional. But in the meantime, could you elaborate on how you were planning to use the remote protocol with `--cli-input-json`?
username_0: From my point of view, #3402 introduced this behaviour and it is a bug. The user is supposed to be able to intentionally disable resolving strings with the http protocol, but by default it should work. Part of `cliinputjson.py` is able to resolve the http protocol when it is enabled by `cli_follow_urlparam` (the validation part), but when it gets to `get_paramfile`, this function ignores `cli_follow_urlparam` and considers only the `LOCAL_PREFIX_MAP` from its parameter (I think remote prefixes should be added inside this function in the same way, based on `cli_follow_urlparam`).
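To illustrate username_0's reading, here is a hypothetical sketch of the dispatch (not the actual aws-cli source; the handler values and the map contents are simplified): if `get_paramfile` is handed a prefix map without the remote prefixes, an `http://` value matches nothing and comes back unresolved, so the raw URL string is what later hits the JSON parser, which matches the `Invalid JSON ... JSON received: http://localhost:8080/taskdef` failure in the traceback.
```python
# Hypothetical prefix-dispatch sketch; names and values are illustrative only.
LOCAL_PREFIX_MAP = {"file://": "read text file", "fileb://": "read binary file"}
REMOTE_PREFIX_MAP = {"http://": "fetch over HTTP", "https://": "fetch over HTTPS"}

def get_paramfile(value, prefix_map):
    for prefix, handler in prefix_map.items():
        if value.startswith(prefix):
            return f"<contents via: {handler}>"
    return None  # no prefix matched: the caller falls back to the raw string

# Passing only LOCAL_PREFIX_MAP leaves http:// values untouched:
print(get_paramfile("http://localhost:8080/taskdef", LOCAL_PREFIX_MAP))  # None
```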
Our use case for consuming input JSON over http is generating ECS task definitions. We have implemented an AWS Lambda which is able to generate ready-to-use task definitions for all our services with only a few override parameters. Currently we have to fetch the JSON using `curl` and send it to `--cli-input-json`. If this bug is fixed, we will use similar AWS Lambda JSON generators for other use cases.
username_2: Hi @username_0,
I see that you are experiencing this with the v1 client. I should note that this behavior goes away in the v2 client:
https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration.html#cliv2-migration-paramfile
So, the pattern of fetching the file with `curl` or `wget` is the preferred approach.
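For completeness, a minimal sketch of that fetch-then-pass pattern (the URL is the hypothetical endpoint from this issue; `--cli-input-json` also accepts the JSON text inline):
```python
import json
import subprocess
import urllib.request

# Fetch the task definition ourselves instead of relying on the CLI to do it.
body = urllib.request.urlopen("http://localhost:8080/taskdef").read().decode()
json.loads(body)  # fail early if the endpoint did not return valid JSON

# Hand the JSON text to the CLI inline instead of passing a URL.
subprocess.run(
    ["aws", "ecs", "register-task-definition",
     "--region", "eu-west-1", "--cli-input-json", body],
    check=True,
)
```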
Status: Issue closed
username_0: Thanks @username_2 I missed that. I'm closing this issue then. |
pcm-dpc/COVID-19 | 592560414 | Title: Web App
Question:
username_0: Hello,
using your [json](https://github.com/pcm-dpc/COVID-19/tree/master/dati-json) data, I have created a web app for browsing Italy's data at both the national and regional level.
[International data](https://github.com/NovelCOVID/API) is also included, for browsing all countries.
For the United States, data can also be browsed at the state level.
### Public interest
A responsive web app, easily usable on both smartphones and tablets.
http://username_0.github.io/covid-19/
Answers:
username_1: thank you very much for the contribution @username_0
Status: Issue closed
|
Codeception/Codeception | 250600197 | Title: Acceptance test for React app button click doesn't work
Question:
username_0: 
The registration button works in a real browser
Answers:
username_1: It looks like your button is making a request to the wrong port:
```
[Selenium browser Logs]
...
11:10:24.121 SEVERE - http://localhost:8080/tester/register - Failed to load resource: net::ERR_CONNECTION_REFUSED
11:10:24.144 SEVERE - http://localhost:8080/tester/register - Failed to load resource: net::ERR_CONNECTION_REFUSED
```
Status: Issue closed
|
RasaHQ/rasa | 862836671 | Title: Crypto dependency is not installed on 1.10.x
Question:
username_0: <!-- THIS INFORMATION IS MANDATORY - YOUR ISSUE WILL BE CLOSED IF IT IS MISSING. If you don't know your Rasa version, use `rasa --version`.
Please format any code or console output with three ticks ``` above and below.
If you are asking a usage question (e.g. "How do I do xyz") please post your question on https://forum.rasa.com instead -->
**Rasa version**: 1.10.x
**Rasa SDK version** (if used & relevant):
**Rasa X version** (if used & relevant):
**Python version**: 3.7
**Operating system** (windows, osx, ...): linux
**Issue**:
This fix https://github.com/RasaHQ/rasa/commit/d7731faa0172824b07bd47b20d861dab03a68beb should also be included on the 1.10.x branch of Rasa; the missing-algorithm error also happens there.
**Error (including full traceback)**:
```
```
**Command or request that led to error**:
```
```
**Content of configuration file (config.yml)** (if relevant):
```yml
```
**Content of domain file (domain.yml)** (if relevant):
```yml
```
Answers:
username_0: Adding `cryptography = ">=3.2.1"` to the `[tool.poetry.dependencies]` section of pyproject.toml solved the issue.
username_1: Thanks for the issue, @degiz will get back to you about it soon!
###### You may find help in the [docs](https://rasa.com/docs/) and the [forum,](https://forum.rasa.com/) too 🤗
Status: Issue closed
|
Azure/azure-cli | 434224083 | Title: az webapp --help failure
Question:
username_0: ### **This is an autogenerated template. Please review and update as needed.**
## Describe the bug
**Command Name**
`az webapp`
**Errors:**
:open_mouth:
```The command failed with an unexpected error. Here is the traceback:
cannot import name 'SnapshotRecoveryRequest'Traceback (most recent call last):
File "/opt/az/lib/python3.6/site-packages/knack/cli.py", line 206, in invoke
cmd_result = self.invocation.execute(args)
File "/opt/az/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py", line 273, in execute
parsed_args = self.parser.parse_args(args)
File "/opt/az/lib/python3.6/site-packages/knack/parser.py", line 256, in parse_args
return super(CLICommandParser, self).parse_args(args)
File "/opt/az/lib/python3.6/argparse.py", line 1730, in parse_args
args, argv = self.parse_known_args(args, namespace)
File "/opt/az/lib/python3.6/argparse.py", line 1762, in parse_known_args
namespace, args = self._parse_known_args(args, namespace)
File "/opt/az/lib/python3.6/argparse.py", line 1950, in _parse_known_args
positionals_end_index = consume_positionals(start_index)
File "/opt/az/lib/python3.6/argparse.py", line 1927, in consume_positionals
take_action(action, args)
File "/opt/az/lib/python3.6/argparse.py", line 1836, in take_action
action(self, namespace, argument_values, option_string)
File "/opt/az/lib/python3.6/argparse.py", line 1133, in __call__
subnamespace, arg_strings = parser.parse_known_args(arg_strings, None)
File "/opt/az/lib/python3.6/argparse.py", line 1762, in parse_known_args
namespace, args = self._parse_known_args(args, namespace)
File "/opt/az/lib/python3.6/argparse.py", line 1968, in _parse_known_args
start_index = consume_optional(start_index)
File "/opt/az/lib/python3.6/argparse.py", line 1908, in consume_optional
take_action(action, args, option_string)
File "/opt/az/lib/python3.6/argparse.py", line 1836, in take_action
action(self, namespace, argument_values, option_string)
File "/opt/az/lib/python3.6/argparse.py", line 1020, in __call__
parser.print_help()
File "/opt/az/lib/python3.6/argparse.py", line 2362, in print_help
self._print_message(self.format_help(), file)
File "/opt/az/lib/python3.6/site-packages/azure/cli/core/parser.py", line 156, in format_help
super(AzCliCommandParser, self).format_help()
File "/opt/az/lib/python3.6/site-packages/knack/parser.py", line 246, in format_help
is_group)
File "/opt/az/lib/python3.6/site-packages/azure/cli/core/_help.py", line 148, in show_help
super(AzCliHelp, self).show_help(cli_name, nouns, parser, is_group)
File "/opt/az/lib/python3.6/site-packages/knack/help.py", line 664, in show_help
else self.group_help_cls(self, delimiters, parser)
File "/opt/az/lib/python3.6/site-packages/knack/help.py", line 219, in __init__
child.load(options)
File "/opt/az/lib/python3.6/site-packages/azure/cli/core/_help.py", line 242, in load
loader.versioned_load(self, options)
File "/opt/az/lib/python3.6/site-packages/azure/cli/core/_help_loaders.py", line 153, in versioned_load
super(CliHelpFile, help_obj).load(parser) # pylint:disable=bad-super-call
File "/opt/az/lib/python3.6/site-packages/knack/help.py", line 163, in load
description = getattr(options, 'description', None)
[Truncated]
azure-cli 2.0.62
Extensions:
aks-preview 0.3.0
front-door 0.1.1
storage-preview 0.1.6
webapp 0.2.6
Python location '/opt/az/bin/python3'
Extensions directory '/home/username_0/.azure/cliextensions'
Python (Linux) 3.6.5 (default, Apr 4 2019, 22:51:52)
[GCC 5.4.0 20160609]
```
## Additional Context
Add any other context about the problem here.
<!-- Please do not remove these markdown comments -->
<!--auto-generated-->
Answers:
username_1: CC: @Nking92
@username_0 It looks like you have an older version of the webapp extension; can you delete it and re-try?
run
1. `az extension list` to confirm you have extensions installed
2. `az extension remove -n webapp`
& then re-try.
username_0: Sure thing, on it, will report back when done!
username_1: The error here is that you had an older version of the webapp extension installed, & the core CLI had some SDK updates that caused the error you are seeing. Is there a specific command from the extension you are trying to use? Some of the extension commands did get moved to core.
This command however is still an extension command. If you need this one, please install the latest extension: `az extension add -n webapp`. Thank you.
Status: Issue closed
|
fullcalendar/fullcalendar | 474791060 | Title: show events of child resources on main resource if child are collapsed
Question:
username_0: Hi,
maybe there is already a solution to this and I can't find it, so please excuse me if this is a duplicate.
What I want to do is show the events of child resources on the main resource if the resource is collapsed. I think that could be problematic, since the event-container is a child of the resource tr and this gets display:none.
On the data side, I managed to associate every event with its child and master resource, so at the moment the event is displayed twice in my calendar.
I hope you can understand what i'm trying to do.
Answers:
username_1: @username_2 Is this function removed in V4? I am unable to find it.
username_2: what function? `eventRender`? still exists
username_1: Sorry, I meant dateToCoord
spring-projects/spring-boot | 197854956 | Title: Some logs are dismissed during context creation
Question:
username_0: The default logging system is used (Logback).
For instance, the warning logged [here](https://github.com/spring-projects/spring-boot/blob/v1.4.3.RELEASE/spring-boot/src/main/java/org/springframework/boot/env/SpringApplicationJsonEnvironmentPostProcessor.java#L95) is dismissed.
This behavior appears because at the time the warning is logged, the Logback context contains a [filter](https://github.com/spring-projects/spring-boot/blob/v1.4.3.RELEASE/spring-boot/src/main/java/org/springframework/boot/logging/logback/LogbackLoggingSystem.java#L102) that denies everything.
This issue is similar to #7758 but it's with Logback.
Here's a quick and dirty demo:
```java
package demo;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
@SpringBootApplication
public class DemoApplication {
private static final Logger LOGGER = LoggerFactory.getLogger(DemoApplication.class);
@Value("${demo.value:empty}")
private String demoValue;
@Bean
public CommandLineRunner commandLineRunner() {
return new CommandLineRunner() {
@Override
public void run(String... args) throws Exception {
LOGGER.warn("value = {}", demoValue);
}
};
}
public static void main(String[] args) throws Exception {
// Valid JSON:
// System.setProperty("SPRING_APPLICATION_JSON", "{\"demo\":{\"value\":\"demo\"}}");
// Invalid JSON:
System.setProperty("SPRING_APPLICATION_JSON", "{");
SpringApplication.run(DemoApplication.class, args);
}
}
```
```xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>demo</groupId>
<artifactId>demo</artifactId>
<version>0.0.1-SNAPSHOT</version>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>1.4.3.RELEASE</version>
</parent>
<properties>
<java.version>1.8</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
</dependency>
</dependencies>
</project>
```
Answers:
username_1: I am not sure what we can do about it. It's a chicken and egg problem. When that code runs, the `Environment` has not been prepared yet. We want the environment to contribute to the initialization of the logging system (pattern, loggers, etc).
The filter is there on purpose to avoid an early init of the logging system. |
ElemeFE/react-amap | 520823409 | Title: 你用jsfiddle是仅供海外服务还是什么意思呢
Question:
username_0: <!--
请确保阅读下面的内容并勾选。没有勾选的 Issue 将被关闭。
-->
+ [ ] 我已经搜索过 issue,没有类似的问题,或者类似的问题仍然没有解决方案。
+ [ ] 我已经搜索过[文档](https://elemefe.github.io/react-amap/articles/start),并且仍然没有找到解决方案。
+ [ ] 我写了个问题重现的例子,链接或者代码将会贴在下面。
<!--
请确保阅读上面的内容并勾选。没有勾选的 Issue 将被关闭。
-->
#### Reproduce Example Link or Code Fragment
#### What is Expected?
#### What is actually happening?<issue_closed>
Status: Issue closed |
bluss/matrixmultiply | 386834699 | Title: Use optimal kernel parameters (architectures, matrix layouts)
Question:
username_0: I am trying to figure out what to use as optimal kernel parameters for different architectures.
For example, it looks like BLIS is using 8x4 for Sandy Bridge, but 8x6 for Haswell. Why? What led them to this setup? Specifically, since operations are usually on 4 doubles at a time, how does the 6 fit in there? Is Haswell able to separately execute a `_mm256` and a `_mm` operation *at the same time*?
Furthermore, if we have non-square kernels like for dgemm, is there a scenario where choosing 4x8 over 8x4 is better?
Answers:
username_1: You must also include src/archparams.rs in this
username_0: Another interesting bit is the choice of 8x6 over 6x8 (or 8x4 over 4x8 for Sandy Bridge, which is our current implementation), which prefers column- vs row-storage in the C matrix. This then ties in with my question here: https://github.com/username_1/matrixmultiply/issues/31
username_2: I also keep some [extra links that I didn't have time to sort](https://github.com/numforge/laser/blob/723299ea439bd10ffae0fda0601d5b153892303f/research/matrix_multiplication_optimisation_resources.md).
Anyway, in terms of performance I have a generic multi-threaded BLAS (float32, float64, int32, int64) that reaches between 97~102% of OpenBLAS on my Broadwell (AVX2 and FMA) laptop, depending on whether it runs multithreaded or serial and on float32 or float64:
- [float32 bench](https://github.com/numforge/laser/blob/723299ea439bd10ffae0fda0601d5b153892303f/benchmarks/gemm/gemm_bench_float32.nim#L187-L260)
- [float64 bench](https://github.com/numforge/laser/blob/723299ea439bd10ffae0fda0601d5b153892303f/benchmarks/gemm/gemm_bench_float64.nim#L186-L258)
## Kernel parameters
- mr * nr: 6 * 16.
- Why 16?: One dimension must be a multiple of the vector size. With AVX it's 8. Also, since the CPU can issue 2 fused multiply-adds in parallel (instruction-level parallelism), 16 is an easy choice.
- Why 6?: We have 16 general-purpose registers on SSE2~AVX2. We need to keep a pointer to packed A, packed B and the loop index over kc. So with 6 I can unroll twice, using 12 registers, and have 4 left for bookkeeping (see the sketch below).
- Why not 16x6?: C will be an MxN matrix, and very often C is a contiguous matrix that has been allocated. As I use row-major by default, 6x16 avoids transposing during the C update step and allows me to specialize when C has a unit stride along the N dimension.
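To make that register-budget argument concrete, here is a deliberately simplified micro-kernel sketch in Rust (the language of this crate). It uses a 4x8 f64 block with broadcast+FMA instead of the 6x16 f32 kernel described above, and every name in it is illustrative rather than the crate's actual code:
```rust
#[cfg(target_arch = "x86_64")]
use std::arch::x86_64::*;

/// Simplified 4x8 f64 micro-kernel: compute one register block of C = A * B
/// from packed panels of A (4 elements per k step) and B (8 elements per k step).
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx2,fma")]
unsafe fn kernel_4x8(kc: usize, a: *const f64, b: *const f64, c: *mut f64, ldc: usize) {
    // 4 rows x 8 cols = 8 ymm accumulators that live in registers for the whole loop.
    let mut acc = [_mm256_setzero_pd(); 8];
    for k in 0..kc {
        // One packed row of B: 8 doubles = two ymm loads.
        let b0 = _mm256_loadu_pd(b.add(8 * k));
        let b1 = _mm256_loadu_pd(b.add(8 * k + 4));
        for i in 0..4 {
            // Broadcast one element of the packed A column, then fused multiply-add.
            let ai = _mm256_set1_pd(*a.add(4 * k + i));
            acc[2 * i] = _mm256_fmadd_pd(ai, b0, acc[2 * i]);
            acc[2 * i + 1] = _mm256_fmadd_pd(ai, b1, acc[2 * i + 1]);
        }
    }
    // Write the finished block back to row-major C (the beta = 0 case).
    for i in 0..4 {
        _mm256_storeu_pd(c.add(i * ldc), acc[2 * i]);
        _mm256_storeu_pd(c.add(i * ldc + 4), acc[2 * i + 1]);
    }
}
```
The compiler fully unrolls the short inner loop, so the 8 accumulators plus the 2 B vectors and 1 broadcast occupy 11 of the 16 ymm registers and never spill; choosing mr and nr is exactly this game of filling the register file without overflowing it.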
## Panel parameters
Proper tuning of mc and kc is very important as well.
Currently I use an arbitrary 768 bytes for mc and 2048 bytes for kc. In my testing I lost up to 35% performance with other values.
There are various constraints for both; the Goto paper goes quite in-depth on them:
- micropanel of packed B of size kc * nr should remain in L1 cache, and so take less than half of it to not be evicted by other panels.
- Panel of packed A of size kc * mc should take a considerable part of the L2 cache, but still stay addressable by the TLB.
I had an algorithm to choose them, but as I can only query cache information and not TLB information at the moment, I removed it and decided to tune manually; a toy version of that sizing logic is sketched below.
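As an illustration, those two constraints can be turned into a toy sizing heuristic. The half-cache ratios below are assumptions for the sketch, not the tuned values mentioned above:
```rust
use std::mem::size_of;

/// Toy heuristic for kc/mc from cache sizes, following the two constraints above.
fn choose_panel_params(l1_bytes: usize, l2_bytes: usize, mr: usize, nr: usize) -> (usize, usize) {
    let elem = size_of::<f64>();
    // A kc x nr micropanel of packed B should fit in at most half of L1.
    let kc = (l1_bytes / 2) / (nr * elem);
    // The kc x mc panel of packed A should take about half of L2;
    // round mc down to a multiple of mr so it tiles evenly into micro-panels.
    let mc = ((l2_bytes / 2) / (kc * elem) / mr) * mr;
    (kc, mc)
}

fn main() {
    // Example: 32 KiB L1, 256 KiB L2, a 6x16-style blocking.
    let (kc, mc) = choose_panel_params(32 * 1024, 256 * 1024, 6, 16);
    println!("kc = {}, mc = {}", kc, mc);
}
```
A real implementation would also clamp the result against TLB reach, which is exactly the information that was missing here and why the values ended up hand-tuned.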
username_2: I forgot to add: as mentioned in the paper [Automating the last mile](https://arxiv.org/pdf/1611.08035.pdf), you need to choose your SIMD approach for the C updates.
You can go for shuffle/permute or broadcast and balance them to a varying degree.
To find the best mix you need to check from which ports those instructions can be issued. Also interleave them with FMAs to hide data-fetching latencies.
In my code I use full broadcast and no shuffle/permute, mainly because it was simpler to reason about; I didn't test other configurations.
username_1: @username_2 Wow, cool to hear from you! Thanks for the links to the papers and for sharing your knowledge!
username_1: Issue #59 allows tweaking the NC, MC, KC variables easily at compile-time which is one small step and a model for further compile time tweakability.
username_3: Another idea, which libxsmm uses, is an autotuner, such as https://opentuner.org/. OpenTuner automatically evaluates the best parameters for the architecture the code is compiled on. |
argoproj/argo-cd | 987015144 | Title: Tooltips run up into top bar
Question:
username_0: If you are trying to resolve an environment-specific issue or have a one-off question about the edge case that does not require a feature then please consider asking a question in argocd slack [channel](https://argoproj.github.io/community/join-slack).
Checklist:
* [ ] I've searched in the docs and FAQ for my answer: https://bit.ly/argocd-faq.
* [x] I've included steps to reproduce the bug.
* [x] I've pasted the output of `argocd version`.
**Describe the bug**
The tooltips on application tiles will persist even when the tile is not visible. They run up over the top bar.
**To Reproduce**
See video for visual example but basically hover over anything with a tooltip on the application tiles and then scroll up
**Expected behavior**
Would expect tooltips to disappear once their associated component is out of view.
**Screenshots**
https://user-images.githubusercontent.com/50851526/131893632-daa52375-f48f-49cf-8671-3089f1dddf34.mp4
**Version**
v2.2.0+9025318
Answers:
username_1: Can I work on this bug, @mayzhang2000?
username_2: Sure, Thank you @username_1 !
Status: Issue closed
|
ibm-openbmc/dev | 423479565 | Title: Redfish: Code Update - Support new RedfishHttpPushUri
Question:
username_0: Per https://github.com/DMTF/Redfish/pull/3296, a new URI property to push the software image at, RedfishHttpPushUri, is on its way. This is to track the work to move over to this new property. Should be minimal effort, seems like mostly a conformance thing.
Answers:
username_1: refresh
username_1: refresh
username_1: refresh
username_1: refresh
username_1: refresh
username_1: refresh
username_1: refresh
username_1: refresh
username_1: refresh
username_1: refresh
username_1: refresh
username_1: refresh
username_1: refresh
username_0: This was replaced by MultipartHttpPushUri so closing out.
Status: Issue closed
|
Colin-Jay/BlogsComment | 553012058 | Title: Test2 - Colin-Jay's Blogs
Question:
username_0: https://colin-jay.cn/2020/01/22/Test2/
Answers:
username_0: Welcome to visit.
username_1: # Bowing to the mighty Xu

Status: Issue closed
username_0: https://colin-jay.cn/2020/01/22/Test2/ |
mholt/PapaParse | 674379326 | Title: Feature: Flatten in Json to CSV
Question:
username_0: Hi,
Can we have an option in `config` to flatten the JSON before converting to CSV? Yes, I can use a lib for that, but I thought it would be nice if it were built in, making this library an all-around JSON-to-CSV converter and vice versa.
Eg.
JSON
```
[
{
"Id": "1",
"Account": {
"Id": "A1"
},
"Contact": {
"Id": "C1"
}
},
{
"Id": "2",
"Account": {
"Id": "A1"
},
"Contact": {
"Id": "C2"
}
}
]
```
Flatten
```
[
{
"Id": "1",
"Account.Id": "A1",
"Contact.Id": "C1"
},
{
"Id": "2",
"Account.Id": "A1",
"Contact.Id": "C2"
}
]
```
CSV
```
"Id", "Account.Id", "Contact.Id"
"1","A1","C1"
"2","A1","C2"
```
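The flattening pass itself is just a small recursion over the object tree. PapaParse is JavaScript, but here is the idea sketched in Rust with serde_json purely to show the shape (the function name and the dot separator are illustrative choices):
```rust
use serde_json::{json, Map, Value};

/// Flatten nested objects into dotted keys: {"Account":{"Id":"A1"}} -> {"Account.Id":"A1"}.
fn flatten_into(prefix: &str, value: &Value, out: &mut Map<String, Value>) {
    match value {
        Value::Object(fields) => {
            for (key, inner) in fields {
                let path = if prefix.is_empty() {
                    key.clone()
                } else {
                    format!("{}.{}", prefix, key)
                };
                flatten_into(&path, inner, out);
            }
        }
        // Leaves (strings, numbers, booleans, ...) become one CSV column each.
        leaf => {
            out.insert(prefix.to_string(), leaf.clone());
        }
    }
}

fn main() {
    let row = json!({"Id": "1", "Account": {"Id": "A1"}, "Contact": {"Id": "C1"}});
    let mut flat = Map::new();
    flatten_into("", &row, &mut flat);
    // flat now holds {"Id": "1", "Account.Id": "A1", "Contact.Id": "C1"}
    println!("{}", Value::Object(flat));
}
```
Arrays would need a policy of their own (indexes in the key, or one row per element), which is part of what makes this hard to generalize inside the library.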
Answers:
username_1: Hi,
Thanks for your suggestion, but I think we should reject it. PapaParse is a library for parsing and unparsing CSV files; it is not designed to transform data. It is up to the user of the library to pass the data in the right format.
I understand that you think it would be great to have this feature for your use case, but we cannot add all the transform functions that our users may require; otherwise we would become a bloated library, and that would for sure have a penalty on our performance.
Thanks for your understanding!
Status: Issue closed
|
sysrepo/sysrepo | 260244742 | Title: How to use xpath expressions such as starts-with, ends-with or wildcards in sysrepo?
Question:
username_0: I want to delete all IP address nodes which start with fe80. I was able to use contains without any issues, but it deletes IP addresses such as 12:fe80:1 if present.
I tried the following but I am getting validation-failed messages.
/ietf:interfaces/interface/inet-ip:ipv6/address[contains(.,'fe80')] -> not the thing I want.
/ietf:interfaces/interface/inet-ip:ipv6/address[ip='fe80')] -> not working
Regards,
Swathin
Answers:
username_1: Hi Swathin,
for example, the expression
```
/ietf:interfaces/interface/inet-ip:ipv6/address[starts-with(ip, 'fe80')]
```
should do what you want, simple enough. The `ip` node gets implicitly converted to `string`. As long as it is a leaf/leaf-list, it is performed exactly as you would expect, its value is used.
Generally, you can use patterns as well, the predicate would then look like this
```
[re-match(ip, 'fe80.*')]
```
because this function performs implicit anchoring of both the beginning and the end ([RFC ref](https://tools.ietf.org/html/rfc7950#section-10.2.1)), hence the `.*` to allow any suffix.
Regards,
Michal
Status: Issue closed
username_0: Thanks Michal for the quick response. This is exactly what I want.
Regards,
Swathin
username_1: Also, `@ip` works with attributes, under normal circumstances you will never need to use this in YANG.
username_0: Is there any way to supress the callback functions for an internal datastore operation? I tried usinh the session in the init callback function but it is making the changes in the startup datastore.
Regards,
Swathin
username_0: I tried with @ as in normal xpath, but it was throwing a validation-failed error.
Regards,
Swathin
username_1: Hi Swathin,
firstly, you are not talking about the original issue anymore, please do not mix issues next time.
Secondly, normal XPath and YANG XPath differ minimally and `@` refers to attributes in both.
Finally, regarding your issue, I do not know what you mean by internal datastore or what callback you want to suppress. Please say what exactly you are doing, what behavior you see, and what would you like to happen instead (preferably in a new issue).
Regards,
Michal
username_0: Hi Michal,
I will start a new issue/thread for the new issue that I am facing. Thanks for the quick response.
Regards,
Swathin |
sinonjs/sinon | 178219707 | Title: Matcher being called when setting up stub behavior
Question:
username_0: * Sinon version : 1.17.6
* Environment : Node
* Other libraries you are using: `chai`, `sinon-chai`
**How to reproduce**
I am setting up two behaviors for a stub using a custom matcher:
```js
fs.readFile = sinon.stub();
fs.readFile
.withArgs(sinon.match(function (fileName) { return endsWith(fileName, 'suffixa'); }))
.yields(null, 'contents A');
fs.readFile
.withArgs(sinon.match(function (fileName) { return endsWith(fileName, 'suffixb'); }))
.yields(null, 'contents B');
```
For some reason, when that second `.withArgs` is called, that first matcher's function is invoked and is passed the matcher object. This is before the stub itself is ever called.
Answers:
username_1: Could you provide a code example that shows how this fails?
We have an issue template for good reasons.
username_0: That was my code example that fails. Here it is slightly reworded with some blanks filled in:
```es6
const { match, stub } = require('sinon');
const readFile = stub();
readFile
.withArgs(match(fileName => endsWith(fileName, 'suffixA')))
.yields(null, 'contents A');
readFile
.withArgs(match(fileName => endsWith(fileName, 'suffixB')))
.yields(null, 'contents B');
function endsWith(str, suffix) {
return str.indexOf(suffix) + suffix.length === str.length;
}
```
Error:
```
return str.indexOf(suffix) + suffix.length === str.length;
^
TypeError: str.indexOf is not a function
at endsWith (/Users/jpage/Code/sinon-1154-repro/repro.js:14:14)
at Object.readFile.withArgs.fileName [as test] (/Users/jpage/Code/sinon-1154-repro/repro.js:6:31)
at deepEqual (/Users/jpage/Code/sinon-1154-repro/node_modules/sinon/lib/sinon/util/core.js:194:26)
at Object.deepEqual (/Users/jpage/Code/sinon-1154-repro/node_modules/sinon/lib/sinon/util/core.js:243:26)
at Function.matches (/Users/jpage/Code/sinon-1154-repro/node_modules/sinon/lib/sinon/spy.js:287:27)
at matchingFake (/Users/jpage/Code/sinon-1154-repro/node_modules/sinon/lib/sinon/spy.js:50:30)
at Function.withArgs (/Users/jpage/Code/sinon-1154-repro/node_modules/sinon/lib/sinon/spy.js:249:33)
at Object.<anonymous> (/Users/jpage/Code/sinon-1154-repro/repro.js:10:4)
at Module._compile (module.js:556:32)
at Object.Module._extensions..js (module.js:565:10)
```
Again, what's happening is the matcher function is being called during setup, even though the stub itself wasn't actually called.
username_0: Just reproduced with that code using the reported version and the latest, `1.17.6`, as well.
username_1: Right, that is confusing indeed!
I think you've uncovered a bug that we introduced when we merged https://github.com/sinonjs/sinon/pull/873
@username_2 you worked on this a while back, do you think we need a case in `deepEqual.use`, that examines if both `a` and `b` are matchers, and compares them, instead of trying to execute one to match the other?
https://github.com/sinonjs/sinon/blob/master/lib/sinon/util/core/deep-equal.js#L97-L99
username_2: @username_0 What is `str` in this instance? Can you provide a failing test case? I'm looking at the code and I don't see why it's failing, but I'll look into it.
username_0: I'm not 100% sure; str resembles a matcher object to me, but I don't know for sure. I only know that it's not a string, and it's not expected that `withArgs` is even calling the matcher function.
username_1: It's a matcher, the second call to `withArgs` passes it along, and it ends up in `deepEqual.use` at https://github.com/sinonjs/sinon/blob/master/lib/sinon/util/core/deep-equal.js#L97-L99
username_1: I've restructured the example a bit, to try and reduce it further.
Here's my version
```javascript
const { match, stub } = require('sinon');
const readFile = stub();
function endsWith(str, suffix) {
return str.indexOf(suffix) + suffix.length === str.length;
}
function suffixA(fileName) {
return endsWith(fileName, 'suffixa');
}
function suffixB(fileName) {
return endsWith(fileName, 'suffixb');
}
const argsA = match(suffixA);
const argsB = match(suffixB);
readFile
.withArgs(argsA);
readFile
.withArgs(argsB);
```
Running that, gives the same error
```
return str.indexOf(suffix) + suffix.length === str.length;
^
TypeError: str.indexOf is not a function
```
username_1: Anyway, it is near midnight here, I should go to bed
Status: Issue closed
|
middleman/middleman | 223893615 | Title: Build error using liquid
Question:
username_0: I'm using middleman 4.0 with the liquid template engine. When running the middleman server, everything is fine and I get no errors. But as soon as I want to build the website, I get this error:
`Template local 'data' tried to overwrite an existing context value. Please rename the key when passing to 'locals'`.
To be honest I don't know where the error could be, since I'm using everything as it shipped, with no extra config except the `set :liquid, :layout_engine => :liquid` in config.rb. I only have an index.html.liquid file with some includes.
I hope somebody can help me out on this one. Thank you!
Answers:
username_1: @username_3 @username_0
Verified this.
Seems a raw liquid usage is producing this error
http://www.rubydoc.info/github/middleman/middleman/Middleman/Renderers/Liquid
```
def manipulate_resource_list(resources)
return resources unless app.extensions[:data]
resources.each do |resource|
next if resource.file_descriptor.nil?
next unless resource.file_descriptor[:full_path].to_s =~ %r{\.liquid$}
# Convert data object into a hash for liquid
resource.add_metadata locals: {
data: stringify_recursive(app.extensions[:data].data_store.to_h)
}
end
end
```
data is referenced here
I have tested with and without the data folder and it's producing this issue.
username_1: @username_3
Just looking into liquid more; there's a definite issue here, I think: if you want to include partials from another directory, it will fail.
Should be an easy fix, but just wondered if you have some magic for this.
```
# File 'middleman-core/lib/middleman-core/renderers/liquid.rb', line 14
def read_template_file(template_path)
file = app.files.find(:source, "_#{template_path}.liquid")
raise ::Liquid::FileSystemError, "No such template '#{template_path}'" unless file
file.read
end
```
username_2: I have run into this and can confirm it happens out of the box.
### Steps to reproduce:
* `middleman init`
* Add 'liquid' to Gemfile and `bundle install`
* Rename `source/index.html.erb` to `source/index.html.liquid`
* `middleman build`
Produces the error:
<details>
<summary>
uncaught throw "Template local `data` tried to overwrite an existing context value. Please rename the key when passing to `locals`"</summary>
```
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/middleman-core-4.3.8/lib/middleman-core/template_renderer.rb:140:in `throw'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/middleman-core-4.3.8/lib/middleman-core/template_renderer.rb:140:in `block in render'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/middleman-core-4.3.8/lib/middleman-core/template_renderer.rb:134:in `each'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/middleman-core-4.3.8/lib/middleman-core/template_renderer.rb:134:in `render'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/contracts-0.13.0/lib/contracts/method_reference.rb:43:in `send_to'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/contracts-0.13.0/lib/contracts/call_with.rb:76:in `call_with'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/contracts-0.13.0/lib/contracts/method_handler.rb:138:in `block in redefine_method'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/middleman-core-4.3.8/lib/middleman-core/sitemap/resource.rb:154:in `render'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/contracts-0.13.0/lib/contracts/method_reference.rb:43:in `send_to'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/contracts-0.13.0/lib/contracts/call_with.rb:76:in `call_with'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/contracts-0.13.0/lib/contracts/method_handler.rb:138:in `block in redefine_method'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/middleman-core-4.3.8/lib/middleman-core/rack.rb:112:in `process_request'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/middleman-core-4.3.8/lib/middleman-core/rack.rb:66:in `block in call'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/middleman-core-4.3.8/lib/middleman-core/rack.rb:65:in `catch'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/middleman-core-4.3.8/lib/middleman-core/rack.rb:65:in `call'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/rack-2.2.3/lib/rack/urlmap.rb:74:in `block in call'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/rack-2.2.3/lib/rack/urlmap.rb:58:in `each'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/rack-2.2.3/lib/rack/urlmap.rb:58:in `call'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/middleman-autoprefixer-2.10.1/lib/middleman-autoprefixer/extension.rb:55:in `call'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/rack-2.2.3/lib/rack/head.rb:12:in `call'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/rack-2.2.3/lib/rack/lint.rb:50:in `_call'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/rack-2.2.3/lib/rack/lint.rb:38:in `call'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/rack-2.2.3/lib/rack/builder.rb:244:in `call'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/rack-2.2.3/lib/rack/mock.rb:84:in `request'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/rack-2.2.3/lib/rack/mock.rb:57:in `get'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/middleman-core-4.3.8/lib/middleman-core/builder.rb:232:in `block in output_resource'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/activesupport-5.2.4.3/lib/active_support/notifications.rb:170:in `instrument'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/middleman-core-4.3.8/lib/middleman-core/util.rb:21:in `instrument'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/middleman-core-4.3.8/lib/middleman-core/builder.rb:225:in `output_resource'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/contracts-0.13.0/lib/contracts/method_reference.rb:43:in `send_to'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/contracts-0.13.0/lib/contracts/call_with.rb:76:in `call_with'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/contracts-0.13.0/lib/contracts/method_handler.rb:138:in `block in redefine_method'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/parallel-1.19.2/lib/parallel.rb:508:in `call_with_index'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/parallel-1.19.2/lib/parallel.rb:472:in `process_incoming_jobs'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/parallel-1.19.2/lib/parallel.rb:454:in `block in worker'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/parallel-1.19.2/lib/parallel.rb:445:in `fork'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/parallel-1.19.2/lib/parallel.rb:445:in `worker'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/parallel-1.19.2/lib/parallel.rb:436:in `block in create_workers'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/parallel-1.19.2/lib/parallel.rb:435:in `each'
== Finishing Request: javascripts/site.js (0.0s)
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/parallel-1.19.2/lib/parallel.rb:435:in `each_with_index'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/parallel-1.19.2/lib/parallel.rb:435:in `create_workers'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/parallel-1.19.2/lib/parallel.rb:374:in `work_in_processes'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/parallel-1.19.2/lib/parallel.rb:278:in `map'
[Truncated]
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/middleman-cli-4.3.8/bin/middleman:70:in `<top (required)>'
/Users/username_2/.rbenv/versions/2.7.1/bin/middleman:23:in `load'
/Users/username_2/.rbenv/versions/2.7.1/bin/middleman:23:in `<top (required)>'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/site_ruby/2.7.0/bundler/cli/exec.rb:63:in `load'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/site_ruby/2.7.0/bundler/cli/exec.rb:63:in `kernel_load'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/site_ruby/2.7.0/bundler/cli/exec.rb:28:in `run'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/site_ruby/2.7.0/bundler/cli.rb:476:in `exec'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/site_ruby/2.7.0/bundler/vendor/thor/lib/thor/command.rb:27:in `run'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/site_ruby/2.7.0/bundler/vendor/thor/lib/thor/invocation.rb:127:in `invoke_command'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/site_ruby/2.7.0/bundler/vendor/thor/lib/thor.rb:399:in `dispatch'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/site_ruby/2.7.0/bundler/cli.rb:30:in `dispatch'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/site_ruby/2.7.0/bundler/vendor/thor/lib/thor/base.rb:476:in `start'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/site_ruby/2.7.0/bundler/cli.rb:24:in `start'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/bundler-2.1.4/exe/bundle:46:in `block in <top (required)>'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/site_ruby/2.7.0/bundler/friendly_errors.rb:123:in `with_friendly_errors'
/Users/username_2/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/bundler-2.1.4/exe/bundle:34:in `<top (required)>'
/Users/username_2/.rbenv/versions/2.7.1/bin/bundle:23:in `load'
/Users/username_2/.rbenv/versions/2.7.1/bin/bundle:23:in `<main>'
```
</details>
username_3: Pushing v4.3.9 out. This was already fixed on the main branch, but not the 4.x series
Status: Issue closed
username_2: Wow, that's a fast fix. Thank you! |
OatmealDome/DolphiniOS-Issue-Tracker | 608526116 | Title: Sonic Colors Works, But Doesn’t have all GUI
Question:
username_0: Sonic Colors works perfectly, except it shows a black screen every now and then & the GUI is totally invisible
Answers:
username_0: It only happens on DolphiniOS, I’ve tried sonic colors on normal pc dolphin, it has the GUI
username_1: Now tracking at #94. Thanks for your report.
Status: Issue closed
username_2: Cool |
manga-download/hakuneko | 561811121 | Title: Japscan problem downloading and reading
Question:
username_0: **Version:** [6.1.7@86dbf8](https://github.com/manga-download/hakuneko/commits/86dbf8)
Hello,
On Japscan, the images do not load correctly and it is impossible to download the chapter (Martial Arts Reigns chapter 1). Sometimes the image is loaded when I start reading, but it remains empty in the preview, while other times the images don't load at all.
The problem also occurs for other mangas except when there are few pages to load.
I tried with different Wi-Fi networks and on Windows 10 / Linux, and I have this problem no matter where.
The console returns 2 main errors :
GET https://www.japscan.co/lecture-en-ligne/martial-arts-reigns/1/40.html 503
and
GET connector://japscan/?payload=<KEY> net::ERR_NOT_IMPLEMENTED
Thank you in advance if you can solve the problem.
<details><summary>What has been done?</summary>
☑ Read the Quick Start Guide: https://git.io/hakuneko-quickstart
☑ Tried Troubleshooting: https://git.io/hakuneko-troubleshooting
☑ Done Website Interaction: https://git.io/hakuneko-websiteinteraction
☑ Searched for Similar Issues: https://git.io/hakuneko-issuesearch
</details>
Answers:
username_1: Try HakuNeko version 6.1.7
username_2: Thanks for your answer.
I'm already using version 6.1.7
I increased the throttle to 2000 ms; downloading works now, thanks.
Will the live preview issue be fixed later, or does the problem come from the website?
Thanks again for your awesome work !
Status: Issue closed
username_1: Fixing live preview will not be done, it would require a massive amount of work |
phusion/passenger | 944480773 | Title: File has unexpected size (7635 != 6683). Mirror sync in progress?
Question:
username_0: When running a new install with Ubuntu 20.04 apache and apt, im getting this error:
Failed to fetch https://oss-binaries.phusionpassenger.com/apt/passenger/dists/focal/main/binary-amd64/Packages.bz2 File has unexpected size (7635 != 6683). Mirror sync in progress?
Any clues on this?
Answers:
username_1: Same issue here:
```
E: Failed to fetch https://oss-binaries.phusionpassenger.com/apt/passenger/dists/focal/main/binary-amd64/Packages.gz File has unexpected size (7637 != 6595). Mirror sync in progress? [IP: 172.16.31.10 443]
Hashes of expected file:
- Filesize:6595 [weak]
- SHA512:4ebc1374c4ece8fa0f1a5c672d857f4c14fefb251ba4eaa7179ae941b412e2d72ccddbf0b1558244acfc0d5021e1f1a0f0ebca268cdd16e6675f9eeecbbb5dba
- SHA256:276fe3102e8b7f135a907f9eced90982cc5ba79260abdc6add22685bb34b2fc4
- SHA1:644e9d1ba3e4ae9b725fb331f710fb68aa2a2c96 [weak]
- MD5Sum:ef7fa07af528c68ff0bdb747f6034cc8 [weak]
Release file created at: Wed, 02 Jun 2021 18:27:43 +0000
```
username_2: I am also seeing this same issue. It is breaking my team's builds.
It looks like someone updated the package today (14-Jul-2021 07:29). My team has tried doing an `apt-get clean` before `apt-get update`, but that hasn't helped.
It seems like something is out of sync with that package which is causing the file size to not match what is expected.
username_3: Same here; this is a blocker for our build system as it stands.
username_4: fixed now, sorry.
Status: Issue closed
|
rust-lang/rust | 918718579 | Title: Confusing error when transparently dereferencing
Question:
username_0: <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
If you cannot produce a minimal reproduction case (something that would work in
isolation), please provide the steps or even link to a repository that causes
the problematic output to occur.
-->
Given the following code: https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=f3f0df13f266b9a222fbeedc4a8f6168
```rust
struct X
{
pub a: i32,
pub b: i32,
}
fn main() {
let y = std::sync::Mutex::new(X{a:1, b: 2});
let g = &mut (y.lock().unwrap());
let a = &mut g.a;
let b = &mut g.b;
assert!(*a + *b == 3);
}
```
The current output is:
```
error[E0499]: cannot borrow `*g` as mutable more than once at a time
--> src/main.rs:14:18
|
13 | let a = &mut g.a;
| - first mutable borrow occurs here
14 | let b = &mut g.b;
| ^ second mutable borrow occurs here
15 |
16 | assert!(*a + *b == 3);
| -- first borrow later used here
error: aborting due to previous error
For more information about this error, try `rustc --explain E0499`.
error: could not compile `playground`
```
<!-- The following is not always necessary. -->
Ideally the output should tell the user that `g.` is not a simple "member access", which is why they can't borrow multiple members. It could even, in principle, tell them to capture `*g` first, then access its `a` and `b` members, but I suspect that isn't simple (or even legal) in general.
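For reference, a minimal version of that capture-`*g`-first workaround, reborrowing through a plain `&mut X` so the borrow checker can split the field borrows:
```rust
use std::sync::Mutex;

struct X {
    pub a: i32,
    pub b: i32,
}

fn main() {
    let y = Mutex::new(X { a: 1, b: 2 });
    let mut guard = y.lock().unwrap();
    // Reborrow once through the guard: `g` is a plain `&mut X`, not the MutexGuard.
    let g: &mut X = &mut *guard;
    // Field borrows through `&mut X` are disjoint places, so both are allowed.
    let a = &mut g.a;
    let b = &mut g.b;
    assert!(*a + *b == 3);
}
```
The original fails because each `g.a`/`g.b` access goes through `DerefMut` on the guard, which borrows the whole guard every time. |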
elastic/elasticsearch-net | 511042263 | Title: 7.4.1 meta ticket
Question:
username_0: Ticket to track progress on 7.4.1 client release.
## 7.4
### Enhancements
**Authorization**
- [ ] Add `manage_own_api_key` cluster privilege [#45897](https://github.com/elastic/elasticsearch/pull/45897) (issue: [#40031](https://github.com/elastic/elasticsearch/issues/40031))
* (see above) Consider `owner` flag when retrieving/invalidating keys with API key service [#45421](https://github.com/elastic/elasticsearch/pull/45421) (issue: [#40031](https://github.com/elastic/elasticsearch/issues/40031))
- [ ] REST API changes for manage-own-api-key privilege [#44936](https://github.com/elastic/elasticsearch/pull/44936) (issue: [#40031](https://github.com/elastic/elasticsearch/issues/40031))
- [ ] Simplify API key service API [#44935](https://github.com/elastic/elasticsearch/pull/44935) (issue: [#40031](https://github.com/elastic/elasticsearch/issues/40031))
**Geo**
- [ ] Support WKT point conversion to geo_point type [#44107](https://github.com/elastic/elasticsearch/pull/44107) (issue: [#41821](https://github.com/elastic/elasticsearch/issues/41821))
### Bug fixes
**Geo**
- [ ] Geo: add validator that only checks altitude [#43893](https://github.com/elastic/elasticsearch/pull/43893) - .NET client might need to remove range checks?
### HTTP changes
- [ ] Starting in version 7.4, a `+` in a URL will be encoded as `%2B` by all REST API functionality. Prior versions handled a `+` as a single space. If your application requires handling `+` as a single space you can return to the old behaviour by setting the system property `es.rest.url_plus_as_space` to `true`. Note that this behaviour is deprecated and setting this system property to `true` will cease to be supported in version 8.
## 7.4.1
Update when released: https://www.elastic.co/guide/en/elasticsearch/reference/current/release-notes-7.4.1.html
Answers:
username_1: Closing this as 7.4.1 is now released
Status: Issue closed
|
home-assistant/core | 1126098758 | Title: [aiounifi.websocket] Websocket disconnected appears every ±20 seconds in the log
Question:
username_0: ### The problem
I was using Unifi Direct as a device tracker, then disabled that in configuration.yaml and installed UniFi Network. The integration appears to pick up all the entities, then after around 20 seconds, all device_tracker entities show as unavailable in Developer Tools. They reappear ±10 seconds later with all their details, then go back to unavailable a few seconds later. This repeats continuously.
### What version of Home Assistant Core has the issue?
core-2022.2.2
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant Supervised
### Integration causing the issue
Unifi Network
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/unifi/
### Diagnostics information
```
2022-02-07 12:06:08 INFO (MainThread) [homeassistant.components.unifi] Will try to reconnect to UniFi Network
2022-02-07 12:06:08 DEBUG (MainThread) [aiounifi.controller] https://053.unificloud.co.uk:8443/api/login
2022-02-07 12:06:09 DEBUG (MainThread) [aiounifi.controller] 200 application/json <ClientResponse(https://053.unificloud.co.uk:8443/api/login) [200 ]>
<CIMultiDictProxy('Vary': 'Origin', 'Access-Control-Allow-Credentials': 'true', 'Access-Control-Expose-Headers': 'Access-Control-Allow-Origin,Access-Control-Allow-Credentials', 'Set-Cookie': 'unifises=REDACTED; Path=/; Secure; HttpOnly', 'Set-Cookie': 'csrf_token=REDACTED; Path=/; Secure', 'X-Frame-Options': 'DENY', 'Content-Type': 'application/json;charset=UTF-8', 'Content-Length': '30', 'Date': 'Mon, 07 Feb 2022 12:06:08 GMT')>
2022-02-07 12:06:09 DEBUG (MainThread) [aiounifi.websocket] Websocket starting
2022-02-07 12:06:09 DEBUG (MainThread) [aiounifi.websocket] Websocket running
2022-02-07 12:06:09 INFO (MainThread) [homeassistant.components.unifi] Connected to UniFi Network
2022-02-07 12:06:09 DEBUG (MainThread) [homeassistant.components.unifi.unifi_entity_base] Updating client entity device_tracker.appletv_study (REDACTED)
… 24 more device tracker entries for other entities
2022-02-07 12:06:32 DEBUG (MainThread) [aiounifi.websocket] Websocket disconnected
2022-02-07 12:06:32 WARNING (MainThread) [homeassistant.components.unifi] Lost connection to UniFi Network
2022-02-07 12:06:32 DEBUG (MainThread) [homeassistant.components.unifi.unifi_entity_base] Updating client entity device_tracker.appletv_study (REDACTED)
… 24 more device tracker entries for other entities
2022-02-07 12:06:47 INFO (MainThread) [homeassistant.components.unifi] Will try to reconnect to UniFi Network
2022-02-07 12:06:47 DEBUG (MainThread) [aiounifi.controller] https://053.unificloud.co.uk:8443/api/login
2022-02-07 12:06:48 DEBUG (MainThread) [aiounifi.controller] 200 application/json <ClientResponse(https://053.unificloud.co.uk:8443/api/login) [200 ]>
<CIMultiDictProxy('Vary': 'Origin', 'Access-Control-Allow-Credentials': 'true', 'Access-Control-Expose-Headers': 'Access-Control-Allow-Origin,Access-Control-Allow-Credentials', 'Set-Cookie': 'unifises=REDACTED; Path=/; Secure; HttpOnly', 'Set-Cookie': 'csrf_token=REDACTED; Path=/; Secure', 'X-Frame-Options': 'DENY', 'Content-Type': 'application/json;charset=UTF-8', 'Content-Length': '30', 'Date': 'Mon, 07 Feb 2022 12:06:46 GMT')>
2022-02-07 12:06:48 DEBUG (MainThread) [aiounifi.websocket] Websocket starting
2022-02-07 12:06:48 DEBUG (MainThread) [aiounifi.websocket] Websocket running
2022-02-07 12:06:48 INFO (MainThread) [homeassistant.components.unifi] Connected to UniFi Network
2022-02-07 12:06:48 DEBUG (MainThread) [homeassistant.components.unifi.unifi_entity_base] Updating client entity device_tracker.appletv_study (REDACTED)
… 24 more device tracker entries for other entities
2022-02-07 12:07:11 DEBUG (MainThread) [aiounifi.websocket] Websocket disconnected
[Truncated]
2022-02-07 12:07:27 INFO (MainThread) [homeassistant.components.unifi] Connected to UniFi Network
```
### Example YAML snippet
_No response_
### Anything in the logs that might be useful for us?
```txt
unificloud.co.uk is a 3rd-party hosted controller environment. I had the same issue with Unifi Network when I installed HA; that was on HA core-2021.12 on Ubuntu 20.04.3 LTS.
I've configured HA on a separate VPN, using Nginx as a reverse proxy (for external access), and also have Mosquitto broker & MariaDB installed (in case those make any difference).
I'm unsure if this issue is local, related to or caused by using unificloud.co.uk, or how/where to identify what is causing the disconnects?
```
### Additional information
If necessary for debugging, I could maybe setup a locally hosted UniFi Controller (UniFi Network Application), without any devices for testing - that was going to be my next step to rule out unificloud or something local if there's not another easier way!?
Answers:
username_1: What version of controller are you running?
The only thing I can make out from the logs is that the websocket gets closed at 15-second intervals, which is the same as the heartbeat timer. I haven't gotten any other reports like this, and you've had it on multiple setups, so I guess it's related to your infrastructure.
username_0: Network 6.5.55 is currently being used on unificloud.
Is there anywhere else I can look / log to verify the cause of the disconnects? - Do you think it's most likely something on the HA host device/OS or unificloud that's causing it? (Not sure where the heartbeat you reference is located?)
username_1: The heartbeat functionality is set here https://github.com/username_1/aiounifi/blob/b33c408147635c1de0f85c8535ac2decbde74d6d/aiounifi/websocket.py#L85
The best way to debug this is probably running the library directly; then you can add logging and be more in control.
username_0: I’ve just cleanly installed Ubuntu 20.04 LTS and HA Supervisor 2022.3 on a different device - I then only installed UniFi Network and got the same issue of disconnecting. (So that rules out my device and any other HA addons but it’s the same ver of Ubuntu)
Pretty new to HA; how do I run the library locally? I assume I need to copy it to the custom folder, but I'm not sure what git command to use or if that will disable the existing version. (Happy to keep trying to identify/resolve if I can get a little help!)
I’ll remove HA from my test device and install UniFi Controller tomorrow to try and narrow it down further.
Status: Issue closed
|
empress/ember-showdown-prism | 686611812 | Title: Line Numbers in Code Blocks
Question:
username_0: How can I configure it not to show the line numbers for my code blocks?
```js
posts: [
{
id: 1,
title: 'This is my first post',
body: 'Some description would go here'
}
]
```
When I view the page I see this

I opened a ticket with ember-prism, thinking maybe that comes from them, but they say that on their side the default is no line numbers
seanhorgan98/fantasyhockey | 478737919 | Title: Mod ability to create games
Question:
username_0: Don't know how to structure data in this
Main problem with this will be keeping data integrity - need to update everywhere.
Do we need records of the games? Or is just updating player stats enough?
Fixtures and Results looking like a future feature / patch atm
Answers:
username_0: @username_1
username_1: Think we should have a Collection for games as well as just updating all data so we can look at individual games later on.
username_0: Creating games is now Possible
Games created with unique ID's, list of involved players and opponent fields
Remaining task is adding data to these players
Status: Issue closed
username_0: This is now fully handled and as far as I know, reliable.
Will walk you through the process in Glasgow
nwestfall/musicstand | 397189199 | Title: Proper Authorization
Question:
username_0: Support authorization through planning center oauth rather than require each organization to enter their application and key.
- [ ] Register application with planning center
- [ ] Support oauth login
- [ ] Replace API calls with oauth authentication |
learn-co-curriculum/CashMoneyBlocks | 47717667 | Title: None
Question:
username_0: Labeling this as icebox and will revisit the suggestion later
Status: Issue closed
Answers:
username_1: I'm actually just going to close this and remove that section of the lab. Static libraries are super passé... CocoaPods or actual Frameworks are the way forward. |
virtualmlnet/hackathon-2020 | 735304425 | Title: ML.NET Hackathon Idea Probably something gaming-related
Question:
username_0: ## Hackathon Idea
We want to do something with gaming, maybe integrating it with OpenRA or ManagedDoom (classic games implemented in C#). There is still an ongoing discussion on the details, so I might change them at a later time.
### Your name
Frank, (Dev)
Frank's Cohabitating Girlfriend, (Junior dev, and team lead)
Frank and his GF's Drinking Buddy, (Dev)
Frank and his GF's Other Drinking Buddy, (Senior-ish dev)
(Names will be added when I get a written consent; GDPR)
### Team name
Omega Force
We are 3 .net-devs, one Kotlin dev and beer!
### Brief Description
In a game like Red Alert (or OpenRA), there should be the possibility of introducing machine learning, and there are very few examples online of using ML.net for gaming-related stuff. Maybe a neural network to kill it in OpenRA's multiplayer? Or perhaps create a learning AI that uses the multiplayer matches' data to base a game AI on, which can be merged into the actual game. Or maybe we create a casino game like blackjack with a dealer and 5 AIs all battling it out. Who knows. We will overcome and adapt!
### Other
**Are you looking for team members?**
- [ ] Yes
- [ ] No
- [X] If someone are being left out, and is a good personality-fit, we'll take them 😺
**Would you like to have a mentor assigned to your team?**
- [X] Yes
- [ ] No
Answers:
username_1: Very curious, I would love to keep a close eye on this project. 😊
Genetic Algorithms with ML.NET to combine an exploratory algorithm with ML?
https://github.com/lhalsey/BlazorAI
In essence, you use a Genetic Algorithm to figure out what could be a good strategy via brute force, and ML.NET learns from those brute-force attempts to find patterns. Kind of patching up each other's weaknesses.
Genetic Algorithms don't look for patterns per se, they just brute force different things and try to stick with what works.
ML.NET could look at the results and perhaps find patterns of why it was successful (or at least symptoms of success), which in theory would be better than the Genetic Algorithm which blindly followed a fitness function toward a better score.
username_0: We are here to have mostly fun and hopefully learn something! :-D
Status: Issue closed
username_0: After so many issues getting a model trained, and with most of the team having other real-life concerns pop up, there is nothing worth posting an entry about (nor do I have time to make a video).
There is no time to retrain the model, but the closest I've gotten is this:

(The model is trained to identify power-ups and enemies, and it detects them, but all the regions are drawn on more or less the same area, so it needs a lot of re-training)
Thanks to @briacht and @username_2 for assisting with answering some basic technical questions (that helped a lot).
Next time (and please let there be more hackathons), I'll actually do some research beforehand, and not just look at @jwood803 's excellent videos: [https://www.youtube.com/playlist?list=PLl_upHIj19Zy3o09oICOutbNfXj332czx](https://www.youtube.com/playlist?list=PLl_upHIj19Zy3o09oICOutbNfXj332czx)
My team might have collapsed, but for my part, I've been motivated to really learn this stuff. I might work AI/ML-adjacent in my day job, but I'm a complete noob on the AI/ML topic, and I want that to change, so I will spend some time over the next couple of months getting into more ML.net.
I once read that the best way to learn something is to teach it to others, so I think I'll do just that by creating some tutorials on youtube to summarize what has been learned
username_2: @username_0 5 days is a short time, definitely gotta give yourself and the team more credit for what you've done here! Happy to hear you're motivated to continue your AI / ML learning journey. We're always around and happy to help. I'm sure we'll see you around and look forward to your next hackathon project! 😉 |
TI-FIRST/MSP430-Gamepad | 179015838 | Title: Modify for BOOSTXL-EDUMKII
Question:
username_0: How hard would it be to modify this project to work with the [BOOSTXL-EDUMKII](http://www.ti.com/tool/boostxl-edumkii)?
Answers:
username_1: It wouldn't be too difficult, in fact some of the functions would work out of the box. Taking a quick look at it:
- Each accelerometer axis would come out as a joystick axis
- Each joystick axis would come out as a joystick axis on the pc
- Some of the buttons would come out as buttons on the pc, if you use the correct configuration
- Modification would be required for the temp/light sensors, UART, LCD, RGB LEDs, buzzer, servo, and any of the interrupt pins that need to be inputs
Before trying this you would probably want to make sure that none of the existing configurations could damage something on the BOOSTXL-EDUMKII board.
This could certainly be done. I would start by removing and reconfiguring the pins that need to work for a different function, then combine other example code into the project.
Status: Issue closed
|
NucleusPowered/Nucleus | 163518253 | Title: Disable pvp on god mode
Question:
username_0: When players are in god mode, it would be better if they couldn't hit other players.
Answers:
username_1: No. God mode and disabling PvP are two different things that other server owners won't want to link. For example, if a PvP game is being hosted and one of the mechanics is that the host can't get killed, so you need to run away from them, disabling PvP on god mode is not ideal.
Adding an option to disable PvP per player is doable, but I won't force a link to god mode.
username_0: Ah, that makes sense, but shouldn't the host also be prevented from hitting others, or can that be part of the mechanics too? Anyway, an option to disable PvP when god mode is on could be a way to keep those with the command, such as admins or VIPs, from abusing it in PvP.
username_1: I gave an example. Different servers will run things differently - this plugin should not be imposing conditions that can restrict the use of a feature.
It _could_ be a thing, but it's not high on the to do list.
Status: Issue closed
|
ether/etherpad-lite | 602594048 | Title: What's wrong with the frontend tests in Travis? And with the scroll test locally?
Question:
username_0: Thanks to #3854 we now have the CI status very prominently displayed on the homepage.
I think it is important to have those tests passing.
On my PC, the frontend scroll test keeps failing. In Travis there are many more failures. Maybe they are related to Saucelabs having problems?
@username_1, you are the test man: what's wrong here?
Answers:
username_1: I need to comment out the scroll tests for now. That's all. The reason they are broken is that browsers restricted simulating certain types of arrow key events.
username_1: Awaiting merge. That was a nightmare of a job taking far too long due to some additional tests being craftily injected by ``.travis`` and by sauce labs being inefficient (I'm being polite).
username_0: 1. An interesting piece of info is that if you click on one of the saucelabs links in this log, for example https://app.saucelabs.com/tests/5c4e1febafdb43c79faf570aae9d0933#2, you can see this screenshot:

It seems that (at the moment) **saucelabs browsers are not able to connect to the Etherpad instances in Travis**, thus no test ever starts at all.
Could it be a problem in `sauce_tunnel.sh`?
2. Ok for me if we want to **disable the scroll tests**, as long as they are not informative. Is there a PR that disables them?
username_1: https://github.com/ether/etherpad-lite/pull/3899
username_1: Closing as duplicate of #3748
Status: Issue closed
|
grpc/grpc | 244984130 | Title: ASAN leak report: bins/asan/h2_http_proxy_nosec_test resource_quota_server GRPC_POLL_STRATEGY=poll
Question:
username_0: Please answer these questions before submitting your issue.
### Should this be an issue in the gRPC issue tracker?
Yes
### What version of gRPC and what language are you using?
master
### What operating system (Linux, Windows, …) and version?
Linux
### What runtime / compiler are you using (e.g. python version or version of gcc)
ASAN
### What did you do?
Ran listed test as a result of a pull request that is not related to the test in question.
### What did you expect to see?
No leaks
### What did you see instead?
Interestingly, the leak reported only consists of indirect leaks. I believe that this means that a linked data structure (like a doubly-linked list) has nodes that are leaking.
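For context, LeakSanitizer calls a block a *direct* leak when no live pointer to it remains anywhere, and an *indirect* leak when it is still reachable, but only from other leaked blocks. An indirect-only report therefore suggests a detached cycle, as in this conceptual Rust sketch (not gRPC code):
```rust
use std::cell::RefCell;
use std::rc::Rc;

// Two nodes pointing at each other, like a detached doubly-linked pair.
struct Node {
    other: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { other: RefCell::new(None) });
    let b = Rc::new(Node { other: RefCell::new(Some(a.clone())) });
    *a.other.borrow_mut() = Some(b.clone());
    // When `a` and `b` drop here, each node still holds an Rc to the other,
    // so neither refcount reaches zero: both allocations leak, and each is
    // reachable only from the other leaked block, i.e., only indirect leaks.
}
```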
https://grpc-testing.appspot.com/job/gRPC_pull_requests_asan_c/6143/
### Anything else we should know about your project / environment?
Answers:
username_0: Note that this is not exactly the same as #11055 but looks similar enough that it might be the same root cause
Status: Issue closed
username_1: I'm fairly sure this is a duplicate of #11055. |
SuperTurban/ekm_mobiilposm2ng | 260758841 | Title: Create a project plan scope
Question:
username_0: - Create issues(tasks) for all functionalities
- Add task descriptions
- Sort the tasks into milestones (iterations 2-4)
- Give each issue a time estimation
- Prioritize issues
- Assign issues
Answers:
username_0: List of all issues should be in project plan page: https://github.com/SuperTurban/ekm_mobiilposm2ng/wiki/Project-plan
username_1: Add more detailed tasks and evaluate the scope for all iterations
username_1: Iteration 2
The tasks are estimated, and the estimates have been discussed with the customer. The project plan indicates what tasks are delivered in the subsequent iterations. There is a clear picture of what functionality will be ready after iteration 3 and after iteration 4.
Status: Issue closed
|
EwyBoy/EwysWorkshop | 173566929 | Title: Missing Upgrades upon removal of main upgrade.
Question:
username_0: ###### **Forge Version & Ewy's Workshop Version**
My dev env...
###### **Expected behavior**
Giving upgrades back
###### **Actual Behavior**
Upgrades get eaten by the deep dark
###### **Steps to reproduce** *[If Applicable]*
1) Add furnace upgrade
2) Add upgrades (ie: speed)
3) Remove furnace upgrade
4) Lose upgrades<issue_closed>
Status: Issue closed |
restsharp/RestSharp | 728131119 | Title: SemVer breaking change
Question:
username_0: RestClient.AddHandler(IDeserializer deserializer, params string[] contentTypes) was public in version 106.6.7.0 but now doesn't exist in 172.16.31.10.
This kind of breaking change should have ideally been indicated by an increase in the major version number (106 to 107), as it's not a forward compatible change
Status: Issue closed
Answers:
username_1: I agree, but that's what happened and it's impossible to revert. Sorry for the inconvenience and thanks for the heads up.
username_0: Thanks for the quick response. Was only meant as a heads up (and maybe bump the version for the next release anyway)
username_1: Thanks again. I guess the insane value of the major version got me scared... Hence we only had one major release in two years. More to come, I expect. I will hold breaking changes for major versions.
username_1: You can update the release notes Markdown file and add a couple of lines to the docs as a warning.
The `AddHandler` function was marked as obsolete quite a while ago, at least a couple of years by now.
I had to remove the handlers to support custom serialisers properly. It was historically painful with RestSharp and now it's very easy. I fully understand your concern, still what happened has happened. |
bblanchon/ArduinoJson | 711570967 | Title: Setting nested object fails for object set with returned value
Question:
username_0: If a JsonDocument is set via a value returned from another function, then setting nested objects does not work and fails silently. On the other hand, if the same JsonDocument is set directly via a text string, the nested objects get set correctly.
This is strange behaviour and I've not been able to find anything about it based on my reading. Is it something I am doing wrong or something amiss in the library?
Target platform - ESP8266 (Lolin Wemos D1 mini) Arduinjson 6.16.1 on Arduino 1.8.13
Tests via HTTP server -
Test when Jsondocument is set directly using the string, nested object gets value set
```
$ curl http://192.168.1.121/test
{
"msg": "directly set text",
"state": {
"a": "0",
"b": "1"
}
}
```
Test when Jsondocument is set from another function, nested object does not get set
```
$ curl http://192.168.1.121/test
{
"msg": "text from function",
"state": {}
}
```
Serial console output for both above tests together
```
HTTP server started on ajtest.local at ip 192.168.1.121
{
"msg": "directly set text",
"state": {
"a": "0",
"b": "1"
}
}
HTTP server started on ajtest.local at ip 192.168.1.121
{
"msg": "text from function",
"state": {}
}
```
Sample code to reproduce the problem; the only change between the two tests is commenting out one or the other `doc["msg"]` line and re-uploading
```
#include <ArduinoJson.h>
#include <ESP8266WebServer.h>
#include <WiFiManager.h>
#define HOSTNAME "ajtest"
[Truncated]
Connecting........_
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: ec:fa:bc:5e:f7:d6
Uploading stub...
Running stub...
Stub running...
Changing baud rate to 460800
Changed.
Configuring flash size...
Auto-detected Flash size: 4MB
Flash params set to 0x0240
Compressed 314000 bytes to 225732...
Wrote 314000 bytes (225732 compressed) at 0x00000000 in 5.4 seconds (effective 466.0 kbit/s)...
Hash of data verified.
Leaving...
Hard resetting via RTS pin...
```
Answers:
username_1: Hi @username_0,
When you insert a `String` (as opposed to a `const char*`), you must increase the capacity of the `JsonDocument` because ArduinoJson makes a copy of the string.
It's one of the core principles of ArduinoJson; I don't know why you didn't read about it earlier, since it's described in:
* [Serialization Tutorial](https://arduinojson.org/v6/doc/serialization/), section 4.6: "Duplication of strings"
* [How to determine the capacity of the JsonDocument?](https://arduinojson.org/v6/how-to/determine-the-capacity-of-the-jsondocument/)
* [Why is the output incomplete?](https://arduinojson.org/v6/faq/why-is-the-output-incomplete/)
* [StringExample.ino](https://arduinojson.org/v6/example/string/)
There is probably a lot of room for improvement in the documentation.
Please let me know if you have any suggestions.
For example, where did you search the information, and why did you miss the existing one?
Best regards,
Benoit
username_0: @username_1 Thank you so much for the prompt response. I had a vague idea it was to do with `String` vs `const char *` but not the specifics. I skimmed through the Serialization tutorial and it was on my deep dive list though missing the FAQ is completely on me.
Guess why I got into this situation was using the assistant to get capacity. The Serialization example gives
`const size_t capacity = 2*JSON_OBJECT_SIZE(2);`
vs Deserialization gives
`const size_t capacity = 2*JSON_OBJECT_SIZE(2) + 40;`
and blindly copying that, expecting it to work. Just adding the capacity (+ 40) for the text string resolves the issue.
Really appreciate the help and your effort on writing a great library! 👍
Status: Issue closed
username_1: You're welcome, @username_0.
Thank you for using ArduinoJson!
Don't hesitate to [cast a star](https://github.com/username_1/ArduinoJson/stargazers) to support the project 😉 |
seu-as-code/seu-as-code.plugins | 168197522 | Title: Git Plugin: Allow for options to be passed
Question:
username_0: It would be great if there was additional flexibility to pass arbitrary options to a task to customise the Git behaviour. For instance, with `gitPullXyz` I might want to have the equivalent of `git pull --rebase`.
Maybe `AbstractGitTask` can be extended to add some way for options.
Answers:
username_1: I guess the easiest and quickest to implement (at least for the existing tasks) would be to allow passing Gradle command-line options to the tasks. So you could do:
```bash
$ ./gradlew gitPullXyz --rebase
```
Currently the Git tasks do not support every possible command line option of the Git command when used on the shell that is technically possible. But adding these should be fairly straight forward.
Internally the plugin uses JGit. So the possible options are limited by the JGit API. Having *arbitrary* options may prove a bit difficult.
Another option I see is to provide some GenericGitTask that basically makes the JGit API scriptable via the build.gradle file. This would give almost arbitrary freedom, e.g.
```groovy
task gitPullPluginsWithRebase(type: GenericGitTask) {
git.pull {
remote "https://github.com/seu-as-code/seu-as-code.plugins.git"
rebase true
}
}
```
What do you think? Would that suit your needs?
username_0: I think having a generic task type is always a useful addition
Overall I have to say that I like the way that defining the repositories ends up in having a number of tasks predefined. (It is a bit like the new model in certain way. it does present a challenge if one would want to configure attributes (which currently does not exist) on of of these tasks.
There could be two alternative DSL way to allows configuration of the standard tasks that are created for a repo. Let's take `gitPullFoo` as an example task and assume that you have also extended `GitPullTask` to have
```groovy
boolean rebase = false
```
as a property.
In the simplest form the build script author can now do
```groovy
tasks.whenTaskAdded(GitPullTask) {
it.rebase = true
}
```
This will happily work as your plugin only creates the tasks via an `afterEvaluate` closure.
A second form could be to add the config in the extension DSL
```groovy
foo {
url 'https://foo-server/foo.git'
options {
pull {
rebase = true
}
}
}
```
This provides a slick, communicative DSL to the user, but more of a programming challenge to you as the plugin author. 🎱
username_1: Adding some extra attributes to the Git*Tasks and providing additional extension DSL support should be pretty straight forward to implement.
Once I am done with the Mac support for the Credentials plugin I can start working on this one.
username_1: Just published an RC1 version of the Git plugin to Bintray and the Gradle Plugin portal.
```groovy
plugins {
id 'de.qaware.seu.as.code.git' version '2.3.0.RC1'
}
```
Please check if this suits your needs. Pretty much implemented the DSL configuration way you suggested.
```groovy
git {
SeuAsCodePlugins {
url 'https://github.com/seu-as-code/seu-as-code.plugins.git'
options {
pull {
rebase = true
timeout = 600
}
clone {
singleBranch = false
cloneSubmodules = true
noCheckout = false
timeout = 300
}
push {
dryRun = true
pushAll = true
pushTags = true
timeout = 200
force = true
}
}
}
}
```
username_0: 👍 Will try it over the weekend.
Status: Issue closed
username_0: I forgot to tell you that the changes you made work very well.
hpi-swa/Squot | 1097242632 | Title: "Switch to this branch, but keep uncommited changes" discards changes from previously untracked packages
Question:
username_0: Let branch `a` contain a package `P` that is not contained in branch `b`. After switching to branch `b`, the package `P` is kept in the image. If the user modifies the - now untracked - package `P` and then chooses "switch to this branch (`a`), but keep uncommitted changes", the modifications to package `P` are lost!
Answers:
username_0: I was able to work around the issue by using the undocumented "make this the current branch" feature for branch `b`.
username_0: Unfortunately, the workaround is not applicable for pulling from the remote.
username_1: Reproduced
username_1: What happens is the following:
During the switch, first the unsaved changes are saved. Since the package is not tracked at the moment, it is not saved here. Then Squot proceeds to load (as in: overwrite) the destination version into the working copy. When it does that, it first adds all packages to the working copy, which are loaded and tracked in the destination version, so the package gets added to the working copy here. Then it proceeds to compute the patch to transform the package into the state at the destination version, and loads that patch non-interactively, so you lose your unsaved changes to the package. Afterwards it first restores all unsaved changes from the destination branch, and then it applies your previously saved "unsaved changes" from the working copy. But as noted before, the latter does not include the then-untracked package, so it does not get restored.
So this switch procedure must be changed to look a bit further ahead and also consider packages that will be added during the switch and that are already loaded in the image. It should make a snapshot of such packages and must then, unfortunately, do yet another merge step during the switch to keep such packages up to date. Question is, what should be assumed as the base of the three-way merge of such a package? With none (as in: the branch one switches away from, since it does not have the package), all modifications made to the package will show as conflicts. With the snapshot of the package before the switch as the base, it would look like the destination branch has modified the package, while there were no changes to it on the branch which actually does not contain the package. With the destination branch as the base, all modifications will appear as modifications to apply the uncommitted changes to the destination branch. The latter sounds the most useful to me, but unfortunately that does not follow the logic of cherry-picking (where the predecessor of the commit to pick is used as the virtual base of the merge), so I will have to put more thought into the consequences.
Status: Issue closed
username_1: @username_0 Please try whether it works as expected for you now. |
Atlantiss/NetherwingBugtracker | 375772571 | Title: [NPC] Omor the Unscarred
Question:
username_0: **Description**:
The room for Omor has no invisible walls; you can walk up on the ledge and jump down. This means that his spell that throws you into the air will sometimes throw you out of the instance.
Getting thrown out of the room onto hellfire peninsula still keeps you in the instance.
**Current behaviour**:
Outside of the instance while still inside.
**Expected behaviour**:
Invisible walls.
**Server Revision**:
2238
Answers:
username_1: On retail there are no invisible walls. You can also fall down and get kicked out of the instance then.
https://www.twitch.tv/videos/330152645
username_2: Seems like the problem is that you should be kicked out of the instance if you fall down.
username_0: Thanks for the proof. I was just using my shoddy memory, could've sworn there were invisible walls at some point. But yeah, you should still get thrown out of the dungeon and not be in the instanced version of hellfire.
username_3: Rev 3441, still an issue.
(If you fall down, it's supposed to port you to the front of the dungeon.)
nvs-vocabs/S03 | 1013086524 | Title: NTR: Request for S03 microwave-assisted nitric acid digestion
Question:
username_0: **Required**
- **Term name (PrefLabel):** microwave-assisted nitric acid digestion
- **Definition:** Dissolution of a sample using a nitric acid solution and microwave exposure, a treatment generally performed prior to elemental analysis. The efficiency of the procedure depends on the chemical properties of the samples.

**Optional**
Answers:
username_1: Add HNO3 to the definition. |
sbb-design-systems/design-system-webapp-documentation | 730391204 | Title: Feature Request Webapp: Inline Editing for table
Question:
username_0: **Is your feature request related to a problem? Please describe.**
For some business-application use cases, very fast and efficient data manipulation is needed.
**Describe the solution you'd like**
Inline editing:
- every cell
- entries (rows)
**Describe alternatives you've considered**
Example for cell editing: https://js.devexpress.com/Demos/WidgetsGallery/Demo/DataGrid/CellEditingAndEditingAPI/React/Light/
Example for row editing: https://js.devexpress.com/Demos/WidgetsGallery/Demo/DataGrid/RowEditingAndEditingEvents/React/Light/
**Additional context**
<img width="1247" alt="Bildschirmfoto 2020-10-27 um 13 06 22" src="https://user-images.githubusercontent.com/6129137/97299470-4727a780-1855-11eb-96cb-dd57403d8824.png"> |
schempy/react-flux-routing-seo-revisited | 104921780 | Title: Cannot run app
Question:
username_0:
```
'rm' is not recognized as an internal or external command,
operable program or batch file.
npm ERR! [email protected] build: `rm -rf static/bundle.js; browserify -t reactify -t require-globify src/app.js -o static/bundle.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] build script.
npm ERR! This is most likely a problem with the react-engine-blog package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! rm -rf static/bundle.js; browserify -t reactify -t require-globify src/app.js -o static/bundle.js
npm ERR! You can get their info via:
npm ERR! npm owner ls react-engine-blog
npm ERR! There is likely additional logging output above.
npm ERR! System Windows_NT 6.2.9200
npm ERR! command "C:\\Program Files\\nodejs\\\\node.exe" "C:\\Program Files\\nodejs\\node_modules\\npm\\bin\\npm-cli.js" "run" "build"
npm ERR! cwd C:\HTML5\React\Isomorphic\Examples\react-flux-routing-seo-revisited-master
npm ERR! node -v v0.10.33
npm ERR! npm -v 1.4.28
npm ERR! code ELIFECYCLE
npm ERR!
npm ERR! Additional logging details can be found in:
npm ERR! C:\HTML5\React\Isomorphic\Examples\react-flux-routing-seo-revisited-master\npm-debug.log
npm ERR! not ok code 0
```
Any idea how I can resolve this issue? Thanks for your time.
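For what it's worth, `rm` is a Unix command, so the `build` script shown in the log can't run on Windows. A sketch of a cross-platform alternative using the `rimraf` package (adapted from the command in the error output, not taken from the repo; `&&` replaces `;` so it also works in cmd.exe):
```json
{
  "scripts": {
    "build": "rimraf static/bundle.js && browserify -t reactify -t require-globify src/app.js -o static/bundle.js"
  },
  "devDependencies": {
    "rimraf": "^2.6.0"
  }
}
```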
covertsan/Test-Py | 172678139 | Title: from py 1471952728.5
Question:
username_0: Lorem ipsum dolor sit amet, consectetur adipiscing elit. Suspendisse interdum diam sit amet arcu imperdiet mollis. Proin at aliquet augue. Praesent ut pharetra lectus. Nunc hendrerit nibh augue, in vehicula est scelerisque facilisis. Phasellus tincidunt turpis convallis sagittis congue. Vivamus quis mauris eu est posuere scelerisque vitae quis lorem. Vivamus sit amet commodo orci. Mauris viverra dignissim nibh, id interdum massa ultricies sed. Vivamus sit amet consectetur ante. In non nibh vitae orci eleifend congue non ullamcorper risus. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Nunc a convallis tellus, in accumsan nunc.
Duis nunc quam, aliquam nec faucibus non, efficitur nec nunc. Donec dapibus ligula vitae nunc hendrerit, in rhoncus velit pellentesque. Nulla turpis nibh, luctus in libero vitae, condimentum porta lacus. Ut vitae mauris ullamcorper, iaculis ex mattis, feugiat elit. Ut aliquam volutpat gravida. Morbi sed accumsan neque, sed sagittis ipsum. Praesent vel rutrum purus. Nulla facilisi. Sed quis nisl mauris.
Nullam purus magna, sollicitudin in nisi vitae, pretium euismod purus. Proin eleifend in tellus quis aliquam. Donec tristique tortor id ipsum tristique rhoncus. In id leo sed mi vehicula suscipit et in velit. In hac habitasse platea dictumst. Suspendisse scelerisque dui eros, eu lobortis mi finibus eget. In est nisi, maximus nec tincidunt eu, consectetur vel lorem. Etiam tempor rutrum semper. Nunc semper ac est at malesuada. Integer euismod convallis aliquam. Ut sed dui a ex imperdiet molestie. Praesent mauris est, pharetra sit amet velit vel, interdum scelerisque enim. Ut in nisl et nibh varius semper dignissim a arcu. Nunc scelerisque, mi sed rhoncus congue, ipsum tortor rhoncus neque, nec blandit risus lorem vel sapien.
Suspendisse ultricies aliquet ipsum eget pretium. Mauris semper pulvinar lectus eget suscipit. Donec placerat libero id tortor euismod, id posuere ipsum laoreet. Proin tincidunt, velit non scelerisque pulvinar, mauris lacus feugiat dolor, et convallis arcu risus vitae erat. Quisque at ex in nisl posuere rhoncus. Donec sed facilisis eros. Vestibulum et lectus quis eros elementum ullamcorper. Pellentesque efficitur vulputate sapien sed efficitur. Maecenas commodo tempus blandit. Vivamus vel metus ac ex fermentum finibus a sit amet elit. Etiam in efficitur metus. Aliquam erat volutpat. Aliquam lacinia dapibus risus eu gravida. Duis tristique tincidunt ante, ut facilisis justo convallis eget. In rutrum interdum urna, nec ullamcorper dolor. Nam lacinia mollis mi, vel venenatis diam.
Sed eget libero sed tortor tempor dictum vitae vitae turpis. Etiam quam felis, euismod sit amet lacinia vel, dignissim a leo. In accumsan venenatis vehicula. Vivamus diam massa, consectetur eleifend erat quis, consectetur imperdiet purus. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Maecenas ut est blandit, feugiat sem blandit, tristique ex. Nulla aliquam nulla odio, non tempus ligula vehicula nec. Donec malesuada mollis metus. Mauris ornare, leo in euismod malesuada, purus libero egestas lorem, vel egestas tellus libero in lorem. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Donec dignissim ex sed diam malesuada, nec tempor nunc cursus. Praesent eget posuere turpis, in elementum eros.
Vestibulum eget faucibus mauris. Curabitur ut nisl ante. Sed finibus venenatis dui, non pharetra quam feugiat non. Praesent molestie odio turpis, at vehicula lacus porta nec. Aliquam eu arcu leo. Morbi lacus nisi, fringilla nec vulputate eu, egestas dapibus nisl. Donec dignissim, sem ac maximus porttitor, arcu odio laoreet dolor, tristique suscipit neque est eget magna. Mauris ac lobortis arcu. In hac habitasse platea dictumst. Maecenas tempus euismod nunc in accumsan. Nunc quis efficitur est, a porttitor nisl. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Nam volutpat libero vitae dolor luctus semper. Integer varius, lacus in dictum tristique, arcu neque venenatis diam, non luctus ex nulla sed.
Status: Issue closed |
emergenzeHack/covid19italia_segnalazioni | 579824439 | Title: Didattica a distanza e valutazione delle competenze
Question:
username_0: <pre><yamldata>
Da_chi_offerta: CampusStore
Descrizione: "webinar 12 marzo alle ore 10.30 con <NAME> \n\n#smartlearning\
\ #elearning #valutazione #MiurSocial"
Destinatari: Insegnanti
Link: bit.ly/2THdfbQ
Natura: didattica
Titolo: 'Didattica a distanza e valutazione delle competenze: la parola ai Dirigenti
Scolastici'
</yamldata></pre>
Answers:
username_1: @username_2 this Link has no "http" but it was accepted anyway
username_2: @username_1
In the "Segnala Iniziative e Servizi" form there was no regexp validating the link, so I redeployed the form. Feel free to give it a try
username_3: It works! Let's remember to close the issue tomorrow!
Status: Issue closed
username_4: Expired. Closing.
eclipse/microprofile | 432222894 | Title: JDK 11
Question:
username_0: Should we consider switching all components to build with JDK 11 for MicroProfile 3.0, and specifying that it runs on JDK 11?
If there are invasive changes required, it's a good time to do it with the breaking change release.
Answers:
username_0: FYI @andymc12
username_1: Build JDK 11 compiled jars that can't run on JDK 8 or just jars that run on JDK 11 and JDK 8?
The first will require all runtimes to run on JDK 11. However, the baseline for the underlying Java EE 8 components is JDK 8.
username_0: We could take either approach.
But we need to consider that JDK 8 is no longer supported commercially without a separate agreement.
username_2: You can go a much better route while still keeping JDK 8 compatibility -
The goal is to be able to distribute JLink distributions, which require that all libraries be JPMS fully compatible, aka have a module-info class in either the root or the versions/ directory placement.
Current standard is to use moditect to build the module info file.
There is no need to break JDK 8 builds to achieve 100% JPMS compatibility, but there needs to be acceptance that we no longer aim for backwards compatibility, but for forwards acceptability.
username_0: @username_2 I'm not quite sure what you're suggesting
username_1: I don't see why we would break JDK 8 compatibility to deliver MP 3.0. Are there specific JDK 11 only features that are being introduced in the 3.0?
username_0: I don't believe so, it's more a recognition that the JDK has moved on and we need to be mindful of what JDK support customers are able to have going forward
username_1: Of course it should run on JDK 11; however, it shouldn't mandate JDK 11. There's a big difference.
username_0: That's why I said we could take either approach right now, but there will come a point where we need to drop JDK 8
username_2: @username_0 The idea is to keep JDK 8 for now while people migrate across to JPMS and JLink (as distinct from JDK 11 / classpath mode), by using a module-info generator such as ModiTect.
This will satisfy all requirements for JPMS, as well as keep support for JDK 8 while allowing time to move across.
As the path chosen by most up-to-date libraries, I think it deserves proper consideration xD But I'm just the peon :)
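For illustration, the kind of descriptor such a generator emits (module and package names below are hypothetical, not taken from any MicroProfile artifact):
```java
// Hypothetical module-info.java; compiling it for JDK 9 and placing the
// class under META-INF/versions/9/ keeps the jar usable on JDK 8
// (a multi-release jar).
module org.eclipse.microprofile.example {
    exports org.eclipse.microprofile.example.api;
}
```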
username_0: Thanks for clarification @username_2
username_3: If/when Reactive Streams Operators (RSO) is added to the platform, JDK 11 should be required, such that RSO can be updated to use JDK 9's `java.util.concurrent.Flow` API instead of the original Reactive Streams API
username_0: Closing in favor of splitting this out into two issues, as there are two separate steps we can take:
https://github.com/eclipse/microprofile/issues/106
https://github.com/eclipse/microprofile/issues/107
Status: Issue closed
|
kubernetes/test-infra | 161557290 | Title: Don't leave 4 "ok to test?" messages on every PR
Question:
username_0: I think each PR builder job leaves a comment saying this. We should not do this, it looks ridiculous...
Answers:
username_1: I agree, but we don't have a choice: https://github.com/jenkinsci/ghprb-plugin/issues/292
username_2: we could extend the munger's comment-deleter to remove 3 of them....
username_1: Done :)
Status: Issue closed
|
enonic/cms2xp | 285442527 | Title: Fragments used only once should not be fragments
Question:
username_0: To convert Portlets from CMS we need to turn them into Fragments in XP. However, this creates loads and loads of content bloat. If one Fragment is used only once, just add the part with the config and skip creating the fragment.
We can now get installations with hundreds of Fragments, and a handful of them only used once. Makes cleanup more cumbersome.
Status: Issue closed |
mikro-orm/mikro-orm | 1046547177 | Title: Library Supported "Native" Custom Types (Enum Array)
Question:
username_0: Having read through some past issues/discussions, there seems to be a general constraint that anything the vendor libraries (e.g. knex or umzug) do not support prevents certain functionality from being introduced. In particular, the constraint seems to mostly be around schema generation.
I ran into #2079, where I have an existing production schema which uses enum[] array types. Understandably, managing enum types is hard to pull off due to how the underlying tools work for schema generation, but if you're opting out of that part of the library, a lot of these things can easily be supported.
example simple enum array support:
```typescript
import { Type, ValidationError } from '@mikro-orm/core';

export class EnumArrayType extends Type<string[], string> {
  // JS string[] -> Postgres array literal, e.g. ['FOO', 'BAR'] -> '{FOO,BAR}'
  convertToDatabaseValue(value: any): string {
    if (!Array.isArray(value)) {
      throw ValidationError.invalidType(EnumArrayType, value, 'JS');
    }
    return `{${value.join(',')}}`;
  }

  // Postgres array literal -> JS string[], e.g. '{FOO,BAR}' -> ['FOO', 'BAR']
  convertToJSValue(value: any) {
    if (typeof value !== 'string') {
      throw ValidationError.invalidType(EnumArrayType, value, 'JS');
    }
    // strip the surrounding braces; the original substr(1, -1) returned an
    // empty string, since substr's second argument is a length
    const trimmed = value.slice(1, -1);
    return trimmed.split(',');
  }
}
```
I think we could feature-flag types, e.g., expand the Type interface like so:
```typescript
abstract class Type {
  // feature flag: a type declares whether schema generation supports it
  abstract supportsSchemaGeneration(): boolean;
}
```
If the types are used in a project that attempts schema generation, we can throw an exception and point to documentation outlining how to manage migrations yourself.
Answers:
username_1: Not sure I follow here, there is already an [`EnumArrayType`](https://github.com/mikro-orm/mikro-orm/blob/master/packages/core/src/types/EnumArrayType.ts) that works pretty much this way, and it does support schema generation.
username_0: postgres stores enum arrays as {FOO,BAR,BAM}
username_1: yes, that is what that type is exactly doing
username_1: https://github.com/mikro-orm/mikro-orm/blob/master/packages/postgresql/src/PostgreSqlPlatform.ts#L95
Status: Issue closed
username_0: alright my mistake, everything I was reading in old issues pointed to it (enum maybe?) not being implemented
username_1: Native postgres enums are not implemented (better say schema diffing for native postgres enums is not implemented, you could use them via `columnType`), but postgres arrays are, and postgres enum arrays work as `text[]` in the schema generator - for querying part it should have the same interface as with native enums i guess?
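For reference, wiring the built-in type up on an entity looks roughly like this (a sketch with made-up names; the `array` flag is how I recall the `@Enum` options being spelled, so double-check against the docs):
```typescript
import { Entity, Enum, PrimaryKey } from '@mikro-orm/core';

enum Permission {
  Read = 'READ',
  Write = 'WRITE',
}

@Entity()
export class Account {
  @PrimaryKey()
  id!: number;

  // ends up as `text[]` in the generated schema, `{READ,WRITE}` on the wire
  @Enum({ items: () => Permission, array: true })
  permissions: Permission[] = [];
}
```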
username_0: @username_1 A thought: wouldn't ts-morph/reflect be able to detect that a mapping is an array? E.g.:
```typescript
@Enum()
enums: MyEnum[];
```
If so, shouldn't we implicitly set the type to EnumArrayType (the combination of using @Enum and []) unless the user overrides?
streamlit/streamlit | 770359870 | Title: Add conda build instructions for Streamlit sharing
Question:
username_0: Streamlit sharing supports installation from conda, as evidenced by the following example: https://github.com/username_0/test-conda
Add example to Sharing docs to highlight that requirements.txt or conda.txt would work.
ref: https://discuss.streamlit.io/t/can-i-add-conda-package-in-requirements-txt/8062
Answers:
username_1: Hello, is this issue still open for work? If so, I would like to work on it.
Please let me know if I can work on this.
username_0: Yes, it is still open. It's probably just 2-3 sentences as part of this section:
https://docs.streamlit.io/en/stable/deploy_streamlit_app.html#put-your-streamlit-app-on-github
username_1: Sure sir @username_0 I will start working on it ASAP. Can you assign it to me?
username_1: @username_0 I was able to install streamlit with this command as well:
```
conda install -c conda-forge streamlit
```
apart from creating a virtual environment and then installing.
username_0: Hi @username_1 -
This issue is for Streamlit sharing specifically, so we just need a note about using `conda.txt` and `conda-channels.txt` files. You can see a working example [here](https://github.com/username_0/test-conda), and it might even be useful to link directly to this project.
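For reference, the convention there (contents below are illustrative, not copied from that repo) is one package per line in `conda.txt` and one channel per line in `conda-channels.txt`:
```
# conda.txt — hypothetical contents
pandas
matplotlib

# conda-channels.txt — hypothetical contents
conda-forge
```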
Best,
Randy
username_1: @username_0 If you are available, can I have a call over Google Meet?
username_1: Because I could not understand: do I have to change the readme.md here, or the docs file that is used to build the documentation website built with Sphinx? @username_0
username_0: It's not practical for me to have a call with every contributor to this library.
The conda instructions should be added to the existing deploy section:
https://github.com/streamlit/streamlit/blob/develop/docs/deploy_streamlit_app.md#put-your-streamlit-app-on-github
Add the instructions to the Markdown, and it will be published in our documentation. Our README file is for general info, not documenting small details like this.
username_2: Just saw this. We should hold off on adding this to our docs, since the solution where we support conda.txt is a bit of a hack. The real solution is coming soon, at which point we'll update the docs and close this accordingly. |
akordun/pokerplanner | 233462963 | Title: Clarify room scope requirements
Question:
username_0: How many rooms per team?
How many active games per room?
Answers:
username_0: I agree to simplify by having 1 active game per room at a time.
A team (equivalent of an install, sub-domain, etc.) can have multiple rooms.
A room is therefore like a channel in Slack.
I'll modify the wire-frames accordingly.
Status: Issue closed
|
midhunhk/lib-aeapps | 697399842 | Title: CRITICAL: Projects not able to access class files from the library
Question:
username_0: Upon adding the library to an Android application, the Java classes are inaccessible.
**To Reproduce**
Steps to reproduce the behavior:
1. Use the latest version of the library (https://jitpack.io/com/github/username_0/lib-aeapps/v4.0.4/)
`implementation 'com.github.username_0.lib-aeapps:core:v4.0.4'`
2. Sync the gradle file
3. E.g., `EmptyRecyclerView` is inaccessible
**Expected behavior**
Class files should be available to use
**Thanks** to the community for not checking or reporting this issue.
Answers:
username_0: https://android.jlelse.eu/proguard-r8-in-the-world-of-modularity-f599650b4553
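If R8/ProGuard stripping is indeed the cause (an assumption based on the linked article, not confirmed in this thread), the usual fix is for the library module to ship consumer keep rules. A sketch with a hypothetical package name:
```
# consumer-rules.pro (hypothetical), referenced from the library's
# build.gradle via: defaultConfig { consumerProguardFiles 'consumer-rules.pro' }
-keep class com.ae.apps.lib.** { *; }
```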
Status: Issue closed
|
maximtrp/scikit-posthocs | 580476560 | Title: posthoc_dscf Calculation
Question:
username_0: In the definition of posthoc_dscf, ni and nj have been swapped.
CURRENT
```python
def posthoc_dscf(a, val_col=None, group_col=None, sort=False):
    ....
    def compare(i, j):
        ....
        u = np.array([nj * ni + (nj * (nj + 1) / 2),
                      nj * ni + (ni * (ni + 1) / 2)]) - r
        ....
```
TRUTH
```python
u = np.array([nj * ni + (ni * (ni + 1) / 2),
              nj * ni + (nj * (nj + 1) / 2)]) - r
```
Answers:
username_1: Thank you for reporting. As you can see, `u` is only used to [calculate minimum](https://github.com/username_1/scikit-posthocs/blob/672620b08e1fc311e4b494db5eb73edbd9f765ba/scikit_posthocs/_posthocs.py#L2300), so the order is not important.
Status: Issue closed
username_0: Thank you for your answer. But `r` is for [i, j] and the np.array in `u` is for [j, i]. In fact, I tried posthoc_dscf and it returned the wrong answer.
username_1: @username_0
Could you please post your input data and the results you obtained? What software package are you using as a reference?
username_0: Thank you for checking.
data
|group|value|
|-----|------:|
|H2 | 28.033|
|F2 | 8.917|
|H2 | 30.000|
|F2 | 9.083|
|H2 | 30.000|
|F2 | 23.033|
|H2 | 30.000|
|CT | 30.000|
|F2 | 7.483|
|H2 | 30.000|
|F2 | 30.000|
|CT | 6.717|
|CT | 28.067|
|CT | 19.517|
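A minimal way to reproduce this (a sketch; values copied from the table above):
```python
import pandas as pd
import scikit_posthocs as sp

df = pd.DataFrame({
    'group': ['H2', 'F2', 'H2', 'F2', 'H2', 'F2', 'H2',
              'CT', 'F2', 'H2', 'F2', 'CT', 'CT', 'CT'],
    'value': [28.033, 8.917, 30.000, 9.083, 30.000, 23.033, 30.000,
              30.000, 7.483, 30.000, 30.000, 6.717, 28.067, 19.517],
})
print(sp.posthoc_dscf(df, val_col='value', group_col='group'))
```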
posthoc_dscf output p value
| | CT | F2 | H2 |
|---|-------:|------:|-------:|
|CT |-1.00000| 0.6510| 0.00888|
|F2 | 0.65103|-1.0000| 0.08649|
|H2 | 0.00888| 0.0865|-1.00000|
True p value (using R)
| | CT | F2 | H2 |
|---|-------:|------:|-------:|
|CT |-1.00000| 0.9277126| 0.2413583|
|F2 | 0.9277126|-1.0000| 0.0864099|
|H2 | 0.2413583| 0.0864099|-1.00000|
Output p values after swapping `i` and `j`
| | CT | F2 | H2 |
|---|-----:|------:|------:|
|CT |-1.000| 0.900| 0.2416|
|F2 | 0.900| -1.000| 0.0865|
|H2 | 0.242| 0.086|-1.0000|
```
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from scikit-posthocs) (1.17.5)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (from scikit-posthocs) (3.1.3)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from scikit-posthocs) (1.4.1)
Requirement already satisfied: pandas>=0.20.0 in /usr/local/lib/python3.6/dist-packages (from scikit-posthocs) (0.25.3)
Requirement already satisfied: statsmodels in /usr/local/lib/python3.6/dist-packages (from scikit-posthocs) (0.10.2)
Requirement already satisfied: seaborn in /usr/local/lib/python3.6/dist-packages (from scikit-posthocs) (0.10.0)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->scikit-posthocs) (1.1.0)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->scikit-posthocs) (2.6.1)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib->scikit-posthocs) (0.10.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->scikit-posthocs) (2.4.6)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.20.0->scikit-posthocs) (2018.9)
Requirement already satisfied: patsy>=0.4.0 in /usr/local/lib/python3.6/dist-packages (from statsmodels->scikit-posthocs) (0.5.1)
Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from kiwisolver>=1.0.1->matplotlib->scikit-posthocs) (45.2.0)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.6/dist-packages (from python-dateutil>=2.1->matplotlib->scikit-posthocs) (1.12.0)
Installing collected packages: scikit-posthocs
Successfully installed scikit-posthocs-0.6.2
```
Status: Issue closed
username_1: You are right. Thank you! Fixed version is available. |
npdarrington/instaquizzes | 739224307 | Title: App - Restart New Quiz Game
Question:
username_0: As a user, when I complete a quiz game, I want to be able to click a button to start a new game rather than refreshing the page to start a new one
- [ ] Create a Start New Game button that will allow a user to start a new game after finishing the previous one.<issue_closed>
Status: Issue closed |
scikit-learn-contrib/hdbscan | 271520994 | Title: Spurious multiple connected components error when matrix has only one connected component
Question:
username_0: This is related to https://github.com/scikit-learn-contrib/hdbscan/issues/82 but not identical.
I have noticed **in some cases**, HDBSCAN will apparently incorrectly report `Sparse distance matrix has multiple connected components` error when the precomputed distance matrix has only one connected component. To be specific, calling `scipy.sparse.csgraph.connected_components` on the csr_matrix (as @username_1 suggests in https://github.com/scikit-learn-contrib/hdbscan/issues/82) shows only one connected component, but if a value of `min_clusters` is used which is "too large" then the error will be reported.
A repro for this is below.
Using python 3.6 and hdbscan 0.8.10, the below code
```
from sklearn.cluster import dbscan
from sklearn.metrics import pairwise_distances
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
import numpy as np
import pandas as pd
from hdbscan import HDBSCAN
n=200
r = 1.5
centers = [(0,0), (-r,-r), (-r,r), (r, -r), (r,r)]
np.random.seed(1)
s = np.random.randn(n // len(centers), 2)
X = np.concatenate([s + c for c in centers])
d1 = pairwise_distances(X)
num_components = len(centers)
k = n//num_components
for i in range(num_components):
for j in range(num_components):
if i != j:
d1[k*i:k*(i+1),k*j:k*(j+1)] = 0
d2 = d1.copy()
for i in range(num_components):
for j in range(num_components):
if i != j:
d2[k*i,k*j] = 100
sparse_disconnected = csr_matrix(d1)
sparse_connected = csr_matrix(d2)
print('num components in disconnected:', connected_components(sparse_disconnected)[0])
print('num components in connected:', connected_components(sparse_connected)[0])
print(pd.Series(HDBSCAN(metric='precomputed', min_cluster_size=25).fit(sparse_connected).labels_).value_counts())
for min_cluster_size in [38, 39, 40]:
try:
HDBSCAN(metric='precomputed', min_cluster_size=min_cluster_size).fit(sparse_connected)
print(f'min_cluster_size {min_cluster_size} is ok')
except ValueError as e:
print(f'min_cluster_size {min_cluster_size} failed')
```
results in:
```
num components in disconnected: 5
num components in connected: 1
4 40
3 40
2 40
1 40
0 40
dtype: int64
min_cluster_size 38 is ok
min_cluster_size 39 failed
min_cluster_size 40 failed
```
Answers:
username_1: Thanks, that's a good catch. It's an issue of how things get handled further downstream, and may not be resolvable, but I should ensure you at least get a more informative error message if that's the case.
username_2: Same problem here. Any updates?
username_1: Sorry, I haven't gotten around to updating the errors -- in practice what is happening is that the *mutual-reachability distance* matrix has multiple connected components to it. This occurs because ``min_samples`` (set automatically by ``min_cluster_size``) is too large, and makes things disconnected. You can work around this by explicitly setting ``min_samples`` to something lower (say 5 or 10) and see if that helps.
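For the repro above, that would look something like (a sketch):
```python
# min_samples set explicitly, so a large min_cluster_size no longer
# disconnects the mutual-reachability graph
HDBSCAN(metric='precomputed', min_cluster_size=40, min_samples=10).fit(sparse_connected)
```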
username_1: One possibility of what might be causing this is if you have duplicate points -- points with explicit distance equal to 0. During processing with the sparse matrices this may get eliminated, which would essentially set them to have infinite distance. Do you know if your data has any of these situations? I believe I can correct for the error (when I find some time), but if it is not the problem it would be useful to know what else may be at issue.
username_3: Hmm, maybe! I actually resolved the issue just by calculating multiple distances from each point in isolated components to other points rather than just single bridging edges, but it's possible that I might have just papered over the issue. There definitely was a significant issue with duplication and explicit zero distances in that dataset.
username_1: Thanks, I think that narrows down what was likely the problem. I'll endeavour to get a proper fix in place soon. |
magefree/mage | 542587047 | Title: Outlaws Merriment - Creature choice chosen at random by computer but not human
Question:
username_0: Hi - with Outlaws' Merriment - the computer chooses at random for the player which creature type out of three can be generated as a token creature at the start of the upkeep phase.
Shouldn't this be that the human gets to choose from three possible options? If we were playing paper magic I am sure this is how it would be handled or maybe I am wrong?
https://gatherer.wizards.com/Pages/Card/Details.aspx?multiverseid=473160
> 4/10/19: As you put the triggered ability of Outlaws' Merriment on the stack, you choose a mode at random. Players can respond to the ability knowing which token will be created.
I get the feeling it's programmed so the computer chooses for the player at random, but surely in a paper game we as players would have three choices and could pick whichever we believe is right, rather than the AI choosing for us? If it were paper magic at your local game store, it would be chosen by us and not decided by a dice roll, right? E.g. 1-2 for token 1, 3-4 for token 2, etc.
I may be wrong - it's just my thought and the above is the official ruling from Wizards of the Coast.

Status: Issue closed
Answers:
username_0: Played the same deck in MTGO and it does the same as Xmage. My mistake. |
LiliaIsABell/cart315_Prototype3 | 835263349 | Title: Playtest Answers - Martin (J)
Question:
username_0: Playtest Questions:
- Were you able to play with a second player?
Yup! Got to play this with my partner. We were both a little confused at first, but had a good time!
- What did you think of the controls? Did they work well? Was it uncomfortable?
At the time of testing, there were no accessible instructions (main menu buttons didn't work) so I had to open up the code to figure out how it worked. After that, it was alright - I would rather have had the punch key for the arrow-keys player on the numpad, so both can use their left hand for movement. As for the input itself, I was a little unsure how long or how many times I had to punch to get points.
- What did you think of the split screen? Was it distracting?
Worked fairly well, but I would like to see different backgrounds or visuals so it's easier to tell which player is where - similarly, I think I would have preferred a horizontal split!
- Was the game enjoyable as it was or would you have preferred a more traditional fighting game?
As far as I can tell, there's no reason for either player to approach the other, meaning they can turtle the whole game without making a first move. I'd add abilities to move the other character, like a gust of wind, or maybe map hazards. I'm a big fan of 2D fighters, but I think you've got something different on your hands here. It's still a fun concept, but I'd move away from typical fighting game traditions.
- Is there anything you would add or change?
See above.
- If I wanted to continue with the theme of siblings fighting, what could I add?
See above.
Cool stuff, all in all! |
thomasloven/lovelace-slider-entity-row | 679743230 | Title: Creating a Slider with cover buttons and slider ?
Question:
username_0: Is it possible to join both?

Something like joining them and getting buttons and slider like this (visually)?

Answers:
username_1: I'm looking to do the same basic thing. Have the slider and the arrows on the same line.
username_2: I opened the same issue #107 about 5 months ago
username_3: Just use two separate rows.
Set the name to " " and the icon to one that doesn't exist.
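A sketch of that workaround in Lovelace YAML (the entity id is a placeholder):
```yaml
type: entities
entities:
  # row 1: the position slider
  - type: custom:slider-entity-row
    entity: cover.living_room
  # row 2: the default cover row with the up/stop/down buttons,
  # name blanked and icon pointed at one that doesn't exist
  - entity: cover.living_room
    name: " "
    icon: "mdi:does-not-exist"
```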
Status: Issue closed
|
cdnjs/cdnjs | 291397843 | Title: Wrong version for LiveScript 1.5.0 (it is 1.4.0 on CDN)
Question:
username_0: LiveScript seems to have configured an auto-updater, as of https://github.com/cdnjs/cdnjs/pull/3897, but the newest browser build was generated right after the 1.5.0 tagged commit (and was fixed later https://github.com/gkz/LiveScript/commit/87daae63c4e988eb5c5d0ab599654f58af6c8d0a).
Is there a way to trigger a reupload of the newest version? I don't have the rights to the LiveScript repository.
Answers:
username_1: Hi @username_0, cdnjs adds versions based on git tags, so on cdnjs.com v1.5.0 appears as v1.4.0. Maybe you can open an issue there to ask whether they can fix the tag. cc @PeterDaveHello
username_0: I created a pull request for this issue; I thought it might be easier to update the library from a PR.
Status: Issue closed
username_0: There's a new version of LiveScript, 1.6.0, correctly loaded onto CDNJS. Although 1.5.0 will remain untouched, using 1.6.0 is preferable. Therefore I'll close this issue. |
arborrow/montserrat | 306509142 | Title: META: Date and time formatting inconsistencies
Question:
username_0: Review date/time fields throughout app to ensure that they are formatted and handled consistently.
There are places where what is desired (like a birthday) is listed as a date/time, and other places where what is desired is a date/time and it is just a date. It may take some time, but it is probably not a bad idea to follow through on all of the routes and review that things are handled consistently.
Similarly, check output on reports to make sure that things are being displayed properly.
I am creating the label of meta to describe an issue that could pull together a variety of similar detailed issues. |
OfficeDev/generator-office | 556449021 | Title: cannot edit registry
Question:
username_0: Please remove the requirement to edit the registry. Can we use environment variables instead?
```
ERROR: Registry editing has been disabled by your administrator.
```
Answers:
username_1: Also facing the same issue.. Is there a solution found for this issue
username_2: @username_0 @username_1 Can you please be more specific about the scenario where you are required to edit the registry? Generator-office itself doesn't require any registry edits, so are you referring to a project derived from generator-office? Please provide steps so I can reproduce this behavior.
Thanks,
Courtney
username_0: If you remove any existing Microsoft Developer entries from the registry, disable registry editing, then try to `npm install`, you should get the error. It is specific to Microsoft projects, not only the generator-office package. This issue can be moved to the project that is requesting write-access to the registry, if it can be identified.
I could not trace it back but it most likely has something to do with this: https://github.com/OfficeDev/Office-Addin-Scripts/blob/master/packages/office-addin-dev-settings/src/dev-settings-windows.ts
username_2: @username_0 Just to clarify, is this the error you see:
```
Debugging is being started...
App type: desktop
Error: Unable to start debugging.
Error: Unable to set registry value "UseDirectDebugger" to "1" (REG_DWORD) for key "HKCU\SOFTWARE\Microsoft\Office\16.0\Wef\Developer\704cb52f-ba5e-45c2-b104-8cc7460ca61a".
ProcessUncleanExitError: ADD command exited with code 1:
ERROR: Registry editing has been disabled by your administrator.
```
This issue is presumably caused by https://github.com/username_2/Office-Addin-Scripts/blob/master/packages/office-addin-dev-settings/src/dev-settings.ts.
I see that there is a '--no-debug' option for Office-Addin-Debugging which should in theory bypass the need to write debug settings to the registry, but it doesn't appear to be working for me.
Adding @akrantz who may have more insight here
username_0: @username_2 Yes.
```bash
npm install -g yo generator-office
yo office --projectType taskpane --host excel --name exceladdin --ts true
cd exceladdin
npm run start
```
and that error comes up
username_2: While I've figured out a way to disable writing the dev-settings values to the registry with a local change, it's not going to be easy to get everything working end-to-end because of the registerAddin step in https://github.com/OfficeDev/Office-Addin-Scripts/blob/master/packages/office-addin-dev-settings/src/sideload.ts. Writing to registry is required when automatically sideloading an add-in
One way to work around things is to start your dev-server via "npm run dev-server" and then manually register your add-in via these steps: https://docs.microsoft.com/en-us/office/dev/add-ins/testing/create-a-network-shared-folder-catalog-for-task-pane-and-content-add-ins.
After you've manually registered your addin, just keep the dev-server running and then start your Office Application.
Not ideal obviously, but the ability to write to the registry is required to allow for automatic registration of the add-in
username_2: Yes, that's correct you need to have admin permissions in order to do the automatic registration of the add-in, as we write the location of the add-in manifest to Computer\**HKEY_LOCAL_MACHINE**\SOFTWARE\Microsoft\Office\16.0\WEF.
As for issues with the dev-server, I've provided guidance at https://github.com/OfficeDev/TrainingContent/issues/699
Thanks,
Courtney
Status: Issue closed
|
mob41/osumer | 494564223 | Title: Official page changed login redirection page
Question:
username_0: Need to modify code to accept ```https://osu.ppy.sh/?success=xyz``` as the login success redirection page, and move on to use new web parser.
Request:
```
Request URL: https://osu.ppy.sh/forum/ucp.php?mode=login
Request Method: POST
Status Code: 302
Remote Address: 192.168.127.12:443
Referrer Policy: no-referrer-when-downgrade
```
Response:
```
cache-control: private, no-cache="set-cookie"
content-type: text/html; charset=UTF-8
date: Tue, 17 Sep 2019 11:22:51 GMT
expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
expires: 0
location: https://osu.ppy.sh/?success=1568719371
pragma: no-cache
server: cloudflare
status: 302
strict-transport-security: max-age=31536000; includeSubDomains; preload
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
```
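A sketch of the detection change (a hypothetical Java helper; the real location of this check in osumer will differ):
```java
import java.util.regex.Pattern;

final class LoginCheck {
    // accept the new redirect target as a successful login
    private static final Pattern SUCCESS =
            Pattern.compile("^https://osu\\.ppy\\.sh/\\?success=\\d+$");

    static boolean isLoginSuccess(String location) {
        return location != null && SUCCESS.matcher(location).matches();
    }
}
```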
Answers:
username_1: @username_0 How long can I wait for the update?)
username_1: @username_0 maybe you use osu.gatari.pw server?
Status: Issue closed
|
swagger-api/swagger-codegen | 332357286 | Title: mustache file for swagger ui configuration uses deprecated class/methods
Question:
username_0: ##### Description
When I use spring 5 and use swagger-codegen/modules/swagger-codegen/src/main/resources/JavaSpring/libraries/spring-mvc/swaggerUiConfiguration.mustache
as a template the generated class contains deprecated code.
Also if the json that is created by the API contains new lines etc, the pretty printer that is attached in this configuration might produce bad results (escaped quotes and new lines will appear when using curl/postman).
I think that the example should be up to date, or at least you should have one that is compatible with spring 5.
This example is still not updated in head's version.
We are using swagger codegen 2.3.1 with spring 5.0.3.
When using the template (codegen/src/main/resources/JavaSpring/libraries/spring-mvc/swaggerUiConfiguration.mustache) almost as is, returning an already formatted json will produce results which include \r\n and escaped quotes (\") as part of the result.
Updating the code to match the new interface in spring 5 and not setting the default mapper caused the json to return as expected via curl/postman.
kubesphere/kubesphere | 511055393 | Title: how could I see event data in application page
Question:
username_0: **Describe the bug**
How could I see event data in the application page?
**Versions used (KubeSphere/Kubernetes)**
KubeSphere: 2.1.0 dev
Kubernetes: (If KubeSphere installer used, you can skip this)
**Environment (hardware configuration)**
3 masters: 8cpu/8g
3 nodes: 8cpu/16g
(other info is welcome to help us debug)
**To Reproduce**
Steps to reproduce the behavior:
1. Deploy bookinfo application
2. There's no event data shown in the application event page and sub-module event pages
3. Edit the application and sub-modules; again, no event data
Answers:
username_0: /assign @username_1
username_1: There are no events right now because the controller didn't write any. @username_2 this tab can be removed
username_2: Fixed
Status: Issue closed
username_0: verified |
Azure/azure-sdk-tools | 1059283103 | Title: [Test-Proxy] Add error handling
Question:
username_0: There are a few places where an unhandled exception can be thrown and the output we get back contains only:
```
Status: 500 (Internal Server Error)
Headers:
Date: Sun, 21 Nov 2021 02:47:04 GMT
Server: Kestrel
Content-Length: 0
```
It would be helpful to catch these exceptions and add the error information into the response that gets returned.
One such case can happen in record mode if the upstream host does not exist:
https://github.com/Azure/azure-sdk-tools/blob/main/tools/test-proxy/Azure.Sdk.Tools.TestProxy/RecordingHandler.cs#L143
Another example is when starting playback the file does not exist:
https://github.com/Azure/azure-sdk-tools/blob/main/tools/test-proxy/Azure.Sdk.Tools.TestProxy/RecordingHandler.cs#L272
Status: Issue closed
Answers:
username_0: I don't think this is completely fixed. I just ran into another error that isn't propagated at all when an exception is thrown due to having a null `originalValue` here - https://github.com/Azure/azure-sdk-tools/blob/main/tools/test-proxy/Azure.Sdk.Tools.TestProxy/Sanitizers/BodyKeySanitizer.cs#L67
In this case, the test finishes but nothing gets recorded.
username_0: In the second example above (when starting playback and the file does not exist), we should probably be throwing a TestRecordingMismatchException if the file doesn't exist.
username_0: Patched the specific issue here https://github.com/Azure/azure-sdk-tools/pull/2374, but ideally we would have every possible exception-throwing line of code in a try/catch.
username_2: https://docs.microsoft.com/en-us/aspnet/core/web-api/handle-errors?view=aspnetcore-6.0
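The approach from that doc, roughly (a sketch, not the proxy's actual code; requires the Microsoft.AspNetCore.Diagnostics and Microsoft.AspNetCore.Http namespaces):
```csharp
// In Startup.Configure: turn unhandled exceptions into informative
// 500 bodies instead of empty responses.
app.UseExceptionHandler(errorApp => errorApp.Run(async context =>
{
    var feature = context.Features.Get<IExceptionHandlerFeature>();
    context.Response.StatusCode = StatusCodes.Status500InternalServerError;
    await context.Response.WriteAsJsonAsync(new { Error = feature?.Error.Message });
}));
```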
Status: Issue closed
|
bteapot/BTInfiniteScrollView | 143103825 | Title: - (void)reloadViewForItem:(BTItem *)item place:(BTPosition)place edge:(CGFloat)edge
Question:
username_0: - (void)reloadViewForItem:(BTItem *)item place:(BTPosition)place edge:(CGFloat)edge
The new frame should be assigned to the new view, not item.view.
Status: Issue closed
Answers:
username_1: Thank you! I've never actually used this `-reloadViews` method. |
StartupAPI/users | 172510575 | Title: Fix Accounts API call
Question:
username_0: /users/api.php?call=/startupapi/v1/accounts fails with error:
```
Argument 1 passed to StartupAPI\API\v1\Accounts::StartupAPI\API\v1\{closure}()
must be an instance of StartupAPI\API\v1\Account, instance of Account given
in /www/data/cardonist.lc/users/classes/API/v1/Accounts.php on line 29
```
Looks like the wrong namespace is used.
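If so, the fix is likely just making the closure's type hint match the class actually passed in. A hypothetical sketch (the real closure body in Accounts.php isn't shown here):
```php
<?php
// Accounts.php (sketch): the objects handed to the closure are the global
// \Account class, so hint that instead of StartupAPI\API\v1\Account.
$result = array_map(function (\Account $account) {
    // ... map the account to its API representation ...
}, $accounts);
```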
Status: Issue closed |
seek-oss/seek-style-guide | 215634700 | Title: Router is not working on IE
Question:
username_0: React-Router is not fully supported on IE; well, this is another example.
So basically, on IE, each component's demo page is down.

Answers:
username_1: The URL in this example is incorrect, it should be http://seek-oss.github.io/seek-style-guide/footer/. How did you end up at this URL?
username_1: @username_0 just cleaning up old PRs and issues—is this still a problem?
username_1: Closing this just to clean things up. @username_0, please reopen if this is still an issue.
Status: Issue closed
|
dbfannin/ngx-logger | 1077349454 | Title: V5 Not working with Jest
Question:
username_0: I am using ngx-logger in an Angular project and jest for testing. While wanting to go from V4 to V5 of ngx-logger, I have an error message when I run the tests with jest:
```
Jest encountered an unexpected token
Jest failed to parse a file. This happens e.g. when your code or its dependencies use non-standard JavaScript syntax, or when Jest is not configured to support such syntax.
Out of the box Jest supports Babel, which will be used to transform your files into valid JS based on your Babel configuration.
By default "node_modules" folder is ignored by transformers.
Here's what you can do:
• If you are trying to use ECMAScript Modules, see https://jestjs.io/docs/ecmascript-modules for how to enable it.
• If you are trying to use TypeScript, see https://jestjs.io/docs/getting-started#using-typescript
• To have some of your "node_modules" files transformed, you can specify a custom "transformIgnorePatterns" in your config.
• If you need a custom transformation specify a "transform" option in your config.
• If you simply want to mock your non-JS modules (e.g. binary assets) you can stub them out with the "moduleNameMapper" config option.
You'll find more details and examples of these config options in the docs:
https://jestjs.io/docs/configuration
For information about custom transformations, see:
https://jestjs.io/docs/code-transformation
Details:
/Users/flavien/Developpement/rafal-web-front2/node_modules/ngx-logger/fesm2020/ngx-logger.mjs:1
({"Object.<anonymous>":function(module,exports,require,__dirname,__filename,jest){import * as i0 from '@angular/core';
```
I have tried various configurations in jest.config, but cannot find a solution.
Answers:
username_1: It says
`By default "node_modules" folder is ignored by transformers`
And
`To have some of your "node_modules" files transformed, you can specify a custom "transformIgnorePatterns" in your config`
So did you try removing `transformIgnorePatterns` from your config?
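For reference, the other common shape of this workaround is the inverse: keep `transformIgnorePatterns` but let Jest transform the package's ESM bundle. A sketch (adjust the preset and pattern to your setup):
```javascript
// jest.config.js
module.exports = {
  preset: 'jest-preset-angular',
  // let the transformer process ngx-logger's fesm2020 .mjs build
  transformIgnorePatterns: ['node_modules/(?!(ngx-logger|.*\\.mjs$))'],
};
```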
username_0: Yes of course I tried.
I'm trying to make a new angular project to show you the error
username_1: Ok that would be amazing if you have a repro so that I can look into it
username_0: After hours of research ... It works!
For those who encounter the problem:
Update to the newly released version of `@briebug/jest-schematic`, and replace `latest` for the automatically added `@angular-builders/jest` package with a version greater than or equal to `13.0.1`.
Thank you @username_1 for suggesting solutions!
Status: Issue closed
|
jupyterlab/jupyterlab | 968986241 | Title: Looking for an option to fix tab title for Version 3.1.x
Question:
username_0: ### Problem
It is distracting and impractical to have the following ever-changing tab-title when I switch across multiple notebooks in a workspace.

The old way of fixing the tab title no longer works as intended. Upon fixing the tab title in one notebook, switching to another notebook resets the tab title. Even switching back to the notebook that ran the following old script won't restore the tab name as "A fixed title".
```
%%javascript
document.title='A fixed title'
```
### Proposed Solution
I may have missed something well documented. If so, please point me there. If not, it helps to know what other `%%javascript` snippets we can use to get the tab title fixed.
### Additional context
I am currently running JupyterLab version 3.1.4. I prefer to have a fixed tab title since I rely on it to identify the "type of window". This helps to issue additional AutoHotKey configurations to, say, prevent the Print interface from popping up with `Ctrl+p`. I also use it to map `Ctrl+]` as jump back to the cell-selection layer.
Answers:
username_1: I agree that this should be user-customizable. This was originally suggested in https://github.com/jupyterlab/jupyterlab/issues/6946#issuecomment-664604505 and some suggestions on variables to be made available is in https://github.com/jupyterlab/jupyterlab/issues/6946#issuecomment-675804557.
username_1: https://xkcd.com/1172/ ;) [Just joking, I fully appreciate your use case and as mentioned earlier I do agree this should be improved].
In case you would like to take a stab at it, the PR which introduced this feature is: https://github.com/jupyterlab/jupyterlab/pull/10002
username_0: @username_1 Thanks a lot for pulling the relevant threads together. As advised by <https://xkcd.com/1205/> (seriously, lol), I will live with my current workaround based on the following observation:
* When the tab-title updates, it always contains the first 15 characters of the current working directory for the notebook.
Since I don't have too many variants for the working directories, I will add a few more lines to my AutoHotKey script to take note of these directory names. This will break, well, should the name of the notebook get the spotlight in the tab-title area :) Glad this is not happening with Version 3.1.4.
PR #10002 looks really promising. I am not confidence if I can get it up and running locally in a reasonably short period of time. :) |
fac21/week2-http-project-ECRC | 835832467 | Title: API keys
Question:
username_0: https://github.com/fac21/week2-http-project-ECRC/blob/e4bb4879b685604036c472deb77b6ae76fc11e9e/script.js#L3
Ideally you don't want to be publishing your key to GitHub, or anywhere publicly online.
This is a good overview of how to hide your key:
[https://dev.to/ptprashanttripathi/how-to-hide-api-key-in-github-repo-2ik9](https://dev.to/ptprashanttripathi/how-to-hide-api-key-in-github-repo-2ik9) 🔑 |
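The gist of the linked approach, as a sketch (file and variable names are placeholders): keep the key in a small file that git ignores, and load it before the main script:
```javascript
// config.js — listed in .gitignore, so the key never reaches GitHub
const API_KEY = 'your-key-here';

// script.js — loaded after config.js in index.html, so API_KEY is in scope
fetch(`https://api.example.com/search?key=${API_KEY}`)
  .then((response) => response.json())
  .then((data) => console.log(data));
```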
mir-dataset-loaders/mirdata | 860548248 | Title: Small fixes
Question:
username_0: 1) Test folder: the `dagstuhl` and `queen` tests are outside the `dataset` folder
2) Download instructions: in the `RWC` collection, `Queen`, `Salami`, and others, we didn't update the instructions, so the download folder has the same name as the loader
Answers:
username_1: Closed by #500
Status: Issue closed
|
Nitrux/nitrux-bug-tracker | 534259860 | Title: VMetal does not create virtual storage "not enough space available"
Question:
username_0: **Describe the bug**
VMetal fails to create virtual storage drives on a 3.5TB hard drive, outputting the error:
```
OS storage image does not exists. Creating 50G image
/tmp/.mount_vmetalvrKXO1/vmetal_init.sh[114]: 4.0: unexpected '.'
- [ERROR] Not enough space available to create OS Storage Image. Clear up some space and retry
- Exitting...
```
**To Reproduce**
Steps to reproduce the behavior:
1. Deploy Nitrux on a larger drive, e.g., 3.5TB
2. Start `vmetal` from the Terminal
3. VMetal fails to create the virtual storage
**Expected behavior**
The virtual storage devices are created
**Desktop (please complete the following information):**
- Nitrux 1.2.3
Answers:
username_1: It is a 4 TB drive. This is the command output:
```
df -H $HOME/VMetal
Filesystem Size Used Avail Use% Mounted on
. 4.1T 7.3G 4.0T 1% /home
```
```
df -H $HOME/VMetal | grep -vE "^Filesystem|tmpfs|cdrom" | awk '{print $4}'
4.0T
```
```
df -H $HOME/VMetal | grep -vE "^Filesystem|tmpfs|cdrom" | awk '{print $4}' | rev | cut -c 2- | rev
4.0
```
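That narrows it down: with `df -H` the available space comes back human-readable (`4.0T`), and after the unit is stripped the script is left comparing the non-integer string `4.0`. A sketch of a more robust check, assuming a POSIX shell:
```sh
# ask df for plain 1K blocks so the value is always an integer
avail_kb=$(df -Pk "$HOME/VMetal" | awk 'NR==2 {print $4}')
required_kb=$((50 * 1024 * 1024))   # 50G image
if [ "$avail_kb" -lt "$required_kb" ]; then
    echo "- [ERROR] Not enough space available to create OS Storage Image."
fi
```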
BabylonJS/BabylonNative | 703054208 | Title: Javascript debugging should not be enabled for chakra backend on HoloLens
Question:
username_0: We've determined that chakra script debugging does not work on hololens. This causes AppRuntimeChakra.cpp to throw an exception when kicking off a debug babylon native build on a HoloLens device due to a call to JsStartDebugging.
Workarounds include commenting out this line of code or building for release. But, in delivering a high quality developer experience, we should prevent this crash for debug builds on HoloLens. Third party developers shouldn't need to comment out code in BabylonNative releases to get HoloLens debug images to work.
Answers:
username_1: Worth noting that the reason this crashes is due to the fact that VS isn't correctly deploying the JS debugging support along with the remote debugger package. We've pinged them about this, but we should figure out if we can gracefully fall back when it's not present while we wait for a fix.
username_1: Since it looks like we're unlikely to get Chakra debugging support for ARM targets, the remaining work to enable a Babylon Native debugging story on HoloLens 2 is tracked by #463 and #464.
Status: Issue closed
|
tendermint/tendermint | 777445865 | Title: blockchain/v2: (*BlockchainReactor).AddPeer misuses lock by acquiring a read lock then writes to the critical section acquired channel
Question:
username_0: https://github.com/tendermint/tendermint/blob/aef1ac7ba5d5f2d73cba706d2d23fafeeb73d8ef/blockchain/v2/reactor.go#L549-L553
the code above is invalid because it acquires a read lock, yet then performs a write inside the critical section.
Answers:
username_0: Ditto for RemovePeer https://github.com/tendermint/tendermint/blob/aef1ac7ba5d5f2d73cba706d2d23fafeeb73d8ef/blockchain/v2/reactor.go#L557-L566
username_1: There is nothing wrong with acquiring a shared read-lock in the above examples.
Both methods do not modify `r.events`; they only **read** a reference to an already thread-safe data structure, so an exclusive lock is unnecessary. You would be right if `r.events` were a slice, for example.
username_2: @username_1 is right. The mutex protects the `Reactor.events` field itself (which can be modified, e.g. set to `nil`), not the contained channel (which is already thread-safe). Here, we're only reading the `events` field, not modifying it, so an `RLock` is appropriate.
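An illustrative sketch of the distinction (simplified names, not the actual reactor code):
```go
package reactor // illustrative only

import "sync"

type event struct{} // stand-in for the real event type

type Reactor struct {
	mtx    sync.RWMutex
	events chan event
}

func (r *Reactor) addPeer(e event) {
	r.mtx.RLock() // shared lock suffices: we only *read* the events field...
	defer r.mtx.RUnlock()
	if r.events != nil {
		r.events <- e // ...a channel send is not a write to r.events itself
	}
}
```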
Status: Issue closed
|
WarEmu/WarBugs | 686430127 | Title: Auction house not displaying all your listed items
Question:
username_0: **Expected behavior and actual behavior:**
**Steps to reproduce the problem:**
**Testing Screenshots/Videos/Evidences (always needed):**


Answers:
username_1: This is a known and already live issue.
Status: Issue closed
|
fdw/rofimoji | 1132888348 | Title: [Font Problem]
Question:
username_0: * What does rofimoji show instead of the emojis?

* What do other apps like Firefox/Chrome/your text editor show instead of emojis?

* Which emoji fonts do you have installed?
Noto Fonts Emoji
* Are the emoji fonts set in your font config?
No, because I don't know how to
* Which versions of rofimoji and pango are installed?
rofimoji 5.4.0
libpango-1_0-0
* Additional Information
OS: openSUSE Tumbleweed
Kernel: 5.16.5-1-default
Answers:
username_1: Do you have anything in `~/.config/fontconfig` (or `~/.fontconfig`)? If so, what?
Also, maybe issues #13, #20 and #93 have something helpful.
username_0: I actually do have a fontconfig in my `~/.config`:
```
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
<alias>
<family>serif</family>
<prefer>
<family>Open Sans</family>
<family>Noto Color Emoji</family>
</prefer>
</alias>
<alias>
<family>sans-serif</family>
<prefer>
<family>Open Sans</family>
<family>Noto Color Emoji</family>
</prefer>
</alias>
<alias>
<family>sans</family>
<prefer>
<family>Noto Color Emoji</family>
<family>Open Sans</family>
</prefer>
</alias>
<alias>
<family>monospace</family>
<prefer>
<family>Noto Color Emoji</family>
<family>JetBrainsMono Nerd Font</family>
</prefer>
</alias>
<alias>
<family>mono</family>
<prefer>
<family>Hack Nerd Font</family>
<family>Noto Color Emoji</family>
</prefer>
</alias>
</fontconfig>
```
username_1: What exactly is that file's path? Is it a file called `.config/fontconfig`, or is it a file (with which name?) *in* `.config/fontconfig/`?
Apart from that, it looks good. Can you show emoji in your terminal, text editor or something?
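A couple of quick checks with the standard fontconfig tools can confirm whether the font is installed and actually resolved:
```sh
fc-list | grep -i emoji              # is an emoji font installed at all?
fc-match "Noto Color Emoji"          # does fontconfig resolve the name?
fc-match -s sans-serif | head -n 5   # is it in the sans-serif fallback list?
```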
username_1: Awesome, thanks for reporting back! Hope you enjoy `rofimoji` 🙂 |
yokostan/Leetcode-Solutions | 372818835 | Title: Leetcode #62 Unique Paths
Question:
username_0: A robot is located at the top-left corner of an m x n grid (marked 'Start' in the diagram below).
The robot can only move either down or right at any point in time. The robot is trying to reach the bottom-right corner of the grid (marked 'Finish' in the diagram below).
How many possible unique paths are there?
Above is a 7 x 3 grid. How many possible unique paths are there?
Note: m and n will be at most 100.
Example 1:
Input: m = 3, n = 2
Output: 3
Explanation:
From the top-left corner, there are a total of 3 ways to reach the bottom-right corner:
1. Right -> Right -> Down
2. Right -> Down -> Right
3. Down -> Right -> Right
Example 2:
Input: m = 7, n = 3
Output: 28
Solutions:
```java
class Solution {
    public int uniquePaths(int m, int n) {
        int N = n + m - 2; // total number of steps to take
        int k = m - 1;     // number of those steps that go down
        double res = 1;
        // The total path count is the binomial coefficient
        // C(N, k) = N! / (k!(N - k)!)
        // Cancelling common factors gives
        // C(N, k) = ((N - k + 1) * (N - k + 2) * ... * N) / k!
        for (int i = 1; i <= k; i++)
            res = res * (N - k + i) / i;
        return (int) res;
    }
}
```
Points:
1. This can be reduced to a combination problem: choosing which n - 1 of the m + n - 2 total moves go right gives C(m + n - 2, n - 1) = (m + n - 2)! / ((m - 1)!(n - 1)!). The loop above evaluates this value incrementally, without computing the full factorials.
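For comparison, here is a standard dynamic-programming alternative (a sketch added for illustration, not part of the original answer); it sticks to integer arithmetic instead of the floating-point division above:
```java
class Solution {
    public int uniquePaths(int m, int n) {
        int[] dp = new int[n];
        java.util.Arrays.fill(dp, 1); // one way to reach every cell in the first row
        for (int i = 1; i < m; i++) {
            for (int j = 1; j < n; j++) {
                dp[j] += dp[j - 1]; // paths from above (dp[j]) plus from the left (dp[j-1])
            }
        }
        return dp[n - 1];
    }
}
```
|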
copy/v86 | 588775589 | Title: Cannot boot 9front due to APIC issues
Question:
username_0: ISO from here:
http://9front.org/iso/
It almost boots, but runs into this:
https://code.9front.org/hg/plan9front/file/d416b601a9df/sys/src/9/pc/mp.c#l167
Status: Issue closed
Answers:
username_1: Done: https://username_1.sh/v86/?profile=9front
Feel free to send me bootable disk image if you'd like to add anything.
username_0: Nice, thanks! |
mne-tools/mne-python | 506184267 | Title: Max and min frequency and time of EEG signal
Question:
username_0: I am new to signal processing and I am trying to find the maximum and minimum frequency and time of EDF signals obtained by electrodes. How could I do that with MNE - Python?
I read the file with mne.io.read_raw_edf and tried to filter out the artefacts with the filter method.
Answers:
username_1: please don't use issue tracker to ask usage questions. Use gitter or the mailing list
thanks
Status: Issue closed
|
node-red/node-red-dashboard | 209271051 | Title: ui not working in embedded node red
Question:
username_0: Following
https://nodered.org/docs/embedding
and installing node-red-dashboard inside /home/nol/.nodered/,
the UI is not accessible:
/red works for the editor, but
/api and /ui are not working for the dashboard.
How can I add the dashboard to my express application?
Answers:
username_1: Hi,
not sure that the dashboard has been built to be embedded.
If you look at https://github.com/node-red/node-red-dashboard/blob/master/ui.js#L226
you can see what it is trying to attach to where... I suspect this won't work well when embedded, but happy to discuss changes if it can be made to work sensibly.
username_0: hello,
i'll send a pull request when nested routing is working. i think the current approach of forwarding the app object into the dashboard and modifying it there is not optimal.
Status: Issue closed
username_0: It is working. The dashboard is served under /api/ui/. |
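For reference, a hedged sketch of the embedding shape from the linked guide (settings trimmed to the essentials); with node-red-dashboard installed in the user directory, the dashboard then appears under httpNodeRoot — here /api/ui:
```js
const http = require("http");
const express = require("express");
const RED = require("node-red");

const app = express();
const server = http.createServer(app);

const settings = {
    httpAdminRoot: "/red",  // the editor
    httpNodeRoot: "/api",   // HTTP-in nodes and the dashboard (served at /api/ui)
    userDir: "/home/nol/.nodered/",
    functionGlobalContext: {}
};

RED.init(server, settings);
app.use(settings.httpAdminRoot, RED.httpAdmin);
app.use(settings.httpNodeRoot, RED.httpNode);

server.listen(8000);
RED.start();
```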
Linux74656/SpaceEngineersLinuxPatches | 503747448 | Title: ".NET is out of date" on start
Question:
username_0: I followed the steps in the proper order and nothing seemed to be off with any of them. I didn't closely read the output of step 2, but I did see messages saying that processes were completing properly. I also renamed the video as suggested so I wouldn't have to worry about it. However, when I opened the game it gave me the message "Please update your .NET runtime with this hotfix:\nhttps://support.microsoft.com/kb/3120241\n\nThe game will not run correctly otherwise." and froze. I was able to kill it after a bit, and on relaunch clicking multiple times caused it to close itself, though I don't know whether it just crashed. I then restarted my computer hoping that might fix it, but the game continued to show this error and sometimes close. When it closes, it also opens [this link](https://support.microsoft.com/en-us/help/3120241/hotfix-rollup-3120241-for-the-net-framework-4-6-and-4-6-1-on-windows) in Firefox. I am inexperienced with wine, so I do not know how to apply the hotfix. Any help is appreciated.

Answers:
username_1: Try deleting your prefix and running this
`WINEPREFIX="INSERT/DIRECTORY/TO/SPACEENGINEERS/pfx" winetricks --force -q dotnet48 vcrun2015 faudio d3dcompiler_47`
Make sure you replace INSERT/DIRECTORY/TO/SPACEENGINEERS/pfx with the location of your Space Engineers prefix. If you installed space engineers to your default library folder, it is usually found at /home/YOURUSERNAME/.steam/steam/steamapps/compatdata/244850/pfx/
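If you're unsure where the prefix ended up, one way to search for it (244850 is Space Engineers' Steam app ID; adjust the path if your library lives elsewhere):
```sh
find ~/.steam/steam/steamapps/compatdata -maxdepth 2 -type d -name pfx -path "*244850*"
```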
Status: Issue closed
username_0: That worked a charm!
I would however recommend clarifying somewhere in the instructions that the prefix is in compatdata/244850/pfx and not common/SpaceEngineers/pfx which is what I had originally tried and had seemed to have worked for me.
username_2: Hey,
I have the same problem, but "WINEPREFIX="INSERT/DIRECTORY/TO/SPACEENGINEERS/pfx" winetricks --force -q dotnet48 vcrun2015 faudio d3dcompiler_47" did not work. Is there anything else I could try?
username_1: This usually means the prefix is not installed to the correct directory. You need to make sure you change ` "WINEPREFIX="INSERT/DIRECTORY/TO/SPACEENGINEERS/pfx" ` to be your Space Engineers prefix directory. You can try using the script I wrote; it will try to find your prefix location for you. If it cannot find it, you will need to enter the steamapps folder location of where the game is installed. For example, if you installed the game to an external HDD, the game directory would be something like:
/mnt/SSD/Steamfolder/steamapps/common/SpaceEngineers/
When the script asks you would enter:
/mnt/SSD/Steamfolder/steamapps/.
NOTE: Make sure you use your actual directory location, above is just an example.
username_2: Yes
this was my problem.
Thanks |
mlpack/ensmallen | 585796314 | Title: PrimalDualSolver states arma::mat deprecated type to be removed in 2.10.0, but still present in 2.11.5
Question:
username_0: #### Problem location
https://github.com/mlpack/ensmallen/blob/2.11.5/include/ensmallen_bits/sdp/primal_dual.hpp
#### Description of problem
The comments below indicate the interface should have been removed by now. (2.11.5 > 2.10.0)
```
/**
* PrimalDualSolver is a primal dual interior point solver for semidefinite
* programs.
*
* PrimalDualSolver can optimize semidefinite programs. For more details, see the
* documentation on function types included with this distribution or on the
* ensmallen website.
*
* @tparam DeprecatedSDPType Type of SDP to solve. This parameter is deprecated
* and will be removed in ensmallen 2.10.0.
*/
template<typename DeprecatedSDPType = SDP<arma::mat>>
```
The documentation should be updated to reflect when the deprecated interface will actually be removed.
Answers:
username_1: It looks like we should remove it then.
username_0: It looks like removal will break the sdp unit tests. Maybe add `-Wdeprecated-declarations` to catch this type of thing when building tests?
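For the build-flag idea, a hypothetical CMake one-liner (the test target name `ensmallen_tests` is assumed, not taken from the repo):
```cmake
target_compile_options(ensmallen_tests PRIVATE -Wdeprecated-declarations)
```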
username_2: Ack, I think that we failed on this one. Should have opened an issue and tacked it onto a 2.10.0 milestone or something.
Anyway, I think this is a good issue for someone new who's looking to contribute. But, this will break reverse compatibility, so we'll have to release 2.12.0 as a change. Basically the task is to remove the deprecated template parameter to `PrimalDualSolver`, remove the `ens_deprecated` methods, and update the tests in `tests/` that use it, then ensure that the documentation is still up to date (and all the tests work :)).
username_0: I'll give it a whack tomorrow or Wednesday. I'm looking to use the interface anyways. :)
Status: Issue closed
|
Azelphur/SourceIRC | 892058352 | Title: L4D2 Team Chat is displaying the number instead of team name
Question:
username_0: sourceirc-relayall.sp#L114
I think (Survivor) or (Infected) should be showing; instead it shows up as 04Playername or 15Playername
Answers:
username_1: Where does it appear like that. IRC or in game?
username_0: both in HexChat and Twitch Chat (Google Chrome) |
bodono/scs-python | 724136728 | Title: module '_scs' has no attribute 'sizeof_int'
Question:
username_0: I installed SCS and cvxpy in my Python 3.6 environment. I can see SCS 2.1.2 in the list. I tried
- python test/solve_random_cone_prob.py
- import scs
and in both cases I got an error:
```
File "/Users/../anaconda3/envs/py36/lib/python3.6/site-packages/scs/__init__.py", line 8, in <module>
    __sizeof_int__ = _scs_direct.sizeof_int()
AttributeError: module '_scs' has no attribute 'sizeof_int'
```
Any suggestions?
Answers:
username_1: What do you get when you run `import scs; print(scs.__version__)` ?
username_0: Hi
```
Traceback (most recent call last):
File "<ipython-input-88-97fdeadba5ba>", line 1, in <module>
import scs
File "/Users/afsaneh/anaconda3/envs/py36/lib/python3.6/site-packages/scs/__init__.py", line 8, in <module>
__sizeof_int__ = _scs_direct.sizeof_int()
AttributeError: module '_scs' has no attribute 'sizeof_int'
```
This is what I get when I import SCS
and
```
Traceback (most recent call last):
File "<ipython-input-90-cf800631b0a3>", line 1, in <module>
print(scs.__version__)
NameError: name 'scs' is not defined
```
and when I `conda list`, i can see `scs 2.1.2 pypi_0 pypi` in the list on the same env
Thank you
username_1: What happens when you run:
```
import _scs_direct; print(_scs_direct.version())
```
?
I've never seen or heard of anything like this before. The only thing I can think of is that the python version of SCS (which is just a wrapper) is different from the installed binary version. I would suggest doing a deep clean of SCS (deleting everything related to SCS on your machine) and trying to reinstall from scratch.
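A hedged sketch of that deep clean for pip-installed copies (conda users would use `conda remove scs` first instead):
```sh
pip uninstall -y scs
pip uninstall -y scs            # run again: stacked older installs can shadow newer ones
pip install --no-cache-dir scs  # reinstall without reusing a cached wheel
```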
username_0: Interesting
```
print(_scs_direct.version())
1.2.6
```
which is different from what I see in the list..
I will try to reinstall. Thank you |
rust-lang/rust | 237499590 | Title: request: report the number of compile errors
Question:
username_0: When I begin to refactor some code, a single change results in 50 or 500 errors, and I'd like to know that number. Right now, I just know that some errors happened, but I cannot tell the difference between 50 and 500 errors because the compiler just says `error: aborting due to previous error(s)` without saying how many.
When I begin to fix errors, I cannot tell how many errors remain or how much progress I've made, because the count is not reported. In other languages, I use the number of compile errors all the time, even when I'm not refactoring. It's important to me.
Would we be open to including this number back into Rust, to say `aborting due to 52 error(s)` instead of some faceless amount?
---
- it appears rust previously did report the number of compilation errors, but it could be inaccurate (#33525)
- pr [#42150](https://github.com/rust-lang/rust/pull/42150/files) "fixed" this problem by completely stripping out the number of reported errors, replacing it with a generic message that just says "errors exist"
- the machinery to count errors still remains, and that count is propagated all the way up to the point where we abort with a message. the only difference is that the abort message now ignores it.
Answers:
username_1: Nominating for compiler team: we should make a decision here and stick to it, and I'm inclined to say that restoring the old functionality also leads to reports of incorrect numbering. If we can fix the numbering (easily) then I'm all for it though, but last I looked this wasn't an easy task.
username_2: If completely fixing the numbers is non-trivial, is it feasible/acceptable to report "There were **at least** 10 errors"? Or is the count so buggy we don't even know a minimum?
username_0: according to https://github.com/rust-lang/rust/issues/33525, the current implementation returns the number of errors in the last pass only.
so we do at least have a minimum!
username_1: IIRC, I saw an issue recently about the returned value being 3 where it should've been 1, so I don't think that's true. It's possible that was due to old code though.
username_3: triage: P-medium
My feeling is that if we can get a correct count, that would be good. This doesn't seem too hard in principle, but I guess there are... obstacles.
Status: Issue closed
|