| repo_name (string, 4-136 chars) | issue_id (string, 5-10 chars) | text (string, 37-4.84M chars) |
|---|---|---|
vaadin/vaadin-grid | 715644174 | Title: "Using data type specific filtering" demo is broken
Question:
username_0: In https://vaadin.com/components/vaadin-grid/java-examples/filtering:
* If I choose a marital status, all of the items are removed (regardless of which option I choose)
* The "Filter by birth date" dropdown is not good UX, since that means I'd have scroll many many years before I can find a date with any users younger than the date
* The "Birth Date" column is empty |
mapbox/mapbox-maps-android | 863820235 | Title: Anchor point is not restored after gesture interaction
Question:
username_0: ## Environment
- Maps SDK Version: 10.0.0-beta.17
## Observed behavior and steps to reproduce
When the camera animations plugin is initialized and the anchor is `null`, playing around with gestures (zooming in/out, rotating) often leaves the anchor point set to a certain value; `null` is not restored correctly in all cases.
## Expected behavior
After gesture interaction is finished, the anchor point is restored back to `null`.
## Notes / preliminary analysis
This is very easily reproducible by logging out the `CameraAnimationsPlugin#anchor` value. You would quickly notice that when you stop executing a gesture, the anchor point would not be reset back to `null`.
This impacts all the other camera transitions that the app is trying to run afterward, which can result in very unexpected camera position changes.
For example:
https://user-images.githubusercontent.com/16925074/115553998-50e31280-a2ae-11eb-93e5-a3db66197c65.mp4
In the example above, if the anchor point was correctly reset to `null` by the gesture recognizer, the transition would correctly zoom in on top of the puck.
The same unexpected effect will happen for all user-driven transitions that modify the zoom level or bearing of the camera.
The workaround is to explicitly set the anchor to `null` before starting any transition.
cc @mapbox/maps-android<issue_closed>
Status: Issue closed |
vim-jp/vim-vimlparser | 516993885 | Title: Literal carriage returns break parser under Python 3.x
Question:
username_0: This chunk from my `.vimrc` causes `python3 py/vimlparser.py ~/.vimrc` to error out on my system.
Since this didn't show up in my initial search for VimL parsers for Python, I started work on a VimL parser of my own and I know why it happens.
The literal `<CR>`s in this piece of my `.vimrc` (the `^M`s)...

...get converted to `\n` by the universal newline support in Python 3.x's text mode. (I assume because a bare `\r` is the old Classic MacOS line terminator and Python's universal newlines mode is converting all potential newlines rather than limiting itself to whichever kind of newline it encounters first like Vim itself does.)
The quick hack I came up with to make the parser robust was to open the file in binary mode and then manually decode and split lines:
```python
with open(path, 'rb') as fobj:
    process_group(fobj.read().decode('utf8').split('\n'))
```
Answers:
username_1: Yeah, I think we should not use universal newlines.
Can you tell where in vimlparser this happens?
username_0: Sorry for the delay. The last couple of days were distracting.
I think I know where the problem is, but I'll need to find a moment to test my hypothesis before I can give you a definitive answer.
Also, I can't say for certain without reading the Vim source code to try to match whatever it does, but I'm guessing that the proper solution for detecting newlines would be to check for `\n`, `\r\n`, and `\r` and assume whichever occurs first is what the file should be split on.
(I'll have to try generating a test `.vimrc` to see if Vim's parser breaks if I include a `\r` on the very first line of an otherwise `\n` file.)
username_0: I didn't have time to check vimlparser yet, but I did find a moment to look up Vim's documentation on the line-ending detection algorithm that we probably want to replicate:
From `:h 'fileformats'`:
```
- When more than one name is present, separated by commas, automatic
<EOL> detection will be done when reading a file. When starting to
edit a file, a check is done for the <EOL>:
1. If all lines end in <CR><NL>, and 'fileformats' includes "dos",
'fileformat' is set to "dos".
2. If a <NL> is found and 'fileformats' includes "unix", 'fileformat'
is set to "unix". Note that when a <NL> is found without a
preceding <CR>, "unix" is preferred over "dos".
3. If 'fileformat' has not yet been set, and if a <CR> is found, and
if 'fileformats' includes "mac", 'fileformat' is set to "mac".
This means that "mac" is only chosen when:
"unix" is not present or no <NL> is found in the file, and
"dos" is not present or no <CR><NL> is found in the file.
Except: if "unix" was chosen, but there is a <CR> before
the first <NL>, and there appear to be more <CR>s than <NL>s in
the first few lines, "mac" is used.
4. If 'fileformat' is still not set, the first name from
'fileformats' is used.
```
In other words, assuming that `unix`, `dos`, and `mac` are all listed for detection, it behaves like this:
1. If a `\n` is found that isn't preceded by `\r`, assume UNIX line endings.
2. Else, if no lone `\n` was found, but at least one `\r\n` was found, assume DOS line endings.
3. Else, if neither `\n` nor `\r\n` were found, but `\r` was found, assume Classic MacOS line endings.
4. Else, use whatever type of line-endings was listed first in `fileencodings`. (Defaults to `dos,unix` for native Windows builds, `unix,dos` for everything else including Cygwin.)
For step 4, the best way to eliminate surprises would probably be to use `os.name == 'nt'` to decide whether to split on `\r\n` or `\n`. That'll perfectly match Vim's behaviour in every situation except "single-line VimL file being loaded by a Vim with a non-default first entry in `fileformats`."
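(Purely as an illustration, a rough Python sketch of that detection order might look like the following; the helper name is made up, the "more `<CR>`s than `<NL>`s" exception is omitted, and this is not code from vimlparser.)
```python
import os

def detect_eol(content, prefer_dos=(os.name == 'nt')):
    # Step 1: a <NL> with no preceding <CR> means UNIX line endings.
    if '\n' in content.replace('\r\n', ''):
        return '\n'
    # Step 2: otherwise, any <CR><NL> means DOS line endings.
    if '\r\n' in content:
        return '\r\n'
    # Step 3: otherwise, a lone <CR> means Classic MacOS line endings.
    if '\r' in content:
        return '\r'
    # Step 4: fall back to the platform default (first 'fileformats' entry).
    return '\r\n' if prefer_dos else '\n'
```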
username_0: That said, step 4 may not even be necessary because, a little further down, it looks like it's saying that it uses a simplified form of that algorithm for VimL that's sourced or in vimrc:
```
For systems with a Dos-like <EOL> (<CR><NL>), when reading files that
are ":source"ed and for vimrc files, automatic <EOL> detection may be
done:
- When 'fileformats' is empty, there is no automatic detection. Dos
format will be used.
- When 'fileformats' is set to one or more names, automatic detection
is done. This is based on the first <NL> in the file: If there is a
<CR> in front of it, Dos format is used, otherwise Unix format is
used.
```
(Which, now that I think about it, would make sense. I'm on Linux and had to manually change the line endings from DOS to Unix on bwHomeEndAdv to get it to load properly.)
Sorry about not catching that before I made the first post. I didn't sleep well and I'm just about to go back to bed.
username_0: Sorry again for the delay.
The problem is this function:
```python
def viml_readfile(path):
    lines = []
    f = open(path)
    for line in f.readlines():
        lines.append(line.rstrip("\r\n"))
    f.close()
    return lines
```
...and it works if experimentally changed to this:
```python
def viml_readfile(path):
    lines = []
    # Replicate Vim's algorithm for identifying DOS vs. UNIX line endings
    with open(path, 'rb') as f:
        content = f.read().decode('utf-8')
    first_n = content.index('\n')
    if first_n > 0 and content.index('\r') == (first_n - 1):
        raw_lines = content.split('\r\n')
    else:
        raw_lines = content.split('\n')
    for line in raw_lines:
        lines.append(line.rstrip("\r\n"))
    return lines
```
...though, to be honest, that's still broken and was always broken, because there's another bug in the Python version of the parser that I noticed while writing that. My unconditional `.decode('utf-8')` replicates Python 3's behaviour in text mode and, if you try to open a non-UTF8 file in Python's text mode, you'll get this error:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe9 in position 3: invalid continuation byte
The proper solution to match Vim is tricky because, again, it depends on stuff you can reconfigure external to the script, but it boils down to "Parse once using the system's default encoding as specified by `$LANG`, falling back to `latin1`, and, if a `scriptencoding` statement is found, then re-parse as whatever encoding it declares." (e.g. `scriptencoding utf-8` in my case)
My best guess at an algorithm that doesn't require a big table of regional encoding would be something like this:
1. If `os.environ.get('LANG', '').endswith('.utf8')`, then attempt to parse as UTF-8.
2. If that fails, parse as `latin1`, which is both Vim's default if `LANG` doesn't specify otherwise and infallible for our purpose because there are no invalid bit patterns and what we need for a proper re-parse is part of the ASCII subset that all codepages share.
3. If the parse turned up a `scriptencoding` statement, re-parse as the encoding it declares.
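(Again just as a sketch, the encoding fallback described above might look roughly like this; the function is hypothetical and not part of vimlparser, and real `scriptencoding` values may need mapping to Python codec names.)
```python
import os
import re

def read_viml_source(path):
    raw = open(path, 'rb').read()
    # 1. Try UTF-8 only if $LANG suggests it, otherwise go straight to latin1.
    lang = os.environ.get('LANG', '').lower()
    try:
        text = raw.decode('utf-8' if lang.endswith(('.utf8', '.utf-8')) else 'latin1')
    except UnicodeDecodeError:
        # 2. latin1 can decode any byte sequence, so this never fails.
        text = raw.decode('latin1')
    # 3. If the script declares its own encoding, re-decode with it.
    match = re.search(r'^\s*scriptencoding\s+(\S+)', text, re.MULTILINE)
    if match:
        text = raw.decode(match.group(1), errors='replace')
    return text
```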
username_0: The closest trivial hack I can think of would be to replace `content = f.read().decode('utf-8')` in my changed version with:
content = f.read()
try:
    content = content.decode('utf-8')
except UnicodeDecodeError:
    content = content.decode('latin1')
Since all structural elements of VimL are 7-bit ASCII, that'll get you something that parses UTF-8 correctly and, if that fails, you instead get something that'll produce a correct AST, but any tokens containing non-ASCII characters will be assumed to be latin1, even if that makes them gibberish.
username_2: Does the `newline` argument of [open](https://docs.python.org/3.8/library/functions.html#open) solve the problem?
```python
with open('nl', 'bw') as f:
    f.write(b'newline\ncarriage\rnewline carriage\r\n')

with open('nl', 'r') as f:
    print(f.readlines())
# -> ['newline\n', 'carriage\n', 'newline carriage\n']

with open('nl', 'r', newline='') as f:
    print(f.readlines())
# -> ['newline\n', 'carriage\r', 'newline carriage\r\n']
```
username_0: Sorry for the delayed response. I'll try to get you an answer soon.
username_0: No. Substituting `open(path, 'r', newline='')` for `open(path)` has no apparent effect.
To avoid this back-and-forth, here's a test file containing the relevant two-line bit of my vimrc, plus a few extra lines of context to help make sure things are working properly:
[vimrc.txt](https://github.com/vim-jp/vim-vimlparser/files/4173826/vimrc.txt)
Vim parses it properly, as does vimlparser when run with Python 2.x, or when my proposed "Replicate Vim's algorithm for identifying DOS vs. UNIX line endings in sourced scripts and .vimrc files" modifications are run with Python 3.x.
Otherwise, vimlparser with Python 3.x will produce the somewhat amusingly confusing failure mode of appearing to output nothing at all because the error message doesn't use `repr()` when showing the problem line and the portion after the `\r` contains a literal `^O` which the terminal dutifully interprets.
If you use `less` to force the non-printables to not be interpreted by the terminal, you see this:
vimlparser: E492: Not an editor command: JA TODO:: line 4 col 1
For that reason, I'd also suggest changing the `%s` in this line to a `%r`:
```python
raise VimLParserException(Err(viml_printf("E492: Not an editor command: %s", self.reader.peekline()), self.ea.cmdpos))
```
...that removes the need for `less` to see the message and makes it look like this:
vimlparser: E492: Not an editor command: '\x1bkJA TODO:': line 4 col 1
However, the successful output still looks truncated unless you run it through something like `less` because it doesn't use `repr()` on the `^O` either. |
JTFouquier/ghost-tree | 65129698 | Title: Click makes options wrap by single characters; options are difficult to read
Question:
username_0: Click works great, but it still wraps the command options by single characters. Sometimes there are 4 to 5 options in ghost-tree, making reading options very difficult. I’ve asked Click how the bug fix is going, and they have a workaround but haven't accepted it as a PR. Can ghost-tree be packaged with a modified version of Click (not released by Click)?
Click Issue:
https://github.com/mitsuhiko/click/issues/231 (they gave me directions on how to implement the PR, but I'm not sure it's worth trying due to future packaging of ghost-tree)
Click PR:
https://github.com/mitsuhiko/click/pull/240
According to @gregcaporaso via previous email (3/26/15):
"it's better to just leave as-is then try to build a modified version of click"
Example:

Answers:
username_1: Since someone is working on a fix in click, it might be worth waiting to see if this gets resolved before an initial ghost-tree release. If it isn't fixed by then, @ebolyen or I can help with packaging a modified version of click with ghost-tree (should be possible because click is BSD-licensed).
username_0: I think they were considering the PR for a while. Looks like they merged it last night. So does that mean that the next Click release will contain the fix? I suppose it should if they pulled it. Thanks!
username_1: Yep, the next click release should contain the fix. When ghost-tree is ready for an initial release, let's check to see if there's been a click release. If not, we can package the latest dev version of click with ghost-tree.
username_0: @ebolyen the Click fixes are referenced above. I don't think they made it into the new release of Click 4.0 (http://click.pocoo.org/4/changelog/#version-4-0).
username_1: Looks like they fixed it in click 4.0.
With click 3.3:
```
$ ghost-tree scaffold hybrid-tree --help
Usage: ghost-tree scaffold hybrid-tree [OPTIONS] TIPS_OTU_TABLE
T
I
P
S
_
T
A
X
O
N
O
M
Y
_
F
I
L
E
T
I
P
S
_
S
E
Q
U
E
N
C
E
_
F
I
L
E
B
A
C
K
B
O
N
E
_
A
L
I
G
N
M
E
N
[Truncated]
E
Options:
--help Show this message and exit.
```
With click 4.0:
```
$ ghost-tree scaffold hybrid-tree --help
Usage: ghost-tree scaffold hybrid-tree [OPTIONS] TIPS_OTU_TABLE
TIPS_TAXONOMY_FILE TIPS_SEQUENCE_FILE
BACKBONE_ALIGNMENT_FILE
GHOST_TREE_OUTPUT_FILE
Options:
--help Show this message and exit.
```
I'll update setup.py to require click >= 4.0 in my open PR (#18) since I already have other fixes to ghost-tree's dependency versions.
Status: Issue closed
|
michael-spengler/wwi18dsa-semester-6 | 897883584 | Title: Sentiment Analysis & Hot News Platform...
Question:
username_0: https://github.com/username_0/wwi18dsa-semester-6/issues/7#issuecomment-845819677
Answers:
username_0: Related Art:
1. https://github.com/username_0/price-predictor-wwi19dsa
2. https://github.com/DHBWMannheim/MachineLearning
3. https://github.com/DHBWMannheim/ml-server
4. https://www.youtube.com/watch?v=qCB8MZ-W1Ig&t=126s
username_0: if you want to check out Slack Channel Sentiments, you might consider this:
https://github.com/username_0/slack-channel-sentiment-analyzer
username_1: Link to our GitHub:
https://github.com/hannahweber244/CryptoSentiment
username_2: Create a video on how to set up the credentials |
xiaolyuh/layering-cache | 719368705 | Title: Question about forceRefresh not clearing the first-level cache
Question:
username_0: Hello, I have a couple of questions:
1. With the preloadTime parameter set and forceRefresh=true specified, an asynchronous thread is used to refresh the second-level cache. However, I noticed that when the second-level cache is updated, no message seems to be pushed to update the first-level cache. Is this design a performance consideration?
2. Building on the scenario above, some content has strict expiration requirements (for example, a playback URL is unusable once it expires). If the situation above happens right at the boundary (the second-level cache is about to expire and the first-level cache has just been rebuilt), we then have to wait a full first-level cache TTL before getting the latest second-level cache value, so the playback URLs returned during that window are all unusable. Setting a small enough first-level cache TTL works around this, but then the first-level cache is not used to its full potential.
Status: Issue closed
Answers:
username_1: 1. Because the data fetched on each refresh does not necessarily change, a message is not pushed to update the first-level cache on every refresh. We can look at optimizing this in a later version.
2. This can only be avoided by setting accurate expiration and refresh times.
username_1: I have optimized the first point in version 3.1.6: if the data turns out to have changed while refreshing the second-level cache, the first-level cache is updated as well.
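(Purely as an illustration of that 3.1.6 change, i.e. pushing a first-level cache update only when the refreshed value actually changed, here is a rough Python sketch; the names are invented and this is not the library's actual Java code.)
```python
def refresh_second_level_cache(key, load_value, l2_cache, publish_l1_update):
    """Refresh the L2 cache and only notify L1 caches when the value actually changed."""
    old_value = l2_cache.get(key)
    new_value = load_value(key)      # re-run the underlying loader
    l2_cache.put(key, new_value)     # refresh the second-level (e.g. Redis) cache
    if new_value != old_value:
        # broadcast so every node updates/evicts its first-level (in-process) cache
        publish_l1_update(key, new_value)
```
|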
mongodb-partners/mongo-rocks | 285564278 | Title: "File too large" error on most write ops
Question:
username_0:
```
{
"ok" : 0,
"errmsg" : "IO error: While appending to file: /data/mongodb/db/070269.sst: File too large",
"code" : 1,
"codeName" : "InternalError"
}
```
The same error can be seen in background compaction log:
```
2018/01/03-01:37:32.183489 7f85302e8700 [db/compaction_job.cc:1437] [default] [JOB 3] Compacting 4@0 + 8@3 files to L3, score 1.00
2018/01/03-01:37:32.183501 7f85302e8700 [db/compaction_job.cc:1441] [default] Compaction start summary: Base version 2 Base level 0, inputs: [70255(16MB) 70253(40MB) 70251(53MB) 70249(48MB)], [70240(64MB) 70241(64MB) 70242(64MB) 70243(64MB) 70244(64MB) 70245(64MB) 70246(66MB) 70247(22MB)]
2018/01/03-01:37:32.183531 7f85302e8700 EVENT_LOG_v1 {"time_micros": 1514932652183514, "job": 3, "event": "compaction_started", "files_L0": [70255, 70253, 70251, 70249], "files_L3": [70240, 70241, 70242, 70243, 70244, 70245, 70246, 70247], "score": 1, "input_data_size": 664650099}
2018/01/03-01:37:55.800970 7f85302e8700 [WARN] [db/db_impl_compaction_flush.cc:1653] Compaction error: IO error: While appending to file: /data/mongodb/db/070269.sst: File too large
2018/01/03-01:37:55.800993 7f85302e8700 (Original Log Time 2018/01/03-01:37:55.800891) [db/compaction_job.cc:621] [default] compacted to: base level 3 max bytes base 536870912 files[4 0 0 8 23 183 1375] max score 1.00, MB/sec: 28.1 rd, 2.8 wr, level 3, files in(4, 8) out(1) MB in(159.3, 474.6) out(63.1), read-write-amplify(4.4) write-amplify(0.4) IO error: While appending to file: /data/mongodb/db/070269.sst: File too large, records in: 6606920, records dropped: 13
2018/01/03-01:37:55.801000 7f85302e8700 (Original Log Time 2018/01/03-01:37:55.800949) EVENT_LOG_v1 {"time_micros": 1514932675800921, "job": 3, "event": "compaction_finished", "compaction_time_micros": 23617298, "output_level": 3, "num_output_files": 1, "total_output_size": 66180284, "num_input_records": 6606920, "num_output_records": 6606907, "num_subcompactions": 1, "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [4, 0, 0, 8, 23, 183, 1375]}
2018/01/03-01:37:55.801007 7f85302e8700 [ERROR] [db/db_impl_compaction_flush.cc:1333] Waiting after background compaction error: IO error: While appending to file: /data/mongodb/db/070269.sst: File too large, Accumulated background error counts: 1
2018/01/03-01:37:56.813827 7f85302e8700 EVENT_LOG_v1 {"time_micros": 1514932676813817, "job": 3, "event": "table_file_deletion", "file_number": 70269}
```
The filesystem is ext4, `target_file_size_base`=67108864, rocksdb directory size is 107 GB and there are 53 GB free space in the filesystem. I don't quite understand what is happening here.
Answers:
username_1: I've never seen this error before, very interesting :). What's the size of /data/mongodb/db/070269.sst file? Does it even exist?
Errors in write operations are expected -- when an error happens in the background thread RocksDB immediately switches to read-only mode, so all write operations fail. To fix that you'll need to restart the database.
Does the same error happen after restart?
username_1: Also cross-posted to RocksDB github: https://github.com/facebook/rocksdb/issues/3321
username_0: @username_1 The file does not exist. It seems that the compaction process is started, it writes several files, can't write this one, and immediately deletes what it has just written (see the last log line):
```
2018/01/03-01:37:56.813827 7f85302e8700 EVENT_LOG_v1 {"time_micros": 1514932676813817, "job": 3, "event": "table_file_deletion", "file_number": 70269}
```
The same happens after restart, after freeing up more filesystem space, and after replacing this database snapshot with a newer one.
username_1: Is it possible that `ulimit` sets the maximum file size on your system? What's the output of `ulimit -f` on the same shell that's launching mongorocks?
Status: Issue closed
username_0: @username_1 Wow! Thanks a lot! I had a typo in `ulimit -n 64000` - it has become `ulimit 64000` which sets `-f` by default. Sorry for wasting your time. |
Azure/azure-sdk-for-java | 870729804 | Title: Key Vault JCA Readme issue
Question:
username_0: 1.
Section [Link](https://github.com/Azure/azure-sdk-for-java/tree/azure-security-keyvault-jca_1.0.0-beta.6/sdk/keyvault/azure-security-keyvault-jca#azure-key-vault-jca-client-library-for-java):

Suggestion:
For 1:
Change link to https://azure.github.io/azure-sdk-for-java/keyvault.html#azure-security-keyvault-jca
For 2:
Add link for `Samples` https://github.com/Azure/azure-sdk-for-java/tree/azure-security-keyvault-jca_1.0.0-beta.6/sdk/keyvault/azure-security-keyvault-jca/src/samples/java/com/azure/security/keyvault/jca
2.
Section [Link](https://github.com/Azure/azure-sdk-for-java/tree/azure-security-keyvault-jca_1.0.0-beta.6/sdk/keyvault/azure-security-keyvault-jca#additional-documentation):

Suggestion:
Add link for `API reference documentation` https://azure.github.io/azure-sdk-for-java/keyvault.html#azure-security-keyvault-jca
@jongio, for notification
Answers:
username_1: Hi @username_2, please take a look at this.
Status: Issue closed
|
MicrosoftDocs/visualstudio-docs | 341291038 | Title: Typo in documentation for the Quick Shortcut
Question:
username_0: "... the Quick MonoBehavior wizard puts them right at your fingertips. Press CTRL+ALT+Q." should be CRTL + SHIFT + Q
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: f27057a9-85be-f96c-29aa-0fa60b488f35
* Version Independent ID: cd80ccea-8ecd-d21f-08b8-af0a2ada7cf3
* Content: [Overview of Visual Studio Tools for Unity - Visual Studio](https://docs.microsoft.com/en-us/visualstudio/cross-platform/overview-of-visual-studio-tools-for-unity)
* Content Source: [docs/cross-platform/overview-of-visual-studio-tools-for-unity.md](https://github.com/MicrosoftDocs/visualstudio-docs/blob/master/docs/cross-platform/overview-of-visual-studio-tools-for-unity.md)
* Product: **visual-studio-dev15**
* GitHub Login: @username_2
* Microsoft Alias: **crdun**
Answers:
username_1: Hey @username_0. This feature has actually been removed in the latest version of Visual Studio Tools for Unity, now that we have IntelliSense support for the Unity API. We're working on updating the documentation to reflect this. Thanks for the feedback!
username_1: @username_0 We've just finished a fairly large overhaul of the VSTU docs, including addressing this issue. Take a look at your convenience! https://docs.microsoft.com/en-us/visualstudio/cross-platform/visual-studio-tools-for-unity
Status: Issue closed
|
BSData/The-9th-Age | 186208344 | Title: Repos not on the main BS website
Question:
username_0: So I went to test the new files on my phone and it asked me to update to the new version of BattleScribe, so I did. But when I went to download the new repo, the page where it normally is is empty.
http://battlescribedata.appspot.com/#/repos
Are you guys seeing this too? Is there a workaround?
Answers:
username_1: Probably because I did a release around then, takes a while to update.
Status: Issue closed
|
yuval-alaluf/SAM | 927077984 | Title: Latent vector
Question:
username_0: I am trying to execute style_mixing.py but I am getting no output at all. Here is the command:
python scripts/style_mixing.py \
--exp_dir=to \
--checkpoint_path=trained/sam_ffhq_aging.pt \
--data_path=from \
--test_batch_size=4 \
--test_workers=4 \
--latent_mask=8,9 \
--target_age=50
What does `--latent_mask=8,9` mean?
What is 8,9 in the latent mask? There are no files with these names in the directory.
How can I get the latent mask files from the cloud?
Answers:
username_1: The latent mask refers to which layers of StyleGAN we will perform the style mixing over. We perform mixing over layers 8 and 9 in the above example.
There are no files you need to pass to this script. This is just a comma-separated list of numbers.
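(For illustration only: a comma-separated flag like that is typically turned into a list of layer indices along these lines; this is an assumption about the script, not its actual code.)
```python
latent_mask_arg = "8,9"  # the value passed via --latent_mask
latent_mask = [int(layer) for layer in latent_mask_arg.split(",")]
print(latent_mask)  # [8, 9] -> the StyleGAN layers used for style mixing
```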
username_0: The problem is still there. In reference_guided_inference.py we pass a parameter (`--ref_images_paths_file=/path/to/ref_list.txt`) to specify the style to follow, but in style_mixing.py there is no such parameter, and I get a blank response.
How can I resolve it?
username_1: The style mixing script performs style mixing with randomly samples latents rather than using reference images.
Can you please clarify what you mean by a blank response?
username_0: Could you please send me the complete command for style_mixing?
Actually, I am a little confused: in this command we are not giving any path of samples to follow for the style mixing, which is why we are not getting any results. I am using the following command:
python scripts/style_mixing.py
--exp_dir=to
--checkpoint_path=trained/sam_ffhq_aging.pt
--data_path=from
--test_batch_size=4
--test_workers=4
--latent_mask=8,9
--target_age=50
Status: Issue closed
|
LecksC/ASOR_Bugs | 218045598 | Title: AE Menu Forced Respawn Option
Question:
username_0: Add a forced respawn command to the AE Menu
Related function: https://github.com/username_1/ASOR_Bugs/issues/44
Answers:
username_0: Add a Update Curator to AE Menu
username_1: Might be able to make 'Update Curator' an ace zeus action instead? (I think those exist)
username_0: added to AE menu, not tested yet forgot to check it last op.
Status: Issue closed
username_0: double up - Done |
zkvalidator/mina-vrf-rs | 970381124 | Title: only got invalid slots?
Question:
username_0: I only got invalid slots running the commands below:
```
export CUR_EPOCH=10
cargo run --release -- batch-generate-witness --pub <KEY> --epoch $CUR_EPOCH > requests
cat requests | mina advanced vrf batch-generate-witness --privkey-path /home/ubuntu/keys/my-wallet | grep -v CODA_PRIVKEY_PASS > witnesses
cat witnesses | cargo run --release -- batch-patch-witness --pub <KEY> --epoch $CUR_EPOCH > patches
cat patches | mina advanced vrf batch-check-witness > check
sed -i 's/}Using password from environment variable CODA_PRIVKEY_PASS/}/g' check
cat check | grep -v CODA_PRIVKEY_PASS > check_new
cat check_new | cargo run --release -- batch-check-witness --pub <KEY> --epoch $CUR_EPOCH
```
and the outputs looks like:
```
DEBUG 2021-08-13T12:56:43Z: reqwest::connect: starting new connection: http://localhost:3085/
DEBUG 2021-08-13T12:56:43Z: reqwest::async_impl::client: response '200 OK' for http://localhost:3085/graphql
DEBUG 2021-08-13T12:56:43Z: reqwest::connect: starting new connection: https://raw.githubusercontent.com/
DEBUG 2021-08-13T12:56:43Z: reqwest::async_impl::client: response '200 OK' for https://raw.githubusercontent.com/zkvalidator/mina-graphql-rs/main/data/epochs/jxhjiLBeMR7pgtV8ogcJvqXdr6asoNrC3g6hoUzEDLBSnZoxUDJ.json
ERROR 2021-08-13T12:56:43Z: mina_vrf: invalid slots: [71400, ....., 78539]
```
Answers:
username_0: The command `cargo run --release -- batch-generate-witness --pub B62qp --epoch $CUR_EPOCH > requests` does not save the output to the `requests` file; instead it prints to the screen. Not sure why...
username_1: Related to #25
Status: Issue closed
username_0: Thanks Gareth, I'll close the issue now. |
sascha245/vuex-context | 425256876 | Title: Action context: ts complains cannot invoke expression lacking call signature
Question:
username_0: I'm currently experiencing the following error when implementing your sample store. Specifically, in `Counter.ts` ts complains about the `incrementAsync` action and the call to `ctx.commit` inside:
```
[ts] Cannot invoke an expression whose type lacks a call signature. Type '((() => void) & ((payload: undefined, options?: CommitOptions | undefined) => void)) | ((payload:...' has no compatible call signatures.
(property) increment: ((() => void) & ((payload: undefined, options?: CommitOptions | undefined) => void)) | ((payload: any, options?: CommitOptions | undefined) => void)
```
Action:
``` typescript
export const actions = {
async incrementAsync(context) {
const ctx = Counter.getInstance(context);
ctx.commit.increment();
}
};
```
Thoughts?
Answers:
username_1: Hi @username_0,
Sorry for the delay, I didn't receive / see any notification about your post.
Anyway, this error seems to appear only when using optional parameters, like with this one:
```ts
export const mutations= {
increment(state, payload?: number) {
...
}
};
```
I will look into it and fix it as soon as possible.
And thanks for the input!
username_1: So, fixes have been published, please tell me if everything works
username_0: thanks @username_1 that resolved it, haven't fully finished converting my store into vuex-context land but everything is looking good so far.
Status: Issue closed
|
contentful/rich-text | 1127183808 | Title: Rich text to markdown custom rules
Question:
username_0: Hello, I have a specific situation and I wonder if there is an option to add rules to ignore certain placeholders during the conversion.
When I have text like `[[Hello there]]`, it is stripped of its inner content down to `[]`. It would be great to know why this happens and how to avoid it, since we use lots of these placeholders in the format mentioned above.
Thanks! |
MONROE-PROJECT/Maintenance | 194985372 | Title: [Node 198] lsusb reports mifi state 1225
Question:
username_0: Mifi was in mode 1408 --> Take port down and drain battery.
Port up + APU reboot
Mifi not working:
* lsusb: Bus 001 Device 076: ID 19d2:1225 ZTE WCDMA Technologies MSM
* lsusb: Sometimes, device disappears
* Mifi does not act normal after one hour.
* dmesg:
[ **6435**.710462] usb 1-1.1: SerialNumber: P680A1ZTED000000
[ **6552**.103375] usb 1-1.1: USB disconnect, device number 75
[ **6560**.747958] usb 1-1.1: new high-speed USB device number 76 using ehci-pci
[ 6560.869099] usb 1-1.1: New USB device found, idVendor=19d2, idProduct=1225
[ 6560.869114] usb 1-1.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 6560.869121] usb 1-1.1: Product: ZTE Technologies MSM
[ 6560.869128] usb 1-1.1: Manufacturer: ZTE,Incorporated
[ 6560.869134] usb 1-1.1: SerialNumber: P680A1ZTED000000
Answers:
username_0: Powering up port. MiFi is back into mode 1225.
Status: Issue closed
|
cornerStone-Dev/fith | 592024243 | Title: ROT Command
Question:
username_0: Four of the five basic stack manipulations in Forth are currently implemented (`dup`, `over`, `drop`, and `swap`).
It would be nice to have access to the 5th, `rot`
ROT | ( n1 n2 n3 — n2 n3 n1 ) | Rotates third item to top
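(Illustration only: modeling the data stack as a Python list with the top of the stack at the end, `rot` just moves the third item from the top to the top; this is not fith's implementation.)
```python
def rot(stack):
    """ROT ( n1 n2 n3 -- n2 n3 n1 ): rotate the third item to the top."""
    stack.append(stack.pop(-3))

stack = [1, 2, 3]  # 3 is the top of the stack
rot(stack)
print(stack)       # [2, 3, 1]
```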
Answers:
username_0: Resolved with merged PR #1
Status: Issue closed
|
alliedvision/VimbaPython | 997636691 | Title: a strange error need to be solved help!!!
Question:
username_0: 
This error occurs sometimes. So far I have only been able to resolve it by unplugging and re-plugging the cable, and that is the only way I have found to deal with it.
However, that workaround does not always work, so I want to know why the error occurs and how I can fix it for good.
Answers:
username_1: For further process I would like to know which camera is being used in the application? |
NickWaterton/Roomba980-Python | 246371951 | Title: DISCUSSION: Rename this project
Question:
username_0: Hi,
since this library should work with any of the Wi-Fi enabled Roombas (980, 960, 890, 690) - except the mapping feature for the 890 and 690 - wouldn't it make sense to rename it to something more general like `roombapy` or `wroomba` before releasing it to PyPI (ping for #3 o:))?
Answers:
username_1: It was written for the 980, and I haven't tried it on anything else. I think it would need to be tested on other Roombas first, before we could make any claims about compatibility with them.
username_1: I guess there is no reason why it wouldn't work in theory, but we would really need someone to try it out on other versions first, to be sure.
username_2: I have a 960, and it works fine |
lightly-ai/lightly | 968317933 | Title: Clean up navigation
Question:
username_0: # Clean up navigation
The navbar in the docs has become quite cluttered. Especially the "First Steps" section has a lot of content which does not really belong there. We should regroup and clean it up.
To Do:
- [ ] Regroup / rewrite the introduction sites (see below for a suggestion)
- [ ] Make PYTHON API unfold on click
- [ ] Rename titles in PYTHON API so they are more readable and easier to navigate
Current state:

Suggested new structure:
- GETTING STARTED
- Main Concepts (new page with the content from the current landing page)
- Installation
- Command-line Tool
- Self-supervised Learning (new page with the content from "Lightly at a Glance")
- Active Learning (move the concepts part to "Main concepts")
- The Lightly Platform (optional - maybe merge it with AL)
- ADVANCED
- Advanced Concepts (new page with contents of current "Advanced" page)
- Benchmarks
Answers:
username_1: One suggestion is to rename "Python API" to "PIP REFERENCE" or similar as to not confuse it with the REST API
username_1: okay, I was more inspired by PyTorch Lightning, which calls it `API REFERENCE` https://pytorch-lightning.readthedocs.io/en/latest/. Generally I like the word `reference`.
username_0: Yes you're right, but they also have `Lightning Reference`... 🤔
Thinking about it, I would also be ok with something like `Lightly Reference` or `Package Reference` tbh.
username_2: After moving this, the landing page is quite empty. Furthermore, we are missing an overview of the lightly ecosystem:
We should tell that we have the following parts and how they fit together:
- SSL package
- Lightly Platform + Webapp
- CLI
- AL
- Docker
Status: Issue closed
|
ionic-team/capacitor | 552129226 | Title: Randomly stored cookies
Question:
username_0: When I set cookies from my backend everything works as expected in application, but when I close the application ( both android and ios ) cookies are not stored, If I don't wait 20-30 seconds. If I set cookie and immediately close the application cookies are not stored. If I wait 20-30 seconds before close the application, cookies are stored.
Status: Issue closed
Answers:
username_1: Please, use the template when creating issues.
On WKWebView there has always been problems like this, not really a Capacitor issue.
I've never seen something like this on Android, but it's probably also an Android WebView issue. |
cortexproject/cortex | 730275069 | Title: Alertmanager fails to read fallback config
Question:
username_0: ## Description
I'm trying to use `fallback_config_file` with Alertmanager service but it fails with:
```
msg="GET /api/prom/configs/alertmanager (500) 78.226µs Response: \"Failed to initialize the Alertmanager\\n\"
```
## Details
I'm running the `1.4.0` binary release from GitHub and I have `fallback_config_file` configured to point to a file written the same way I used to configure a standalone Alertmanager. When I check the logs I do not see either of these two errors:
https://github.com/cortexproject/cortex/blob/23554ce028c090a4a3413ac0e35e5e1dc9fa929f/pkg/alertmanager/multitenant.go#L186
https://github.com/cortexproject/cortex/blob/23554ce028c090a4a3413ac0e35e5e1dc9fa929f/pkg/alertmanager/multitenant.go#L190
But when I query the API:
```
curl -sv http://10.1.31.155:9101/api/prom/configs/alertmanager -H 'X-Scope-OrgID: 0'
```
I get back:
```
Failed to initialize the Alertmanager
```
But the code that triggers this error doesn't actually show what caused it because `err` is discarded:
https://github.com/cortexproject/cortex/blob/23554ce028c090a4a3413ac0e35e5e1dc9fa929f/pkg/alertmanager/multitenant.go#L476-L485
So I have no clue what I'm doing wrong. The file is in place, and it has read permissions for the service.
Answers:
username_0: I'm actually confused as to what `fallback_config_file` is supposed to do. The docs state that:
```
# Filename of fallback config to use if none specified for instance.
# CLI flag: -alertmanager.configs.fallback
[fallback_config_file: <string> | default = ""]
```
So I assume this is where I can specify the Alertmanager configuration for alert routes (like email or VictorOps). But then where is the proper canonical place for specifying non-fallback config? There is no `config_file` or `config_dir` setting in the `alertmanager` configuration, so that's confusing.
I __THINK__ I'm supposed to `POST` the config to `/api/prom/configs/alertmanager`, but that is not actually stated anywhere in the documentation, so I'm not sure if I'm correct.
username_0: Also, I'm not sure why I have to specify `X-Scope-OrgID` if I have `auth_enabled: false` set in the config....
username_0: In the `alertmanagerFromFallbackConfig` function I actually see no reference to the fallback config:
https://github.com/cortexproject/cortex/blob/23554ce028c090a4a3413ac0e35e5e1dc9fa929f/pkg/alertmanager/multitenant.go#L490-L507
Which apparently was loaded with `amconfig.LoadFile` but then discarded:
https://github.com/cortexproject/cortex/blob/23554ce028c090a4a3413ac0e35e5e1dc9fa929f/pkg/alertmanager/multitenant.go#L188
username_0: When I comment out the `fallback_config_file` setting and try to post my config:
```
curl -sv http://localhost:9101/api/prom/configs/alertmanager -H 'X-Scope-OrgID: 0' -XPOST -d @/etc/cortex-alertmanager/routes.yml
```
It just fails with:
```
the Alertmanager is not configured
```
Which is this:
https://github.com/cortexproject/cortex/blob/23554ce028c090a4a3413ac0e35e5e1dc9fa929f/pkg/alertmanager/multitenant.go#L466-L474
So it wants some kind of `userAM` to be active, but where does that come from? The documentation says nothing about this.
username_0: There is something called `loadAllConfigs` that is run at startup:
https://github.com/cortexproject/cortex/blob/23554ce028c090a4a3413ac0e35e5e1dc9fa929f/pkg/alertmanager/multitenant.go#L248-L252
Which apparently loads configs from `am.poll()`
https://github.com/cortexproject/cortex/blob/23554ce028c090a4a3413ac0e35e5e1dc9fa929f/pkg/alertmanager/multitenant.go#L279-L292
Which gets it from `am.store.ListAlertConfigs`:
https://github.com/cortexproject/cortex/blob/23554ce028c090a4a3413ac0e35e5e1dc9fa929f/pkg/alertmanager/multitenant.go#L303-L310
Which in turn gets them from `a.client.List()`:
https://github.com/cortexproject/cortex/blob/23554ce028c090a4a3413ac0e35e5e1dc9fa929f/pkg/alertmanager/alerts/objectclient/store.go#L34-L52
Which appears to be using the chunk object storage:
https://github.com/cortexproject/cortex/blob/23554ce028c090a4a3413ac0e35e5e1dc9fa929f/pkg/alertmanager/alerts/objectclient/store.go#L22-L25
Which is also used to store the config with `SetAlertConfig`:
https://github.com/cortexproject/cortex/blob/23554ce028c090a4a3413ac0e35e5e1dc9fa929f/pkg/alertmanager/alerts/objectclient/store.go#L84-L92
Which is used by `alertmanagerFromFallbackConfig`:
https://github.com/cortexproject/cortex/blob/23554ce028c090a4a3413ac0e35e5e1dc9fa929f/pkg/alertmanager/multitenant.go#L490-L493
username_0: So I THINK that the `alertmanager` process requires access to the chunk storage configured via `storage_config`:
https://cortexmetrics.io/docs/configuration/configuration-file/#storage_config
But why is this not mentioned anywhere in the documentation?
username_0: Also, the error makes no sense. If Alertmanager requires chunk storage then it should FAIL at startup if it's unavailable. And not return vague `Failed to initialize the Alertmanager` or `the Alertmanager is not configured` that means nothing to the user.
username_0: And I am still failing to see what's the advantage of running this setup over just running Alertmanager as is.
Is there any reason to run your overcomplicated Alertmanager service via Cortex rather than just [Alertmanager](https://github.com/prometheus/alertmanager) itself?
username_1: Hi @username_0!
Thanks for documenting your whole saga - helps us find gaps in the documentation. Two quick points:
1. > Also, I'm not sure why I have to specify X-Scope-OrgID if I have auth_enabled: false set in the config....
There's currently a bug in master that is meant to be solved by #3343 for this.
2. The fallback config is meant to provide Cortex tenants with a default configuration in case they have none (e.g. if an alert is received for a tenant, it'll start the corresponding Alertmanager with the fallback configuration)
username_0: Thanks for explaining and pointing to the issue. Appreciate it.
Your documentation doesn't make it very clear what a `tenant` is even. Or am I just bad at searching? What is a `tenant` for? How is it created? Should it be created? Is there a part of docs that addresses this? Because I can't find it.
Also, am I correct in my diagnosis that running of `alertmanager` requires `storage_config` to be provided?
If so, why doesn't it fail at startup if such config is not provided? Surely that would be the sensible thing to do.
username_1: Which goes back on the subject of multi-tenancy - if you do not require multi-tenancy, there's (generally) no reason to run the Alertmanager from Cortex. The Cortex Ruler (if needed) is compliant with the vanilla Alertmanager.
username_0: But it just says "requires a database". It doesn't specify that it requires exactly `storage_config`. What I though is that either the `storage` sections of the `alertmanager` config supply that. Since their docs state:
```
storage:
# Type of backend to use to store alertmanager configs. Supported values are:
# "configdb", "gcs", "s3", "local".
# CLI flag: -alertmanager.storage.type
[type: <string> | default = "configdb"]
```
https://github.com/cortexproject/cortex/blob/fd0548e4288a2cecdcb530ec90c9297a5a36e503/docs/configuration/config-file-reference.md#alertmanager_config
It clearly states that this is used to "store alertmanager configs", but based on my code dive and what you're saying this is NOT where the alertmanager is stored, and in turn it uses `storage_config` for chunk storage.
It's very confusing.
username_0: That's great to hear, then I can avoid this whole complex setup and just re-use my existing Alertmanager cluster. Great.
I think this information should definitely be mentioned in documentation, like [here](https://cortexmetrics.io/docs/architecture/#alertmanager).
username_1: Thanks for your valuable feedback.
The confusion certainly comes from `client chunk.ObjectClient` - the name is deceiving, this does not require access to the chunk storage. We just re-use use the same object storage client that the chunks package uses.
username_0: Interesting. Let me try building your PR branch and see what actual error I get from it.
username_0:
```
total 20
drwxrwxr-x 5 cortex adm 4096 Aug 14 11:55 cortex
drwxrwxr-x 4 cortex adm 4096 Aug 19 14:49 cortex-alertmanager
```
username_0: But I'm getting the same error:
```
err="local alertmanager config storage is read-only"
```
username_0: Oh...
https://github.com/cortexproject/cortex/blob/0c44cc2e3dc04b13be71ecef20cfa90c552de3c7/pkg/alertmanager/alerts/local/store.go#L99-L101
username_0: It appears only `objectclient` type storage supports `SetAlertConfig`:
https://github.com/cortexproject/cortex/blob/0c44cc2e3dc04b13be71ecef20cfa90c552de3c7/pkg/alertmanager/alerts/objectclient/store.go#L84-L92
So I'm confused. Why are you saying in the docs:
```
storage:
# Type of backend to use to store alertmanager configs. Supported values are:
# "configdb", "gcs", "s3", "local".
# CLI flag: -alertmanager.storage.type
[type: <string> | default = "configdb"]
```
When both `configdb` and `local` are clearly not supported since they just fail to set anything.
username_1: @username_4 Similarly to #3401, this can also be categorised as a doc improvement.
@username_0 the configuration API is marked as experimental because there's a history with `configdb` (`local` is more a testing strategy) - which I believe it's [marked as deprecated at this point](https://cortexmetrics.io/docs/api/#configs-api).
username_0:
```
alertmanager
all
compactor
configs
distributor *
flusher
ingester *
purger *
querier *
query-frontend *
ruler *
store-gateway *
table-manager *
Modules marked with * are included in target All.
```
The `configs` module is not included in `all`, so I assume that means it has to be run separately. Correct?
username_2: It is now also possible to run it together with other modules, by using `-target=all,configs` or `-target=compactor,ruler,configs`.
username_0: Oh, that's neat. That makes it simpler I guess.
Though it does make me wonder. Why is `config` not included in `all` if it would make it easier to setup the service initially?
username_3: Configs service is deprecated.
username_0: What is recommended then?
username_0: Just so you know, based on https://github.com/cortexproject/cortex/issues/3395#issuecomment-717144961 I've decided to abandon configuring Alertmanager via Cortex. Do with this issue what you will.
username_1: We'll keep it around to improve the documentation - thanks again for the feedback!
username_0: Thanks for explaining things. Cheers.
username_4: _Keeping it open to improve the doc._
username_5: Any insights on getting `error validating Alertmanager config: mkdir /tmp/validate-config798960130: read-only file system` when configuring alerts?
username_3: #3888 introduced a new implementation of AlertManager config.
However documentation still appears to be sketchy. |
websockets/ws | 604459753 | Title: Can't send an error code or message when rejecting a connection due to auth failure
Question:
username_0: <!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found in ws.
General support questions should be raised on a channel like Stack Overflow.
Please fill in as much of the template below as you're able.
-->
- ☑️ I've searched for any related issues and avoided creating a duplicate
issue.
#### Description
Your example of client authentication here: https://github.com/websockets/ws#client-authentication sends some text via the Node socket (socket.write('HTTP/1.1 401 Unauthorized\r\n\r\n');) and then calls socket.destroy(), but the text sent using socket.write is never seen by the client AFAICT. Thus the client never knows the reason for the socket connection rejection.
#### Reproducible in:
- version: 7.2.3
- Node.js version(s): 10
- OS version(s): Linux
#### Steps to reproduce:
1. Implement an auth mechanism per the example
2. Send a socket connection request with bad credentials
#### Expected result:
Socket receives a message with 401 Unauthorized
#### Actual result:
Socket is closed without an informative error code or message
Answers:
username_1: What you mean with error code or message? The connection is still not upgraded, what is written to the socket is an HTTP response. If you want to add headers or a body with JSON response you can do that.
username_0: Thanks - it makes sense that the response is HTTP, and I looked for evidence of a successful 401 response in the Chrome network tab response headers for that socket request, but there is no sign of it. The response headers look like:
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: <some id>
username_1: Yes, that is to complete the handshake. You don't want to complete it if the authentication fails and that's why the example use a minimal 401 response.
The only gotcha is that you have a raw socket.
username_0: Hmm, so I'm still left unclear as to how to detect the auth failure on the client. I have no idea where that 401 header is going or how to catch it in the browser.
username_1: Unfortunately the WebSocket interface in the browser does not allow you to read the HTTP response of the opening handshake so there is no proper way. It will probably call `websocket.onerror` and certainly `websocket.onclose` with 1006.
Status: Issue closed
username_1: Closing as answered. Discussion can continue if needed.
username_0: It does not call onerror, which leaves me with handling the onclose with code 1006 - not exactly a satisfactory situation since 1006 could occur for other reasons than an auth failure. I know verifyClient is discouraged, but would using it give me better options here? Or does that also abandon the handshake if verification fails?
username_1: `verifyClient()` works in exactly the same way. I tried it on the latest Chrome and it calls `ws.onerror`. Another way would be to not abort the handshake, wait for the connection to be established and then close the websocket on the server after sending a custom-made message or closing with a custom close code, but I wouldn't do it.
username_2: @username_0 I have found a solution to the problem. While I also believe the library should support this as well, I believe this workaround should work for you. Just for reference, the reason this needs to be added to the library is because of re-connects. For example, I have my client trying to reconnect with a back off (starts at 1 second and multiplies by 1.5x until it reaches a total of 30 seconds). Now when I receive a 401 obviously it's not trying to re-connect because of a network issue and rather an authentication issue. In this case, I want it to stop trying to re-connect because the user obviously has to re-authenticate. Solution is here:
```js
this.socket.on('unexpected-response', (req, res) => {
  // surface the HTTP status code (e.g. 401) through the close handler
  this.socket.emit('close', res.statusCode);
});

this.socket.on('close', (code) => {
  // try to reconnect, unless authentication failed (401)
  if (code !== 401) this.reconnect();
});
```
So basically what I am doing there is catching the `unexpected-response` event from the library (go to the source websocket.js file and search for it). That event actually sends the response from the server, which you can then get the status code from. In my case, the close event was not getting triggered, so I trigger it manually sending it the status code. Now in the `close` event, you can listen for your status code and detect if it is a 401.
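(As a side note and purely as a sketch: one reading of the back-off schedule mentioned above, starting at 1 second and multiplying by 1.5 until roughly 30 seconds have been spent in total, is the following; it is not part of ws or of the snippet above.)
```python
def backoff_delays(initial=1.0, factor=1.5, total_budget=30.0):
    """Yield reconnect delays until the total time spent waiting would exceed the budget."""
    delay, spent = initial, 0.0
    while spent + delay <= total_budget:
        yield delay
        spent += delay
        delay *= factor

print(list(backoff_delays()))  # [1.0, 1.5, 2.25, 3.375, 5.0625, 7.59375]
```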
Hoping this helps anyone in the future. |
angular/angular | 249843012 | Title: CLI build-optimizer failure
Question:
username_0: ## I'm submitting a...
<!-- Check one of the following options with "x" -->
<pre><code>
[ ] Regression (a behavior that used to work and stopped working in a new release)
[x] Bug report <!-- Please search GitHub for a similar issue or PR before submitting -->
[ ] Feature request
[ ] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
</code></pre>
## Current behavior
<!-- Describe how the issue manifests. -->
`ng eject --aot --prod --build-optimizer --no-sourcemaps` generates a webpack.config.js that tries to use `PurifyPlugin` without importing it (`ReferenceError: PurifyPlugin is not defined`).
`ng build --aot --prod --build-optimizer --no-sourcemaps` fails with this error:
```
92% chunk asset optimization
<--- Last few GCs --->
[455:0x560e167f5620] 707807 ms: Mark-sweep 1413.1 (1518.5) -> 1413.1 (1502.5) MB, 7277.9 / 0.0 ms (+ 0.0 ms in 0 steps since start of marking, biggest step 0.0 ms, walltime since start of marking 7279 ms) last resort
[455:0x560e167f5620] 715214 ms: Mark-sweep 1413.1 (1502.5) -> 1413.1 (1502.5) MB, 7406.3 / 0.0 ms last resort
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 0x364665b1bbd9 <JS Object>
2: def_variable [0x364665b02241 <undefined>:3551] [pc=0x4528827ac2b](this=0x1f233b8a5261 <an AST_Function with map 0x127ef311b591>,symbol=0x1f233b8bfc71 <an AST_SymbolFunarg with map 0x150b45cc9009>)
3: visit [0x364665b02241 <undefined>:3402] [pc=0x45286bed532](this=0x2b8b84e771a1 <a TreeWalker with map 0x3e9d366dda09>,node=0x1f233b8bfc71 <an AST_SymbolFunarg with map 0x150b45cc9009>,de...
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
1: node::Abort() [@angular/cli]
2: 0x560e14f2b97e [@angular/cli]
3: v8::Utils::ReportOOMFailure(char const*, bool) [@angular/cli]
4: v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) [@angular/cli]
5: v8::internal::Factory::NewFixedArray(int, v8::internal::PretenureFlag) [@angular/cli]
6: v8::internal::Factory::NewScopeInfo(int) [@angular/cli]
7: v8::internal::ScopeInfo::Create(v8::internal::Isolate*, v8::internal::Zone*, v8::internal::Scope*, v8::internal::MaybeHandle<v8::internal::ScopeInfo>) [@angular/cli]
8: v8::internal::DeclarationScope::AllocateVariables(v8::internal::ParseInfo*, v8::internal::AnalyzeMode) [@angular/cli]
9: v8::internal::DeclarationScope::Analyze(v8::internal::ParseInfo*, v8::internal::AnalyzeMode) [@angular/cli]
10: v8::internal::Compiler::Analyze(v8::internal::ParseInfo*, v8::internal::ThreadedList<v8::internal::ThreadedListZoneEntry<v8::internal::FunctionLiteral*> >*) [@angular/cli]
11: 0x560e148b8aaa [@angular/cli]
12: 0x560e148b9b2f [@angular/cli]
13: 0x560e148be795 [@angular/cli]
14: v8::internal::Compiler::Compile(v8::internal::Handle<v8::internal::JSFunction>, v8::internal::Compiler::ClearExceptionFlag) [@angular/cli]
15: v8::internal::Runtime_CompileLazy(int, v8::internal::Object**, v8::internal::Isolate*) [@angular/cli]
16: 0x452869840bd
../commands/prodbuild.sh: line 10: 455 Aborted ng build --aot --prod --build-optimizer --no-sourcemaps
```
## Expected behavior
<!-- Describe what the desired behavior would be. -->
Both of these commands should work.
## Minimal reproduction of the problem with instructions
<!--
[Truncated]
## What is the motivation / use case for changing the behavior?
<!-- Describe the motivation or the concrete use case. -->
Functioning as documented.
## Environment
<pre><code>
Angular version: 4.3.4
Angular CLI version: 1.3.0
<!-- Check whether this is still an issue in the most recent Angular version -->
For Tooling issues:
- Node version: 8.2.1
- Platform: Debian Sid x86-64 in a Docker image with 4 GB RAM allocated
Others:
<!-- Anything else relevant? Operating system version, IDE, package manager, HTTP server, ... -->
</code></pre>
Answers:
username_1: Sounds like an issue with the CLI which should be made here https://github.com/angular/angular-cli
username_0: Oops, thanks!
Status: Issue closed
|
serde-rs/serde | 479192335 | Title: DeserializedOwned doesn't work for structs with &'static refs?
Question:
username_0: This playground link illustrates what I'm talking about:
https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=4dd075320b16d9a9c6edb4fce6993171
The error:
```
|
17 | let f: Foo = test("");
| ^^^^ the trait `for<'de> serde::Deserialize<'de>` is not implemented for `Foo`
|
= help: the following implementations were found:
<Foo as serde::Deserialize<'static>>
= note: required because of the requirements on the impl of `serde::de::DeserializeOwned` for `Foo`
```
The signature of `test` is similar to `reqwest`s `json` function for deserializing responses.
I think I understand why this can't work, but the docs aren't very clear on the subject. https://serde.rs/lifetimes.html#trait-bounds makes references to not explicitly using 'static in implementing Deserialize, but it doesn't mention anything about static refs simply not working with `#[derive(Deserialize)]`.
So, if this is in fact impossible, it'd be nice to have the docs call that out explicitly
Thanks! Love Serde!
Answers:
username_0: Just after this I realized it still fails for me even replacing `'static` with `'a`, with something even more esoteric:
```
error: implementation of `serde::Deserialize` is not general enough
--> src/main.rs:17:18
|
17 | let f: Foo = test("");
| ^^^^
|
= note: `Foo<'_>` must implement `serde::Deserialize<'0>`, for any lifetime `'0`
= note: but `Foo<'_>` actually implements `serde::Deserialize<'1>`, for some specific lifetime `'1`
```
This is definitely pretty confusing to encounter and would be worth mentioning in the section on Derive
username_1: derive(Deserialize) works fine with &'static references, but as with any type that borrows not owns its contents, you need to deserialize it from data that lives long enough.
https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=17fa9a33ced083c056f280945e174653
username_0: Thanks, that makes sense. I'm mostly just wondering how this can be made more clear to people who run into this. It's not immediately obvious from reading the docs or any googling, until you think about the actual lifetime implications.
I think maybe the confusion arises from expecting to see your typical "doesn't live long enough" type of error rather than "this isn't implemented". Perhaps a section in that same bit of the docs that says, hey, if you get this kind of error, it's probably this?
Status: Issue closed
|
ant-design/ant-design-mobile | 184170537 | Title: ListView only updates after being clicked twice
Question:
username_0: <!-- 请按照下列格式报告问题,务必提供复现步骤,否则恕难解决,感谢您的支持。-->
#### 本地环境
<!-- 务必提供 -->
- antd-mobile 版本:0.8.6
- 操作系统及其版:win10
- 浏览器及其版本:谷歌
#### 你做了什么?
引入ListView组件,请求数据
#### 你期待的结果是:

#### Actual result:
The desired result only shows up after clicking search twice.

After a single click:

#### Reproducible online demo
<!-- Please modify and fork https://codepen.io/username_1/pen/bwRPvx -->
Answers:
username_1: Where exactly are you clicking twice?
username_0: This is probably a problem with my own code. Could you provide an ES6 example?
username_1: /(ㄒoㄒ)/~~ Please learn ES5/ES6 on your own.
Status: Issue closed
|
rParslow/TeamWhisky | 251856014 | Title: BULLEIT 10 Year Old Bourbon 45.6%
Question:
username_0: BULLEIT 10 ans Bourbon 45,6%<br>
http://ift.tt/2wsFlxj<br>
#TeamWhisky BULLEIT 10 Year Old Bourbon 45.6% Bourbon United States/Kentucky LMDW http://ift.tt/2wsFlxj 39,90 € <img src="http://ift.tt/2xnhHyL"><br><br>
via Fishing Reports http://ift.tt/2dm5cfF<br>
August 22, 2017 at 08:34AM |
JuliaDiff/TaylorSeries.jl | 560737049 | Title: Printing issue with ModelingToolkit
Question:
username_0: It would nice to be able to do symbolic calculations using ModelingToolkit.jl.
Construction seems to work but there's an error in printing:
```jl
julia> using TaylorSeries, ModelingToolkit
julia> @variables x[1:5]
(Operation[x₁, x₂, x₃, x₄, x₅],)
julia> y = Taylor1(x);
julia> y
Error showing value of type Taylor1{Operation}:
ERROR: MethodError: no method matching pretty_print(::Taylor1{Operation})
```
Answers:
username_0: e.g.
```
julia> z = y * y;
julia> z.coeffs
5-element Array{Operation,1}:
x₁ * x₁
x₁ * x₂ + x₂ * x₁
(x₁ * x₃ + x₂ * x₂) + x₃ * x₁
((x₁ * x₄ + x₂ * x₃) + x₃ * x₂) + x₄ * x₁
(((x₁ * x₅ + x₂ * x₄) + x₃ * x₃) + x₄ * x₂) + x₅ * x₁
```
(Note that ModelingToolkit currently has no simplification.
Unfortunately I don't think we can really use TaylorN for this if there are too many new variables.)
username_1: Interesting idea... I notice that whatever seems to work is because `Operation <: Number`.
What I don't quite see is how to exploit the `x` variable(s) when they are coefficients of a Taylor polynomial. Or, are you thinking of propagating the actual coefficients, to somehow exploit them independently?
username_0: I'm just thinking about obtaining symbolic expressions for the coefficients of the result of a calculation with Taylor series.
username_1: ```julia
julia> using TaylorSeries, ModelingToolkit
julia> use_show_default(true) # pretty_print doesn't work for Taylor1{Operations}
true
julia> @parameters x[0:3] # I don't know if @variables is better here
(Operation[x₀(), x₁(), x₂(), x₃()],)
julia> xt = Taylor1(x)
Taylor1{Operation}(Operation[x₀(), x₁(), x₂(), x₃()], 3)
julia> xt * xt
Taylor1{Operation}(Operation[x₀() * x₀(), x₀() * x₁() + x₁() * x₀(), (x₀() * x₂() + x₁() * x₁()) + x₂() * x₀(), ((x₀() * x₃() + x₁() * x₂()) + x₂() * x₁()) + x₃() * x₀()], 3)
julia> getcoeff(ans, 2)
(x₀() * x₂() + x₁() * x₁()) + x₂() * x₀()
```
Is this more or less what you have in mind?
username_0: Yes exactly. Yes I think `@variables` is better since you don't get the parentheses. (Or maybe that's just to do with the version of ModelingToolkit.)
username_1: With `@variables` or `@parameters` I still get the parenthesis; maybe it is a question of the version. I am using ModelingToolkit v0.8.0.
Regarding other functions, things become brittle very fast, but could be fixed. The problem I just noticed is that `NumbersNotSeries` is not recognizing the types introduced by ModelingToolkit.
username_2: You can get rid of the parentheses by calling simplify_expr
username_1: Very nice, thanks!
username_1: I just realized that the following is another alternative, using TaylorSeries only:
```julia
julia> using TaylorSeries
julia> displayBigO(false)
false
julia> x = set_variables("x₀ x₁ x₂ x₃ x₄", order=10);
julia> x0, x1, x2, x3, x4 = x;
julia> xt = Taylor1(x, 4)
1.0 x₀ + ( 1.0 x₁) t + ( 1.0 x₂) t² + ( 1.0 x₃) t³ + ( 1.0 x₄) t⁴
julia> xt * xt
1.0 x₀² + ( 2.0 x₀ x₁) t + ( 2.0 x₀ x₂ + 1.0 x₁²) t² + ( 2.0 x₀ x₃ + 2.0 x₁ x₂) t³ + ( 2.0 x₀ x₄ + 2.0 x₁ x₃ + 1.0 x₂²) t⁴
julia> sin(xt)
1.0 x₀ - 0.16666666666666666 x₀³ + 0.008333333333333333 x₀⁵ - 0.00019841269841269839 x₀⁷ + 2.7557319223985884e-6 x₀⁹ + ( 1.0 x₁ - 0.5 x₀² x₁ + 0.041666666666666664 x₀⁴ x₁ - 0.0013888888888888887 x₀⁶ x₁ + 2.4801587301587298e-5 x₀⁸ x₁) t + ( 1.0 x₂ - 0.5 x₀² x₂ - 0.5 x₀ x₁² + 0.041666666666666664 x₀⁴ x₂ + 0.08333333333333333 x₀³ x₁² - 0.0013888888888888887 x₀⁶ x₂ - 0.004166666666666667 x₀⁵ x₁² + 2.4801587301587298e-5 x₀⁸ x₂ + 9.920634920634919e-5 x₀⁷ x₁²) t² + ( 1.0 x₃ - 0.5 x₀² x₃ - 1.0 x₀ x₁ x₂ - 0.16666666666666666 x₁³ + 0.041666666666666664 x₀⁴ x₃ + 0.16666666666666666 x₀³ x₁ x₂ + 0.08333333333333333 x₀² x₁³ - 0.0013888888888888885 x₀⁶ x₃ - 0.008333333333333333 x₀⁵ x₁ x₂ - 0.006944444444444444 x₀⁴ x₁³ + 2.4801587301587298e-5 x₀⁸ x₃ + 0.00019841269841269839 x₀⁷ x₁ x₂ + 0.00023148148148148144 x₀⁶ x₁³) t³ + ( 1.0 x₄ - 0.5 x₀² x₄ - 1.0 x₀ x₁ x₃ - 0.5 x₀ x₂² - 0.5 x₁² x₂ + 0.041666666666666664 x₀⁴ x₄ + 0.16666666666666666 x₀³ x₁ x₃ + 0.08333333333333333 x₀³ x₂² + 0.25 x₀² x₁² x₂ + 0.041666666666666664 x₀ x₁⁴ - 0.0013888888888888887 x₀⁶ x₄ - 0.008333333333333333 x₀⁵ x₁ x₃ - 0.004166666666666667 x₀⁵ x₂² - 0.020833333333333332 x₀⁴ x₁² x₂ - 0.006944444444444444 x₀³ x₁⁴ + 2.4801587301587298e-5 x₀⁸ x₄ + 0.00019841269841269839 x₀⁷ x₁ x₃ + 9.920634920634919e-5 x₀⁷ x₂² + 0.0006944444444444444 x₀⁶ x₁² x₂ + 0.0003472222222222222 x₀⁵ x₁⁴) t⁴
```
I guess it has its own limitations, though you can get some results for more interesting functions (e.g., `sin(xt)`).
username_0: Yes I did think of that, but that won't work for many variables I guess.
username_1: I guess you are right, though maybe a combination of nesting Taylor1's and TaylorN may work... I truly never thought about such a beast.
username_1: Yet, it does work with `HomogeneousPolynomial`s:
```julia
julia> x[1]([xx, yy])
0 + 1.0 * (xx ^ 1 * yy ^ 0)
```
So the problem may be related to the Horner implementation.
username_1: This is solved by #244:
```julia
julia> using TaylorSeries, ModelingToolkit
julia> @variables x[0:4]
(Operation[x₀, x₁, x₂, x₃, x₄],)
julia> y = Taylor1(x);
julia> y
x₀ + x₁ t + x₂ t² + x₃ t³ + x₄ t⁴ + 𝒪(t⁵)
julia> 2*y
2x₀ + 2x₁ t + 2x₂ t² + 2x₃ t³ + 2x₄ t⁴ + 𝒪(t⁵)
julia> y*y
x₀ * x₀ + x₀ * x₁ + x₁ * x₀ t + (x₀ * x₂ + x₁ * x₁) + x₂ * x₀ t² + ((x₀ * x₃ + x₁ * x₂) + x₂ * x₁) + x₃ * x₀ t³ + (((x₀ * x₄ + x₁ * x₃) + x₂ * x₂) + x₃ * x₁) + x₄ * x₀ t⁴ + 𝒪(t⁵)
```
And you can play further, e.g., `sqrt(1+y)`.
Status: Issue closed
|
dlemstra/Magick.NET | 290606321 | Title: System.IO.FileNotFoundException: assembly Magick.NET-Q16-AnyCPU, Version=7.2.1.0
Question:
username_0: HI,
I receive the following error:
**System.IO.FileNotFoundException: 'Could not load file or assembly 'Magick.NET-Q16-AnyCPU, Version=7.2.1.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The system cannot find the file specified.'**
I have the following configuration:
Console application (.NET Framework 4.7.1) that references Class Library (.NET standard 2.0).
Class library references Magick.NET-Q16-AnyCPU v 7.3.0.
In attachment the sample program. Could you help me, please?
<NAME>
Italy
[MyConsole.zip](https://github.com/username_1/Magick.NET/files/1653506/MyConsole.zip)
Answers:
username_0: Hey is there anybody here?
username_1: It took me a while but I finally tracked down the issue. It appears to be some weird binding issue. The `netstandard` library links with the `netstandard` version of Magick.NET that does no strong naming. Then when you use that in a `net471` project it will try to load the library without a strong name. But the `net471` build will use the `net40` dll and that is strong named and then it is unable to load the library.
I just pushed a patch to fix this in the next release. The next release will also strong name the `netstandard1.3` library and that should fix the issue. I will try it in this a test repo (https://github.com/username_1/MyConsole) before I push the new release to make sure it works.
username_0: Thank you for your support, and thanks for the Magick.NET library, is awesome.
Cristiano
username_1: Will try to push out a new release this weekend.
username_1: It appears that the MyConsole application needs a NuGet reference to Magick.NET to find the library. The project with the new csproj does not need a reference. You can find my commits to test this here: https://github.com/username_1/MyConsole/commits/master
username_0: Ok, thank you. I think now it's the right behaviour: no references between MyConsole app and magick.net but only the library project references Magick.net.
Status: Issue closed
|
clburlison/Munki-s3Repo-Plugin | 224244480 | Title: Add progress bar for uploads
Question:
username_0: Hopefully I can use the following
https://github.com/boto/boto3/blob/master/boto3/s3/transfer.py#L81-L105
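That transfer layer accepts a progress callback, so a minimal sketch of the wiring might look like this (the file name and bucket are hypothetical placeholders):
```python
import os
import sys
import threading

import boto3


class ProgressPercentage:
    """Prints upload progress; passed as the Callback= argument to upload_file."""

    def __init__(self, filename):
        self._filename = filename
        self._size = float(os.path.getsize(filename))
        self._seen_so_far = 0
        self._lock = threading.Lock()

    def __call__(self, bytes_amount):
        with self._lock:
            self._seen_so_far += bytes_amount
            percentage = (self._seen_so_far / self._size) * 100
            sys.stdout.write(f"\r{self._filename}  {percentage:.2f}%")
            sys.stdout.flush()


s3 = boto3.client("s3")
s3.upload_file("big.dmg", "my-munki-repo", "pkgs/big.dmg",
               Callback=ProgressPercentage("big.dmg"))
```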
Answers:
username_0: This had other benefits:
* multipart uploads
* built-in retries
* thread/concurrency support.
username_0: See commit notes.
Status: Issue closed
|
SocialGouv/emjpm | 624778245 | Title: [ENQUETES] Data refresh after import
Question:
username_0: If the data is already loaded on the page and an Excel file is then imported, the form is not up to date afterwards because the `apollo` cache is not up to date.
Answers:
username_0: I tried to bypass the cache using `fetchPolicy`, without success:
```js
const { data, loading, error } = useQuery(ENQUETE_MANDATAIRE_INDIVIDUEL, {
variables: { enqueteId, mandataireId },
fetchPolicy: "no-cache",
});
```
Related to this ticket: https://github.com/apollographql/react-apollo/issues/3315
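One workaround sketch (assuming the Excel import goes through a separate mutation/upload call): keep the `refetch` function returned by `useQuery` and call it once the import completes, which forces a network round-trip instead of reading the stale cache entry.
```js
const { data, loading, error, refetch } = useQuery(ENQUETE_MANDATAIRE_INDIVIDUEL, {
  variables: { enqueteId, mandataireId },
});

// hypothetical handler, called once the Excel import has been processed
const onImportDone = async () => {
  await refetch(); // hits the network and updates the apollo cache
};
```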
Status: Issue closed
|
Agoric/SwingSet | 444659146 | Title: prohibit send()-to-self
Question:
username_0: I made a mistake while working on some comms integration, and convinced a vat to send messages to itself. Specifically, by getting an "ingress" and "egress" backwards, the receiving comms vat did a `syscall.send({type:'export',id},...)`. Normally vats only use `send` to target `{type:'import'}`, but in this case the kernel cheerfully translated the vat-local export into a sending-vat-specific export, and put it on the run queue just like any other message. Eventually the message was delivered to `dispatch.deliver()` back into the same vat from which it originated.
This was kinda confusing, and I don't think there's a good reason for vats to use the kernel to call back into themselves. So for now I'm going to disable these calls, and have the kernel throw an exception if a vat tries to `send()` to one of their own exports. It's conceivable that this will become a sensible thing to do in the future, when we get the escalator schedulers in place, but even then I'm not sure it will make sense.
The specific check is added to `src/kernel/vatManager.js`, in `doSend()`.
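For illustration only — a sketch of the shape of that check, not the actual vatManager.js code (the function signature here is hypothetical):
```js
function doSend(vatID, targetSlot, method, argsString, vatSlots) {
  // A vat may only send() to things it imported; targeting one of its own
  // exports means the message would just loop back into the same vat.
  if (targetSlot.type === 'export') {
    throw new Error(
      `vat ${vatID} tried to send() to its own export ${targetSlot.id}`,
    );
  }
  // otherwise: translate the vat-local import into a kernel slot and
  // enqueue the message on the run queue as before
}
```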
Status: Issue closed
Answers:
username_0: in the old repo. this was SwingSet issue 43 |
parse-community/Parse-Swift | 1122751584 | Title: None
Question:
username_0: Did you look at the Playground examples? That should always be the first place to check: https://github.com/parse-community/Parse-Swift/blob/5307210d97dedeb88297c653b0675ff1ef920221/ParseSwift.playground/Pages/12%20-%20Roles%20and%20Relations.xcplaygroundpage/Contents.swift#L282-L333 |
nus-cs2103-AY2021S2/pe-dev-response | 859973314 | Title: None
Question:
username_0: # Team's Response
As per consulting with an actual insurance agent, clients can indeed purchase several copies of the same plan. I.e. for investment plans that have a certain maturity period, clients can purchase several of the same plan at different points of time.
## Duplicate status (if any):
-- |
apple/swift-nio | 505941603 | Title: Extend ByteBufferView to be a MutableCollection and a RangeReplaceableCollection
Question:
username_0: When we added ByteBufferView (#411), @weissi and I thought that we couldn't make it mutable because it would inevitably CoW. This is because the way you build `ByteBufferView` objects is as constructed objects from a `ByteBuffer` (i.e. via `ByteBuffer.viewBytes(at:length:)` or `ByteBuffer.readableBytesView`), and as a result the original buffer still exists and holds a reference to the backing storage.
What we missed is that this can be enabled by adding a `ByteBufferView.init()` function that takes a `ByteBuffer` as its argument. In this mode the `ByteBufferView` would take ownership of the `ByteBuffer` entirely, including the writable bytes, and could therefore appropriately conform to the mutable protocols. This also creates proper bidirectional conversion between `ByteBuffer` and `ByteBufferView`, which is handy when interoperating with algorithms written generically over the Swift Collection types.
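A rough sketch of the intended round-trip (the initializer names are assumptions about the eventual API, not existing NIO surface at the time of writing):
```swift
var buffer = ByteBufferAllocator().buffer(capacity: 128)
buffer.writeString("hello")

// Hand the buffer over to the view; no other reference keeps the storage
// alive, so in-place mutation should not trigger a copy-on-write.
var view = ByteBufferView(buffer)
view.replaceSubrange(view.startIndex..<view.index(view.startIndex, offsetBy: 5),
                     with: "HELLO".utf8)

// And back again, e.g. after running a generic Collection algorithm.
let roundTripped = ByteBuffer(view)
```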
To make this work, we'd need to do a few things:
1. Add the appropriate initializer.
2. Extend `ByteBufferView` to have an idea that it may hold a slice larger than its range.
3. Implement `MutableCollection` and `RangeReplaceableCollection` for `ByteBufferView`.
4. Write tests verifying that, when using this API on the happy path, we don't incur CoW operations.
5. (optional, extension) Extend `NIOFoundationCompat` so that `ByteBufferView` conforms to `MutableDataProtocol` as well.
Answers:
username_0: /cc @username_1, who may end up taking this work on if no-one else does it.
username_1: Unfortunately the Swift compiler is not on our side and we'll still incur a CoW at the moment: https://bugs.swift.org/browse/SR-11675
Is this something we still want to pursue at this time?
username_0: I think it is: the failure of this in the trivial case is annoying, but I suspect if the creation and use are separated by enough space the compiler may correctly do the right thing.
Status: Issue closed
|
ContinuumIO/anaconda-issues | 230999589 | Title: Windows shortcuts for new environments are not correctly set
Question:
username_0: Upon creating new environments, several shortcuts are created automatically with the following command as their target:
```
C:\Anaconda3\envs\myenv\python.exe C:\Anaconda3\cwp.py C:\Anaconda3\envs\myenv "C:/Anaconda3/envs/myenv/python.exe" "C:/Anaconda3/envs/myenv/Scripts/ipython-script.py"
```
However, due to not using the cwp.py file in that specific environment, the shortcut fails to work.
This is true for all shortcuts including "Anaconda Cloud", "Anaconda Navigator", "IPython", "Jupyter Notebook", "Jupyter QTConsole", "Reset Spyder Settings" and "Spyder".
Solution: `C:\Anaconda3\cwp.py` should be changed to `C:\Anaconda3\envs\myenv\cwp.py`
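With that change, the IPython shortcut target, for example, would read (sketch, assuming cwp.py is present in the environment as suggested above):
```
C:\Anaconda3\envs\myenv\python.exe C:\Anaconda3\envs\myenv\cwp.py C:\Anaconda3\envs\myenv "C:/Anaconda3/envs/myenv/python.exe" "C:/Anaconda3/envs/myenv/Scripts/ipython-script.py"
```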
Answers:
username_1: @wulmer I believe you're right. Sorry I missed seeing being assigned here. I'll look into this today.
Status: Issue closed
|
aws/amazon-ecs-agent | 897655917 | Title: iptables configuration should explicitly permit access to agent task metadata port
Question:
username_0: ### Summary
If the instance on which the ECS Agent is installed has an iptables configuration such that the default `INPUT` chain rule is `DROP` instead of `ACCEPT`, containers will be unable to access the ECS task metadata endpoint.
### Description
Customized AMIs such as the CIS-hardened Amazon Linux 2 image are often chosen by customers to meet enhanced security and compliance requirements. This image has a slightly different out-of-the-box iptables configuration than our standard AL2 image. In particular, the default `INPUT` iptables chain rule is to `DROP` packets instead of `ACCEPT` them.
Here is the default `INPUT` chain configuration on a stock AL2 AMI, after the ECS agent has been installed:
```
Chain INPUT (policy ACCEPT 4664 packets, 1084K bytes)
pkts bytes target prot opt in out source destination
0 0 DROP tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:51678
0 0 DROP all -- * * !127.0.0.0/8 127.0.0.0/8 ! ctstate RELATED,ESTABLISHED,DNAT
```
And here is the default `INPUT` chain configuration on a CIS-hardened AL2 AMI, also after the ECS agent has been installed:
```
Chain INPUT (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DROP tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:51678
0 0 DROP all -- * * !127.0.0.0/8 127.0.0.0/8 ! ctstate RELATED,ESTABLISHED,DNAT
10 2410 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
0 0 DROP all -- * * 127.0.0.0/8 0.0.0.0/0
3578 771K ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state ESTABLISHED
587 58179 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 state ESTABLISHED
0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 state ESTABLISHED
111 5780 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 state NEW
0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:68 state NEW
0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:123 state NEW
0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:323 state NEW
```
You'll notice more rules configured for the CIS-hardened AMI, along with a default policy of `DROP`.
Containers scheduled by ECS onto the CIS-hardened instance are unable to access their task metadata service:
```
root@5a3f826ef907:/build# curl $ECS_CONTAINER_METADATA_URI_V4/task
curl: (7) Failed to connect to 169.254.170.2 port 80: Connection timed out
```
### Suggested fix
Fixing this issue looks to be relatively straightforward. If the ECS agent (or ecs-init) explicitly adds a `-j ACCEPT` rule to the `INPUT` iptables chain that permits access to the ECS Agent port, it works fine. Here is an example rule:
```
iptables -A INPUT -i docker0 -d 127.0.0.0/8 -p tcp -m tcp --dport 51679 -j ACCEPT
```
Answers:
username_1: @username_0 Thank for reporting this. Looking into this.
username_0: To enable task metadata service access for tasks using awsvpc network mode, the following iptables rule is also required:
```
-A INPUT -i ecs-bridge -d 127.0.0.0/8 -p tcp -m tcp --dport 51679 -j ACCEPT
```
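For instance, on an instance that manages its rules with the iptables-services package, both rules could be added and persisted like this (a sketch; adjust to whatever firewall management the hardened image actually uses):
```
sudo iptables -A INPUT -i docker0 -d 127.0.0.0/8 -p tcp -m tcp --dport 51679 -j ACCEPT
sudo iptables -A INPUT -i ecs-bridge -d 127.0.0.0/8 -p tcp -m tcp --dport 51679 -j ACCEPT
sudo service iptables save
```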
username_1: Hello @username_0, We will be updating our documentation to update that CIS-hardened AL2 AMI will require these additional iptables rules.
username_2: Wanted to verify that this worked for us. Thanks for the fix!
Status: Issue closed
username_3: please reopen if you encounter any other issues. |
aprilrochelle/localWeather | 326338857 | Title: Submit Button
Question:
username_0: ## The Story
As a developer, I need to add a submit button next to the input field
## Acceptance Criteria
**Given** a user wants to view weather information
**When** the user visits your initial view
**Then** there should be a submit button next to the zip code field
## Technical Notes
- Bootstrap<issue_closed>
Status: Issue closed |
metagenome-atlas/atlas | 418646076 | Title: Error in rule download_eggNOG_fastas
Question:
username_0: Hi @username_1
I'm using ATLAS 2.0.6 on a Ubuntu 18.04 server (non-cluster).
I used the same database directory as I've used when testing ATLAS 2.0.0 to 2.0.3. However, some of the download rules were run again, I think because the download directory structure has been reorganized (e.g., `OG_fasta`).
An error occurred in `rule download_eggNOG_fastas`. Here is the end of the logfile (what was printed to the screen):
```
localrule download_eggNOG_fastas:
output: /Data/reference_databases/atlas/2.0.0/EggNOG/OG_fasta
jobid: 74
reason: Forced execution
curl 'http://eggnogdb.embl.de/download/emapperdb-4.5.1/OG_fasta.tar.gz' -s > /Data/reference_databases/atlas/2.0.0/EggNOG/OG_fasta.tar.gz
tar -zxf /Data/reference_databases/atlas/2.0.0/EggNOG/OG_fasta.tar.gz --directory /Data/reference_databases/atlas/2.0.0/EggNOG
[Thu Mar 7 23:28:31 2019]
Error in rule download_eggNOG_fastas:
jobid: 0
output: /Data/reference_databases/atlas/2.0.0/EggNOG/OG_fasta
Traceback (most recent call last):
File "/Analysis/username_0/miniconda3_mellea/envs/atlas_2.0.6/lib/python3.6/site-packages/snakemake/executors.py", line 357, in _callback
callback(job)
File "/Analysis/username_0/miniconda3_mellea/envs/atlas_2.0.6/lib/python3.6/site-packages/snakemake/scheduler.py", line 315, in _proceed
self.get_executor(job).handle_job_success(job)
File "/Analysis/username_0/miniconda3_mellea/envs/atlas_2.0.6/lib/python3.6/site-packages/snakemake/executors.py", line 370, in handle_job_success
super().handle_job_success(job)
File "/Analysis/username_0/miniconda3_mellea/envs/atlas_2.0.6/lib/python3.6/site-packages/snakemake/executors.py", line 151, in handle_job_success
assume_shared_fs=self.assume_shared_fs)
File "/Analysis/username_0/miniconda3_mellea/envs/atlas_2.0.6/lib/python3.6/site-packages/snakemake/jobs.py", line 885, in postprocess
self.dag.handle_protected(self)
File "/Analysis/username_0/miniconda3_mellea/envs/atlas_2.0.6/lib/python3.6/site-packages/snakemake/dag.py", line 450, in handle_protected
f.protect()
File "/Analysis/username_0/miniconda3_mellea/envs/atlas_2.0.6/lib/python3.6/site-packages/snakemake/io.py", line 358, in protect
lchmod(os.path.join(self.file, f), mode)
File "/Analysis/username_0/miniconda3_mellea/envs/atlas_2.0.6/lib/python3.6/site-packages/snakemake/io.py", line 62, in lchmod
follow_symlinks=os.chmod not in os.supports_follow_symlinks)
FileNotFoundError: [Errno 2] No such file or directory: '/Data/reference_databases/atlas/2.0.0/EggNOG/OG_fasta/14XIR.fa'
exception calling callback for <Future at 0x7f4462414e48 state=finished returned NoneType>
Traceback (most recent call last):
File "/Analysis/username_0/miniconda3_mellea/envs/atlas_2.0.6/lib/python3.6/site-packages/snakemake/executors.py", line 357, in _callback
callback(job)
File "/Analysis/username_0/miniconda3_mellea/envs/atlas_2.0.6/lib/python3.6/site-packages/snakemake/scheduler.py", line 315, in _proceed
self.get_executor(job).handle_job_success(job)
File "/Analysis/username_0/miniconda3_mellea/envs/atlas_2.0.6/lib/python3.6/site-packages/snakemake/executors.py", line 370, in handle_job_success
super().handle_job_success(job)
File "/Analysis/username_0/miniconda3_mellea/envs/atlas_2.0.6/lib/python3.6/site-packages/snakemake/executors.py", line 151, in handle_job_success
assume_shared_fs=self.assume_shared_fs)
File "/Analysis/username_0/miniconda3_mellea/envs/atlas_2.0.6/lib/python3.6/site-packages/snakemake/jobs.py", line 885, in postprocess
self.dag.handle_protected(self)
File "/Analysis/username_0/miniconda3_mellea/envs/atlas_2.0.6/lib/python3.6/site-packages/snakemake/dag.py", line 450, in handle_protected
f.protect()
File "/Analysis/username_0/miniconda3_mellea/envs/atlas_2.0.6/lib/python3.6/site-packages/snakemake/io.py", line 358, in protect
lchmod(os.path.join(self.file, f), mode)
File "/Analysis/username_0/miniconda3_mellea/envs/atlas_2.0.6/lib/python3.6/site-packages/snakemake/io.py", line 62, in lchmod
follow_symlinks=os.chmod not in os.supports_follow_symlinks)
FileNotFoundError: [Errno 2] No such file or directory: '/Data/reference_databases/atlas/2.0.0/EggNOG/OG_fasta/14XIR.fa'
During handling of the above exception, another exception occurred:
[Truncated]
onerror(os.unlink, fullname, sys.exc_info())
File "/Analysis/username_0/miniconda3_mellea/envs/atlas_2.0.6/lib/python3.6/shutil.py", line 436, in _rmtree_safe_fd
os.unlink(name, dir_fd=topfd)
PermissionError: [Errno 13] Permission denied: '14XIR.fa'
```
The key messages seem to be:
```
FileNotFoundError: [Errno 2] No such file or directory: '/Data/reference_databases/atlas/2.0.0/EggNOG/OG_fasta/14XIR.fa'
PermissionError: [Errno 13] Permission denied: '14XIR.fa'
```
This file is actually nested in a subfolder, in `/Data/reference_databases/atlas/2.0.0/EggNOG/OG_fasta/perNOG/14XIR.fa`
Is `rule download_eggNOG_fastas` looking for these output files and searching in the wrong directory? Could this be related to trying to change their permissions, from `protected()` in `protected(directory(f"{EGGNOG_DIR}/OG_fasta")),` in commit 85598a6, line 99?
It's not ideal, but I removed `protected()` from the output line, and the rule finished without issue.
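Concretely, the local change looks roughly like this (a sketch, not the upstream rule verbatim — the only edit is dropping the `protected()` wrapper):
```python
rule download_eggNOG_fastas:
    output:
        # was: protected(directory(f"{EGGNOG_DIR}/OG_fasta"))
        directory(f"{EGGNOG_DIR}/OG_fasta")
    shell:
        "curl 'http://eggnogdb.embl.de/download/emapperdb-4.5.1/OG_fasta.tar.gz' -s > {output}.tar.gz && "
        "tar -zxf {output}.tar.gz --directory $(dirname {output})"
```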
Jackson
Answers:
username_1: No, I think you are right. These protected flags are doing more harm than good.
I should remove them.
username_1: @username_0 Can you open a PR and remove all the protected flags in the download.snakefile?
username_0: PR created
Status: Issue closed
|
J08nY/pyecsca | 761741847 | Title: Handle coordinate system assumptions
Question:
username_0: The coordinate systems have assumptions, like the Short Weierstrass projective-1 system which assumes that `a = -1`.
Most of the assumptions are of the form `lhs = rhs`, where `lhs` is a curve parameter and `rhs` is an expression evaluable from curve parameters and constants only. However, some assumptions in the Edwards model and [yz](https://github.com/username_0/efd/blob/master/edwards/yz/variables)/yzsquared coordinate systems are different. These coordinate systems introduce a coordinate parameter `r` and do:
```
name YZ coordinates with square d
parameter r
assume c = 1
assume d = r^2
variable Y
variable Z
satisfying r*y = Y/Z
```
Loading this fails with the error on the line that `assume d` is evaluated:
```python
----> 1 get_params("other", "E-521", "yz")
.../pyecsca/ec/params.py in get_params(category, name, coords, infty)
182 raise ValueError("Curve {} not found in category {}.".format(name, category))
183
--> 184 return _create_params(curve, coords, infty)
.../pyecsca/ec/params.py in _create_params(curve, coords, infty)
97 alocals: Dict[str, Union[Mod, int]] = {}
98 compiled = compile(assumption, "", mode="exec")
---> 99 exec(compiled, None, alocals)
100 for param, value in alocals.items():
101 if params[param] != value:
.../pyecsca/ec/params.py in <module>
NameError: name 'r' is not defined
```
We should handle this somehow in a general way. In this case, it would mean to assign the modular square root of `d` to the `r` parameter and have this parameter in the curve object, perhaps separated from the main curve parameters.
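As a minimal sketch of that idea (a hypothetical helper, not existing pyecsca API, and only handling the easy prime case p ≡ 3 (mod 4)):
```python
def resolve_square_assumption(d: int, p: int) -> int:
    """Resolve an assumption of the form `d = r^2` by returning a modular sqrt of d."""
    if p % 4 != 3:
        raise NotImplementedError("p ≡ 1 (mod 4) would need Tonelli-Shanks")
    r = pow(d, (p + 1) // 4, p)
    if (r * r) % p != d % p:
        raise ValueError("d is not a quadratic residue modulo p")
    return r
```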
Sympy sadly doesn't support finite fields and so it seems it will not be easy to use it.
https://github.com/sympy/sympy/issues/9544<issue_closed>
Status: Issue closed |
arrayfire/arrayfire-rust | 684361391 | Title: [BUG] Segfault error message when matmul function is used.
Question:
username_0: Description
===========
Segfault error message when matmul function is used.
Reproducible Code and/or Steps
------------------------------
```
let n:u64 = 1000 ;
let a_dims = arrayfire::Dim4::new(&[n,n,1,1]);
let a = arrayfire::randn::<f32>(a_dims);
let b_dims = arrayfire::Dim4::new(&[n,n,1,1]);
let b = arrayfire::randn::<f32>(b_dims);
let c = arrayfire::matmul(&a, &b, arrayfire::MatProp::NONE, arrayfire::MatProp::NONE) ;
```
System Information
------------------
ArrayFire v3.7.0 (CUDA, 64-bit Linux, build fbea2ae)
Platform: CUDA Runtime 10.1, Driver: 440.82
[0] TITAN Xp, 12195 MB, CUDA Compute 6.1
Arrayfire version: (3, 7, 0)
Name: TITAN_Xp
Platform: CUDA
Toolkit: v10.1
Compute: 6.1
Revision: fbea2ae
Answers:
username_1: @username_0 Does the output change if you run your program with environment variable `AF_PRINT_ERRORS` set to 1.
You can it using `export AF_PRINT_ERRORS=1`
username_0: @username_1 Why is it throwing a segment fault?
username_1: I am not sure if even the crash and code-run are related if those two events are separated by such huge time difference. How did you conclude crash was caused by this particular run from 10 hours ago ?
username_0: It was the only program running on the machine. I left it overnight, when I went to sleep.
Status: Issue closed
|
hairizuanbinnoorazman/meetup-stats | 375266957 | Title: Adding unit tests
Question:
username_0: Unit tests to be added for sane-r development. Currently, testing of the tool is being done manually on each deploy. It would make sense for smaller feature set but for larger feature sets, it might result in regressions being introduced. |
freaktechnik/eslint-plugin-array-func | 629564889 | Title: `avoid-reverse` is wrong
Question:
username_0: Readme example:
```js
const sum = array.reverse().reduce((p, c) => p + c, 0);
const reverseSum = array.reverse().reduceRight((p, c) => p + c, 0);
```
The sums are a bad example since their order doesn't matter. Try this:
```js
['a','b','c'].reverse().reduce((p, c) => p + c, '');
// 'cba'
['a','b','c'].reverse().reduceRight((p, c) => p + c, '');
// 'abc'
```
```js
['a','b','c'].reduce((p, c) => p + c, '');
// 'abc'
['a','b','c'].reduceRight((p, c) => p + c, '');
// 'cba'
```
Did you mean to replace `.reverse().reduce()` with `.reduceRight()` and vice-versa instead?
Answers:
username_1: I will admit that the readme examples aren't always great in showing the full behavior, their main intent is to show the syntax that triggers a rule, since the rules do not do any behavioral analysis of the callback.
The rule does indeed suggest to use `reduceRight` instead of `reverse().reduce()` and `reduce` instead of `reverse().reduceRight()`.
Status: Issue closed
|
arviz-devs/arviz | 609020970 | Title: `plot_pair` displays a blank plot when plotting two variables with bokeh
Question:
username_0: **Describe the bug**
I came across this bug while working on #1167 (related to #1166).
**To Reproduce**
```python
centered = az.load_arviz_data('centered_eight')
coords = {'school': ['Choate', 'Deerfield']}

az.plot_pair(
    centered,
    var_names=['theta'],
    coords=coords,
    kind='scatter',
    point_estimate="mean",
    backend='bokeh'
)
```
**Expected behavior**
A bokeh pair plot for the two variables.<issue_closed>
Status: Issue closed |
ZingWorkshop/Studio | 220824253 | Title: Allow elements to be pinned to locations on a map
Question:
username_0: For example as shown in these apps:



Answers:
username_1: Out of scope
Status: Issue closed
|
jlippold/tweakCompatible | 413630545 | Title: `ChargePulse` not working on iOS 12.1.1
Question:
username_0: ```
{
"packageId": "com.skylerk99.chargepulse",
"action": "notworking",
"userInfo": {
"arch32": false,
"packageId": "com.skylerk99.chargepulse",
"deviceId": "iPad5,4",
"url": "http://cydia.saurik.com/package/com.skylerk99.chargepulse/",
"iOSVersion": "12.1.1",
"packageVersionIndexed": false,
"packageName": "ChargePulse",
"category": "Tweaks",
"repository": "BigBoss",
"name": "ChargePulse",
"installed": "0.2",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "com.skylerk99.chargepulse",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.0",
"shortDescription": "Status bar battery pulsates while charging.",
"latest": "0.2",
"author": "Skylerk99",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "not working",
"notes": ""
}
```<issue_closed>
Status: Issue closed |
CADOJRP/FiveM-AdministrationPanel | 414871759 | Title: Having difficulty installing it
Question:
username_0: I am finding it difficult to install and put onto the web so that I can use the Ban Manager, etc. Contact me on my Discord please - H. Chandler#7023. Thanks
Status: Issue closed
Answers:
username_1: Hello, currently the best way to contact our support would be via our Discord server here: https://discord.gg/q8MSQwt |
matplotlib/matplotlib | 415928815 | Title: Unable to install matplotlib 2.2.4 from pypi with Python 2.7.15
Question:
username_0: <!--To help us understand and resolve your issue, please fill out the form to the best of your ability.-->
<!--You can feel free to delete the sections that do not apply.-->
### Bug report
**Bug summary**
Matplotlib 2.2.4 cannot be installed on Windows with Python 2.7.15 from pypi.
**Code for reproduction**
<!--A minimum code snippet required to reproduce the bug.
Please make sure to minimize the number of dependencies required, and provide
any necessary plotted data.
Avoid using threads, as Matplotlib is (explicitly) not thread-safe.-->
```python
pip install matplotlib==2.2.4
```
**Actual outcome**
<!--The output produced by the above code, which may be a screenshot, console output, etc.-->
```
(clean_env) C:\Users\rbao>pip install matplotlib==2.2.4
DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Ple
ase upgrade your Python as Python 2.7 won't be maintained after that date. A fut
ure version of pip will drop support for Python 2.7.
Looking in indexes: https://pypi.org/simple, http://nz-lnx-01/pypi
Collecting matplotlib==2.2.4
Using cached https://files.pythonhosted.org/packages/1e/20/2032ad99f0dfe0f6097
0941af36e8d0942d3713f442bb3df37ac35d67358/matplotlib-2.2.4.tar.gz
Complete output from command python setup.py egg_info:
============================================================================
Edit setup.cfg to change the build options
BUILDING MATPLOTLIB
matplotlib: yes [2.2.4]
python: yes [2.7.15 (v2.7.15:ca079a3ea3, Apr 30 2018,
16:22:17) [MSC v.1500 32 bit (Intel)]]
platform: yes [win32]
REQUIRED DEPENDENCIES AND EXTENSIONS
numpy: yes [not found. pip may install it below.]
install_requires: yes [handled by setuptools]
libagg: yes [pkg-config information for 'libagg' could not
be found. Using local copy.]
freetype: no [The C/C++ header for freetype
(freetype2\ft2build.h) could not be found. You may
need to install the development package.]
png: no [The C/C++ header for png (png.h) could not be
found. You may need to install the development
package.]
qhull: yes [pkg-config information for 'libqhull' could not
be found. Using local copy.]
OPTIONAL SUBPACKAGES
[Truncated]
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in c:\users\rbao\app
data\local\temp\pip-install-5wv7sa\matplotlib\
```
**Expected outcome**
<!--A description of the expected outcome from the code snippet-->
<!--If this used to work in an earlier version of Matplotlib, please note the version it used to work on-->
Matplotlib 2.2.3 can be installed in the same environment correctly.
**Matplotlib version**
<!--Please specify your platform and versions of the relevant libraries you are using:-->
* Operating system: Windows 7 64bit
* Matplotlib version: 2.2.4
* Matplotlib backend (`print(matplotlib.get_backend())`):
* Python version: 2.7.15 32bit
* Jupyter version (if applicable):
* Other libraries:
Answers:
username_1: Because I forgot to upload the windows wheels :sheep: .
Try now.
username_2: Thanks for fixing so fast!!!
Status: Issue closed
username_3: Looks like this is fixed, feel free to comment or re-open if it isn't!
username_5: I am also having this problem.
```
src/checkdep_freetype2.c(1): fatal error C1083: Cannot open include file: 'ft2build.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.23.28105\\bin\\HostX86\\x86\\cl.exe' failed with exit status 2
```
username_6: Me too, same error with Visual Studio 2017 Community Edition.
username_7: Matplotlib 3.1.2 is out, which contains wheels for Python 3.8. So no need to go through a full build process for a pip install.
username_8: Hello, I'm reopening this issue as I have the same error: freetype & png are both not found. I'm using v3.0.3 with Python 3.8.2.
username_1: We do not typically upload wheels for older releases of Matplotlib for versions of Python that were not released when that version of Matplotlib was released. If you want to use an old Matplotlib with a new Python you will have to build from source.
username_8: @username_1 but isn't 3.0.3 the latest version? Sorry, I'm quite confused; I just tried to install matplotlib from pip.
username_1: No, the latest version is 3.2.1, see https://pypi.org/project/matplotlib/#history
Can you please move this discussion to https://discourse.matplotlib.org/c/community/install/13 ? Please make sure to include details like your OS, how you installed Python, and how you tried to install Matplotlib. |
debtcollective/landingpage | 370262152 | Title: 'About Us' link in footer of tools.debtcollective.org returns 404
Question:
username_0: Clicking 'About Us' returns a 404 and seems to turn the base route to https://tools.debtcollective.org/undefined/, so clicking other footer links after clicking 'About' also returns 404s (i.e. clicking 'Collectives' or 'Campaigns' on the left side of the footer)
Hovering over 'Campaigns' from the About 404:
<img width="1142" alt="screen shot 2018-10-15 at 1 17 42 pm" src="https://user-images.githubusercontent.com/9204835/46966773-a8202080-d07c-11e8-96ae-b6da3aa3832f.png">
Status: Issue closed
Answers:
username_1: Thanks for reporting @username_0, this should be fixed now. |
pyca/cryptography | 1093153811 | Title: DH / DHE primitives code examples in documentation inappropriate
Question:
username_0: The documentation handles `peer_public_key` in the DH and DHE examples as ready-to-use `DHPublicKey` objects.
Of course objects have to be serialized to bytes or at least into raw integers for transport between peers.
I'm missing that completely and don't get a solution.
`dh.py` and even the documentation only offer a bunch of **serialization to bytes**, but not any counterpart de-serialization method or example.
If someone really chooses to use DH instead of recommended ECDH, it would be practical to transport a public key as a serialized x509 certificate. That requires a DH exchange session to be built from numbers of a `primitives.asymmetric.rsa.RSAPublicKey` object. Of course this step should not be part of the example.
Thank you
Answers:
username_1: You don’t think serializing to DER and then loading via https://cryptography.io/en/latest/hazmat/primitives/asymmetric/serialization/#cryptography.hazmat.primitives.serialization.load_der_public_key is sufficient?
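For example, a minimal round-trip sketch (it assumes both peers already share the same DH parameters; the HKDF step is just illustrative key derivation):
```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import dh
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

parameters = dh.generate_parameters(generator=2, key_size=2048)
alice_private = parameters.generate_private_key()
bob_private = parameters.generate_private_key()

# Each side serializes its public key to DER bytes for transport...
bob_public_der = bob_private.public_key().public_bytes(
    serialization.Encoding.DER,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)

# ...and the receiving side loads the bytes back into a DHPublicKey.
bob_public = serialization.load_der_public_key(bob_public_der)

shared_secret = alice_private.exchange(bob_public)
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"handshake data").derive(shared_secret)
```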
username_0: In fact I didn't expect that well-known functions like `load_der_public_key` also support/return objects of type `DHPublicKey`. Thank you.
This should at least be referenced at the `public_bytes` functions or at the object definition itself: `dh.DHPublicKey`.
I still would like to know how to construct a DH session based on a `RSAPublicKey` which could be transmitted with meta data in a x509 certificate.
username_1: What you're describing is a TLS handshake. There are many resources describing how a TLS handshake works, but at the highest level (of the ephemeral key variant) you use the certificate to authenticate the server is who you think it is, the two sides exchange ephemeral public key data, and the master secret is derived from the result of the DHE/ECDHE operation.
If you're just using this to learn then you can build a toy like this relatively easily, but TLS contains a variety of additional protections that are highly significant to the security of a system like this. You should just use TLS.
username_0: I'm not describing TLS. I'm still talking about DH.
You know? Two peers agree securely on a common secret key (roughly in my own words).
DH was not designed to pass some key objects within a Python script.
I've mentioned x509 because it's a standardized way to pass a public key with some meta data. And it's serializable.
Why do maintainers always prefer picking the weakest part in issue reports, ignore all other parts like `the documentation is perfect` and blame the author?
It would be great if people could answer like `No it's a bad idea because...` or `An integer would be missing...` or just ask a deeper question instead of assuming reporter is dumb and let's quickly talk about TLS.
Sorry. But I will never create any issues here again if this behaviour is standard.
Status: Issue closed
|
jart/cosmopolitan | 911380671 | Title: Redbean: StoreAsset inside OnHttpRequest when daemonized crashing
Question:
username_0: * Empty reply from server
* Connection #0 to host 127.0.0.1 left intact
curl: (52) Empty reply from server
```
And the output in the log is:
```
I2021-06-04T03:31:54.456171:tool/net/redbean.c:4097:redbean:2940316] RECEIVED 127.0.0.1:45344 LOOPBACK HTTP11 GET //127.0.0.1:8080/ "" "curl/7.68.0"
[J[30;101merror[94;49m:tool/net/redbean.c:2351:redbean.com[0m: check failed on ryzen pid 2940316
CHECK_NE(-1, fcntl(zfd, F_SETLKW, &(struct flock){F_WRLCK}));
→ 0xffffffffffffffff (-1)
!= 0xffffffffffffffff (fcntl(zfd, F_SETLKW, &(struct flock){F_WRLCK}))
EBADF[9]
[35m./redbean.com \[0m
-d \
-L \
redbean.log
6ffffffff710 000000444cb0 UNKNOWN
6ffffffff720 000000444c22 UNKNOWN
6ffffffff870 0000004099c1 UNKNOWN
6ffffffffa40 0000004212df UNKNOWN
6ffffffffa90 0000004395b3 UNKNOWN
6ffffffffc20 00000042147c UNKNOWN
6ffffffffc50 000000421560 UNKNOWN
6ffffffffc60 00000041b29d UNKNOWN
6ffffffffc70 0000004208bc UNKNOWN
6ffffffffda0 0000004217a0 UNKNOWN
6ffffffffdf0 00000041c69c UNKNOWN
6ffffffffe40 00000040dffb UNKNOWN
6ffffffffe50 000000412c90 UNKNOWN
6ffffffffea0 000000412e0b UNKNOWN
6fffffffff20 000000413534 UNKNOWN
6fffffffff50 00000041385b UNKNOWN
6fffffffff80 0000004143ae UNKNOWN
6fffffffffc0 00000041456f UNKNOWN
6fffffffffe0 0000004018f9 UNKNOWN
7ffd9fd94550 000000401174 UNKNOWN
W2021-06-04T03:31:54.456521:tool/net/redbean.c:785:redbean:2940086] 2940316 exited with 77 (0 workers remain)
```
I have no idea how to debug this further unfortunately.
Answers:
username_1: Judging by the logic on line 2351 (`CHECK_NE(-1, fcntl(zfd, F_SETLKW, &(struct flock){F_WRLCK}));`) redbean fails to acquire a write lock on the `/test.txt` file (maybe because it's in the root directory?)
See if the following patch provides a more graceful handling:
```diff
diff --git a/tool/net/redbean.c b/tool/net/redbean.c
index 444f0c7d..a0540604 100644
--- a/tool/net/redbean.c
+++ b/tool/net/redbean.c
@@ -2348,7 +2348,11 @@ int LuaStoreAsset(lua_State *L) {
}
}
//////////////////////////////////////////////////////////////////////////////
- CHECK_NE(-1, fcntl(zfd, F_SETLKW, &(struct flock){F_WRLCK}));
+ if (-1 == fcntl(zfd, F_SETLKW, &(struct flock){F_WRLCK})) {
+ luaL_error(L, "path is not writable");
+ unreachable;
+ }
+
OpenZip(false);
now = nowl();
```
username_1: @username_0, @jart, I thought the issue is related to the fact that the archive itself if not writable after `setgid` and `setuid` commands, but that doesn't seem to be the case.
I noticed that the daemongid/daemonuid values may be not initialized, as I see the `setgid(daemongid) -> EPERM[1]` messages in the log (same for setuid). I applied the following patch, which eliminated the messages:
```diff
diff --git a/tool/net/redbean.c b/tool/net/redbean.c
index 444f0c7d..7d63db9f 100644
--- a/tool/net/redbean.c
+++ b/tool/net/redbean.c
@@ -312,8 +312,8 @@ static int frags;
static int gmtoff;
static int server;
static int client;
-static int daemonuid;
-static int daemongid;
+static int daemonuid = -1;
+static int daemongid = -1;
static int statuscode;
static int maxpayloadsize;
static int messageshandled;
@@ -773,8 +773,8 @@ static void Daemonize(void) {
open("/dev/null", O_RDONLY);
open(logpath, O_APPEND | O_WRONLY | O_CREAT, 0640);
dup2(1, 2);
- LOGIFNEG1(setgid(daemongid));
- LOGIFNEG1(setuid(daemonuid));
+ if (daemongid > -1) LOGIFNEG1(setgid(daemongid));
+ if (daemonuid > -1) LOGIFNEG1(setuid(daemonuid));
}
static void ReportWorkerExit(int pid, int ws) {
```
However, opening the archive with the write lock still fails even when I explicitly set gid and uid values from the command line (the file appears to be open by the redbean process itself, as shown by fuser command, but it's not different from the non-daemon mode). |
chrislennon/AlexaRedditViewer | 404973766 | Title: Account Linking issue (UK only?)
Question:
username_0: Issues with skill in English (UK)
An error occurred when attempting to complete the account linking process with a user created account.
Steps to reproduce:
- User: "Alexa, open reddit viewer"
- Skill: "You need to link your Reddit account to use this skill."
Steps:
1. Enabled the skill.
2. Clicked on the "SIGN UP" option.
3. Provided the Email id, Username and password to create the account.
After the successful creation of the reddit account, the skill shows an error as "bad request (reddit.com) you sent an invalid request — invalid redirect_uri parameter."
Please fix account linking implementation when resubmitting your skill to complete certification of the account linking functionality.
Answers:
username_0: haven't been able to reproduce this.
Account linking doesn't appear to be localised, so the issue should persist on any locale.
Status: Issue closed
|
sul-dlss/stacks | 418937829 | Title: [Stacks/prod] ZeroDivisionError: divided by 0
Question:
username_0: ## Backtrace
line 44 of [PROJECT_ROOT]/app/models/projection.rb: explicit_tile_dimensions
line 81 of [PROJECT_ROOT]/app/models/projection.rb: tile_dimensions
line 24 of [PROJECT_ROOT]/app/models/projection.rb: tile?
[View full backtrace and more info at honeybadger.io](https://app.honeybadger.io/projects/49299/faults/46734053) |
rust-lang/rust | 782911835 | Title: Use `atty` crate instead of rolling our own
Question:
username_0: Currently the driver and some other parts of the compiler roll their own
functions to detect if they are connected to a TTY. We should instead use
something like the `atty` crate (which we already depend on).<issue_closed>
Status: Issue closed |
facebook/flow | 155263669 | Title: Allow react components' lifecycle methods to be async
Question:
username_0: I tried to declare `componentWillMount` in a component as async but got a flow error: `Promise This type is incompatible with undefined`. I found out that it's caused by this definition:
```js
componentWillMount(): void;
```
in https://github.com/facebook/flow/blob/master/lib/react.js
Shouldn't it be possible for components' lifecycle methods which return void to return a Promise instead?
Answers:
username_1: I find the strictness about void functions useful as it's caught lots of cases where I've misunderstood a method or callback and tried returning what I thought was an important value.
Or if the React definitions were changed to have the return type of `void|Promise`, I think that would imply that React handled promises specially. I just had to double-check that wasn't the case.
You could still do async stuff inside a void returning function by using an immediately-invoked async function:
```js
class X extends React.Component {
componentWillMount() {
(async ()=>{
console.log('foo');
await delay(123);
console.log('bar');
})();
}
}
```
username_2: I agree with the original post, I think flow should let us do async lifecycle functions in React. See https://twitter.com/Vjeux/status/772202716479139840
username_2: cc @thejameskyle :>
username_3: On the other hand, if React doesn't care about the returned value, shouldn't it be `*`?
username_4: I think this could actually be misleading, like react is actually going to wait until promise is resolved
username_5: I think this isn't actually a great idea, even without the issue with Flow. We should unsubscribe / cancel these async operations in `componentWillUnmount`. Also, a person who reads `async componentDidMount` may assume that React does something with the returned promise.
So maybe Flow shouldn't encourage such a pattern.
username_6: There's precedent in callback functions to annotate the callback as `mixed` when the caller doesn't care. The justification there is that `() => e` may be used solely for the side-effects of `e`. If `e` has a return value, it doesn't hurt anything to return it. This lifecycle method is a callback in spirit, as far as I can tell, so any reasoning that applies to callbacks *should* apply here.
The same justification applies here, I suppose. I'm of two minds here, really. On one hand, I think Flow should concern itself with type *errors* primarily and avoid too much hand-holding. If something works at runtime, we should have a good reason to make it a type error.
On the other hand, I *personally* believe that `() => void` is an useful signifier of "side effects are happening here" and changing that to `() => mixed` communicates poorly by not guiding developers away from useless code like `[1, 2, 3].forEach(x => x + 1)`. For my own code, I would use `() => void`, but I understand why the community decided to go the other way.
So I think there's some justification for changing the return type from `void` to `mixed`, but the same justification would convince me to similarly change `setState`, `forceUpdate`, etc.
Another option is to change the return type to `void | Promise<void>`, but I don't like that idea. That type signature gives the impression that React will work to make async side effects safe, but it won't. I think it's misleading / doesn't represent the "actual" types.
username_4: This could also be unsafe in context of lib defs. In future versions of a library returning something other than `undefined` could have a special meaning which could lead to unexpected behaviour.
username_7: Probably worth mentioning that this is slightly annoying for tests. If you want to do something like:
```js
componentDidMount() {
return doSomething().then(() => {
this.setState({ some: 'state' })
})
}
```
An ideal enzyme test would be something like this:
```js
it('fetches something then sets state', async () => {
const component = shallowRenderComponent()
await component.instance().componentDidMount()
expect(component.state()).toBe({ some: 'state' })
})
```
But because of this flow restriction, you instead have to split out the async logic into another testable method:
```js
componentDidMount() {
this.doSomething()
}
doSomething() {
return doSomething().then(() => {
this.setState({ some: 'state' })
})
}
```
And the test becomes:
```js
it('calls this.doSomething on mount', () => {
const component = shallowRenderComponent()
const doSomething = jest.fn()
component.instance().doSomething = doSomething
component.instance().componentDidMount()
expect(doSomething).toHaveBeenCalled()
})
it('fetches something then sets state', async () => {
const component = shallowRenderComponent()
await component.instance().doSomething()
expect(component.state()).toBe({ some: 'state' })
})
```
Not a deal breaker, but kind of a burden for seemingly not much gain.
username_8: Hello and thank you for the flow type checker. I would like to know if there is a solution for this issue or a workaround?
username_4: Here is the workaround:
```js
class X extends React.Component<{}> {
componentWillMount() {
this._componentWillMount();
}
async _componentWillMount() {
}
}
```
username_9: Are there any negative consequences to using suppressions like `$FlowFixMe` instead of an IIFE (@username_1) or a sub-function (@username_4)? Wouldn't this be the easiest and fastest workaround provided you have already setup your _`.flowconfig`_ to support it? |
theglobaljukebox/cantometrics | 1008209225 | Title: Kate - POSSIBLE CONFUSION OF SONGS ASSIGNED TO GJB cultures 10286, 27688 (the latter currently does not have cantometrics songs assigned to it)
Question:
username_0: Kate's notes:
SUMMARY OF (POSSIBLE) PROBLEM: Of the songs currently attributed to 10286, four songs would seem to be a better match for "Yup'ik", which I actually think should be assigned C_id 27688: 2939, 2940, 2941, 2942; these four songs have average lat lon 65.25, -164.45); The remaining songs currently attributed to 10286 would seem to be a better match for the northern Alaskan inuit group "Tareumiut" and dialect nort2943; this group could keep C_id 10286 but should be renamed "Tareumiut"
10286, 27688 - Notes: From Wikipedia: Tununak is one of four villages on Nelson Island; Nelson island is one of two main islands in Western Alaska (with Nunivak Island) occupied by Central Yupik speakers. Note that none of the recordings appear to have been made on Nelson Island.
27688 - Tununak Qaluyaarmiut--Suggest renaming "Yup'ik", and assigning specific cantometrics songs with matching lat lon (see specific suggestions at right) - Specific suggestion: RENAME (note spelling): Yup'ik; match to B369 Norton Sound Inuit; Cantometrics SONG IDs: 2939, 2940, 2941, 2942; society lat long 65.25, -164.45; Adjust alternative names. NOTE: There are three other Yup'ik speaking societies in D-PLACE: Na6/B299 "Nunivak" (too far south for these recordings); B296 Kuskowagmut (still fairly far south for the recordings). Despite the fact that these three societies are too far south to be good matches for the recording locations, more research might indicate they could be matched based on cultural similarity.
10286 - Yu'pik--Suggest renaming to "Tareumiut" - Specific suggestion: RENAME: Tareumiut; match to Na2 Tareumiut; dialect nort2943; Cantometrics SONG IDs:1248, 2937, 2938, 2943, 2944, 2945, 2946; Society lat lon: 71.38;-156.48; Adjust alternative names
Answers:
username_0: Two issues I see with this:
1) While society 27688 (Tununak Qaluyaarmiut) is not currently assigned any Cantometrics songs, it has a Choreometrics ID number, meaning that this culture probably came from Choreometrics metadata. Since we haven't gone through and cleaned the Choreometrics data and metadata yet, we don't know for sure how accurate this culture info is, but if there are indeed Choreometrics data for this society I don't think we should rename it to Yup'ik and change its metadata as suggested. Instead, I suggest leaving songs 2939, 2940, 2941, and 2942 as society 10286 (Yu'pik), and correcting the spelling (Yup'ik) and matching to the D-PLACE ids Kate suggested.
2) I agree that songs 1248, 2937, 2938, 2944, 2945, 2946, 2943 are from Point Barrow and would be a better match with a society located there. The liner notes state: ""The Point Barrow people live at the most northern point of Alaska on the Arctic Ocean and are called Nuwungmiut."" I can't find much info about Nuwungmiut, and I don't know how much to trust the Folkways liner notes, but would this be a safer name choice for these four songs than Tareumiut? And in that case would we still match it with Na2 Tareumiut? Regardless of the name, I suggest creating a new society with a new id for these 7 songs, rather than reusing an old one, for the purpose of keeping 27688 (Tununak Qaluyaarmiut) as is for when Choreometrics data is added in the future.
--Stella
username_1: --
www.culturalequity.org
https://archive.culturalequity.org
www.theglobaljukebox.org
*“The precise role of the artist is to illuminate that darkness, blaze
roads through that vast forest, so that we will not, in all our doing, lose
sight of its purpose, which is to make the world a more human dwelling
place." --- <NAME>*
username_0: Society 10286 - Corrected spelling to "Yup'ik"; assigned society lat long 65.25, -164.45; matched to D-PLACE xd1014; language kusk1241. Songs 2939, 2940, 2941, and 2942 are assigned to this society.
Created new society - named Nuwungmiut (Tareumiut as alt name); assigned society lat-lon 71.38, -156.48; matched to D-PLACE xd1036; language nort2944. Songs 1248, 2937, 2938, 2944, 2945, 2946, 2943 assigned to this society.
Status: Issue closed
|
godotengine/godot | 49768663 | Title: document transformation matrix (including quaternions) and raycasting
Question:
username_0: Raycasting and matrix operations are very interesting features,
but they require some love in the documentation area. It is very hard to find the odds and ends for
someone without game engine experience (but with average math knowledge) like myself.
Answers:
username_1: Ping @godotengine/documentation :)
username_1: Cloned in godot-docs repo.
Status: Issue closed
|
OrchardCMS/OrchardCore | 406560176 | Title: how can i set farsi or arabic calender ??
Question:
username_0: I need to use another calendar, such as the Persian or Arabic calendar. How can I do this?
Answers:
username_1: It's not fully implemented in OrchardCMS itself yet.
If you set Farsi under the culture section, it gets configured.
username_0: thank you Mehdi, but it should be able to save the English date (for query and search) and the Persian date (just for showing in the UI) together.
username_2: We support calendars in O1
- https://github.com/OrchardCMS/Orchard/blob/dev/src/Orchard/Localization/Services/DefaultCalendarManager.cs
- and `ICalendarSelector`
We should do the same here
/cc @DaRosenberg — as you implemented it in O1, you should be proud
username_0: Have you used it? Can it be used for Core?
username_0: @username_2 @DaRosenberg @username_1
Is it easy to implement? I need a Persian calendar and if it's easy I will do it; unfortunately, I'm not familiar with the Orchard Core codebase.
username_2: It might be trivial because we already have this in O1 and it should do the same thing. The only non-trivial thing that was done is the parsing of dates that was completely custom in O1, but you could start with the dotnet core implementation if you know that it works for you.
You would also add a calendar setting (like we added a TimeZone setting).
Technically just reproduce what this PR did:
https://github.com/OrchardCMS/Orchard/commit/a306759748b09e0ba07bb8b729073daddcd28687
You should look at these:
https://github.com/OrchardCMS/Orchard/blob/dev/src/Orchard/Localization/Services/ICalendarSelector.cs
https://github.com/OrchardCMS/Orchard/blob/dev/src/Orchard/Localization/Services/SiteCalendarSelector.cs
https://github.com/OrchardCMS/Orchard/blob/5b5eda6ae36dc2d7906599882165ef1e985cdc8c/src/Orchard/Localization/Services/DefaultCalendarManager.cs
Then change the DateTime shape to use this calendar.
All in all, could take a couple hours if you are familiar with Orchard.
Status: Issue closed
|
immersive-web/webxr-input-profiles | 1092606938 | Title: touch-v3 controller assets have rotation/position offset when using Quest + Link
Question:
username_0: **Describe the bug**
**To Reproduce**
Steps to reproduce the behavior:
1. Attach Oculus Quest to desktop PC and enable Link
2. Go to https://immersive-web.github.io/webxr-samples/input-profiles.html
3. Click on "Enter VR"
4. Place controller circle on a flat surface
5. Observe position and rotation error in VR compared to real world
6. Place controller circles against each other
7. Observe position and rotation error in VR compared to real world
On Quest in the Oculus browser, orientation is correct; when you repeat the above on Quest with Link disabled you get a pretty perfect match between real world and virtual objects.
**Expected behavior**
Quest + Link controller model should match real-world orientation and position.
**Screenshots**
_In VR via Quest + Link_

_Oculus Home via Quest + Link controller orientation_

_Real-world orientation_

**Version**
Latest used on https://immersive-web.github.io/webxr-samples/input-profiles.html
**Desktop (please complete the following information):**
- OS: [Windows]
- 3D engine: [all]
- Browser [chrome]
Answers:
username_1: @username_2 do you know if this is an issue with Oculus controllers?
username_2: @Artyom17 and I have brought this up before as something that WebXR could address..
The controllers need an additional offset to be perfectly placed and we ended up hardcoding that in our code.
username_2: For reference, this is the [issue](https://github.com/immersive-web/webxr/issues/958).
username_0: Thanks for linking! Just to confirm that I get this right – the Oculus Browser on Quest applies custom offsets so that the controller models actually line up with the controllers? And the "better" fix would be that the offsets are fixed in this repo, and then the offset in Oculus Browser has to be removed? |
jlippold/tweakCompatible | 406015737 | Title: `Activate Link` working on iOS 11.3.1
Question:
username_0: ```
{
"packageId": "org.rdharris.activatelink",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "org.rdharris.activatelink",
"deviceId": "iPhone10,1",
"url": "http://cydia.saurik.com/package/org.rdharris.activatelink/",
"iOSVersion": "11.3.1",
"packageVersionIndexed": true,
"packageName": "Activate Link",
"category": "Tweaks",
"repository": "BigBoss",
"name": "Activate Link",
"installed": "1.4.1-1",
"packageIndexed": true,
"packageStatusExplaination": "This package version has been marked as Working based on feedback from users in the community. The current positive rating is 100% with 1 working reports.",
"id": "org.rdharris.activatelink",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.0",
"shortDescription": "Open a link with an activator action",
"latest": "1.4.1-1",
"author": "rjharris",
"packageStatus": "Working"
},
"base64": "<KEY>",
"chosenStatus": "working",
"notes": ""
}
```
Status: Issue closed |
michaelhyatt/elastic-apm-mule4-agent | 651964824 | Title: Support for ELASTIC_APM_ENABLE_LOG_CORRELATION - log4j2
Question:
username_0: Hi - this is pretty much a duplicate of a similar issue in the mule3-agent.
https://github.com/username_1/elastic-apm-mule3-agent/issues/24
Regardless of whether I have `ELASTIC_APM_ENABLE_LOG_CORRELATION=true`, the `%X{trace.id}` and `%X{transaction.id}` are always empty. The `%X{correlationId}` does work.
Mule Kernel 4.2.0
AdoptOpenJDK 1.8
mule4-agent 0.0.2
mule-apikit-module 1.3.3
Answers:
username_1: @username_0 I created a new release that populates MDC with `trace.id` and `transaction.id`, please give it a go:
https://github.com/username_1/elastic-apm-mule4-agent/releases/tag/v0.0.3
username_0: @username_1 sorry I just saw this comment, I'm not accustomed to checking my Github notifications.
I will test it out in the next few days and let you know how it goes 👍
username_0: Hello,
Sorry it has taken me so long. I have come back around to testing version 0.0.3.
Alas, it is still not working for me.
mule kernel log:
```
**********************************************************************
* Started app *
* 'mule4.template.project.api-1.0.0-SNAPSHOT-mule-application' *
* Application plugins: *
* - Sockets : 1.1.5 *
* - HTTP : 1.5.3 *
* - APIKit : 1.3.3 *
* Application libraries: *
* - apm-agent-attach-1.17.0.jar *
* - jna-5.3.1.jar *
* - jna-platform-5.3.1.jar *
* - apm-agent-api-1.17.0.jar *
* - mule4-agent-0.0.3.jar *
**********************************************************************
```
log4j2.xml
```
<PatternLayout pattern="%d [%t] %-5p %c - %X{trace.id} %X{transaction.id} CorrelationID=%X{correlationId}, Message=%m%n" />
```
mule4.template.application.log - trace.id and transaction.id appear as empty strings.
```
2020-08-31 11:33:14,117 [[MuleRuntime].cpuLight.15: [mule4.template.project.api-1.0.0-SNAPSHOT-mule-application].post:\request:application\json:api-config.CPU_LITE @4dbe76a0] INFO org.mule.runtime.core.internal.processor.LoggerMessageProcessor - CorrelationID=e09e847b-b605-4f16-86b0-971698076fcd, Message={"Name":"name","Address":"address"}
```
username_0: After digging a little deeper and attaching a debugger, I'm finding this behaviour:
1. The MDC.put is being called for trace.id and transaction.id
2. Looking at org.apache.logging.log4j.ThreadContext class, the correlationId is put many times within the flow, whereas trace.id is only put once.
3. On the MDC.remove, the trace.id and transaction.id aren't there anymore, though the correlationId is.
Possibly the thread that puts the IDs is different from the thread running the Mule logger component, and different from the thread that ends the transaction.
system property `log4j2.isThreadContextMapInheritable=true` does not seem to make any difference.
username_1: Could it be the AsyncLogger? If not, it may require extending the log4j:
```
Log4j 2.7 adds a flexible mechanism to tag logging statements with context
data coming from other sources than the ThreadContext. See the manual page
on extending Log4j
<https://logging.apache.org/log4j/2.x/manual/extending.html#Custom_ContextDataInjector>
for
details.
```
If only there were a way to log the trace.id value in the payload of the message, the Logs UI in Kibana would pick it up, since the search string it uses is `trace.id : "XXX" OR "XXX"`, which also matches a trace.id logged in the message field of the logged event.
mozilla/Reps | 423384089 | Title: Plan to move inactive rep to Alumni (Q2 2019)
Question:
username_0: Goal: Seasonally move inactive Reps (no report for more than 12 months) to Alumni
## Related Issues & Links:
- https://discourse.mozilla.org/t/inactive-reps-to-be-moved-as-alumni/34090/
- #348
- #362
## Roles:
Responsible: @username_0
Accountable:
Supporting:
Consulted: @couci
Informed:
## Required:
- [ ] list of inactive rep (Due: )
- [x] email template to ask for update (owner Irvin)
- [ ] send out notice email (owner: Irvin)
- [ ] announce the preliminary list on Discourse (owner: Irvin)
- [ ] get final list of inactive rep (owner, Due: )
- [ ] move inactive list to Alumni (owner: username_0, Due: )
- [x] create a procedure to auto-generate alumni cert and draft email (owner: Irvin)
- [ ] send out alumni certification (owner: Irvin, Due: )
- [ ] remove alumni from Mozillians NDA group (owner: Konstantina Due: )
Answers:
username_0: blocked by https://github.com/mozilla/remo/pull/1507
username_0: This is the preliminary list of Reps who will be graduate
https://discourse.mozilla.org/t/inactive-reps-who-will-be-graduated-to-alumni-after-mid-may/40024
username_1: Can this be closed then?
username_0: no, we haven't moved them yet due to the latest campaign before All Hands. Will proceed later this week.
username_0: Totally moved 25 reps to alumni
username_0: over to @couci for NDA removal
username_0: the list is here https://docs.google.com/spreadsheets/d/1x5YfxZSINc6pUuJTJSJ3A8EiZwsxYxXGW6850jYCvlQ/edit?pli=1#gid=2085532218 (sheet 2 line 21-45)
username_0: Konstantina has finished removing NDA access. I will close this issue once I create the task for Q4.
Status: Issue closed
username_0: closed with https://github.com/mozilla/Reps/issues/385 |
michaelrambeau/bestofjs | 543768281 | Title: react-diagrams
Question:
username_0: Hi there,
Couldn't find this one, and it seems an interesting project:
https://github.com/projectstorm/react-diagrams
Maybe this one would be a good fit to add to best of JS.
Cheers
Answers:
username_1: Hello Artur @username_0
Thank you for the recommendation, it will be available on _Best of JavaScript_ very soon.
username_1: @username_0 Happy new year 2020 Artur!
It's online, please check the charting tag: https://bestofjs.org/projects?tags=chart
Maybe, in addition of "charting", a more specific tag like "diagram" (or flowchart?) would be useful, what do you think?
username_0: @username_1 Happy New Year 2020 Michael! :)
Great, thanks! A diagram tag would be great, it's always nice searching and seeing that there's a related tag. For flowchart or diagram I'm not sure. Diagram seems better to me, but I suppose that's my natural tendency to look for diagram.
Status: Issue closed
username_1: Hello Artur @username_0
I forgot to tell you that now we have a tag called "Diagram / Flow chart":
https://bestofjs.org/projects?tags=diagram
Sorry for the very late reply, let me know if it makes sense, thank you!
username_0: Hi Michael @username_1
That's great! Thanks for doing this. The site and the mailing list are really nice and a great way to keep up with stuff that's trending.
username_1: Thank you for your support Artur! |
gradle/gradle | 256131000 | Title: Improve markdown rendering support
Question:
username_0: In https://github.com/gradle/gradle/commit/73d54c86eca92ae26decccd774f49bfc2b01c314 we can see our markdown renderer can't process `**1.1.4-3**` correctly and we have to use `<b>1.1.4-3</b>` as a workaround. Since `pegdown` is deprecated, we should use something else instead.
Answers:
username_1: I think we should fully move to AsciiDoc. We already do that for the user guide. There's no good reason to stick with Markdown for the release notes.
username_0: That would be excellent.
Status: Issue closed
|
mwydmuch/napkinXC | 878972415 | Title: Support for custom tree (tree_structure in python interface)
Question:
username_0: Hi,
Thanks for writing this software, which is very helpful!
I'm currently experiment the effect of label trees and load trees from file.
Is it possible to pass a string to `tree_structure` parameter in [models.PLT](https://github.com/username_1/napkinXC/blob/6d3d451c1283ea8e2e186daf3dcb39d9867c5a5f/python/napkinxc/models.py#L277) class, so that a custom tree can be loaded? It seems like the current Python interface does not support it.
If possible, I can make a pull request, and it would be nice if some instructions can be given, e.g., where and what to modify.
Cheers,
Han
Answers:
username_1: Hi @username_0, thank you for your kind words.
It's true that the Python interface doesn't have the `tree_structure` parameter right now (I would like to improve this functionality first before adding it), but constructors accept **kwargs that are passed to the underlying CPP module. This actually allows using all undocumented parameters that are implemented in the `src/args.cpp` file also from Python (so for experimental purposes new options can be implemented just in CPP, without the need of updating the Python module).
So you can use the `tree_structure` option out of the box, like in this example below that trains two PLTs: the first one constructs its tree using hierarchical k-means clustering, and the second one loads the tree created for the first one.
```
from napkinxc.datasets import load_dataset
from napkinxc.models import PLT
from napkinxc.measures import precision_at_k
X_train, Y_train = load_dataset("eurlex-4k", "train")
X_test, Y_test = load_dataset("eurlex-4k", "test")
plt = PLT("eurlex-model")
plt.fit(X_train, Y_train)
Y_pred = plt.predict(X_test, top_k=5)
print("Precision at k:", precision_at_k(Y_test, Y_pred, k=5))
plt2 = PLT("eurlex-model2", tree_structure="eurlex-model/tree", verbose=True) # I added the verbose option here as a proof, it will print the confirmation that the tree was loaded from a given file.
plt2.fit(X_train, Y_train)
Y_pred = plt2.predict(X_test, top_k=5)
print("Precision at k:", precision_at_k(Y_test, Y_pred, k=5))
```
When it comes to the tree format, it's pretty strict and limited right now:
* In the first line, it expects 2 space-separated numbers: `m` - the number of labels and `t` - the number of tree nodes.
* Then `t - 1` lines are expected, each specifying one tree node with two or three space-separated numbers: `p` - id of the parent node, `n` - node id, `l` - label id (optional).
* 0 is always the id of the root node. `p` and `n` should be < `t` and `l` < `m`.
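For example, here is a minimal sketch of a hand-written tree file for a toy problem with 4 labels (a complete binary tree with 7 nodes), loaded through the `tree_structure` kwarg as above (the file name and label count are made up for illustration):
```python
from napkinxc.models import PLT

# m=4 labels, t=7 nodes: root 0, internal nodes 1-2, leaf nodes 3-6 carrying labels 0-3.
tree_lines = [
    "4 7",    # m t
    "0 1",    # parent node
    "0 2",
    "1 3 0",  # parent node label
    "1 4 1",
    "2 5 2",
    "2 6 3",
]
with open("custom_tree.txt", "w") as f:
    f.write("\n".join(tree_lines) + "\n")

plt = PLT("custom-tree-model", tree_structure="custom_tree.txt", verbose=True)
# plt.fit(X_train, Y_train)  # the training data must have exactly 4 labels to match this tree
```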
Status: Issue closed
|
gatling/gatling | 186028853 | Title: Upgrade sbt 0.13.13
Question:
username_0: https://groups.google.com/forum/#!topic/sbt-dev/Q4ROTNVH5gY
Answers:
username_0: ```
trait Build in package sbt is deprecated: Use .sbt format instead
```
cc @username_1
username_1: Yep, we had it coming for a long time :/
I can help with migration if you want :)
username_0: I don't really get the reasons behind this move.
Anyway, if you have some spare time to help with this, it would be awesome!
username_1: I happen to have a full week of spare time :)
The simplest migration path would be to simply move module definitions in `GatlingBuild` to a `build.sbt` as this is the only deprecation here, `.scala` files contributing to the build are still supported AFAIK.
A possible next step would be to move the common settings to a local AutoPlugin, but it's only a nice to have.
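For illustration, a rough sketch of what that first step could look like in a top-level `build.sbt` (module names and settings here are placeholders, not the actual Gatling build definition):
```scala
// build.sbt -- module definitions moved out of project/GatlingBuild.scala
lazy val commonSettings = Seq(
  organization := "io.gatling",
  scalaVersion := "2.11.8" // placeholder value
)

lazy val core = (project in file("gatling-core"))
  .settings(commonSettings: _*)

lazy val http = (project in file("gatling-http"))
  .dependsOn(core)
  .settings(commonSettings: _*)
```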
WDYT ?
username_0: I'd rather keep it simple :)
username_0: Thanks a lot!
Status: Issue closed
|
Hrabovszki1023/OKW | 123175565 | Title: [KeyWord] Select Tablecell "Functionalname" "Col" "Row"
Question:
username_0: # Done-list:
## Implementation:
- [ ] as Method in `IOKW_State` defined.
- [ ] `OK` – LFC (Latebound Function Call) implemented
- [ ] `NOK` – LFC implemented
- [ ] `Core`
## Unittests:
- [ ] Normal Usecase
Usecase with ANTLR/OKWParser:
- [ ] Testcase with memorized value.
- [ ] Testcase with environment var.
- [ ] Testcase with `${IGNORE}` and `""` as values.
- [ ] Testcase with `${EMPTY}` (VerifyValue-Keyword)
## Exception-testcases:
- [ ] LFC is not implemented -> expected Exception: `OKWFrameObjectMethodNotFoundException`
- [ ] GUI-Object is not defined in Frame -> expected Exception: `OKWFrameObjectChildNotFoundException`
- [ ] Not allowed value: e.g. „Bandersnatch“ for CheckBoxes: `VerifyExist(„OK“,“Uschi“)` -> expected Exception: `OKWNotAllowedValueException` |
Serviceplatformen/demoservice-client-net | 1094365859 | Title: Updated version?
Question:
username_0: This sample is for .NETFramework 4.5.2 which is going to be EOL in April 2022.
Any chance for an upgrade on this to .net 6?
Answers:
username_1: Hi
If you still have the issue please contact <EMAIL>.
We are closing the issue feature.
We have just uploaded a new version using .Net v4.7.2
Regards
ServicePlatformen
Status: Issue closed
|
kubeflow/manifests | 1035115935 | Title: Kubeflow installation mysql not starting up due to pvc not in namespace
Question:
username_0: Kubeflow version
```
kubeflow/manifests$ git branch
* (HEAD detached at v1.4.0)
master
```
```
kubectl -n kubeflow describe pod mysql-f7b9b7dd4-j52jr
```
```
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 27s default-scheduler 0/1 nodes are available: 1 persistentvolumeclaim "mysql-pv-claim" not found.
Warning FailedScheduling 25s default-scheduler 0/1 nodes are available: 1 persistentvolumeclaim "mysql-pv-claim" not found.
```
Correction
```
alex@pop-os:~/kubeflow/manifests$ cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
namespace: kubeflow
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20G
EOF
```
https://github.com/kubeflow/manifests/blob/d36fc9c0555c936c7b71fd273b8e4604985ebba8/apps/pipeline/upstream/third-party/mysql/base/mysql-pv-claim.yaml
Answers:
username_1: If you installed kubeflow using kustomize the pvc should be create automatically. Are you running a kubernetes multi node cluster or single node cluster?
What type of storage are you using in the cluster? if you are running on single node and using default local node storage you might need to configure local-storage-provisioner -- example: https://github.com/rancher/local-path-provisioner
What do you see when you run -
kubectl get sc
kubectl get pvc
username_0: Sorry for the late reply: the PVC was created, but not in the correct namespace. I put it in the correct namespace and it worked
ryanblenis/MeshCentral-WorkFromHome | 864061315 | Title: RDP Work FROM HOME Tunnel Time Out
Question:
username_0: So is there a timeout with regard to how long the RDP desktop tunnel stays up? I am running MeshCentral version 0.7.89
After about 12 hours or so the RDP connection/tunnel that has been placed on the desktop no longer works. It seems like the tcp proxy/forward has timed out.
thanks
shawn
Answers:
username_1: I'm having the same issue. I haven't put a time to it, but it works for a while, then go back to use it and it doesn't work.
If I restart the MeshAgent it appears to work fine.
When I click on the shortcut and it's not working I don't see anything in the MC logs (MC is in debug mode), but once I restart the agent I then see the tunnel being created and it works as intended. |
random-guys/lux | 534738245 | Title: Prefer env vars over actual URL
Question:
username_0: https://github.com/random-guys/lux/blob/9f9e2fdecf014acdce3e988662d76c55fcab63ba/__tests__/lux.spec.ts#L5
This repo is public, it won't make sense to use the actual URL.
Answers:
username_1: That slipped past me... I could make a PR that uses [this](http://www.dneonline.com/calculator.asmx?WSDL) instead and takes the URL from an `$ENV` variable
Status: Issue closed
|
maarten-kieft/ASMP | 259911145 | Title: Processor crashes after period of time
Question:
username_0: Stacktrace:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/django/db/backends/utils.py", line 65, in execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.4/dist-packages/django/db/backends/sqlite3/base.py", line 328, in execute
return Database.Cursor.execute(self, query, params)
sqlite3.OperationalError: database is locked
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "manage.py", line 22, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.4/dist-packages/django/core/management/__init__.py", line 363, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.4/dist-packages/django/core/management/__init__.py", line 355, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.4/dist-packages/django/core/management/base.py", line 283, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python3.4/dist-packages/django/core/management/base.py", line 330, in execute
output = self.handle(*args, **options)
File "/usr/bin/asmp/processor/management/commands/runprocessor.py", line 11, in handle
processor.start()
File "/usr/bin/asmp/processor/processor.py", line 30, in start
self.listen(connection)
File "/usr/bin/asmp/processor/processor.py", line 46, in listen
self.process_message(message)
File "/usr/bin/asmp/processor/processor.py", line 57, in process_message
measurement.save()
File "/usr/local/lib/python3.4/dist-packages/django/db/models/base.py", line 807, in save
force_update=force_update, update_fields=update_fields)
File "/usr/local/lib/python3.4/dist-packages/django/db/models/base.py", line 837, in save_base
updated = self._save_table(raw, cls, force_insert, force_update, using, update_fields)
File "/usr/local/lib/python3.4/dist-packages/django/db/models/base.py", line 923, in _save_table
result = self._do_insert(cls._base_manager, using, fields, update_pk, raw)
File "/usr/local/lib/python3.4/dist-packages/django/db/models/base.py", line 962, in _do_insert
using=using, raw=raw)
File "/usr/local/lib/python3.4/dist-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/django/db/models/query.py", line 1076, in _insert
return query.get_compiler(using=using).execute_sql(return_id)
File "/usr/local/lib/python3.4/dist-packages/django/db/models/sql/compiler.py", line 1107, in execute_sql
cursor.execute(sql, params)
File "/usr/local/lib/python3.4/dist-packages/django/db/backends/utils.py", line 80, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File "/usr/local/lib/python3.4/dist-packages/django/db/backends/utils.py", line 65, in execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.4/dist-packages/django/db/utils.py", line 94, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/usr/local/lib/python3.4/dist-packages/django/utils/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.4/dist-packages/django/db/backends/utils.py", line 65, in execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.4/dist-packages/django/db/backends/sqlite3/base.py", line 328, in execute
return Database.Cursor.execute(self, query, params)
django.db.utils.OperationalError: database is locked
```
Status: Issue closed
Answers:
username_0: Cannot reproduce anymore |
robbederks/downzip | 834392412 | Title: Support for direct download (no-zip) of individual URLs
Question:
username_0: Many thanks for creating this tool, which should help our users download multiple files from our website.
I have a somewhat strange request though: can `downzip` be extended to support downloading a single file without creating a zip? The reason is that we store our data on another host, so Chrome disallows direct download from the URL. While `downzip` already allows us to download multiple files from that host, it would be convenient for our users to download a single file using the same mechanism. I suppose the logic could be like:
1. If `files ` contains multiple files, use .zip
2. if `files` contains a single file and `zipFileName` is unspecified or does not end with `.zip`, download in original format.
Answers:
username_1: Seems pretty straightforward to implement. Would maybe gate this behind a parameter when downzip is instantiated which defaults to false.
Would merge if it's a clean PR!
username_0: I found https://github.com/jimmywarting/StreamSaver.js which shares the same "stream" idea and appears to provide what I need. I would love to have one less dependency if `downzip` could handle both cases, but I am afraid my level of JS is not enough to provide a PR to `downzip`; I could maybe help with testing and documentation if you can lead the effort.
username_1: I'm sorry, I don't have the time at the moment to implement this. Will leave it open for if anyone else is interested!
username_0: Maybe you can just outline, in a few sentences, where changes should be made? |
magloire/vidisearch | 315185888 | Title: Fails when I select an address from the list
Question:
username_0: When I select an address from the list

I get this error in the console
bundle.js:14239 Uncaught TypeError: Cannot read property 'get' of undefined
at AdresseList.handleClk (bundle.js:14239)
at HTMLUnknownElement.boundFunc (bundle.js:74247)
at Object.ReactErrorUtils.invokeGuardedCallback (bundle.js:74253)
at executeDispatch (bundle.js:68270)
at Object.executeDispatchesInOrder (bundle.js:68293)
at executeDispatchesAndRelease (bundle.js:67701)
at executeDispatchesAndReleaseTopLevel (bundle.js:67712)
at Array.forEach (<anonymous>)
at forEachAccumulated (bundle.js:79540)
at Object.processEventQueue (bundle.js:67912)
Answers:
username_0: I had forgotten to add the submodules to the config
Status: Issue closed
|
oxinabox/MagneticReadHead.jl | 423812081 | Title: Pressing Ctrl + D spams iron> in the console
Question:
username_0: 
Answers:
username_1: Can you do `versioninfo()` for me?
This is the return of https://github.com/username_1/MagneticReadHead.jl/issues/13
which was closed in https://github.com/username_1/MagneticReadHead.jl/pull/15/
username_0: ```
julia> versioninfo()
Julia Version 1.1.0
Commit 80516ca202 (2019-01-21 21:24 UTC)
Platform Info:
OS: Linux (x86_64-pc-linux-gnu)
CPU: Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-6.0.1 (ORCJIT, skylake)
```
I updated to latest master of MagneticReadHead and tried again but same thing.
username_1: As an aside what did you expect <kbd>Ctrl</kbd>+<kbd>d</kbd> to do?
My instinct says `Continue`, does that align with yours?
username_0: I expected it to exit the debugger.
username_1: But should it keep running the function (`Continue`)
or should it terminate immediately (`Abort`)?
username_0: I think abort everything and give julia prompt. That was my intention but it doesn't mean it is the best choice :).
username_1: I think you are right.
It means _I am done_,
e.g. when I press it in julia I exit julia.
username_1: I can't reproduce this on
```
julia> versioninfo()
Julia Version 1.1.0
Commit 80516ca202 (2019-01-21 21:24 UTC)
Platform Info:
OS: macOS (x86_64-apple-darwin14.5.0)
CPU: Intel(R) Core(TM) i7-8559U CPU @ 2.70GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-6.0.1 (ORCJIT, skylake)
```
[](https://asciinema.org/a/vyKed4e1Kn30QyKLGKj1KdCg5?t=25)
You didn't by any chance do something like `include(test/runtests.jl)` before running it?
(A side-effect of how that takes control of readline is that when it relinquishes control it doesn't re-enable the protections against this.)
controversies-of-science/react-worldviewer-app | 251441140 | Title: Image pyramids should only load on click
Question:
username_0: There is potentially an image pyramid at each level of discourse. I need to rethink the way that I'm dealing with this situation: I cannot simply download all five of them -- especially on mobile devices.
I should load the large-format image on load, and force a click (like with the CardText component) to toggle between image pyramids and large-format images.
Part of this ticket should be to standardize the UI for this toggling.
Answers:
username_0: The UI should be a combination of snackbar popup on all relevant pages + a brief overlay that indicates state change on tap, then a loader should appear until loading is complete. The tricky part to this is to make sure that once this data is loaded, it should remain cached.
username_0: It might make sense to have different behavior here based upon where a person is coming from. If they are swiping, then they need to click to zoom, whereas they can zoom right away if they are arriving from outside of the `MainStack`.
username_0: Swiping should disable when deep zoom is activated.
username_0: (There are btw no `small.jpg` versions of the controversy card images.)
username_0: First attempt at solving this problem produces different behavior on actual mobile device compared to Chrome's mobile simulator: OpenSeadragon is capturing the single clicks and throwing them away rather than passing them to the container div.
username_0: It's proven incredibly difficult to toggle deep zoom on mobile with click. What I *am* able to do with ease is toggle the deep zoom according to the zoom level: Click to zoom (which deactivates swiping), then once fully zoomed out, swiping reactivates.
The downside of this is that it is not really intuitive. And further, if the user tries to zoom into the image without first clicking, they will zoom into the app -- which really screws everything up.
So, this is not especially ideal. I'm also not very happy with my UI for showing text and images for feed posts, and there also of course remains this problem with scrolling through the feed posts on mobile.
So, the mobile situation continues to be a serious problem. |
telus/tds-core | 408275913 | Title: TDS/Input does not accept undefined value
Question:
username_0: ## Description
Attempting to use the HTML attr `defaultValue` to treat `@tds/core-input` as an [uncontrolled component](https://reactjs.org/docs/uncontrolled-components.html#default-values), but unable to.
## Reproduction Steps
```jsx
_onChangeHandler = (val) => {
this.props.updateDataAction(val)
}
render () {
  return (
    <Input
      onChange={this._onChangeHandler}
      value={this.props.value}
    />
  )
}
```
Expected input value to not change after user types and `_onChangeHandler` sanitizes the input and saves in redux store. `this.props.value` comes from redux.
After finding out how to use defaultValues, I modified render to the following:
```jsx
_onChangeHandler = (val) => {
this.props.updateDataAction(val)
}
render () {
  delete Input.defaultProps.value
  return (
    <Input
      label={label}
      defaultValue={this.props.defaultValue}
      onChange={this._onChangeHandler}
      value={undefined}
    />
  )
}
```
Expected input value to be set once and not change on subsequent renders and not throw any errors. Actual results: default value was not set, tds/shared/formField threw error saying `value` is required.
## Meta
- TDS component version: @tds/core-input: 1.0.13
- Willing to develop solution: Yes
- Has workaround: No (potential workaround is to use vanilla html input and copy tds/core-input styles. Very messy)
- High impact: Yes
## Notes:
Looks like we need to do the following:
1) Remove defaultProp empty string for `tds/core-input` prop `value`
2) Modify [`FormField`](https://github.com/telus/tds-core/blob/master/shared/components/FormField/FormField.jsx) to not require prop `value`
3) Make sure any components using `FormField` do not need `value` to be required or default to an empty string
Answers:
username_0: Fixed in release @tds/[email protected]
Status: Issue closed
|
locuslab/mpc.pytorch | 373562396 | Title: Non-quadratic slew rate line search
Question:
username_0: If I understand correctly, https://github.com/locuslab/mpc.pytorch/commit/206bd939ec1479424221a62ed45ad26830849fcc does the LQR step with the slew rate penalty in the quadratic cost approximation, but does the line search on the unmodified `cost` function without the slew rate penalty.
This is non-trivial to fix because we currently have a time-invariant non-quadratic cost function and no easy way to access the nominal control trajectory within it. If we did, the clean solution would be to write a new cost function with the slew rate penalty. Instead, perhaps the easiest way of fixing this would be to add the slew rate to the `get_cost` function.
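To make the intent concrete, here is a rough sketch of evaluating a trajectory cost with the slew-rate term folded in, so the line search would score candidates against the same objective as the LQR step (names and shapes are made up for illustration; this is not the actual mpc.pytorch API):
```python
import torch

def cost_with_slew(cost_fn, x_traj, u_traj, u_init, slew_rate_penalty):
    """Sum of per-step costs plus a quadratic penalty on control changes.

    cost_fn:            callable giving the per-timestep cost (stand-in for the true cost)
    x_traj, u_traj:     [T, n_state] and [T, n_ctrl] tensors
    u_init:             [n_ctrl] control applied just before the horizon
    slew_rate_penalty:  scalar weight on ||u_t - u_{t-1}||^2
    """
    base = sum(cost_fn(x, u) for x, u in zip(x_traj, u_traj))
    u_prev = torch.cat([u_init.unsqueeze(0), u_traj[:-1]], dim=0)
    slew = slew_rate_penalty * ((u_traj - u_prev) ** 2).sum()
    return base + slew
```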
Answers:
username_1: That is correct, I forgot we were using the true cost for the line search.
username_0: I just added an error message for this case so nobody accidentally starts getting silent errors from this. |
dmccloskey/TensorBase | 443068685 | Title: Option to use pinned or shared host memory in TensorData
Question:
username_0: ### Description
There are many parts of the application that would benefit from pinned host memory (e.g., the TensorCollection data), while other parts would not (e.g., temporary data that is used transiently). It would be beneficial to have an option in TensorDataGpu to specify whether pinned or shared host memory should be used.
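For background, a small standalone sketch of what such an option would control at the CUDA level (plain CUDA runtime calls, not TensorBase code): pinned (page-locked) host memory from `cudaMallocHost` allows truly asynchronous `cudaMemcpyAsync`, which is what makes it attractive for long-lived data, while short-lived temporaries are usually better off with ordinary allocations.
```cpp
#include <cuda_runtime.h>
#include <cstdio>

int main() {
  const size_t n = 1 << 20;
  float *h_pinned = nullptr;
  float *d_buf = nullptr;

  cudaMallocHost(&h_pinned, n * sizeof(float)); // pinned (page-locked) host buffer
  cudaMalloc(&d_buf, n * sizeof(float));

  cudaStream_t stream;
  cudaStreamCreate(&stream);

  // With a pinned source buffer this copy can overlap with other work on the GPU.
  cudaMemcpyAsync(d_buf, h_pinned, n * sizeof(float), cudaMemcpyHostToDevice, stream);
  cudaStreamSynchronize(stream);

  cudaFree(d_buf);
  cudaFreeHost(h_pinned); // pinned memory must be released with cudaFreeHost
  cudaStreamDestroy(stream);
  std::printf("done\n");
  return 0;
}
```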
### Objectives
- [ ] Add optional attribute to TensorData that allows the user to specify pinned or shared host memory
- [ ] Unit test coverage for the feature
### Validation
- [ ] Passing unit tests
- [ ] Working Win/Linux builds |
dotnet/cli | 227526167 | Title: Consider lighting up on newer sdk resolution policy than hostfxr carried by VS
Question:
username_0: Today, the msbuild resolver in VS carries its own hostfxr.dll. This was needed so that VS can locate 1.0.x CLIs which don't have the new hostfxr.dll.
One compromise there is that VS has to change in order to get any changes to hostfxr.dll. We could consider probing for a 2.0+ hostfxr.dll that has the new entry point and calling into that instead of the copy that is bundled with VS. This would allow changes in how global.json is interpreted (or even mechanisms other than global.json) to come along and be picked up by VS without updating VS.
@dsplaisted @johnbeisner
Answers:
username_0: We can probably push this out past 2.0.
username_0: It turns out that this won't work because the common case is x86 VS/msbuild and x64 dotnet.
Status: Issue closed
|
n0mer/gradle-git-properties | 922914321 | Title: How to use this with GitLab (branch name always 'HEAD')?
Question:
username_0: Hi there,
really loving this plugin, but since GitLab CI always checks out the repository in a detached HEAD state, the branch name always ends up with the value 'HEAD'.

I tried to resolve this by providing a fallback as an environment variable, but this didn't work either:
(gitlab-ci.yml):

(build.gradle.kts)

What am I doing wrong here? Is there any other workaround?
Thank you!
Answers:
username_1: Normally, the plugin will try to use the system environment variable `CI_COMMIT_REF_NAME` when running with GitLab
Ref:
https://github.com/n0mer/gradle-git-properties/blob/d5e95f0b3f36409bcfd7c8b44fd48f1ee2fa3764/src/main/groovy/com/gorylenko/properties/BranchProperty.groovy#L17
But I'm seeing you are providing a branch name which comes from the `GIT_BRANCH` system env var
`branch = System.getenv('GIT_BRANCH')`
Can you try to use `CI_COMMIT_REF_NAME` system env var instead?
`branch = System.getenv('CI_COMMIT_REF_NAME')`
I wonder why it didn't work for you, from GitLab document, the branch name should be available under `CI_COMMIT_REF_NAME` system env var
Ref: https://docs.gitlab.com/ee/ci/variables/#predefined-variables-environment-variables
username_0: So maybe it's not a fault of your plugin but from our docker configuration.
username_0: UPDATE: It works! :)
I just need to pass the variable down to the docker file! :)
### Step 1: .git.lab-ci.yml
```
# in "variables" top level element
variables:
  GIT_BRANCH: $CI_COMMIT_REF_NAME
````
...
```
# on job level (calls docker.sh shell script)
variables:
GIT_BRANCH: $CI_COMMIT_REF_NAME
````
### Step 2: docker.sh (which calls the actual docker build)
```
docker build (...) --build-arg GIT_BRANCH="${GIT_BRANCH}"
````
### Step 3: docker.build.Dockerfile
```
FROM ...
ARG GIT_BRANCH="detached"
ENV GIT_BRANCH=${GIT_BRANCH}
RUN gradle clean build ...
````
### Step 4: build.gradle.kts / build.gradle
```
gitProperties {
keys = listOf("git.branch", "git.commit.id", "git.commit.time", "git.closest.tag.name")
failOnNoGitDirectory = false
branch = System.getenv("GIT_BRANCH") // fallback for gitlab / docker build
}
````
Now it works :)

username_1: Glad that it works for you now! Thanks for the update.
Status: Issue closed
|
datdota/datdota | 739923531 | Title: Timeout error for match duration
Question:
username_0: Hi,
If you go onto datdota and select matches->duration (http://www.datdota.com/match-durations?default=true) then you get a Cloudflare error page.
Answers:
username_1: Once every ~10-12 days the database indexes basically get so screwed up I take the site down for 15 mins, reindex them, and then put the site up. If basic functionality on the site (like this page) are slow, just poke me on Discord or Twitter and I'll reindex.
username_1: Depends more on usage and long-running big queries - especially when I reparse a few thousand games to extract some more data (so I don't want to shut down mid-parse).
Status: Issue closed
|
dart-lang/language | 438472924 | Title: Nested classes
Question:
username_0: Allow declaring a class in the body of another class.
```dart
class Person {
class Occupation {
String jobTitle;
Occupation(this.jobTitle);
String named(String name) =>
moniker(Person(name, this));
}
String name;
Occupation occupation;
Person(this.name, this.occupation);
static String moniker(Person p) =>
'${p.name} The ${p.occupation.jobTitle}';
}
Person p = Person("Rosie", Person.Occupation("Riveter"));
```
I'd expect that `Occupation` is not an instance variable of `Person` (if that would even be a sensible notion in Dart); instead the outer class acts as a namespace containing the inner class. The inner class cannot be accessed from a `Person` instance. The inner class cannot capture instance variables from the outer class. It can however access static methods and fields.
Nested classes could be used for:
* Indicating a relationship between classes / encapsulating classes which only serve to complement another (e.g. Flutter `StatefulWidget`s and their respective `State`s)
* Creating arbitrary namespaces like `export _ as _` but without a library declaration file
* Encapsulating many related methods under a namespace
Answers:
username_1: This is a request for static nested classes. Effectively it only uses the outer class as a namespace, and it has the static members of the outer class in scope. There is nothing that can be done with this feature that cannot be done without, it merely provides a way to create nested namespaces for organizational purposes. I'd probably prefer that you had to prefix the class declaration with `static` so we reserve the possibility of non-static nested classes.
It's unclear whether instance members of `Person` are in the *lexical* scope of `Occupation` members, since Dart traditionally only has one scope and instance members are in scope inside static members, you just can't use them. The same thing should work just as well (or as badly) for static nested classes.
For parsing, there should be no new issues. Because of prefixes, we already have to recognize `id1.id2` as a type in declarations. We now have to recognize `id1.id2....idn` to arbitrary depths, but that is not a problem.
We should also be allowed to nest other declarations, both mixins and typedefs. If we get general type aliases, we also have a way to provide un-nested access to a type for convenience.
```dart
class Foo {
static class Bar {
}
}
typedef FooBar = Foo.Bar;
```
username_2: I've been thinking about the ergonomics of widget/state separation in Flutter. Nested classes could provide a slightly more ergonomic solution if we introduced a different kind of stateful widget, along the following lines (I'm told `flutter_hooks` is doing something similar):
```dart
class Counter extends NewKindOfStatefulWidget<_CounterState> {
static class _CounterState {
_CounterState(this.count);
final int count;
}
Counter(this.name);
final String name;
@override
_CounterState initState() => _CounterState(0);
Widget build(StatefulBuildContext<_CounterState> context) {
return Column(children: [
Text('$name: ${context.state.count}'),
FlatButton(
onTap: () { // this could take `context` to enable replay functionality, a la React hooks.
context.setState(_CounterState(context.state.count + 1));
},
child: Text('Increment'),
),
]);
}
}
```
As @username_1 points out, because the nested class is static, its instance maintains no back-reference to the widget instance. In fact, it shouldn't. Otherwise you wouldn't be able to update widget configuration independently from state. So it's mostly syntactic sugar. There are a couple benefits though:
- Nesting the state class inside the widget class strongly visualizes the relationship between the widget and its state. I think it improves readability.
- The state class can be made private to the widget. While Dart currently lacks class-level privacy, we could add it via `package:meta` that would rely on the nested class syntax.
username_3: I think this is a duplicate of #3755
username_4: For Flutter's BLoC architecture It would be very useful.
With nested classes developers would not prefix event/state classes to avoid ambiguity.
**Now:**
`import 'package:bloc/bloc.dart';
class AuthorizationScreenBloc extends Bloc<AuthorizationScreenEvent, AuthorizationScreenState> {
AuthorizationScreenBloc(AuthorizationScreenState initialState) : super(initialState);
@override
Stream<AuthorizationScreenState> mapEventToState(AuthorizationScreenEvent event) async* {
throw UnimplementedError();
}
}
abstract class AuthorizationScreenEvent {}
class SignInAuthorizationScreenEvent extends AuthorizationScreenEvent {}
abstract class AuthorizationScreenState {}
class SuccessAuthorizationScreenState extends AuthorizationScreenState {}`
**With nested classes:**
`import 'package:bloc/bloc.dart';
class AuthorizationScreenBloc extends Bloc<AuthorizationScreenEvent, AuthorizationScreenState> {
AuthorizationScreenBloc(AuthorizationScreenState initialState) : super(initialState);
@override
Stream<AuthorizationScreenState> mapEventToState(AuthorizationScreenEvent event) async* {
throw UnimplementedError();
}
static class SignInEvent extends AuthorizationScreenEvent {
}
static class SuccessState extends AuthorizationScreenState {
}
}
abstract class AuthorizationScreenEvent {}
abstract class AuthorizationScreenState {}
`
There are 2 main advantages:
1. I can see available events, when AuthorizationSreenBloc.(dot) (It is VERY useful, so you are not compelled to always check Bloc class for events)
2. I should not prefix all the events and states (it's too cumbersome in some blocs)
username_5: Any progress on that?
username_6: is there a plan to support this please?
username_7: There are no current plans to work on this. It is something we might consider in the future, but currently we have a pretty full slate of things to work on.
username_8: I just stumbled upon this when trying to create a nested class full of const values, a sort of tree for static variables.
Since `const` fields are not allowed, I don't see any possibility to do this right now, with the strings being const too.
An example for this could a class, containing const fields for asset paths, so you have code-completion and changing paths/names of assets would cause a build-time error. It would also make things more readable and allow for categorization of const values.
See example
```dart
Widget buildConstWidget(){
return const Center(
child: Image(
// instead of '/assets/images/foobar/coolpicture.png'
image: AssetImage(AssetPaths.images.foobar.coolpicture),
),
);
}
```
The only way to do this currently would be with something like this
```dart
class AssetPaths {
static const _ImagesFolder images = _ImagesFolder();
}
class _ImagesFolder {
const _ImagesFolder();
final launcher = const _ImagesLauncherFolder();
final login = const _ImagesLoginFolder();
final undraw = const _ImagesUndrawFolder();
}
class _ImagesFoobarFolder {
const _ImagesFoobarFolder();
final coolpicture = '/assets/images/foobar/coolpicture.png';
}
```
This however prevents the usage of a `const` constructor in the example above, since the fields are not actually `const` but final.
I understand that nested classes are probably fairly complicated to implement and that you have to prioritize things, but I think having a feature for above use-case would be very beneficial for the maintainability for Dart/Flutter projects in the long term. Maybe this could be solved with a different language feature, such as manually defined namespaces or nested enums ? :thinking:
Something like this
```dart
namespace AssetPaths{
namespace Images {
namespace Folder1 {
static const foobar = 'assets/images/folder1/foobar.png'
}
namespace Folder2 {
static const baz = '/assets/images/folder2/baz.png'
}
}
}
// could then be referenced as `AssetPaths.Images.Folder1.foobar` and assigned to const Constructors
```
username_9: The `as` keyword when used with imports and exports is pretty much the "namespace" functionality you're talking about. Maybe you can have a file called `assets.dart`:
```dart
export "images.dart" as images;
export "videos.dart" as videos;
```
And `images.dart` would have:
```dart
class Folder1 {
static const foobar = 'assets/images/folder1/foobar.png'
}
class Folder2 {
static const baz = '/assets/images/folder2/baz.png'
}
```
That way, you'd be able to do:
```dart
import "assets.dart" as assets;
var path = assets.images.Folder1.foobar;
```
I haven't tested this but I _think_ that should work.
username_1: Narrator voice: "It didn't."
You can't do `export ... as`. The import namespaces you get from `import ... as` only work within the library they're imported in, and only actual declarations can be exported. There is currently no way to nest namespaces.
kalkih/mini-media-player | 1138561733 | Title: 2022.3 will break mini-media-player
Question:
username_0: mini-media-player uses `paper-menu-button` and `paper-listbox` with the assumption Home Assistant will load these elements.
https://github.com/username_1/mini-media-player/blob/master/src/components/dropdown.js#L57
In 2022.3 these elements are no longer part of Home Assistant, so this will no longer work.
Answers:
username_1: Thanks for the heads up, much appreciated.
Any suggestions on elements registered by HA to replace them with? or should I move in the direction of bundling my own components with the card?
username_2: (despite the `/hacsfile` there, I am pretty sure this is in fact the new bundle I`scp`ed in and changed the UUID get param of; I deleted the cached version through web tools for good measure)
---
So I give up. Someone who has used any of these technologies before can take a crack at it - I'll swap back to `entities` or maybe downgrade idk.
Hopefully that commit can save you some of the busy work though?
[^1]: <img src="https://user-images.githubusercontent.com/43136/154622822-c3e016eb-cd1b-4508-956f-535441fa1d73.png" width=150>
[^2]: I think the typeaheads on the mwc-select seem nice, but I was trying to _limit_ blast radius
username_3: I guess this is what I am experiencing now: https://github.com/home-assistant/frontend/issues/11738
<img width="509" alt="Schermafbeelding 2022-02-19 om 22 32 21" src="https://user-images.githubusercontent.com/33354141/154820057-3c088fcd-1fcb-4ba7-b7c3-df618564fd1a.png">
username_2: Reverting to my backup at core-2022.3.0.dev20220213 (ghcr.io/home-assistant/qemux86-64-homeassistant@sha256:0a4f7cc00c2664ab8edcab19afe87e2f94584b99453256831cc0bd356c7e0aaf; Frontend version: 20220203.0 - latest) did resolve the issue for me for the time being
So we can gather the change is somewhere in here:
https://github.com/home-assistant/frontend/compare/20220203.1...20220214.0
(I mean, it seems like the above explanation could still track, but just saying)
username_2: And I guess this is our template for going forward? https://github.com/home-assistant/frontend/pull/11543/files
username_2: Well more relevantly https://github.com/home-assistant/frontend/pull/11591/files
username_1: Hey guys, thanks for the ideas and the discussion.
So I feel like the right thing going forward is to move away from HA-defined elements, as using them causes the card to break every now and then.
(I think?) HA also defines some elements at runtime, so users might experience different behaviour depending on their setup.
The drawback is the increased bundle size, but since at least the mwc elements are built with Lit (which I already bundle with the card), the difference shouldn't be massive.
[Here's a branch](https://github.com/username_1/mini-media-player/commit/ba4291a47834c5912edf4251e1955f4f6a76fd7d) with `mwc-menu` bundled and with the elements defined under different names to avoid the duplicate elements issue.
It's untested in HA and theming is probably not working correctly with HA-defined variables.
If anyone want to test it out and/or continue to tweak it, please do.
username_2: No dice on the branch as yet:
```
2022-02-22 14:12:09 ERROR (MainThread) [frontend.js.latest.202202200] /hacsfiles/mini-media-player/mini-media-player-bundle.js?hacstag=1485208381151:1:0 NotSupportedError: CustomElementRegistry.define: 'mmp-checkbox' has already been defined as a custom element
2022-02-22 14:12:15 ERROR (MainThread) [frontend.js.latest.202202200] /hacsfiles/mini-media-player/mini-media-player-bundle.js?hacstag=1485208381151:1:0 NotSupportedError: CustomElementRegistry.define: 'mwc-ripple' has already been defined as a custom element
```
Not immediately sure how hard/possible it is to shadow out those sub-elements too.
---
I do (still) suspect tho that it would be okay to both ship the mwc components (so you are _sure_ you can depend on them) and also, in practice, preferentially import the HA-shipped versions (avoiding the double def), using new-fangled module/manifest dynamic import, but I admit I don't fully understand how that (would) work/s
(and the impression that it can work may be informed by looking in devtools at a less-productionized webback build on the core frontend that's potentially an artifact of pulling dev channel builds)
username_1: That's exactly what I did with `mwc-menu` and `mwc-list-item` in that branch, I extend the base classes of those components and define the elements with my custom names so they won't clash with the mwc elements defined by HA.
Although if these components have other mwc-components as dependencies, I'm not sure if this is a viable approach anyway.
https://github.com/username_1/mini-media-player/commit/ba4291a47834c5912edf4251e1955f4f6a76fd7d#diff-0d1e9e111ee4cf01c8b8f80f8a4e0e2fd735e4ae2e82abf2bf8c24f840791e5aR11-R24
The first error you posted seems strange though, `mmp-checkbox` should for sure not be defined more than once, make sure you're not loading `mini-media-player` multiple times, e.g. multiple resource references to `mini-media-player` in your setup.
username_0: Here is an example how `mwc` elements can be used in custom cards in the next Home Assistant version:
https://github.com/custom-cards/boilerplate-card/pull/50/files
username_0: Or a simpler example here:
https://github.com/piitaya/lovelace-mushroom/issues/133#issuecomment-1047826814
We are shipping the [scoped-custom-element-registry polyfill](https://github.com/webcomponents/polyfills/tree/master/packages/scoped-custom-element-registry) with the next HA release
username_1: Thanks @username_0,
Might not be an issue, haven't read up on the Scoped Custom Element Registries spec fully yet.
But will we not potentially still have issues when mwc-elements are imported in the base classes of the mwc-elements? Such as here
https://github.com/material-components/material-web/blob/master/packages/list/mwc-check-list-item-base.ts#L11
username_4: Can confirm this card is broken on 2022.3:

username_0: That's why we also define those, and ignore those imports in the rollup config.
So it needs some "hacks", but it works...
username_1: Until I've bundled the mwc-elements with the card this fix should do: #616
Beta release here:
https://github.com/username_1/mini-media-player/releases/tag/v1.16.0-beta
Let me know
username_4: Jep, beta fixes the issue for me


username_5: Delete the .gz file
username_6: Reinstalling from HACS worked, thanks!
username_7: upgrading to beta version fixed the issue!
username_8: Updating to 2022.3 still partly breaks the card for me....
after the fix, the source from one media player shows up on another entity....

username_1: Fix now also available in latest stable release.
Status: Issue closed
username_9: I have upgraded and the original problem is solved, but now another problem arose. I can see the source listing on the first player, but
on the second media player I cannot see the listing (in the Home Assistant app on my mobile phone).
```yaml
cards:
- entity: media_player.sonos_one
hide:
info: false
power: true
runtime: false
shuffle: false
info: scroll
name: Master Bedroom
type: custom:mini-media-player
group: false
- entity: media_player.sonos_one
hide:
controls: true
icon: true
info: true
name: true
power: true
progress: true
source: true
volume: true
volume_stateless: true
type: custom:mini-media-player
source: full
sound_mode: full
artwork: full-cover
group: false
type: vertical-stack
```
username_10: v16 which was just released does not solve the problem for me; drop-downs in the card config do not work.
username_4: Try clearing your cache
username_10: I did that. I'm seeing the same behaviour on Chrome, Edge & the HA app. HA has been restarted and the caches cleared.
username_9: Guys this solves the issue for me but sometimes it is giving me error saying
https://xxxxxxxxxzzzz/hacsfiles/mini-media-player/mini-media-player-bundle.js?hacstag=1485208381160:1:154147 Uncaught SyntaxError: Failed to execute 'querySelector' on 'DocumentFragment': '#9bpap80z7-menu' is not a valid selector.
username_11: On chrome the problem is solved, but on IOS and Android tablet with fully kiosk still the same problem.
username_4: Highly likely a caching issue?
username_11: On IOS you can not clear the caching? On Tablet I have done it but still it will not work. Or if you have any other ideas?
username_12: Where do you find the .gz file to delete it?
I've updated to the beta and the problem is persisting for me.
username_4: You don't need the beta, just upgrade the card through HACS and _clear caches_
username_12: I've done the upgrade and cleared the caches on my phone and my pc and the problem is persisting. Can't quite work out why it's persisting |
aaratikadu/MessManagmentLaravel | 412576618 | Title: image id in student table only 0
Question:
username_0: ## The image id in the student table always gets 0,
the upload function in RegisterController does not work properly # Fix it first
Status: Issue closed
Answers:
username_0: ## The image id in the student table always gets 0,
the upload function in RegisterController does not work properly # Fix it first
prometheus/graphite_exporter | 298282932 | Title: Graphite_exporter: Writing metric on port 9109 over TCP failing
Question:
username_0: My application is sending metrics over TCP to port 9109 in the plaintext protocol format:
<metric path> <metric value> <metric timestamp>
I am using the writeBytes() method of the **DataOutputStream** class to write data to graphite_exporter's port 9109. The write succeeds, but I cannot see the data on http://localhost:9108/metrics.
Before sending data I have started graphite_exporter to listen on port 9108.
Does someone have any idea why I cannot see my metrics on http://localhost:9108/metrics?
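For reference, a minimal sketch of pushing one sample over the plaintext protocol the way described above (host, port, metric path and value are placeholders; note the epoch-seconds timestamp and the trailing newline):
```java
import java.io.DataOutputStream;
import java.net.Socket;

public class GraphitePush {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 9109);
             DataOutputStream out = new DataOutputStream(socket.getOutputStream())) {
            long epochSeconds = System.currentTimeMillis() / 1000L; // current time, in seconds
            String line = "servers.usva.instance1.cpu.total.user 42.0 " + epochSeconds + "\n";
            out.writeBytes(line); // one "<path> <value> <timestamp>" sample per line
            out.flush();
        }
    }
}
```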
Answers:
username_1: What mapping configuration are you using? Do the basic examples [here](https://github.com/prometheus/graphite_exporter#usage) work? Is there anything in the exporter logs about this?
username_0: I have metrics like `servers.usva.instance1.cpu.total.user`
The configuration file looks like:
```
servers.*.*.cpu.total.user
name="servers_cpu_percent"
region="$1"
instance="$2"
job="cpu_job"
```
Exporter logs stay as it is like :

Status: Issue closed
username_1: I think the mapping does not match, and therefore the metrics are discarded. Try with
```
servers.*.*.cpu.total.user
```
instead? In general, to exclude problems with the mapping, try without the mapping, the metrics should show up even if they're not in a great format.
More long-term, consider using [node exporter](https://github.com/prometheus/node_exporter) or [WMI exporter](https://github.com/martinlindhe/wmi_exporter) to collect CPU utilization metrics. Going through the graphite exporter should be a last resort, if no more native option to get the data you want in Prometheus format is available.
I'll close this issue, since it does not appear to be an issue with the exporter itself. For usage questions, it's better to use the [appropriate channels](https://prometheus.io/community/) where there are more eyes on the problem.
username_0: I have the metrics exactly as you showed: servers.*.*.cpu.total.user
username_1: Could you attach the mappings file as it is here?
username_0: Here is the file :
[graphite_mapping.txt] (https://github.com/prometheus/graphite_exporter/files/1738955/graphite_mapping.txt)
username_0: @username_1 : the issue is resolved; I was sending a timestamp 6 hours in the past. We need to send a current timestamp.
Btw how long back timestamp works for graphite exporter?
username_1: Anything less than the value of the --graphite.sample-expiry flag should work, by default 5 minutes.
Keep in mind that Prometheus does not ingest old samples the way graphite does. Sending anything older than "now" will most likely not do what you expect. |
datalocale/dataviz-finances-gironde | 298916373 | Title: Build tests fail after the formula changes
Question:
username_0: I managed (I think) to fix the syntax problem in the aggregate file, and the project builds locally on my computer.
However, it does not build on GitHub because the associated tests fail.
Maybe the new formulas are incorrect, but I won't be able to know until I have worked on it again with Laurence at my workstation.
Answers:
username_1: | ^
1029 |
1030 |
1031 |
```
There's a syntax error, so maybe a character or two is missing, or there are one or two extra ones somewhere.
username_0: <NAME>,
I indeed had this same error on my machine and I thought I had fixed it.
It no longer shows up in ESLint in Atom, and the project starts correctly locally on my machine (npm run watch runs and I can see the business tool and the dataviz).
The build errors seem to me more related to errors in the content of the formulas, but I'm not 100% sure.
Status: Issue closed
|
turlando/airhead | 220663229 | Title: Websocket support for existing APIs updates
Question:
username_0: Send updates about tracks and queue via websockets
Answers:
username_1: #13 will resolve this issue. Changing priority to low.
Status: Issue closed
username_1: Implemented in [326c1b5](https://github.com/username_1/airhead/commit/326c1b5080a0c2333a2597450ab810545906b2c5). |
rakudo/rakudo | 823283175 | Title: `s///` in `race` erroring out and eating RAM
Question:
username_0: ## The Problem
The below is a golf of a memory leak problem I currently observe with <https://github.com/Raku/rakubrew.org>.
I'm not entirely sure I golfed correctly, because the rakubrew.org website has the code in a route and the leak occurs when hitting the website in parallel, and the golf below just uses `race` instead.
```
#!/usr/bin/env raku
my $md-template = 'resources/homepage.md.tmpl'.IO.slurp;
race for ^100000 {
my $page = $md-template;
$page ~~ s:g[ "(only-browser(" (.+) ")only-browser)" ] = "a";
}
```
With the template file filled with <https://gist.github.com/username_0/33ff8521d9c646d90e3204691f25c84c/raw/0d2ca13cf9fb693661583bc6bf77eab82d0472a7/errors.md.tmpl> it usually aborts immediately with one of the errors below.
When filling the template file with <https://gist.github.com/username_0/33ff8521d9c646d90e3204691f25c84c/raw/0d2ca13cf9fb693661583bc6bf77eab82d0472a7/leaky.md.tmpl> it doesn't error, but fills my 16 GB of RAM in seconds.
I think it's a gradual thing: the longer the file gets, the higher the chance that the script survives (or survives for longer).
```
A worker in a parallel iteration (hyper or race) initiated here:
in block <unit> at ./test.raku line 4
Died at:
Type check failed in binding to parameter '<anon>'; expected List but got Match (Match.new(
:orig("(((...)
in block at ./test.raku line 6
```
```
A worker in a parallel iteration (hyper or race) initiated here:
in block <unit> at ./test.raku line 4
Died at:
Cannot call method 'fully-reified' on a null object
in block at ./test.raku line 6
```
```
A worker in a parallel iteration (hyper or race) initiated here:
in block <unit> at ./test.raku line 4
Died at:
Substring length (-625) cannot be negative
in block at ./test.raku line 6
```
## Environment
* Operating system: Fedora 33, Linux localhost.localdomain 5.10.19-200.fc33.x86_64 #1 SMP Fri Feb 26 16:21:30 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
* Compiler version (`perl6 -v` or `raku -v`): Welcome to Rakudo(tm) v2021.02.1.
Implementing the Raku(tm) programming language v6.d.
Built on MoarVM version 2021.02.
Answers:
username_0: I tried again on 2021.10. Now the following error joins the rows:
```
A worker in a parallel iteration (hyper or race) initiated here:
in block <unit> at t.raku line 4
Died at:
lang-call cannot invoke object of type 'VMNull' belonging to no language
in block at t.raku line 4
```
username_0: Still happens on 2022.02.
username_1: I believe the leaking is (almost?) entirely because the loop is finishing before any garbage collection happens. So if you add something like `$*VM.request-garbage-collection if $_ %% 1_000` in your loop you should see a dramatically lower peak memory use. |
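Applying that suggestion to the golf above would look roughly like this (the `1_000` threshold is arbitrary; `$*VM.request-garbage-collection` is the call suggested in the previous comment):
```raku
my $md-template = 'resources/homepage.md.tmpl'.IO.slurp;

race for ^100000 {
    my $page = $md-template;
    $page ~~ s:g[ "(only-browser(" (.+) ")only-browser)" ] = "a";

    # Force a periodic GC run so memory is reclaimed before the loop finishes.
    $*VM.request-garbage-collection if $_ %% 1_000;
}
```
Note that this only addresses the memory growth; it does not explain the type-check and substring errors reported above.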
Zhongdao/Towards-Realtime-MOT | 604557819 | Title: How to decompress Cityperson dataset?
Question:
username_0: I tried to decompress the Citypersons dataset:
```
cat Citypersons.z01 Citypersons.z02 Citypersons.z03 Citypersons.zip > c.zip
unzip c.zip
```
But I got the following error:
```
Archive:  c.zip
error: End-of-centdir-64 signature not where expected (prepended bytes?)
  (attempting to process anyway)
warning [c.zip]: zipfile claims to be last disk of a multi-part archive;
  attempting to process anyway, assuming all parts have been concatenated
  together in order. Expect "errors" and warnings...true multi-part support
  doesn't exist yet (coming soon).
warning [c.zip]: 9437184000 extra bytes at beginning or within zipfile
  (attempting to process anyway)
file #1: bad zipfile offset (local header sig): 9437184004
  (attempting to re-compensate)
```
I can only get partially unzipped files:
```
ls Citypersons/images/train/
tubingen ulm weimar zurich
```
Can you provide the checksums of all parts of Citypersons?
Answers:
username_1: I got the same problem..
username_2: @username_0 you can check https://github.com/Zhongdao/Towards-Realtime-MOT/issues/139
username_1: I am still getting the same problem after following #139. Only part of the files got unzipped.
username_1: Solved it!! Needed to repair the zip file after concatenation.
username_0: Thank you for your reply. I can decompress it correctly now. The commands I used are:
```
zip -FF Citypersons.zip --out c.zip
unzip c.zip
```
Status: Issue closed
|
stormpath/stormpath-documentation | 140311550 | Title: Account Management - PHP
Question:
username_0: Please add in:
1. Code samples found in the appropriate ``/source/code/{language}/account_management`` directory.
2. Any necessary text to the ``accnt_mgmt.rst`` file.
All work should be done in this branch: https://github.com/stormpath/stormpath-documentation/tree/the_one_prod_guide
Status: Issue closed
Answers:
username_1: Merged!
username_0: Added a few spots that you (might have) missed: https://github.com/stormpath/stormpath-documentation/commit/3ab7134d7d6036a635d73984ddba7b593b6a0252
Just search for "todo".
username_0: Please add in:
1. Code samples found in the appropriate ``/source/code/{language}/account_management`` directory.
2. Any necessary text to the ``accnt_mgmt.rst`` file.
All work should be done in this branch: https://github.com/stormpath/stormpath-documentation/tree/the_one_prod_guide
username_0: Now changed to ``(php.todo)``.
Status: Issue closed
|
jina-ai/jina | 690796624 | Title: Enable post-retrieval scoring of document-match pairs
Question:
username_0: **Describe the feature**
Currently jina only supports ranking matches based on the scores that the retrieval step provides. Adding the possibility to compute more query <> match metrics in order to fine-tune the ranking is needed. Possible applications are a simple edit distance or complex deep-learning scoring techniques such as BERT QA.
Furthermore, it might be necessary to add a `WeightedRanker`, which takes multiple scores encoded in the `score.operands` field and combines them into one top-level score per match.
**Proposal**
* Add a driver for unpacking the document and the matches (named `ContentMatchDriver`).
* Add an executor interface for scoring the document with the matches (named `ContentMatcher`). This should be configurable to either overwrite the match score or add a score to the `score.operands` field.
* Add a concrete implementation of the `ContentMatcher` in the form of a simple Levenshtein distance in the hub (named `LevenshteinMatcher`).
* Add a concrete implementation of the `ContentMatcher` in the form of BERT QA scoring in the hub (named `BertQaMatcher`).
Adding a `WeightedRanker` as a subsequent step might be necessary. This could be a simple linear combination of the existing scores or, in the long run, something like LambdaMART. Anyhow, I would rather add this as a follow-up task so as not to overload this issue.
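For illustration, a plain-Python sketch of the kind of document <> match scoring a `LevenshteinMatcher` could perform (this is not tied to any existing jina Executor API; the function names are hypothetical):
```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            current.append(min(
                previous[j] + 1,               # deletion
                current[j - 1] + 1,            # insertion
                previous[j - 1] + (ca != cb),  # substitution
            ))
        previous = current
    return previous[-1]


def rescore_matches(query_text: str, match_texts: list) -> list:
    """Return (score, text) pairs, best (lowest edit distance) first."""
    scored = [(levenshtein(query_text, text), text) for text in match_texts]
    return sorted(scored, key=lambda pair: pair[0])
```
In the proposed design, such a score would either overwrite the match score or be appended to `score.operands`, as described above.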
Answers:
username_1: Important to keep in mind that Rankers should be chainable to allow different phases of ranking.
lukoerfer/gradle-debugging | 341995346 | Title: Support more light-weight configuration syntaxes
Question:
username_0: A syntax using Groovy map notation would reduce the required number of code lines and therefore improve the user experience:
```groovy
test {
    debug server: true, suspend: true, address: 8000
}
```
instead of
```groovy
test {
    debug {
        server = true
        suspend = true
        address = 8000
    }
}
```
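For what it's worth, the named-argument style above is just Groovy sugar for passing a `Map`: `debug server: true, suspend: true, address: 8000` becomes `debug([server: true, suspend: true, address: 8000])`. A hedged sketch of a `Map`-accepting overload on the extension side (names are illustrative, not the plugin's actual API):
```groovy
// Hypothetical overload next to the existing closure-based debug method
void debug(Map<String, Object> options) {
    // Copy each named option onto the matching property,
    // e.g. `debug server: true` sets this.server = true
    options.each { name, value -> this."$name" = value }
}
```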
dforth/LifeOfConway | 265582474 | Title: Canvas drawing issue? unexpected lines on canvas
Question:
username_0: Not sure if this is a bug in my code or what, but I'm seeing strange lines on the canvas when the grid is off. The fact that these show up where the grid lines should be is strange.
Answers:
username_0: I've seen others call this a bleed issue with fillRect (which I'm using). One discussion mentioned doing a clearRect over the whole canvas, but this does not fix the issue for me. Ignoring this for now.
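For reference, one common cause of this kind of bleed is drawing cells at fractional pixel coordinates: `fillRect` then anti-aliases the edges, and the partially covered pixels line up exactly where the grid lines would sit. A hedged sketch of the usual workaround (`ctx`, `col`, `row`, and `cellSize` are placeholder names, not taken from this project):
```javascript
// Snap each cell to whole pixels before painting so no half-covered
// (anti-aliased) pixels are left along the invisible grid lines.
const size = Math.floor(cellSize);
const x = Math.floor(col * cellSize);
const y = Math.floor(row * cellSize);
ctx.fillRect(x, y, size, size);
```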
MichaelDeBoey/gatsby-remark-embedder | 718812115 | Title: How to turn off the caption of Instagram embeds?
Question:
username_0: <!--
Thanks for your interest in the project. I appreciate bugs filed and PRs submitted!
Please make sure that you are familiar with and follow the Code of Conduct for
this project (found in the CODE_OF_CONDUCT.md file).
Please fill out this template with all the relevant information so we can
understand what's going on and fix the issue.
I'll probably ask you to submit the fix (after giving some direction). If you've
never done that before, that's great! Check this free short video tutorial to
learn how: http://kcd.im/pull-request
-->
- `gatsby-remark-embedder` version:
- `node` version:
- `npm` (or `yarn`) version:
Relevant code or config
```javascript
```
What you did:
What happened:
<!-- Please provide the full error message/screenshots/anything -->
Reproduction repository:
<!--
If possible, please create a repository that reproduces the issue with the
minimal amount of code possible.
-->
Problem description:
Suggested solution:
Answers:
username_1: Hey @username_0! 👋
Can you elaborate a bit more about the problem please?
Can you provide a reproduction repo and some screenshots about the things you think aren't working? |
RocketModPlugins/Observatory | 168431566 | Title: Error on /report
Question:
username_0: When I try to run `/report`, this error appears:
```
An error occured while executing report []: System.MissingMethodException: Method not found: 'Rocket.Plugins.Observatory.ReportMeta..ctor'.
  at Rocket.Core.Commands.RocketCommandManager+RegisteredRocketCommand.Execute (IRocketPlayer caller, System.String[] command) [0x00000] in <filename unknown>:0
  at Rocket.Core.Commands.RocketCommandManager.Execute (IRocketPlayer player, System.String command) [0x00000] in <filename unknown>:0
```
I tried updating the plugin and its libraries, but nothing resolved the issue. This only happens on one of my servers; the other one is just fine.
Status: Issue closed |
tangpingl/tangpingl.github.io | 602486720 | Title: Security Vulnerabilities: Plaintext Transmission of Usernames and Passwords - 左灯右行的博客
Question:
username_0: https://username_0.github.io//2019/11/13/%E5%AE%89%E5%85%A8%E6%BC%8F%E6%B4%9E%E4%B9%8B%E5%AF%86%E7%A0%81%E6%98%8E%E6%96%87%E4%BC%A0%E8%BE%93/
Recently, during customer acceptance, the project I am responsible for was scanned with various security scanning tools and many security vulnerabilities were found. In the next few articles I will introduce the security vulnerabilities I encountered and how they were handled step by step until the customer's acceptance requirements were finally met.
petersonR/bestNormalize | 341305152 | Title: reversing the beta coefficients
Question:
username_0: Firstly, thank you so much for the package.
Secondly, I understand that `inverse = TRUE` can reverse the transformation. However, if I transform the outcome variable and then fit a linear regression model, how can I interpret the beta coefficients produced by the model, or how can I transform them back?
Answers:
username_1: Interpreting the beta coefficients post-transformation can be tough. It will depend on the specific transformation that you used. If you are using a simple log transformation it's a matter of exponentiating the coefficients, but if you are using an ORQ transformation it's practically impossible to interpret the beta effects.
This is because the transformation function itself will look completely different for different distributions. Furthermore, when you use the ORQ transformation (for instance), the scale of your original unit is not preserved; the only thing that is preserved is the ordering of the observations. However, the reverse ORQ transformation is still possible, so there is still a good option.
My advice would be to check out the package vignette (https://cran.r-project.org/web/packages/bestNormalize/vignettes/bestNormalize.html), specifically the application section, which uses a linear regression on (ORQ) transformed variables. While there is not a (universally) good way to explain mathematically what is happening in this situation, a graphical picture is worth a thousand words; see the last three figures in the vignette for an idea of what I mean. Perhaps you could produce a similar plot for your data and interpret the regression relationship as you see fit?
While this doesn't give you an interpretation of your specific beta estimates, it gives you a sense of the shape of the relationship between x and y after transformations, on the scale of the original unit (the `inverse = TRUE` argument is used on predictions from your model onto new data to yield the graph in the vignette).
Hope this helps, thanks for using my package.
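To make that last point concrete, a minimal R sketch of the round trip (`dat`, `x1`, `x2`, and `new_dat` are placeholder names):
```r
library(bestNormalize)

# Choose and apply the best transformation for the outcome
bn <- bestNormalize(dat$y)

# Fit the regression on the transformed outcome
fit <- lm(bn$x.t ~ x1 + x2, data = dat)

# Predict on the transformed scale, then map back to the original unit of y
pred_transformed <- predict(fit, newdata = new_dat)
pred_original <- predict(bn, newdata = pred_transformed, inverse = TRUE)
```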
Status: Issue closed
|
ionic-team/capacitor | 749425513 | Title: feat: BridgeWebViewClient method override in plugins
Question:
username_0: ## Feature Request
### Description
I need to do client certificate authentication (I used the Cordova plugin `cordova-client-cert-authentication`), which was hooked into WebView's `onReceivedClientCertRequest` method.
Another use case we have is handling SSL errors by hooking into the `onReceivedSslError` method and proceeding with untrusted (e.g. self-signed) certificates.
### Platform(s)
We use Ionic only on Android currently (no iOS needed currently).
### Preferred Solution
I'm not fully aware of the internals of capacitor, I don't have any suggestion at the moment. Maybe a delegation to `Plugin` would be a good idea?
### Alternatives
Our current solution it to patch `Bridge` and `BridgeWebViewClient` during build, to perform the desired actions:
* `onReceivedSslError`: proceed on untrusted
* `onReceivedClientCertRequest`: call Cordova's PluginManager, which have the correct delegation to the `cordova-client-cert-authentication` plugin.
### Additional Context
We'd be very thankful.
Answers:
username_1: You can do it now without a custom build. Create a subclass of `BridgeWebChromeClient`:
```java
public class MyBridgeWebChromeClient extends BridgeWebChromeClient {
public MyBridgeWebChromeClient(Bridge bridge) {
super(bridge);
}
// Override any methods you need
}
```
Then create a plugin with a `patchWebViewClient` call:
```java
@PluginMethod
public void patchWebViewClient(PluginCall call) {
class Runner implements Runnable {
public void run() {
bridge.getWebView().setWebChromeClient(new MyBridgeWebChromeClient(bridge));
}
}
bridge.executeOnMainThread(new Runner());
call.success();
}
```
Call `patchWebViewClient` at startup.
Status: Issue closed
username_2: As username_1 commented, it's already possible.
You can do it with a custom plugin (I would suggest doing it in the plugin's `load` method instead of in a custom plugin method, as `load` is executed on app startup).
You can also override the `WebViewClient` from `MainActivity.java` instead of using a plugin. |
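For the SSL-error part of the original request, the same pattern should work with the web view client instead of the chrome client. A hedged Java sketch, assuming `BridgeWebViewClient` exposes a `(Bridge)` constructor like `BridgeWebChromeClient` does (verify against your Capacitor version, and restrict `handler.proceed()` to development builds or known hosts):
```java
import android.net.http.SslError;
import android.webkit.SslErrorHandler;
import android.webkit.WebView;

import com.getcapacitor.Bridge;
import com.getcapacitor.BridgeWebViewClient;

public class TolerantBridgeWebViewClient extends BridgeWebViewClient {

    public TolerantBridgeWebViewClient(Bridge bridge) {
        super(bridge);
    }

    @Override
    public void onReceivedSslError(WebView view, SslErrorHandler handler, SslError error) {
        // Accept untrusted (e.g. self-signed) certificates instead of aborting the load.
        handler.proceed();
    }
}
```
It can be installed the same way as above, e.g. `bridge.getWebView().setWebViewClient(new TolerantBridgeWebViewClient(bridge));` from a plugin's `load` method.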
explosion/spaCy | 663812622 | Title: Adding vectors: pretrain vs. gensim, fast text, etc.
Question:
username_0: Sorry if this is a noobish question.
I'm training an NER model and my data has some domain-specific words that are OOV. I can see that the tokenizer returns all-zero vectors for these words.
I was under the impression that the CLI pretrain command would train the tokenizer part of my model and make it learn some new vectors for previously unseen words, etc. Is this correct?
But I have now found this part of the documentation https://spacy.io/usage/vectors-similarity#custom which makes me think that the pretrain command is not for what I thought it was and that I should instead use some of these "open-source libraries, such as Gensim, Fast Text, or Tomas Mikolov’s original word2vec implementation" to train the tokenizer.
I appreciate any clarification or feedback on this.
Thanks,
Fede.
## Your Environment
<!-- Include details of your environment. If you're using spaCy 1.7+, you can also type `python -m spacy info --markdown` and copy-paste the result here.-->
I'm working on Google Colab.
spaCy version: 2.3.2
Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
Python version: 3.6.9
Answers:
username_0: I confirmed the pretrain command does not change the vectors. So I'm not really sure what it does; I guess it trains the layer that is immediately on top of the vectors?
username_0: I expanded on these questions and more, including some confusion regarding the documentation under this stackoverflow question: [Proper way to add new vectors for OOV words](https://stackoverflow.com/questions/63144230/proper-way-to-add-new-vectors-for-oov-words)
I appreciate any feedback on this. |
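For the "train your own vectors and add them" route mentioned in the question, a hedged sketch of what that can look like with gensim and the spaCy v2 CLI (`corpus_sentences` is a placeholder for a tokenized in-domain corpus; the `size` argument is for gensim 3.x, renamed `vector_size` in gensim 4):
```python
from gensim.models import Word2Vec

# corpus_sentences: an iterable of token lists from your domain corpus
model = Word2Vec(corpus_sentences, size=300, window=5, min_count=2, workers=4)

# Save in the plain word2vec text format that spaCy can ingest
model.wv.save_word2vec_format("custom_vectors.txt")
```
The vectors can then be packaged into a base model with `python -m spacy init-model en ./vectors_model --vectors-loc custom_vectors.txt` and used as the starting point for NER training.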