MattJeanes/TARDIS | 840910295 | Title: Issue in interior lights
Question:
username_0: [Reported by <NAME>]
In interior metadata, if the "nopower" option is false for one of the lights, it applies to all of them (overriding "nopower" on all other lights in `Interior.Lights{}`).
Answers:
username_1: Fixed
Status: Issue closed
|
ros-planning/moveit | 192214232 | Title: Planning for moving the PR2 base with MoveIt!
Question:
username_0: ### Description
Overview of your issue here.
### Your environment
* ROS Distro: [Indigo]
* OS Version: e.g. Ubuntu 14.04
* Source build
Hello,
I would like to use MoveIt for motion planning of the PR2 (only moving the base in a scene).
The examples of controlling a mobile base that I found on the Internet are done with the move_base package using the navigation stack.
I followed the tutorial on the MoveIt site to use the motion_planning_interface to plan the movement of a PR2 arm, but I cannot figure out how to implement planning of the base movement.
Can someone help me? Is there an example of whole-robot motion planning using MoveIt?
I found this thread [planning move base](https://groups.google.com/forum/#!msg/moveit-users/eUdP2UHb1uM/p65UyJkrUZkJ) but I have many doubts about how this solution was implemented.
Is it also possible to extract the path generated by MoveIt through OMPL, and not the entire trajectory?
Last question: is there a way to send MoveIt an already created octomap instead of the PointCloud generated by the robot's sensors?
Thank you
Answers:
username_1: Also: [Planning for moving the PR2 base with MoveIt!](https://groups.google.com/forum/#!topic/moveit-users/dC1MXhhje1s) on `moveit-users`.
username_2: @username_0 please do not double post both here and on the mailing list, and try to keep questions separate: your octomap question is a very different topic from mobile base planning
There has not been much work with mobile base planning and MoveIt!. It is possible with customization and extra development, but not something that is easy to do out of the box AFAIK
username_0: Sorry for the double post, I did not know that they were connected! I am asking about planning the base movement because, running the PR2 demo contained in the pr2_moveit_config folder, you can plan the movement of the base by selecting the base as a group. I would instead like to do the base-movement planning through the various interfaces shown in the tutorials on the MoveIt site, i.e. without the RViz GUI, setting the planning parameters within the code. The problem is that from the tutorial I do not understand how to plan the movement of the base, since the tutorial is totally focused on arm movements.
username_3: Also maybe relevant from moveit-users:
- [Integration of ROS Navigation stack](https://groups.google.com/forum/#!topic/moveit-users/W0iso6tJv94)
- [Moveit for non holonomic mobile base in 2D](https://groups.google.com/forum/#!topic/moveit-users/NNvRNHeiYrw)
- [Controlling a Mobile Base with MoveIt!](https://groups.google.com/forum/#!topic/ros-by-example/sEsmuVEPxjw)
username_4: This is a broad topic and not a specific issue with MoveIt.
I'll close this for now. If someone works on this and wants to have official support (including an example demo) for this in MoveIt, feel free to open this again.
Status: Issue closed
username_5: Hi everybody! Coming here after using the non-standard way of planning along with the base (i.e. modelling it with a 2-prismatic, 1-revolute joint) and thought this might be the place to put this.
I am trying to sort out how planning for the base should be implemented. From my current understanding, I imagine it requires modifying/extending the Robot Model, Robot State and Kinematic Base/Plugin interfaces for differential kinematics, for a special group case that is the base.
If somebody has an idea, insights or ideally a rough roadmap of how things should be implemented within MoveIt, it would be good to have it somewhere (in other words, to know if I am able to attempt that). |
AnikHasibul/neo | 382519234 | Title: Pause is not working when piping the input with stdin.
Question:
username_0: It pauses perfectly with `CTRL`+`C`:
```
$ neo example.txt
```
But this example doesn't pause with `CTRL`+`C`
```
$ cat example.txt | neo
```
I think the problem is with `ioutil.ReadAll(os.Stdin)`. It reads the whole `os.Stdin` reader, so `fmt.Scanf` can't do its job.
Is there any way to fix this bug?
Answers:
username_0: Take a look at these notes:
https://github.com/username_0/neo/commit/1d9848a6072e93cc874fc3b6682679a1e79b179d#diff-7ddfb3e035b42cd70649cc33393fe32c
username_0: After a while in the debugging process, I found that the piped data is on `os.Stdin` and `fmt.Scanf` is also scanning `os.Stdin`. And `fmt.Scanf` returns when it reaches a `\n`.
So, for now it's not possible to pause the program via `fmt.Scanf`. All we can do is capture a key press event and pause the output.
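For reference, the usual workaround for this situation, reading the piped data from stdin but interactive key presses from the controlling terminal, can be sketched as follows (Python used here only for illustration; neo itself is written in Go, where the same idea applies via `/dev/tty`):

```python
import sys

def keypress_source(stdin=sys.stdin):
    """Decide where interactive key presses should be read from (a sketch).

    When stdin is a pipe (e.g. `cat example.txt | neo`), the piped data
    occupies stdin, so key presses have to come from the controlling
    terminal device (/dev/tty) instead.
    """
    if stdin.isatty():
        return "stdin"    # stdin is the keyboard: read key presses directly
    return "/dev/tty"     # stdin is a pipe: fall back to the terminal device

class PipeLike:
    """Stand-in for a piped stdin, which reports isatty() == False."""
    def isatty(self):
        return False

assert keypress_source(PipeLike()) == "/dev/tty"
```

In Go the rough equivalent is checking whether `os.Stdin`'s file mode has `os.ModeCharDevice` cleared, and if so opening `/dev/tty` explicitly for key presses.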
username_1: While piping it's sending EOF at the end of the line. So the process will not read any other character from the file (stdin).
username_0: Exactly, that's why we need to put the pause option in a keypress event. I think there's no other way to handle it. |
dotnetcore/Magicodes.IE | 785985272 | Title: Excel template export error - Unterminated string literal (at index 59).
Question:
username_0: Error message:

The template is shown below:

The data model is as follows:
```csharp
public class AEvaluationSupplierExportViewModel
{
    public string Year { get; set; }
    public List<AEvaluationSupplierExportDto> SupplierList { get; set; }
}
```
Answers:
username_1: Please note the English and Chinese double quotation marks around x, which will lead to semantic errors.

username_0: Thank you for your help. After I modified the template, the error no longer occurs. The modified template is shown below:

Excel export now works, but another strange problem appears: that row's content is missing, as shown below:

Could you please take another look? Thank you!
username_1: @username_0 Unfortunately, I can't reproduce this problem. Please provide a reproducible file.
username_0: [dept_form_summary_mg.xlsx](https://github.com/dotnetcore/Magicodes.IE/files/5818822/dept_form_summary_mg.xlsx)
I've attached the file, thank you.
username_1: I'm sorry, I found that this is the same problem as #211.
username_0: Thank you. It's still a bit strange; I'll see whether there is another solution. Thanks.

Status: Issue closed
username_0: I think this Excel template itself was the problem. I rebuilt the template and the issue no longer appears; fortunately it now runs correctly. Thank you very much for your help. @username_1 |
homegamesio/homegames | 543322531 | Title: Investigate performance bottlenecks
Question:
username_0: If you run the perf test, you can see the screen fill up with 1x1 squares of random colors. As each row fills, you can actually _feel_ that each one is slower than the last.
The socket is currently sending out the entirety of the game tree on each update which isn't great (messages reached ~50 KB once the board was almost full), and I was able to count 24 updates in one second at this size. So that's basically 24 fps (very cinematic) and getting slower, but I don't know exactly why yet. Is it the broadcast itself? The tree traversal? I have no idea, we should find out what it is even if we can't fix it easily |
floragunncom/search-guard-ssl | 174863977 | Title: Release for ElasticSearch 2.4.0
Question:
username_0: I'm not sure if there are any breaking changes, but I do not see any releases available for ElasticSearch 2.4.0. This came out earlier this week.
Answers:
username_1: will be released this week
username_1: Released
<pre>bin/plugin install com.floragunn/search-guard-ssl/2.4.0.16<version></pre>
Status: Issue closed
|
w3c/wai-wcag-quickref | 96361817 | Title: Would like to be able to easily go back to the Table of Contents
Question:
username_0: Going down the TOC, selecting an item throws focus to the correct section... but backspace doesn't return me to where I was... not sure why...
Answers:
username_1: Hi @username_0, is that really the intended behavior? I have never seen the backspace button work this way as far as I can remember.
Feel free to prove me wrong :-)
username_2: The backspace worked for me in Chrome when I selected links in the TOC to get to specific SCs. When I backspaced it pulled me back out to previous view. I did not test this in IE or FF.
username_1: Clarification in the WCAG meeting 2015-09-22: The most important thing is that you can jump quickly back to the TOC in one form or another.
username_1: This should now work (better) as I am now changing the fragment/hash portion of the URL when moving to a SC.
Status: Issue closed
|
iterative/dvc | 1055147671 | Title: Make frozen field configurable
Question:
username_0: # Bug Report
## Issue name
Freezing a stage in `dvc.yaml` via `params.yaml` is currently not possible.
## Description
`PARAM_FROZEN` which is defined in `dvc/schema.py` [here](https://github.com/iterative/dvc/blob/master/dvc/schema.py#L20) and [here](https://github.com/iterative/dvc/blob/master/dvc/schema.py#L87), accepts only bool values for now.
That means that I can do the following:
```yaml
# dvc.yaml
train:
cmd: PYTHONPATH='../../.' python3 training/run_train.py
frozen: true
```
But I cannot have something like this:
```yaml
# params.yaml
model:
freeze: false
```
```yaml
# dvc.yaml
train:
cmd: PYTHONPATH='../../.' python3 training/run_train.py
frozen: ${model.freeze}
```
which gives me the following error:
```
ERROR: 'dvc.yaml' format error: extra keys not allowed @ data['stages']['train']['cmd']
```
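The failure pattern described above can be illustrated in miniature. As explained later in this thread, `dvc.yaml` is validated against the schema before `${...}` placeholders are resolved, so at validation time a boolean field still holds a string (the check below is a hypothetical one-field stand-in, not DVC's actual code):

```python
# Hypothetical one-field version of the schema check: `frozen` must be a bool.
def validate_stage(stage):
    return isinstance(stage.get("frozen", False), bool)

stage = {"cmd": "PYTHONPATH='../../.' python3 training/run_train.py",
         "frozen": "${model.freeze}"}

assert not validate_stage(stage)  # the placeholder is still a string when validated

resolved = {**stage, "frozen": False}  # the stage as it would look after interpolation
assert validate_stage(resolved)
```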
### Reproduce
If you define any stage with the following format, you will get the error:
```yaml
# params.yaml
model:
freeze: false
```
```yaml
# dvc.yaml
train:
cmd: PYTHONPATH='../../.' python3 training/run_train.py
frozen: ${model.freeze}
```
[Truncated]
**Output of `dvc doctor`:**
```console
$ dvc doctor
DVC version: 2.3.0 (pip)
---------------------------------
Platform: Python 3.8.8 on Linux-5.4.0-90-generic-x86_64-with-glibc2.10
Supports: http, https, ssh
Cache types: symlink
Cache directory: ext4 on /dev/sda
Caches: local
Remotes: ssh
Workspace directory: ext4 on /dev/nvme1n1p6
Repo: dvc, git
```
Answers:
username_1: This extends to using the templating functionality for any field in the schema that is not of string type.
We are validating against the schema before resolving the templating so, at the moment of validation, all template vars (i.e. `${foo}`) are strings. |
drewads/Developer | 556657568 | Title: Main fileserver does not handle urls that do not end in a filename or a slash
Question:
username_0: If we request a directory, such as http://dev.drewwadsworth.com/test, and do not put a slash at the end, we get a 404 error. A way to resolve this is to serve the requested directory's index.html iff there is no file extension (i.e. the MIME type is null).
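The proposed rule, serving `<dir>/index.html` whenever no MIME type can be guessed from the path, can be sketched like this (hypothetical helper, not the actual server code):

```python
import mimetypes

def resolve_path(url_path):
    """Append /index.html when no MIME type can be guessed from the path (a sketch)."""
    mime, _ = mimetypes.guess_type(url_path)
    if mime is None and not url_path.endswith("/"):
        return url_path + "/index.html"
    return url_path

assert resolve_path("/test") == "/test/index.html"   # extensionless -> directory index
assert resolve_path("/app.js") == "/app.js"          # has an extension -> serve as-is
```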
Answers:
username_0: Seems to be fixed, but make sure nothing else is broken, clean up code, and run through code before closing issue.
Status: Issue closed
|
facebook/react | 870188756 | Title: Add nested update flag to Profiler UI as explicit marker
Question:
username_0: #20163 added a "nested update" phase to the Profiler `onRender` callback (currently gated behind the `enableProfilerNestedUpdatePhase` feature flag). We should surface this phase in the Profiler UI if present though, as nested updates are particularly costly. |
sympy/sympy | 77057364 | Title: Milestones housekeeping
Question:
username_0: Would it be possible to close old milestones in https://github.com/sympy/sympy/milestones ?
I guess the following ones may be closed:
0.7.3, 0.7.4, 0.7.5, 0.7.6, GSoC 2014: Solvers, GSoC 2014 Optics, not for 0.7.3, Not for 0.7.4 |
aframevr/aframe | 114715495 | Title: Using ID and class selectors inside templates leads to conflicts when there are multiple instances of the same template in the scene
Question:
username_0: When creating multiple instances of the same template, we hit an issue with selectors inside those templates conflicting. I discovered this while creating the video, image and sky templates, which use `<video>` and `<img>` elements plus IDs / classes.
I had a bunch of image template instances, each with a unique src image specified. But they were all showing the same image, because each template instance contained an `<img>` instance with the same `id="image"`.
Solution would be to encapsulate selector scope to template? Or randomize template ID/classes?
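The second idea, making IDs unique per template instance, could look roughly like this (a hypothetical sketch; as noted below, the project ultimately replaced templates with primitives instead):

```python
import itertools
import re

_counter = itertools.count()

def instantiate(template_html):
    """Suffix every id attribute so each template instance stays unique (a sketch)."""
    n = next(_counter)
    return re.sub(r'id="([^"]*)"', lambda m: f'id="{m.group(1)}-{n}"', template_html)

first = instantiate('<img id="image" src="a.png">')
second = instantiate('<img id="image" src="b.png">')
assert first != second  # the two instances no longer share id="image"
```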
Status: Issue closed
Answers:
username_1: Swapping out the current implementation of templates with primitives. |
melexis/mlx90640-library | 426136060 | Title: Documentation of EEPROM address 0x240A
Question:
username_0: CheckEEPROMValid() does the following check:
`deviceSelect = eeData[10] & 0x0040;`
which fails, but the data in the EEPROM is actually correct when the parameters are extracted. I have verified this by checking the extracted parameters. As noted in the endianness issue #31, my data in big and little endian is 0x9904 and 0x0499 respectively, but this check will always fail.
Without documentation of this EEPROM address in the datasheet, it is difficult to know the cause of the failure. Maybe it is meant to be:
`deviceSelect = eeData[10] & 0x0004`?
Answers:
username_1: Hello,
Actually with both endiannesses it should pass. The complete code is:
```c
int CheckEEPROMValid(uint16_t *eeData)
{
    int deviceSelect;
    deviceSelect = eeData[10] & 0x0040;
    if(deviceSelect == 0)
    {
        return 0;
    }
    return -7;
}
```
When the EEPROM data is valid the function returns 0.
username_2: me too, my eeData[10] =0x04CD
username_0: @username_1 Thank you for the reply. I did look into the code sufficiently. I notice the eeData[10] value in the MLX90640 example data.xlsx that you committed is 0x04CD, my eeData[10] is 0x0499, and @username_2's eeData[10] is 0x04CD.
These cases seem to have a pattern; any insight is appreciated.
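For what it's worth, working the bitmask through for the values reported in this thread shows the pattern: the check only fails when bit 6 (`0x0040`) of word 10 is set:

```python
def check_eeprom_valid(word10):
    """Mirror of the C check: valid when eeData[10] & 0x0040 == 0."""
    return (word10 & 0x0040) == 0

assert check_eeprom_valid(0x0499)       # bit 6 clear -> driver returns 0 (valid)
assert not check_eeprom_valid(0x04CD)   # bit 6 set   -> driver returns -7
```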
username_1: Hi,
The encoding at this address is going to change so that check will be modified or removed in the future versions of the driver. Apparently @username_2 already got new devices. You should simply remove the if statement in the MLX90640_ExtractParameters() function. The code should be something like this:
```c
int MLX90640_ExtractParameters(uint16_t *eeData, paramsMLX90640 *mlx90640)
{
    int error = 0;
    ExtractVDDParameters(eeData, mlx90640);
    ExtractPTATParameters(eeData, mlx90640);
    ExtractGainParameters(eeData, mlx90640);
    ExtractTgcParameters(eeData, mlx90640);
    ExtractResolutionParameters(eeData, mlx90640);
    ExtractKsTaParameters(eeData, mlx90640);
    ExtractKsToParameters(eeData, mlx90640);
    ExtractCPParameters(eeData, mlx90640);
    ExtractAlphaParameters(eeData, mlx90640);
    ExtractOffsetParameters(eeData, mlx90640);
    ExtractKtaPixelParameters(eeData, mlx90640);
    ExtractKvPixelParameters(eeData, mlx90640);
    ExtractCILCParameters(eeData, mlx90640);
    error = ExtractDeviatingPixels(eeData, mlx90640);
    return error;
}
```
@username_0 I am pretty sure that eeData[10] = 0x0499 in the example data that I committed. So this is the same value as the one that you are getting and this value should return error = 0 and it should work just fine in your case.
Best regards
username_0: @username_1 Thanks for this. I confirm that the data in your xlsx file is 0x499; sorry for the mix-up. The function works well. I shall now close this issue.
Status: Issue closed
|
huggingface/transformers | 943411845 | Title: Flax - Loading pretrained model overwrites weights of different shapes
Question:
username_0: ## Environment info
- `transformers` version: master
- Platform: Ubuntu
- Python version: 3.9
### Who can help
@patil-suraj @username_1
## Information
Model I am using (Bert, XLNet ...): Custom FlaxBart
The problem arises when using:
* [ ] the official example scripts: (give details below)
[Truncated]
Steps to reproduce the behavior:
1. Create a custom model by subclassing - just change output shape (lm_head & final_logits_bias)
2. use `CustomModel.from_pretrained('facebook/bart-large-c')`
3. check `model.params['final_logits_bias'].shape`, it will come from the pretrained model
## Expected behavior
The shape of the weights should be checked before they are overwritten.
Right now my approach is:
* load pre trained model
* init custom model from config
* update manually the weights needed
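That manual workaround can be sketched with plain nested lists standing in for the parameter tree (hypothetical shapes and names; real code would operate on `model.params` and array shapes):

```python
# Plain nested lists stand in for jax/numpy arrays here; only the shapes matter.
pretrained = {"shared": [[1.0] * 4 for _ in range(8)],
              "final_logits_bias": [0.0] * 50265}
custom = {"shared": [[0.0] * 4 for _ in range(8)],
          "final_logits_bias": [0.0] * 100}

def shape(x):
    """Crude shape probe for nested lists (stands in for ndarray.shape)."""
    return (len(x),) + shape(x[0]) if x and isinstance(x[0], list) else (len(x),)

params = dict(custom)  # 1. start from the custom model's freshly initialised params
for name, value in pretrained.items():  # 2. walk the pretrained weights
    if name in params and shape(params[name]) == shape(value):
        params[name] = value  # 3. copy only when the shapes actually match

assert shape(params["final_logits_bias"]) == (100,)  # custom head shape preserved
assert params["shared"] == pretrained["shared"]      # matching weights transferred
```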
Answers:
username_1: This should be fixed by the work in #12664
Status: Issue closed
username_0: Closing because it was fixed |
massenergize/frontend-portal | 560619633 | Title: have filter and TO DO list below it follow as you scroll down list
Question:
username_0: ISSUE 1181
Reported on Fri Dec 13 2019 16:30:15 GMT-0500 (EST)
Reported by <NAME>(<EMAIL>)
PRIORITY = Medium
STATUS = - Submitted
Site Url: http://community-dev.massenergize.org/wayland/actions
Description:
For later: can the search/filter and TO DO list below it follow (stay on the screen) as you scroll down the list? NOT on the Actions page, because the TO DO list is under that, but on Vendors and Testimonials?
User Role: Community User
Notes: beyond 5
Status: Issue closed
Answers:
username_2: Don't see this fix - also we need to talk about it. So reopening for later discussion (labeling as Question)
username_2: ISSUE 1181
Reported on Fri Dec 13 2019 16:30:15 GMT-0500 (EST)
Reported by <NAME>(<EMAIL>)
PRIORITY = Medium
STATUS = - Submitted
Site Url: http://community-dev.massenergize.org/wayland/actions
Description:
For later: can the search/filter and TO DO list below it follow (stay on the screen) as you scroll down the list? NOT on the Actions page, because the TO DO list is under that, but on Vendors and Testimonials?
User Role: Community User
Notes: beyond 5 |
rapidsai/cudf | 1184118060 | Title: Add peak memory tracking to all benchmarks
Question:
username_0: #7770 added support for peak memory usage to cuIO benchmarks using rmm's `statistics_resource_adapter`. It would be nice to be able to expand that to all of our benchmarks so that we could more easily detect regressions in memory usage. This would be particularly useful for the Dask cuDF team, which is always looking to identify bottlenecks from memory usage. [There was already discussion](https://github.com/rapidsai/cudf/pull/7770#issuecomment-874473349) of doing this in #7770, so we should investigate following up now.
Answers:
username_0: CC @galipremsagar who was interested in this data.
@devavret it looks like you made this very easy with the `memory_stats_logger`? @username_1 would you still be in favor of this?
username_1: I don't have a reason not to support this. However it may not benefit the utility of all benchmarks. Does it impact CI benchmark throughput?
username_2: I am reluctant to add this new functionality based on Google Benchmark when we are trying to phase that out.
I would support adding this as a feature to benchmarks ported to NVBench.
username_3: Right. It's effortless with the help of [memory_stats_logger](https://github.com/rapidsai/cudf/blob/branch-22.06/cpp/benchmarks/fixture/benchmark_fixture.hpp#L103-L122). Only two lines to be added:
```cpp
auto mem_stats_logger = cudf::memory_stats_logger(); // init stats logger
state.exec([&](nvbench::launch& launch) {
target_kernel();
});
state.add_element_count(mem_stats_logger.peak_memory_usage(), "Peak Memory"); // report peak memory usage to nvbench
```
username_0: So then maybe making this change is independent of whether a benchmark has been switched from GBench to NVBench? It seems like we could add this now; then, when a project switches from GBench to NVBench, the only required change is to switch `state.add_element_count(` for `state.counters["peak_memory_usage"]`.
username_2: There's also the parsing and reporting side that will be different.
I don't want to keep building things on top of GBench when I'm actively trying to get people to switch to NVbench. |
rbind/support | 397059941 | Title: Subdomain request
Question:
username_0: ## Netlify website address
unruffled-euclid-9f1c97.netlify.com/
## Preferred rbind.io subdomain
teach-shiny
### Agreement
- [x] By submitting this request, I promise I will at least write one blog post or create one web page on my website after I get the rbind.io subdomain.
Answers:
username_1: Done.
Status: Issue closed
|
aws/aws-cli | 477178988 | Title: typo in docs
Question:
username_0: https://docs.aws.amazon.com/cli/latest/reference/s3api/select-object-content.html
In --input-serialization (structure):
```
CSV={FileHeaderInfo=string,Comments=string,QuoteEscapeCharacter=string,RecordDelimiter=string,FieldDelimiter=string,QuoteCharacter=string,AllowQuotedRecordDelimiter=boolean},CompressionType=string,JSON={Type=string},Parquet={}
```
`JSON={Type=string}` should be `JSON={Type= DOCUMENT}`
Answers:
username_1: @username_0,
I think the docs are correct. The shorthand CSV syntax is showing the data types that are expected for each value if you submit as a CSV. The shorthand syntax simply doesn't elaborate on the values of enums here. The JSON syntax is simply more explicit, and does show that the string must be one of two specific values: DOCUMENT or LINES. But both of those are indeed strings. Whether you submit as CSV or JSON, that value must be submitted as a string, and that string must be either "DOCUMENT" or "LINES".
username_0: It's true. I knew it was just the type, but when I saw string I was expecting an arbitrary string, not an enum. So the type is a bit too generic and seems confusing. Feel free to close it if you think a generic type is fine. Cheers!
LSSTDESC/BlendingToolKit | 870917602 | Title: Improve plotting
Question:
username_0: Some functions already exist in plot_utils, but following the rework of the metrics we should add more ways of plotting the metrics results, and think about which ones are interesting to the user.
Status: Issue closed |
sophxia/cssi | 245217424 | Title: Images Form Styling
Question:
username_0: 

Status: Issue closed |
eemeli/yaml | 874412035 | Title: Missing indentation before empty lines in Literal Block Scalar Style
Question:
username_0: **Describe the bug**
When using YAML.stringify, there is no indentation on empty lines when using the literal block scalar style. This breaks the YAML specification.
**To Reproduce**
```
const YAML = require('yaml');
const fs = require('fs');
const example = {
textWithEmptyLines: 'line before two empty lines\n\n\nline after two empty lines'
};
YAML.scalarOptions.str.fold.lineWidth = 0;
fs.writeFileSync('result.yaml', YAML.stringify(example));
```
**Actual Behaviour**
```
..line before two empty lines
..line after two empty lines
```
**Expected behaviour**
```
..line before two empty lines
..
..
..line after two empty lines
```
**Versions**
- Environment: Chrome 90.0
- `yaml`: 1.10.2, 2.0.0-5
**Additional context**
YAML spec:

Answers:
username_1: This is working as intended, and does not break the spec. Empty lines in a block literal with indentation up to the expected minimum (two in the example) are caught by the `l-empty(n,block-in)*` part of the `l-nb-literal-text(n)` construction, specifically its `s-indent(<n)` part. Therefore it's perfectly valid to output an empty line as an empty line, i.e. without those spaces.
Is there a different YAML library with which this is causing you issues?
username_2: Hello Eemeli,
the issue is the following: the block will be terminated when a line is less indented:

username_1: If you look at examples 6.6, 6.7, or 8.10-8.13, you'll see unindented empty lines not terminating the current block construct.
username_1: Closing, as the current behaviour is as intended.
Status: Issue closed
|
tuna/issues | 189668879 | Title: MacPorts mirror
Question:
username_0: #### Project Name and Introduction (Project Intro.)
[MacPorts](https://www.macports.org/) is a package manager for Mac OS.
#### Upstream Address and Mirroring Method (How to Mirror)
Rough steps:
1. Sync the files with rsync
2. Serve them via a web server and an rsync service
3. Contact upstream to enable the mirror
Detailed documentation: https://trac.macports.org/wiki/Mirroring
#### Other Information (Other)
- Mirror size: about 1 TB
Answers:
username_1: Syncing
username_0: MacPorts 用户可以修改
- MacPorts Source (用来升级 MacPorts 自身的)
- Portfiles (Package 的描述文件,类似 `apt update`)
的获取方式,需要通过 `rsync`,等你们的 `rsync` 服务好了,我试试。
而更关键的二进制包和源代码包的获取方式用户没法改,需要等开发者修改 MacPorts 的源代码。
username_2: done
Status: Issue closed
username_0: A MacPorts mirror is almost unusable until upstream actively adds support for it; contact instructions are at https://trac.macports.org/wiki/Mirroring#Contactus . Have you managed to get in touch?
---
Also, I ran into a problem while syncing the Portfiles. After changing the contents of `/opt/local/etc/macports/sources.conf` to
rsync://mirrors.tuna.tsinghua.edu.cn/macports/release/tarballs/ports.tar [default]
`sudo port -d selfupdate` hangs at
```
...
---> Updating the ports tree
Synchronizing local ports tree from rsync://mirrors.tuna.tsinghua.edu.cn/macports/release/tarballs/ports.tar
DEBUG: /usr/bin/rsync -rtzv --delete-after --progress --include=/ports.tar --include=/ports.tar.rmd160 --exclude=* rsync://mirrors.tuna.tsinghua.edu.cn/macports/release/tarballs/ /opt/local/var/macports/sources/mirrors.tuna.tsinghua.edu.cn/macports/release/tarballs
...
receiving file list ...
3 files to consider
./
ports.tar
```
I tried several times and it always hangs here, but manually running the exact same `rsync` command works fine, so it doesn't look like a mirror problem. I have now switched to syncing over HTTP:
https://mirrors.tuna.tsinghua.edu.cn/macports/release/ports.tar.gz [default]
This method works fine.
username_1: I've sent the email; no reply yet.
username_3: Just received an official reply; they assigned three official domains:
https://pek.cn.distfiles.macports.org/macports/
https://pek.cn.packages.macports.org/macports/
rsync://pek.cn.rsync.macports.org/macports/
username_4: Looks a bit like our neighbor's, heh.
Regards,
Tao
username_1: pek is the airport code (for Beijing)
> On Dec 7, 2016, at 07:28, <NAME> <<EMAIL>> wrote:
>
> Looks a bit like our neighbor's, heh.
>
> Regards,
> Tao
>
> |
mapbox/sumo | 212297010 | Title: Trouble with order by _messageTime clause
Question:
username_0: Trying a CLI query like
```
sumo -q '_sourceCategory = my-thing | order by _messageTime asc' --from 4d
```
... pagination doesn't seem to work properly. It appears that each step back in time returns messages up until the present, so it keeps printing the most recent messages repeatedly. |
alibaba-fusion/next | 575195776 | Title: [Button] Add shadow configuration to button groups
Question:
username_0: ### Component
Button
### Feature Description
Add shadow configuration support to button groups, consistent with the shadow configuration available on a normal Button.
<!-- generated by alibaba-fusion-issue-helper. DO NOT REMOVE -->
<!-- component: Button -->
Answers:
username_1: @username_0 Do you mean the box-shadow configuration? Button groups currently follow the Button's shadow configuration; are you hoping to separate them?
pulumi/pulumi-kubernetes | 730564078 | Title: PathType not set on
Question:
username_0: I am trying to configure traefik ingress and set `PathType` to `Prefix` for one of the ingress paths but for some reason the `PathType` isn't set.
I'm using Pulumi.Kubernetes 2.6.3, Pulumi 2.12.0 and F#. Example code:
```
IngressArgs(
Metadata = input (
ObjectMeta.createDefaultArgs ingressName
|> ObjectMeta.withAnnotations [
"kubernetes.io/ingress.class", ingressClass
"traefik.ingress.kubernetes.io/router.entrypoints", "websecure, web"
]
),
Spec = input (
IngressSpecArgs(
Rules = inputList [
input (
IngressRuleArgs(
Host = input hostUrl,
Http = input (
HTTPIngressRuleValueArgs(
Paths = inputList [
input (
HTTPIngressPathArgs(
Path = input "/api",
PathType = input "Prefix",
Backend = input (
IngressBackendArgs(
ServiceName = input appName,
ServicePort = inputUnion1Of2 8080
)
)
)
)
]
)
)
)
)
]
)
)
)
```
The ingress is created, but for some reason the `PathType` isn't set.
Answers:
username_0: Probably a non-issue. Apparently we're running k8s 1.17, I think PathType is supported in 1.18. With that said, shouldn't it be possible to fail the deploy when something like this happens? |
scribejava/scribejava | 288450701 | Title: Change ServiceBuilder's default callback URL to null
Question:
username_0: `com.github.scribejava.core.builder.ServiceBuilder`'s default callback URL is `oob` now.
But for some APIs (e.g. the GitHub API), it is useful to change the default callback URL to `null`.
Status: Issue closed
Answers:
username_1: As I see default callback was 'oob' from the very beginning, from the first commit (6 Sep 2010)
I'm using GitHub API in production and I don't need 'null'.
But anyway, callback (redirect_uri) is optional in OAuth2, I have pushed the commit with the fix. https://github.com/scribejava/scribejava/commit/7b8aab5eaba99495518b69f36924e216fe1a037f
Thanks |
selectline-software/selectline-api | 761960838 | Title: Statuscode { "StatusCode": "BadRequest", "ResponseCode": "02-008", "Message": "BadRequest", "Details": null }
Answers:
username_1: Good day,
which version are you using?
Kind regards,
<NAME>
username_1: Good day,
I wanted to let you know that the error has been fixed as of version 20.3.8.
Version 20.3.8 is expected to ship as a hotfix early next year.
I cannot give you an exact date yet.
Thanks again for your help in tracking down this error.
Merry Christmas and a healthy start into the new year 2021.
Best regards,
<NAME>
Status: Issue closed
|
berlindb/core | 644227458 | Title: Upgrade method does not run upgrades
Question:
username_0: It seems like the upgrader never runs database upgrades. It looks like the `array_filter` that filters out the upgrades that have already run does not filter by the version variable, but by the callback method variable.
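For illustration, the intended behaviour, filtering out already-run upgrades by their target version rather than by callback name, might look like this (Python pseudocode for what is a PHP library; all names are hypothetical):

```python
def pending_upgrades(upgrades, db_version):
    """Keep only upgrades whose target version is newer than the recorded one.

    Filtering by version (the keys) is the point; comparing against the
    callback names (the values) would never match the recorded versions.
    """
    return {ver: cb for ver, cb in upgrades.items() if ver > db_version}

upgrades = {"1.1.0": "__202002", "1.2.0": "__202006", "2.0.0": "__202101"}

# Simplistic string comparison is enough for this illustration; real code
# should use a proper version comparison (e.g. PHP's version_compare()).
assert pending_upgrades(upgrades, "1.1.0") == {"1.2.0": "__202006", "2.0.0": "__202101"}
assert pending_upgrades(upgrades, "2.0.0") == {}
```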
Status: Issue closed |
aditya-7/ng2-file-uploading-with-chunk | 1078747692 | Title: Angular 12/13 support
Question:
username_0: Please can you add support for angular cli 12/13
Answers:
username_1: We have loved the implementation of this module in our angular projects, however we do need the support for Angular 13 as well. This is currently the error we get upon trying to run the application:
```
Error: Error on worker #3: Error: Compiled class declaration is not inside an IIFE: FileSelectDirective in C:/dev/rol/rol5/node_modules/ng2-chunk-file-upload/file-upload/file-select.directive.js
at Esm5RenderingFormatter.addDefinitions (file:///C:/dev/rol/rol5/node_modules/@angular/compiler-cli/bundles/chunk-KWZNY2SK.js:4068:13)
at file:///C:/dev/rol/rol5/node_modules/@angular/compiler-cli/bundles/chunk-KWZNY2SK.js:4314:27
at Array.forEach (<anonymous>)
at Renderer.renderFile (file:///C:/dev/rol/rol5/node_modules/@angular/compiler-cli/bundles/chunk-KWZNY2SK.js:4311:36)
at file:///C:/dev/rol/rol5/node_modules/@angular/compiler-cli/bundles/chunk-KWZNY2SK.js:4299:36
at Array.forEach (<anonymous>)
at Renderer.renderProgram (file:///C:/dev/rol/rol5/node_modules/@angular/compiler-cli/bundles/chunk-KWZNY2SK.js:4296:46)
at Transformer.transform (file:///C:/dev/rol/rol5/node_modules/@angular/compiler-cli/bundles/chunk-KWZNY2SK.js:4522:32)
at file:///C:/dev/rol/rol5/node_modules/@angular/compiler-cli/bundles/chunk-KWZNY2SK.js:4602:34
at Worker.<anonymous> (file:///C:/dev/rol/rol5/node_modules/@angular/compiler-cli/bundles/ngcc/src/execution/cluster/ngcc_cluster_worker.js:70:24)
at ClusterMaster.onWorkerMessage (file:///C:/dev/rol/rol5/node_modules/@angular/compiler-cli/bundles/chunk-DJRTTRF3.js:1464:15)
at file:///C:/dev/rol/rol5/node_modules/@angular/compiler-cli/bundles/chunk-DJRTTRF3.js:1374:71
at EventEmitter.<anonymous> (file:///C:/dev/rol/rol5/node_modules/@angular/compiler-cli/bundles/chunk-DJRTTRF3.js:1532:15)
at EventEmitter.emit (events.js:375:28)
at Worker.<anonymous> (internal/cluster/master.js:182:13)
at Worker.emit (events.js:375:28)
at ChildProcess.<anonymous> (internal/cluster/worker.js:33:12)
at ChildProcess.emit (events.js:375:28)
at emit (internal/child_process.js:910:12)
at processTicksAndRejections (internal/process/task_queues.js:83:21)
```
The error locks up the starting commands and doesn't allow the application to run.
username_1: If this is also helpful: I found the following after cleaning up some more broken items in our code:
```
- ng2-chunk-file-upload [module/esm5] (git+https://github.com/aditya-7/ng2-file-uploading-with-chunk.git)
Warning: Invalid constructor parameter decorator in C:/dev/rol/rol5/node_modules/ng2-chunk-file-upload/file-upload/file-select.directive.js:
() => [
{ type: ElementRef, },
]
Warning: Invalid constructor parameter decorator in C:/dev/rol/rol5/node_modules/ng2-chunk-file-upload/file-upload/file-drop.directive.js:
() => [
{ type: ElementRef, },
]
Warning: Invalid constructor parameter decorator in C:/dev/rol/rol5/node_modules/ng2-chunk-file-upload/file-upload/file-upload.module.js:
() => []
```
I'm also taking a look locally to see if I can figure out what the issue is. If so I'll try to submit a pull request. |
SIB-Colombia/mamut | 117659381 | Title: Remove the second drop down list in "Condiciones ambientales" class
Question:
username_0: Remove the second drop-down list box and include a field for documenting numerical data

Answers:
username_1: Fixed in b2fa1d9acd52adab921c0448b362429cdf797de6
Status: Issue closed
|
suriyun-production/mmorpg-kit-docs | 679780044 | Title: Add "Impact Effects" Field also for Melee and Missile
Question:
username_0: You added the Impact Effects field for Raycast to play a specific effect on hit. This field is missing for Melee and Missile. It would be great if you added it. Right now you can't change effects on hit; they are the same for everything. For example, when you have an axe and add a sound for chopping wood, you will also hear that sound/effect when hitting a monster. There is no way to use multiple sounds; that only works for the raycast.
Status: Issue closed |
pandas-dev/pandas | 863154058 | Title: BUG: Conversion of Series dtype from object to Int16 etc. fails
Question:
username_0: - [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample, a copy-pastable example
```python
import pandas as pd
str_obj_ser = pd.Series(["1", "2", "3", None], dtype="object")
mix_obj_ser = pd.Series([1, "2", 3.0, None], dtype="object")
num_obj_ser = pd.Series([1, 2, 3.0, None], dtype="object")
str_obj_ser.astype("Int16") # Exception
mix_obj_ser.astype("Int16") # different Exception
num_obj_ser.astype("Int16") # works
str_obj_ser.astype("string").astype("Int16") # works
mix_obj_ser.astype("string").astype("Int16") # Exception
num_obj_ser.astype("string").astype("Int16") # Exception
str_obj_ser.astype("string").astype("Float64").astype("Int16") # works
mix_obj_ser.astype("string").astype("Float64").astype("Int16") # works
num_obj_ser.astype("string").astype("Float64").astype("Int16") # works
str_obj_ser.astype("Float64").astype("Int16") # Exception
mix_obj_ser.astype("Float64").astype("Int16") # works
num_obj_ser.astype("Float64").astype("Int16") # works
```
#### Problem description
The conversion of an object-series with some text in it to one of the nullable integer dtypes fails even though all elements of the series are convertible to integers (or to pd.NA).
This issue seems to be related to #40729, but the workaround described there for Floatxx doesn't work in all cases here:
The detour via dtype string is not enough if an element in the object-series is a float, because `int("3.0")` doesn't work (but `int(3.0)` does). A detour via string and then Float64 is necessary for all examples given above to work (for some cases, but not all, the string step can be omitted).
But even the detour via string and Float64 to Int16 is not guaranteed to always work, e.g. if an element of the series is an object with an ``__int__()`` method (returning a number) and a ``__str__()`` method (returning a description, not an integer literal).
I think the topic of this issue has also been mentioned in the discussion of #39616.
#### Expected Output
The conversion of a series of dtype object to one of the nullable integer dtypes should always work if all elements of the series are convertible to the target dtype.
At least something along the lines of
* element is None, pd.NA, np.nan, ... -> pd.NA
* otherwise -> int(element)
I'd even prefer something like
* element is None, pd.NA, np.nan, ... -> pd.NA
* element is string, bytes or bytearray -> int(element, 0)
* otherwise -> int(element)
such that string literals like "0x7f" work.
As the latter doesn't work with the current string -> Int16 conversion though, that would be more like an enhancement than a bugfix.
[Truncated]
<details>
INSTALLED VERSIONS
------------------
commit : 2cb96529396d93b46abab7bbc73a208e708c642e
python : 3.9.4.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.18362
machine : AMD64
...
pandas : 1.2.4
numpy : 1.20.2
pytz : 2021.1
dateutil : 2.8.1
...
</details>
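Until the `astype` path handles these cases directly, one workaround that covers all three example series is to route through `pd.to_numeric` (a sketch, not an officially recommended path; note it does not handle hex string literals like "0x7f" either):

```python
import pandas as pd

str_obj_ser = pd.Series(["1", "2", "3", None], dtype="object")
mix_obj_ser = pd.Series([1, "2", 3.0, None], dtype="object")
num_obj_ser = pd.Series([1, 2, 3.0, None], dtype="object")

def to_int16(ser):
    # to_numeric parses strings and numbers alike and maps None to NaN;
    # astype then converts the resulting numeric array to Int16,
    # turning NaN into pd.NA along the way.
    return pd.to_numeric(ser).astype("Int16")

for ser in (str_obj_ser, mix_obj_ser, num_obj_ser):
    out = to_int16(ser)
    print(out.dtype, bool(out.isna().iloc[3]))  # Int16 True
```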
Answers:
username_1: Thanks for looking into this with such detail @username_0. Cleaning up this behavior would be great |
any86/any-rule | 967764643 | Title: ipv6 validation is broken
Question:
username_0: 
Answers:
username_0: In the pattern, `(([0-9A-Fa-f]{1,4}:){1,7}:))` is missing a `$`
username_0: /^(?:(?:(?:[0-9A-Fa-f]{1,4}:){7}[0-9A-Fa-f]{1,4})|(([0-9A-Fa-f]{1,4}:){6}:[0-9A-Fa-f]{1,4})|(([0-9A-Fa-f]{1,4}:){5}:([0-9A-Fa-f]{1,4}:)?[0-9A-Fa-f]{1,4})|(([0-9A-Fa-f]{1,4}:){4}:([0-9A-Fa-f]{1,4}:){0,2}[0-9A-Fa-f]{1,4})|(([0-9A-Fa-f]{1,4}:){3}:([0-9A-Fa-f]{1,4}:){0,3}[0-9A-Fa-f]{1,4})|(([0-9A-Fa-f]{1,4}:){2}:([0-9A-Fa-f]{1,4}:){0,4}[0-9A-Fa-f]{1,4})|(([0-9A-Fa-f]{1,4}:){6}((\b((25[0-5])|(1\d{2})|(2[0-4]\d)|(\d{1,2}))\b)\.){3}(\b((25[0-5])|(1\d{2})|(2[0-4]\d)|(\d{1,2}))\b))|(([0-9A-Fa-f]{1,4}:){0,5}:((\b((25[0-5])|(1\d{2})|(2[0-4]\d)|(\d{1,2}))\b)\.){3}(\b((25[0-5])|(1\d{2})|(2[0-4]\d)|(\d{1,2}))\b))|(::([0-9A-Fa-f]{1,4}:){0,5}((\b((25[0-5])|(1\d{2})|(2[0-4]\d)|(\d{1,2}))\b)\.){3}(\b((25[0-5])|(1\d{2})|(2[0-4]\d)|(\d{1,2}))\b))|([0-9A-Fa-f]{1,4}::([0-9A-Fa-f]{1,4}:){0,5}[0-9A-Fa-f]{1,4})|(::([0-9A-Fa-f]{1,4}:){0,6}[0-9A-Fa-f]{1,4})|(([0-9A-Fa-f]{1,4}:){1,7}:))$|^\[(?:(?:(?:[0-9A-Fa-f]{1,4}:){7}[0-9A-Fa-f]{1,4})|(([0-9A-Fa-f]{1,4}:){6}:[0-9A-Fa-f]{1,4})|(([0-9A-Fa-f]{1,4}:){5}:([0-9A-Fa-f]{1,4}:)?[0-9A-Fa-f]{1,4})|(([0-9A-Fa-f]{1,4}:){4}:([0-9A-Fa-f]{1,4}:){0,2}[0-9A-Fa-f]{1,4})|(([0-9A-Fa-f]{1,4}:){3}:([0-9A-Fa-f]{1,4}:){0,3}[0-9A-Fa-f]{1,4})|(([0-9A-Fa-f]{1,4}:){2}:([0-9A-Fa-f]{1,4}:){0,4}[0-9A-Fa-f]{1,4})|(([0-9A-Fa-f]{1,4}:){6}((\b((25[0-5])|(1\d{2})|(2[0-4]\d)|(\d{1,2}))\b)\.){3}(\b((25[0-5])|(1\d{2})|(2[0-4]\d)|(\d{1,2}))\b))|(([0-9A-Fa-f]{1,4}:){0,5}:((\b((25[0-5])|(1\d{2})|(2[0-4]\d)|(\d{1,2}))\b)\.){3}(\b((25[0-5])|(1\d{2})|(2[0-4]\d)|(\d{1,2}))\b))|(::([0-9A-Fa-f]{1,4}:){0,5}((\b((25[0-5])|(1\d{2})|(2[0-4]\d)|(\d{1,2}))\b)\.){3}(\b((25[0-5])|(1\d{2})|(2[0-4]\d)|(\d{1,2}))\b))|([0-9A-Fa-f]{1,4}::([0-9A-Fa-f]{1,4}:){0,5}[0-9A-Fa-f]{1,4})|(::([0-9A-Fa-f]{1,4}:){0,6}[0-9A-Fa-f]{1,4})|(([0-9A-Fa-f]{1,4}:){1,7}:))\](?::(?:[0-9]|[1-9][0-9]{1,3}|[1-5][0-9]{4}|6[0-4][0-9]{3}|65[0-4][0-9]{2}|655[0-2][0-9]|6553[0-5]))?$/i
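The effect of the missing trailing `$` can be demonstrated in isolation with Python's `re` (simplified to just the affected alternative; the real pattern has many more branches):

```python
import re

# The affected alternative, without and with the terminating anchor.
unanchored = re.compile(r"^(([0-9A-Fa-f]{1,4}:){1,7}:)")
anchored = re.compile(r"^(([0-9A-Fa-f]{1,4}:){1,7}:)$")

good = "fe80::"
bad = "fe80::this-is-not-an-address"

# Without the anchor, arbitrary trailing garbage still "matches",
# because the regex only has to consume a valid prefix.
print(bool(unanchored.match(good)), bool(unanchored.match(bad)))  # True True
print(bool(anchored.match(good)), bool(anchored.match(bad)))      # True False
```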
username_1: Thanks, I'll merge this later
username_1: 
There is probably a spelling mistake somewhere
username_1: It still throws an error
timotheecour/Nim | 642264777 | Title: {.ignoreNotNil.}: escape hatch for not nil
Question:
username_0: ## goal
escape hatch for not nil annotations, to avoid create worse problems such as contorted code or code with bad performance
## example
```nim
{.push ignoreNotNil.}
... # assume the programmer knows what he's doing
{.pop.}
```
see https://github.com/nim-lang/Nim/pull/13808/files/4f6cdd797cc0db0410644e7f1216e5264968deeb#diff-f48932f809aa3e1ed2576a4fdd754e26
Status: Issue closed
Answers:
username_0: superseded by https://github.com/nim-lang/RFCs/issues/317 |
rust-lang/rust | 602066084 | Title: Tracking Issue for XXX
Question:
username_0: <!--
Thank you for creating a tracking issue! 📜 Tracking issues are for tracking a
feature from implementation to stabilisation. Make sure to include the relevant
RFC for the feature if it has one. Otherwise provide a short summary of the
feature and link any relevant PRs or issues, and remove any sections that are
not relevant to the feature.
Remember to add team labels to the tracking issue.
For a language team feature, this would e.g., be `T-lang`.
Such a feature should also be labeled with e.g., `F-my_feature`.
This label is used to associate issues (e.g., bugs and design questions) to the feature.
-->
This is a tracking issue for the RFC "Cargo report future-incompat" (rust-lang/rfcs#2834).
There is no feature gate for the issue (the changes are not language visible).
### About tracking issues
Tracking issues are used to record the overall progress of implementation.
They are also used as hubs connecting to other relevant issues, e.g., bugs or open design questions.
A tracking issue is however *not* meant for large scale discussion, questions, or bug reports about a feature.
Instead, open a dedicated issue for the specific matter and add the relevant feature gate label.
### Steps
<!--
Include each step required to complete the feature. Typically this is a PR
implementing a feature, followed by a PR that stabilises the feature. However
for larger features an implementation could be broken up into multiple PRs.
-->
- [ ] Implement the RFC (cc @rust-lang/cargo -- but username_0 will either implement or write up mentoring instructions)
- [ ] Adjust documentation ([see instructions on rustc-dev-guide][doc-guide])
- [ ] Stabilization PR ([see instructions on rustc-dev-guide][stabilization-guide])
[stabilization-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#stabilization-pr
[doc-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#documentation-prs
### Unresolved Questions
<!--
Include any open questions that need to be answered before the feature can be
stabilised.
-->
XXX --- list all the "unresolved questions" found in the RFC to ensure they are
not forgotten
### Implementation history
<!--
Include a list of all the PRs that were involved in implementing the feature.
-->
Answers:
username_1: I'm interested in working on this.
Would it be better to add additional data to each JSON diagnostic message (e.g. `future_compat: { breakage_release: "1.47.0" }`), or emit a single 'summary' message at the end of compilation?
username_0: Hi @username_1 ; come on over to the [zulip thread for the MCP](https://rust-lang.zulipchat.com/#narrow/stream/233931-t-compiler.2Fmajor-changes/topic/Add.20future-incompat.20entries.20to.20json.20diagno.20compiler-team.23315/near/201408566) and we can discuss more. (In the MCP I went with "single summary" at the end, but we can talk more about the options on zulip.)
username_1: I have a pull request up at https://github.com/rust-lang/rust/pull/75534
username_1: The rustc side of this was implemented in https://github.com/rust-lang/rust/pull/75534
username_2: Also, how do I actually use this? `cargo check --future-incompat-report -Z unstable-options -Z future-incompat-report` has no output other than running the build, and `cargo describe-future-incompatibilities` requires an `id` that I don't know.
username_3: If there is no output, that probably means there aren't any dependencies that hit a future-incompat warning. I think there is only one lint currently enabled (ARRAY_INTO_ITER).
username_4: ```
username_2: Thanks, that shows a warning like expected :)
username_1: We could, but that would get removed when the feature is stabilized. We want to eventually display a '1 dependency has future-incompat warnings' when you run a plain `cargo check` (without any flags) - displaying '0 dependencies with future-incompat warnings' on every Cargo invocation would just be noise.
username_3: I think this is referring to the `--future-incompat-report` flag, which if I understand correctly, is not going to be removed? I think it does seem a little confusing if `--future-incompat-report` is passed and it doesn't print anything.
username_1: I believe @username_5 wanted it to require `-Z unstable-options` for consistency with other unstable options. We need to require `-Z future-incompat-report` because `cargo build -Z future-incompat-report` is different from `cargo build` - the former will show a message if any future-compat warnings were emitted.
username_5: ```
I'm mainly worried about how 90% of this is gibberish if you fall into the bucket of both not having written the dependency and not even really understanding where the dependency came from. Dependency graphs can be somewhat deep nowadays so when you pull in a dependency you could get a future incompat warning for some entirely unrelated crate you have no interest in learning about (e.g. some transitive dependency).
This warning message doesn't tell you how the crate is in your crate graph, it doesn't really provide you any means of how to fix it, and showing the code is somewhat counter-productive since it's highly unlikely I'll actually do anything with the code in hand. Basically what I'm saying is that the current message format makes sense *if you authored the dependency*, but given the intended use case for this feature it seems like that would be a vanishingly rare use case.
Just about the only thing most authors can do is to execute `cargo update` in one way or another and hope that it fixes the issue. If it doesn't fix the issue then you would have a verbose lint that you can't suppress and there's nothing you can do about it except move to an older version of rustc that doesn't emit the warning (which we obviously don't want to encourage).
I don't know of a great fix for this, but I wanted to point out this aspect of where I think the UI is not great and can lead to bad/confusing user experiences.
username_1: The verbose message (with the source code) would only be shown when you explicitly opt in via `--future-incompat-report` or `cargo describe-future-incompatibilities`. The two-line message about the existence of future-compatible messages could be suppressed via `.cargo/config` once the 'Annoyance Modulation' section of the RFC is implemented.
username_1: Now that `cargo tree` is part of `cargo` itself, would it make sense to add a suggestion to run `cargo tree -i <affected_package>`?
username_5: I think you're unfortunately missing my point. I am aware that "there are warnings" is a very short message at the end of `cargo build`. I'm also aware that the intent is to implement some mechanism so that the message doesn't show up as often (but I believe no one has proposed anything concrete for this yet?). My point is that when a user actually sees the message (the verbose one), it seems highly likely they'll be baffled and have no idea what to do.
I would imagine that most users don't want warnings in their code, so if they see a scary-looking warning along the lines of "code will break" they'll be motivated to fix it rather than instantly suppress it and forget about it. Upon trying to investigate the warning Cargo will instruct them to run a further command with the full report, and I'm worried that this full report will be confusing an un-actionable to the vast majority of users (as-is the message today).
You mention that the intention of showing code is to file upstream bug reports, but I agree that the vast majority of users won't do this. That's a huge amount of noise to give to a lot of folks with the hope that someone will actually do something. You also mention that `cargo tree` can be used but I don't think that really does anything about my point which is that the report as-is coming out to users is likely to be construed as mostly gibberish. There's a whole bunch of investigation that *could* be done (e.g. with `cargo tree`, looking at bug trackers, googling, etc), but that's a lot of work to put on end users in my opinion.
Again, I don't have an idea of how to solve this; I'm just trying to point out that I feel like the current implementation is MVP-style, doesn't really do much more than connect a few wires, and the end result is something I would be uncomfortable stabilizing as-is.
username_1: The 'Annoyance modulation' section of the RFC describes adding a new `[future_incompatibility_report]` to `.cargo/config`. A user who doesn't want to see the message at all could set `future_incompatibility_report.frequency = "never"`, or `future_incompatibility_report.frequency = "weekly"` to see it less often.
username_1: As much as possible, I think we should make sure that end users don't end up in this kind of situation. This means that before enabling this feature for a particular lint, we should ensure that all (known) directly affected crates have released new versions, and that downstream crates have updated their `Cargo.toml`s (for major version bumps) as necessary.
I think a good example of this is `array_into_iter` - [based on Crater run results](https://github.com/rust-lang/rust/pull/65819#issuecomment-681132206), there are many projects with commited `Cargo.locks` that depend on an old version of an affected crate (`colored`, `lazy_static`, etc). Running `cargo update` should be enough to fix the vast majority of broken crates, *because* the work has already been done to fix the affected libraries.
I'd like to emphasize that in many cases, the kind of in-depth investigation you're talking about *must* be done by end users. Consider the case of a dependency graph with a large number of private crates, or crates hosted somewhere that's not indexed by Crater. By definition, we have no way of patching these kinds of crates ourselves - and if the compiler starts rejecting the linted code, then end users *will* start to see the kind of confusing errors we're talking about. In this situation, I think a detailed future-incompat report is critical to easing the transition to a new Rust version - the end user will need to do an investigation at some point, but we're providing the necessary information to do that work ahead of time.
However, I agree that the verbosity of the report is (hopefully) overkill for consumers of public dependencies, where the fix should ideally be to just run `cargo update`. Do you think we should 'de-emphasize' the full report in some way? Or perhaps provide an intermediate level of verbosity between the default message and the full report (though I'm not really sure what that would look like).
username_2: Is there a way for cargo to know if certain versions have warnings without compiling them, something like the rustsec database? That would let it suggest precise solutions ("help: run `cargo update -p colored --precise 1.9.0`") instead of having to guess at whether cargo update will help.
username_4: Just some quick ideas:
- `cargo build` would only show a small warning, as it already does right now.
- `cargo build --future-incompat-report` would not show the actual warnings, but only explain what the user can do to make the warning go away.
- `cargo describe-future-incompatibilities --id <id>` would print the actual warnings, as it already does. BUT the id would be per dependency. My reasoning: the actual warning is only useful to the user if they want to report this issue or fix the warning in the dependency. And doing that is only useful one dependency at a time IMO.
To throw some example for `cargo build --future-incompat-report` out there:
```
warning: the following crates contain code that will be rejected by a future version of Rust: colored v1.1.0, lazy_static v1.1.0
These crates are in your dependency tree because:
... some nice cargo-tree like output or something like that ...
To solve this problem, you can try the following things:
- Automatically update your dependencies to the latest compatible version:
- `cargo update -p colored`
- `cargo update -p lazy_static`
- If a minor dependency update does not help, you can try updating to a new
major version of those dependencies. You have to do this manually.
- colored could be updated from 1.1.0 to 2.0.0
- If the issue is not solved by updating the dependencies, a fix has to be
implemented by those dependencies. You can help with that by notifying the
maintainers of this problem (e.g. by creating a bug report) or by proposing a
fix to the maintainers (e.g. by creating a pull request).
- colored
- Repository: https://github.com/mackwic/colored
- Detailed warning: cargo describe-future-incompatibilities --id abc
- lazy_static:
- Repository: https://github.com/rust-lang-nursery/lazy-static.rs
- Detailed warning: cargo describe-future-incompatibilities --id def
- If waiting for an upstream fix is not an option, you can use the `[patch]`
section in `Cargo.toml` to use your own version of the dependency. For more
information, see:
https://doc.rust-lang.org/cargo/reference/overriding-dependencies.html#the-patch-section
- Finally, to simply silence this warning, you can ... (something something
`future_incompatibility_report.frequency = "weekly"`; link to "Annoyance
modulation" docs).
```
A few notes about that:
- For the update stuff, the local crate cache is used. If no newer (major) version exists, that option is not shown. This of course depends on the package cache being somewhat up to date.
- For the "report issue", the `repository` key in the Cargo manifest is used.
- Imagine this text to be enhanced by colors.
- Yes it's long and probably doesn't match the current "cargo style of output", but for a thing that people should see very rarely, I would rather get more information than having beautifully consistent output.
username_4: This would be really great, but is probably a lot of work. And just `cargo check`ing all newer versions of that dependency is probably out of the question.
Let me take this suggestion as an example to underline a point where I agree with @username_1 (assuming I understood them correctly): I think this "cargo report future-incompat" feature is already very useful even before we have perfected the UX around it. I'd rather have an unpolished cargo warning telling me something is wrong with a dependency a few months in advance, instead of seeing a really cryptic error message in a dependency during `cargo build`.
That said, I basically agree with all points @username_5 brought up and think we can make the user experience a lot smoother.
username_1: There are a lot of items under 'Unresolved Questions', but I don't think most of them need to block stabilization. Integrating cargo warnings and emitting JSON messages can all be added later - displaying some future-incompat messages is better than displaying none.
Performance-wise: we are already unconditionally replaying the output cache regardless of whether or not the feature is enabled, and no one has run into performance issues as far as I know. Any performance issues caused by displaying lots of future-incompat messages would also show up in a large workspace with normal warnings. Unless anyone has run into concrete problems, or we have benchmarks showing something concerning, I don't think there's anything actionable here.
I think the main things that need to be done are:
* Implement the 'Annoyance modulation' section of the RFC (or decide that we want to do something else).
* Decide if we want to replace the command with a `cargo report` subcommand (I'm not sure if any work has been done on it yet).
* Come up with a solution to the problem of displaying verbose/useless messages to users (I like @username_4's suggestion).
As long as the output is not actively confusing, I think it would be better to stabilize a 'minimal' version of this, and then continue to work on adding additional features. Having some form of this feature available will allow us to make progress towards several desirable features (`array_into_iter` and removing proc-macro back-compat hacks, for example)
username_3: I have posted a proposal to rename `cargo describe-future-incompatibilities` to `cargo report future-incompatibilities` at https://github.com/rust-lang/cargo/pull/9438.
username_3: Posted #86478 to make it easier for Cargo to test the reporting infrastructure (so it doesn't need to keep switching lints to test).
username_3: Posted https://github.com/rust-lang/cargo/pull/9606 with various fixes and updates on the cargo side, intended to make things easier/smoother.
username_3: @username_1 I was wondering if you are interested in stabilizing this. I don't think there are any major blockers, and at least on the Cargo side we feel comfortable moving forward as long as the compiler team is very conservative with enabling new lints until we've had some more real-world experience.
username_1: @username_3: I'm currently working on a PR that improve the output, based on @username_4's suggestion in https://github.com/rust-lang/rust/issues/71249#issuecomment-803000704. Once that's done, I think this feature should be ready to stabilize.
username_3: ```
</pre>
</details>
Running with `--future-incompat-report` is similar, but does not show the exact compiler warnings.
`future-incompat` is available as an alias in `cargo report` for those whose fingers get twisted while typing the full `future-incompatibilities`.
`cargo report future-incompat` without `--id` will display the most recent report.
The report command also supports the `--package` option to only show the warnings for a single package.
When stabilized, the `-Z` flags will no longer be necessary.
## Report interface
Currently, `rustc` will emit special JSON data when the `-Z emit-future-incompat-report` flag is included. When stabilized, it is intended for this to become `--json future-incompat`. The JSON structure looks like:
```javascript
{
/* An array of objects describing a warning that will become a hard error
in the future.
*/
"future_incompat_report":
[
{
/* A diagnostic structure as defined in
https://doc.rust-lang.org/rustc/json.html#diagnostics
*/
"diagnostic": {...},
}
]
}
```
This structure is somewhat bare now, but is intended to support additions in the future.
This is emitted towards the end of compilation.
Cargo intercepts these messages and stores them on disk, and reports a message to the console before exiting.
## Warning silencing
In situations where a user cannot update the offending code, they can silence the warning using a cargo config value:
```toml
[future-incompat-report]
# Value is "never" or "always"
frequency = "never"
```
## Warning opt-in
Currently there are only two warnings which will trigger a report: [`proc_macro_derive_resolution_fallback`](https://doc.rust-lang.org/nightly/rustc/lints/listing/deny-by-default.html#proc-macro-derive-resolution-fallback) and [`proc_macro_back_compat`](https://doc.rust-lang.org/nightly/rustc/lints/listing/deny-by-default.html#proc-macro-back-compat).
A lint must explicitly opt-in to triggering a report using the [`FutureIncompatibilityReason::FutureReleaseErrorReportNow`](https://github.com/rust-lang/rust/blob/2b643e987173b36cb0279a018579372e31a35776/compiler/rustc_lint_defs/src/lib.rs#L163-L165) option.
## Implementation
[Truncated]
* The "deadline" from the report was removed (the indication of a date when it will become a hard error). Cargo doesn't need that in a structured form, and I am skeptical that we will be able to accurately predict when changes will be made in the future. If a date is indeed wanted, the text can be added to the lint.
* The "annoyance modulation" is relatively basic, and does not support any sort of temporary silencing. This can be added in the future if desired.
## Known bugs
* Color support in `cargo report future-incompatibilities` is not detected properly, and will emit color escape codes when piping the output to a file (https://github.com/rust-lang/cargo/issues/9960).
## Unresolved questions
The questions above that aren't resolved:
* Cargo itself emitting its own incompatibility reports: This can be added later if needed.
* Interaction with editors: In some circumstances, a user may not see these warnings since some editors only display JSON messages, and this warning is text only. I think this shouldn't be too bad, as I think most users eventually look at console output. This is a long-running issue with Cargo not emitting its own messages as JSON.
* Performance impact: It is not expected that this should have any performance impact. In the vast majority of cases, there is no cached output for dependencies because of cap-lints. It is also expected that these warnings should be exceedingly rare.
* Telling the user the CARGO_TARGET_DIR to use: I think if a user is using a custom target directory, they should know how that works. One example where this may not be obvious is rustc's own `x.py` build system. It is not known how common complex build environments like `x.py` are. I think this is something we can iterate on in the future.
* Reports can be confusing or unactionable: We have been iterating on the format of the report to try to make it clearer, with actionable steps. However, this is definitely a major concern, and something we should be careful about. I think we can iterate more based on real-world user reports of any confusion or frustration.
## Other notes
* There currently isn't a convenient way to forbid only these warnings that are in the `FutureReleaseErrorReportNow` category. You can do something like `RUSTFLAGS="-F future-incompatible"`, but that will forbid all future-incompatible warnings. Perhaps this is something to consider in the future? I imagine either adding a dedicated lint group, or a setting in the `[future-incompat-report]` config section are options.
username_1: cc @estebank @michaelwoerister @nagisa @nikomatsakis @username_0 @wesleywiser - when you get a chance, can you review the FCP?
username_1: This has now been stabilized in both rustc and cargo!
Status: Issue closed
username_3: 🎉 This is now available in the latest nightly release (2021-12-09). Thanks @username_1!
Closing as this is now complete. |
xingyizhou/CenterTrack | 915185642 | Title: Regarding model input 'pre_hms, pre_inds'
Question:
username_0: 关于detector.py 中的output, dets, forward_time = self.process(images, self.pre_images, pre_hms, pre_inds, return_time=True),
使用mot数据集测试程序时,设置--pre_hm为false,输入images与self.pre_images相同,pre_hms和pre_inds为None,与论文中模型的输入为相邻两帧和heatmap不太一致,没有理解。希望得到你的回复。<issue_closed>
Status: Issue closed |
odalic/odalic-ui | 185361307 | Title: Highlight the task that was just created when the table is displayed afterwards
Question:
username_0: Even when I implement a stable task order on the server, it might be the case that the order will depend on the task name and not on the time of creation. This might make it hard to spot the newly created task in the table when the table is displayed immediately after the creation, because the user cannot rely on it appearing at a certain position (the first row, for example), but must look it up in the table by its name.<issue_closed>
Status: Issue closed |
carpentries/training-template | 328484873 | Title: Add survey links to template
Question:
username_0: Now that we have pre/post surveys for instructor trainees can we add them to this template (just like we have for the standard workshop template)?
Can we also have them append the workshop slug (again, like they do for the workshop template)?
Happy to do whatever is needed to make this happen but I'm not sure if there's more to it than adding a link in the template.
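For context, appending the workshop slug typically just means adding it to the survey URL as a query parameter; roughly, in Python (the URL and parameter name below are placeholders, not the real Carpentries survey endpoints):

```python
from urllib.parse import urlencode

def survey_url(base_url, workshop_slug):
    # Append the workshop slug as a query parameter (placeholder names).
    return base_url + "?" + urlencode({"workshop": workshop_slug})

print(survey_url("https://example.org/pre-survey", "2018-05-31-ttt-online"))
```

In the Jekyll template itself this would be expressed with Liquid variables rather than Python; the sketch only shows the resulting URL shape.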
Answers:
username_0: I have this same issue in two places
https://github.com/carpentries/instructor-training/issues/698 |
MrSaints/kubeseal-web | 779115230 | Title: Better indicators when a raw secret has changed, and the sealed secret is dirty
Question:
username_0: It can get a little confusing when sealing multiple secrets because there is no clear indicator if the updated raw secret was successfully sealed, and if we even triggered the sealing.
Some ideas:
- Reset the sealed secret output if the raw secret changes
- Disable the seal button after sealing, and only re-enable it if the raw secret changes
- Include an indicator (e.g. icon or message) that the raw secret is "dirty" and needs sealing
- Show a sealed success message or a loading state (maybe something similar to React Suspense, only show it if it takes longer than normal)<issue_closed>
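The "dirty" state in the second and third ideas boils down to comparing the current raw secret against whatever was last sealed. A minimal sketch (in Python for illustration; the actual app is a web UI and these names are invented):

```python
import hashlib

class SealState:
    """Track whether the sealed output is stale relative to the raw secret."""

    def __init__(self):
        self._sealed_digest = None  # digest of the raw secret we last sealed

    @staticmethod
    def _digest(raw):
        return hashlib.sha256(raw.encode("utf-8")).hexdigest()

    def mark_sealed(self, raw_secret):
        self._sealed_digest = self._digest(raw_secret)

    def is_dirty(self, raw_secret):
        # Dirty if nothing was sealed yet, or the raw secret changed since sealing.
        return self._sealed_digest != self._digest(raw_secret)

state = SealState()
assert state.is_dirty("password: hunter2")   # nothing sealed yet
state.mark_sealed("password: hunter2")
assert not state.is_dirty("password: hunter2")
```

The UI would then disable the seal button while `is_dirty` is false and re-enable it (or show an indicator) as soon as the raw input changes.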
Status: Issue closed |
dotnet/roslyn | 177290031 | Title: Linq used in DataFlowPass.cs
Question:
username_0: A recent change to DataFlowPass.cs added this assertion:
```cs
Debug.Assert(localsOpt.Where(l => l.DeclarationKind == LocalDeclarationKind.UsingVariable).All(_usedVariables.Contains));
```
Even though it is executed only in debug mode, we avoid Linq on hot paths in the compiler. Please rewrite this or remove it.
(The simplest fix would be to fuse with the immediately preceding loop. Or delete the assertion, as it tests a condition obviously established by the previous loop).
Answers:
username_1: In my opinion, it is acceptable to use Linq in asserts.
Status: Issue closed
username_0: Fixed in https://github.com/dotnet/roslyn/commit/645f5add37ccd31f1f85cc5e4f8fa6c0e9c1baf1
Status: Issue closed
|
polyrhythm-project/polyrhythm-website | 558399747 | Title: Missing "Bartok Pizz" articulation symbol
Question:
username_0: Ex. T346, Bartok, all Bartok Pizz are missing from VHV
PDF
<img width="259" alt="Screen Shot 2020-01-31 at 2 09 43 PM" src="https://user-images.githubusercontent.com/59900770/73577979-619e3300-4433-11ea-8b2c-fe014e708f91.png">
VHV
<img width="532" alt="Screen Shot 2020-01-31 at 2 09 59 PM" src="https://user-images.githubusercontent.com/59900770/73577988-6662e700-4433-11ea-90a6-a678fb055480.png">
Answers:
username_1: In the MusicXML they are called "snap-pizzicato":
```
<technical>
<snap-pizzicato default-y='10' relative-x='2'/>
</technical>
```
in MEI it is called a "snap" articulation:
https://music-encoding.org/guidelines/v3/data-types/data.articulation.html
In Humdrum there is probably no representation, but I will probably make `''` mean a Bartok pizz.
Verovio currently does not display Bartok pizz.
username_2: T576, Prokofiev - here is another unusual articulation marking that is not showing up
SIB:

VHV:

username_1: Those symbols may be alternate up/down bow symbols. Or alternately they could be alternate styles for arpeggiation: either bowed or fingered (pizz.) arpeggiation. |
conan-io/conan | 529680452 | Title: [bug] Cannot Get Started With Conan Due To bzip2
Question:
username_0: ### Environment Details (include every applicable attribute)
* Operating System+version: Mac OS Catalina 10.15
* Compiler+version: `gcc --version`
```
Configured with: --prefix=/Library/Developer/CommandLineTools/usr --with-gxx-include-dir=/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/c++/4.2.1
Apple clang version 11.0.0 (clang-1100.0.33.8)
Target: x86_64-apple-darwin19.0.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin
```
* Conan version: Conan version 1.20.4
* Python version: 3.7
### Steps to reproduce (Include if Applicable)
Try to use Conan to install Boost
It doesn't `Just Work`™️
### Logs (Executed commands with output) (Include/Attach if Applicable)
```
conan install boost/1.67.0@conan/stable
Configuration:
[settings]
arch=x86_64
arch_build=x86_64
build_type=Release
compiler=apple-clang
compiler.libcxx=libc++
compiler.version=11.0
os=Macos
os_build=Macos
[options]
[build_requires]
[env]
bzip2/1.0.6@conan/stable: Not found in local cache, looking in remotes...
bzip2/1.0.6@conan/stable: Trying with 'conan-center'...
Downloading conanmanifest.txt: 100%|##########| 163/163 [00:00<00:00, 94.7kB/s]
Downloading conanfile.py: 100%|##########| 2.21k/2.21k [00:00<00:00, 2.21MB/s]
Downloading conan_export.tgz: 100%|##########| 766/766 [00:00<00:00, 963kB/s]
Decompressing conan_export.tgz: 100%|##########| 766/766 [00:00<00:00, 201kB/s]
bzip2/1.0.6@conan/stable: Downloaded recipe revision 0
zlib/1.2.11@conan/stable: Not found in local cache, looking in remotes...
zlib/1.2.11@conan/stable: Trying with 'conan-center'...
Downloading conanmanifest.txt: 100%|##########| 296/296 [00:00<00:00, 244kB/s]
Downloading conanfile.py: 100%|##########| 8.74k/8.74k [00:00<00:00, 6.28MB/s]
Downloading conan_export.tgz: 100%|##########| 766/766 [00:00<00:00, 866kB/s]
Decompressing conan_export.tgz: 100%|##########| 766/766 [00:00<00:00, 587kB/s]
zlib/1.2.11@conan/stable: Downloaded recipe revision 0
Installing package: boost/1.67.0@conan/stable
Requirements
boost/1.67.0@conan/stable from 'conan-center' - Cache
bzip2/1.0.6@conan/stable from 'conan-center' - Downloaded
zlib/1.2.11@conan/stable from 'conan-center' - Downloaded
Packages
boost/1.67.0@conan/stable:1f76c3cab6cf7e3276c780a84295ed1362bd222d - Missing
bzip2/1.0.6@conan/stable:32bef4803d4b079e983ecb27f105881e778bc5a7 - Missing
zlib/1.2.11@conan/stable:f74366f76f700cc6e991285892ad7a23c30e6d47 - Download
bzip2/1.0.6@conan/stable: WARN: Can't find a 'bzip2/1.0.6@conan/stable' package for the specified settings, options and dependencies:
- Settings: arch=x86_64, build_type=Release, compiler=apple-clang, compiler.version=11.0, os=Macos
- Options: build_executable=True, fPIC=True, shared=False
- Dependencies:
- Package ID: 32bef4803d4b079e983ecb27f105881e778bc5a7
ERROR: Missing prebuilt package for 'bzip2/1.0.6@conan/stable'
Try to build it from sources with "--build bzip2"
Or read "http://docs.conan.io/en/latest/faq/troubleshooting.html#error-missing-prebuilt-package"
```
Answers:
username_1: Hi @username_0,
This issue is because there are still no packages generated for Apple clang 11. You are using an older version of boost that was created previously to the release of the new clang version.
As you see in the trace, packages are marked as "missing" and the message at the bottom suggests building those packages on your machine.
As there is more than one package missing, you can build all of them using `conan install boost/1.67.0@conan/stable --build missing`
As a side note, we are moving the packages to a new building service at https://github.com/conan-io/conan-center-index so we can guarantee that packages are updated and fixed as soon as possible with the help of the Conan community. |
BookStackApp/BookStack | 958410576 | Title: Editor callouts not working in some pages
Question:
username_0: **Describe the bug**
We are having an issue that only affects some pages: when applying one of the 4 callout formats, either nothing happens, or the formatting changes but not in the desired way. When trying on one of those pages I see that some areas do allow the callouts to be applied, but others do not.
**Steps To Reproduce**
Open an "affected" page, edit it, highlight some text, try to apply a call out.
Page that works: https://wiki.oceanbuilders.com/books/open-source-projects/page/marine-navigation
Page that has issues: https://wiki.oceanbuilders.com/books/open-source-projects/page/ocean-monitoring-station
**Expected behavior**
The callouts should always be applied to the selected paragraph.
**Screenshots**
The issue can be seen clearly in this video http://files.oceanbuilders.com/f/0d7fb53d72134b61b55d/
**Your Configuration (please complete the following information):**
- Exact BookStack Version (Found in settings): 21.05.3
- PHP Version: 8.0.8
- Hosting Method (Nginx/Apache/Docker): Apache
Please let me know what other information might help!
Important: I also tried deleting all the custom HMTL Head Content from Settings, the problem was still there.
Answers:
username_1: Hi @username_0,
Thanks for providing examples and a video for this, makes it much easier it look into & understand.
Have just been testing this. From what I can see, this comes down to specific blocks of content.
Looking at the content of the page that has issues, the content structure behind the scenes is much more complex. I get the feeling this has been copied in from another source. Usually the editor will attempt to clean-up the content on entry but I think it's had trouble here.
This then leads to trouble when a callout is attempted to be added. Note: this doesn't break callouts on the whole page, just within that area. If you create a new section (after a couple of fresh newlines) at the bottom or the top, you'll likely be able to add a callout in these areas. You can kind of fix broken areas by copying the content and pasting it back in as plain text (for me on Firefox/Linux that's via Ctrl+Shift+V).
username_0: Thanks, that was indeed the issue. I tried in that affected page to copy the text, deleting it and pasting it as plain text and now callouts work in that page.
Thanks @username_1 . I am closing the issue since this doesn't seem to need any action on BookStack development.
Status: Issue closed
|
convox/rack | 313981753 | Title: Using underscore in service name causes ELB listener rule creation to fail
Question:
username_0: It seems that if a service name in the `services` section contains an underscore, creation of the `BalancerServiceNameListenerRule80` resource fails with the following message:
```
2018-04-13T05:29:14Z system/aws/cfm CREATE_FAILED BalancerAdProcessorListenerRule80 Condition value for field 'host-header' must be a valid hostname
```
convox.yml:
```
services:
ad_processor:
build: .
environment:
...
health: /v1/health
port: 8080
scale:
count: 2
memory: 2048
cpu: 2048
```
The template that creates the resource: https://github.com/convox/rack/blob/7c971d614ed967464f7fe94109f1eb37243205ce/provider/aws/formation/app.json.tmpl#L120
Maybe this could be checked for in a validation step beforehand?
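For reference, an ALB host-header condition must be a valid RFC 1123 hostname, whose labels allow only letters, digits, and hyphens; underscores are rejected, which is exactly what the CloudFormation error says. A pre-flight validation step could be sketched like this (Python used for illustration; Convox itself is written in Go, and the function name is made up):

```python
import re

# RFC 1123 hostname label: letters, digits and hyphens, 1-63 characters,
# not starting or ending with a hyphen.
HOSTNAME_LABEL = re.compile(r"^[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?$")

def valid_service_name(name):
    """Return True if the service name can be used in an ELB host-header rule."""
    return bool(HOSTNAME_LABEL.match(name))

print(valid_service_name("ad_processor"))  # the underscore makes this invalid
print(valid_service_name("ad-processor"))
```

Running such a check before generating the CloudFormation template would surface the problem at validation time instead of mid-deploy.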
Answers:
username_1: Would be happy to accept this as a PR.
Status: Issue closed
|
goodmami/gtest | 60763647 | Title: Make color disableable
Question:
username_0: Currently gTest always outputs ANSI color escapes. It should behave more like `grep`, which only outputs color (when `--color=auto`) when the output is a TTY and not a file. An option like `--color=always` could be used to output color escapes even to a file (which can be useful if you want to `cat` the results and get color again).
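The grep-style behavior described above can be sketched in a few lines of Python (the function and flag names here are illustrative, not gTest's actual API):

```python
import sys

def use_color(mode="auto", stream=sys.stdout):
    """Decide whether to emit ANSI color escapes, grep-style.

    mode: "always", "never", or "auto". In "auto" mode, color is used only
    when the stream is a TTY, so output piped to a file stays plain.
    """
    if mode == "always":
        return True
    if mode == "never":
        return False
    return hasattr(stream, "isatty") and stream.isatty()

def colorize(text, code, enabled):
    # Wrap text in an ANSI escape only when color is enabled.
    return f"\x1b[{code}m{text}\x1b[0m" if enabled else text

print(colorize("match", "31", use_color("auto")))
```

`--color=always` would then map to `use_color("always")`, keeping the escapes even when stdout is redirected to a file.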
Answers:
username_0: I wrote the wrong issue number in the merge message, but this was resolved in bf1e8e1907a55d3d20cc10a624145b47167eec7d
Status: Issue closed
|
LUMII-AILab/LVWordNet | 690984388 | Title: Undeciphered import morpho-report problems
Question:
username_0: @PeterisP, you said you could extract which entries these error messages came from. We need to get the list of entries so that @lrituma can fix them later.
````
entry null [ 'Saīsinājums' ]
entry null [ 'Vārds svešvalodā' ]
entry null [ 'Vārds svešvalodā' ]
entry null [ 'Saīsinājums' ]
entry null [ 'Saīsinājums' ]
entry null [ 'Saīsinājums' ]
entry null [ 'Saīsinājums' ]
entry null [ 'Saīsinājums' ]
entry null [ 'Saīsinājums' ]
entry null [ 'Saīsinājums' ]
entry null [ 'Vārds svešvalodā' ]
entry null [ 'Vārds svešvalodā' ]
entry null [ 'Vārds svešvalodā' ]
entry null [ 'Vārds svešvalodā' ]
entry null [ 'Saīsinājums' ]
entry null [ 'Saīsinājums' ]
entry null [ 'Vārds svešvalodā' ]
entry null [ 'Vārds svešvalodā' ]
entry null [ 'Saīsinājums' ]
entry null [ 'Saīsinājums' ]
entry null [ 'Saīsinājums' ]
entry null [ 'Saīsinājums' ]
entry null [ 'Vārds svešvalodā' ]
entry null [ 'Vārds svešvalodā' ]
entry null [ 'Vārds svešvalodā' ]
entry null [ 'Vārds svešvalodā' ]
entry null [ 'Vārds svešvalodā' ]
entry null [ 'Vārds svešvalodā' ]
entry null [ 'Vārds svešvalodā' ]
entry null [ 'Vārds svešvalodā' ]
entry null [ 'Saīsinājums' ]
```` |
naser44/1 | 106189525 | Title: The incidence of hereditary colon tumors ranges from 5 to 8% and may require removal of the colon
Question:
username_0: <a href="http://ift.tt/1UOFDPY">The incidence of hereditary colon tumors ranges from 5 to 8% and may require removal of the colon</a> |
Maha-Magdy/space-travelers-hub | 1022986053 | Title: Set up Kanban board
Question:
username_0: You can access the GitHub project at this link:
[Space travel's Hub](https://github.com/Maha-Magdy/space-travelers-hub)
People participating in this project(2):
[Maha-Magdy](https://github.com/Maha-Magdy)
[Herokudev](https://github.com/username_0)
Answers:
username_1: # STATUS: CHANGES REQUIRED ♻️
Hi @username_0 and @Maha-Magdy 👋,
Nice work so far on this milestone of the project 😄, but there are still some changes to do.
### Required Changes ♻️
- [ ] There are 2 members on your team, so I should see `17` cards only, currently, I'm seeing `27` cards, please follow the project requirements step by step.
- [ ] All cards should be assigned to a team member, right now any of the cards are assigned
- It should look something like this:

- Yours look like this (No circle with photo):

- [ ] It is required for each team member to have assigned a card of each of these categories `Setup`, `Fetch data`, `Lists render`, `Actions`, `Conditional components`, `Filters`
- [ ] Remember that the task should be distributed in a fair way. 😄
Cheers and Happy coding!👏👏👏

If anything is not 100% clear please leave your questions or comments in the PR thread, we will be glad to help you 😄.
**_Please, do not open a new Pull Request for re-reviews. You should use the same Pull Request submitted for the first review, either valid or invalid unless it is requested otherwise._**
------
_As described in the [Code reviews limits policy](https://microverse.zendesk.com/hc/en-us/articles/1500004088561) you have a limited number of reviews per project (check the exact number in your Dashboard). If you think that the code review was not fair, you can request a second opinion using [this form](https://airtable.com/shrQAqnBwek5a0O0s)._
username_0: Suggested changes done!
username_0: ok, ready to submit |
BerryFarm/berrymuch | 626717071 | Title: Errors during install on device
Question:
username_0: Just tried to install the berrymuch 0.3 release on my Passport, and during the install I was greeted with many Python errors:
```ImportError: cannot import name sha512```
Answers:
username_0: 
username_1: @vaskas any idea what's going wrong ?
username_0: Wrong shell. In term48 works
Status: Issue closed
|
pyjobs/web | 174534048 | Title: Page source, wording
Question:
username_0: Remplacer le texte
```
pyjobs agrège les données à partir de différents jobboard — ce qu'on appelle « sources ».
Si une source manque, vous pouvez l'ajouter. C'est facile et tout est expliqué sur le dépôt github .
Alors n'hésitez pas à contribuer !
```
with:
```
<p>pyjobs agrège les annonces concernant la technologie python à partir de différents jobboards. Vous trouverez ci-dessous la liste des différents jobboards pris en charge par pyjobs, ainsi que le niveau de détail des informations prises en charge</p>
<p>Un site manque ? Ce n'est pas un problème. pyjobs est un logiciel libre auquel vous êtes invités à contribuer :)</p>
``` |
hex-sh/terraform-provider-scaleway | 103130767 | Title: Add a ip resource
Question:
username_0: Depends on https://github.com/scaleway/scaleway-cli/issues/178
Answers:
username_0: Depends on https://github.com/scaleway/scaleway-cli/issues/178
username_1: For your information, we can assign ips using **PUT/PATCH** on **servers** OR **PUT/PATCH** on **ips** resources, but in **scaleway-cli** we will work on the **ips** resource
username_0: That's similar to https://www.terraform.io/docs/providers/aws/r/eip.html so that's good. As it's how people that use terraform expect it to work.
Status: Issue closed
|
promregator/promregator | 323617763 | Title: Data Volume of promregator_request_latency too large / disable it
Question:
username_0: ## Summary / Problem Statement
When scraping many targets, the internal metrics for `promregator_request_latency_*` create a huge amount of data volume for Promregator's own metrics. Moreover, the data is sent to Prometheus, where it is stored by default.
## Observed Behavior
The data is provided by default.
## Expected Behavior
The data shall not be provided by default, but generation may be enabled via configuration option.
Answers:
username_0: Will be part of
* 0.3.6 and later
* 0.4.0 and later
Status: Issue closed
|
pocoproject/poco | 149185451 | Title: HTTPServer and TCPServer get dispatcher
Question:
username_0: Hello all, in one of my applications I have a TCP and an HTTP server. I need to accept a connection (i.e. open a socket) in a separate thread (because it is on a different port) and then let the TCP or HTTP server handle it in the correct way. The way I'm doing it now is to get the `TCPServerDispatcher* _pDispatcher` from the server and push the socket to it.
Do you think there is a way, or will there be one, to get the dispatcher and push an external socket to it?<issue_closed>
Status: Issue closed |
blt/robot_utopia | 64848970 | Title: rebar3 cannot build jiffy
Question:
username_0: ## Description
Jiffy is a lolspeed NIF JSON library. rebar3 can't compile jiffy as it doesn't, as I understand it, bundle a Makefile, relying instead on compile hooks which rebar2 supports but rebar3 does not.
## To Reproduce
$ make console
Answers:
username_0:
```
ls: _build/default/lib/jiffy/priv.so: No such file or directory
```
adhenrique/react-native-redux-thunk-authentication-example | 448132603 | Title: working of your app.
Question:
username_0: What exactly is the workflow of your app?
Is it like: after pressing the Acessar button it goes to another screen if it's authorized, and then we can press the logout button to go back to the previous page?
Answers:
username_1: @username_0 that's it!
Status: Issue closed
username_1: @username_0 i need to update the Readme with more specific details about this repo. Sorry about this.
username_0: @username_1 Thank you! |
nulpoet/mjkey | 752695947 | Title: Where to get taxi receipts in Taiyuan - Where to get taxi receipts in Taiyuan
Question:
username_0: Where to get taxi receipts in Taiyuan [WeChat: ff181一加一⒍⒍⒍] [QQ: 249⒏一加一357⒌⒋0] Because of her low education and her looks, many jobs were off limits to her. In the end she found a job as a customer service rep at a telecom company, dealing with people by voice only, with no face-to-face contact. Her naturally optimistic disposition plus her hard work made my cousin a popular favorite at her workplace.
Some enthusiastic aunts always like to play matchmaker for
https://github.com/nulpoet/mjkey/issues/534
https://github.com/nulpoet/mjkey/issues/535
https://github.com/nulpoet/mjkey/issues/536 |
lord-kyron/terraform-provider-phpipam | 644561398 | Title: Run this provider on Windows 10 x64
Question:
username_0: Hello,
when I try to run this provider on Windows 10 x64, I get following error:
"The program or feature ... cannot start or run due to incompatibity with 64-bit versions of Windows. Please contact the software vendor to ask if a 64-bit Windows compatible version is available." It looks like it is a 16 Bit app.
As I know, there is no way to run x16 on x64 windows.
Can you please provide an other version of the provider?
Answers:
username_1: I've never used it on Windows so I am not sure about it.
@username_2 - what do you think about that?
username_2: Personally, I don't use Windows either.
But users can build the exe file themselves.
Here is how to build a Go application for Windows x64:
https://www.digitalocean.com/community/tutorials/how-to-build-go-executables-for-multiple-platforms-on-ubuntu-16-04
https://stackoverflow.com/questions/41566495/golang-how-to-cross-compile-on-linux-for-windows
Example:
```
GOOS=windows GOARCH=amd64 go build
```
Installation of terraform on Windows:
https://learn.hashicorp.com/terraform/getting-started/install.html
WIndows plugin directory - `%APPDATA%\terraform.d\plugins`
https://www.terraform.io/docs/extend/how-terraform-works.html
I've tested phpipam terraform plugin on windows. Seems all works:
<img width="797" alt="image" src="https://user-images.githubusercontent.com/53462452/86139833-dbc50780-baf8-11ea-87a5-20d620b84ccc.png">
Status: Issue closed
|
jpadilla/pyjwt | 466300368 | Title: Encoding with RSA says, Unable to parse an RSA_JWK from key: <cryptography.hazmat.backends.openssl.rsa._RSAPrivateKey object at 0x0000000007643390>
Question:
username_0: I think that is because I passed an `RSAPrivateKey` object. What is the correct way to do this?
```python
import os
import jwt
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.serialization import load_pem_private_key

pwd = os.path.dirname(__file__)
private_key = open(pwd + '/authkey').read().encode('ascii')
priv_rsakey = load_pem_private_key(private_key, password='<PASSWORD>'.encode('utf-8'), backend=default_backend())
#print(priv_rsakey.__dict__)
token = jwt.encode({'id': 'the_id'}, priv_rsakey, algorithm='RS256')  #.decode('utf-8')
return token
```
Answers:
username_1: After playing a bit with EC keys (not RSA, sorry), I've discovered that:
* PEM file content (as `str`) can be passed directly to `jwt.encode` as key argument
* FWIW, I have `PyJWT==1.7.1 cryptography==2.8` installed and backend is openssl
username_2: I got a very similar error using the cryptography backend:
```
jose.exceptions.JWSError: Unable to parse an RSA_JWK from key: <jose.backends.cryptography_backend.CryptographyRSAKey object at 0x10c4de6a0>
```
```python
rsa_key = {
"p": "...",
"kty": "RSA",
"q": "...",
"d": "...",
"e": "AQAB",
"use": "sig",
"kid": "test",
"qi": "...",
"dp": "...",
"alg": "RS256",
"dq": "...",
"n": "..."
}
test_key = jwk.construct(rsa_key)
jws.sign({}, test_key, headers={'kid': 'test'}, algorithm=ALGORITHMS.RS256)
```
Note that i can call `test_key.sign()` just fine. |
zeshan321/ActionHealth | 960144521 | Title: [Suggestion] ItemsAdder placeholders and PAPI placeholders support in styles
Question:
username_0: ItemsAdder is a premium plugin which can break Minecraft limits with resource packs and plugin code.
The plugin has 2 placeholders which, sadly, don't work in your plugin:
**%img_FONT PICTURE NAME%**, provided by PlaceholderAPI, allows using font-pictures (unicode symbols with textures) without cache issues.
**:offset_NUMBER:**, provided by ItemsAdder itself, allows offsetting a picture on the X-axis (left and right).
I made a little addon for ItemsAdder and your plugin, but I can't publish it because I used a hacky way to add textures and offsets in it. It won't work on other PCs without cache problems.
ItemsAdder API is free. You can find it here - https://itemsadder.devs.beer/developers/java-api
If you want to make some tests but don't want to buy ItemsAdder, open a support ticket on our Discord support server and we'll gift you a copy of the plugin.
Thank you.
ItemsAdder spigot link - https://www.spigotmc.org/resources/✅must-have✅-itemsadder✨custom-items-huds-guis-mobs-3dmodels-emojis-blocks-wings-hats-liquids.73355/
Addon screenshot, which I made:
https://media.discordapp.net/attachments/777899330203025418/872229264261521418/unknown.png
https://media.discordapp.net/attachments/777899330203025418/872229264261521418/unknown.png |
openebs/openebs | 248690296 | Title: Percona pod crashes during demonstration of HA
Question:
username_0: #### Setup:
- K8s 1.7.0
- 1 K8s master & 3 minions
- Percona app was created (ref - https://github.com/openebs/openebs/tree/master/k8s/demo/percona)
- One of the minions holding above jiva replica was shutdown
- Volume goes into `read only` state
- confirmed after looking into jiva controller logs
- Percona app entered into `CrashLoopBackOff` state
#### Workaround:
- logout of iscsi session & login back
- mount the iscsi volume to percona mount path
#### Logs
```bash
ubuntu@kubemaster-01:/vagrant/demo/percona$ kubectl get pod
NAME READY STATUS RESTARTS AGE
maya-apiserver-3516355633-m0hpl 1/1 Running 0 52m
openebs-provisioner-3862123968-cx1vs 1/1 Running 1 6h
percona 0/1 CrashLoopBackOff 15 1h
pvc-cc2824ea-7c22-11e7-8562-021c6f7dbe9d-ctrl-339357979-2fb9c 1/1 Running 0 1h
pvc-cc2824ea-7c22-11e7-8562-021c6f7dbe9d-rep-32159078-43s3b 1/1 Running 0 1h
pvc-cc2824ea-7c22-11e7-8562-021c6f7dbe9d-rep-32159078-d5zkw 1/1 Running 1 1h
```
- kubectl logs pvc-cc2824ea-7c22-11e7-8562-021c6f7dbe9d-ctrl-339357979-2fb9c
```
10.36.0.0 - - [08/Aug/2017:10:22:17 +0000] "GET /v1/stats HTTP/1.1" 200 376
time="2017-08-08T10:22:18Z" level=error msg="Error reading from wire: EOF"
time="2017-08-08T10:22:18Z" level=error msg="Setting replica tcp://10.36.0.1:9502 to ERR due to: EOF"
time="2017-08-08T10:22:18Z" level=info msg="Set replica tcp://10.36.0.1:9502 to mode ERR"
time="2017-08-08T10:22:18Z" level=error msg="Ignoring error because tcp://10.44.0.3:9502 is mode RW: tcp://10.36.0.1:9502: EOF"
time="2017-08-08T10:22:18Z" level=info msg="Removing backend: tcp://10.36.0.1:9502"
time="2017-08-08T10:22:18Z" level=error msg="<nil>"
time="2017-08-08T10:22:18Z" level=info msg="Monitoring stopped tcp://10.36.0.1:9502"
time="2017-08-08T10:22:18Z" level=info msg="Closing: 10.36.0.1:9502"
[0 18 112 0 4 0 0 0 0 10 0 0 0 0 68 0 0 0 0 0]
SENDS SENSE LENGTH = 20
[0 18 112 0 4 0 0 0 0 10 0 0 0 0 68 0 0 0 0 0]
[0 18 112 0 4 0 0 0 0 10 0 0 0 0 68 0 0 0 0 0]
SENDS SENSE LENGTH = 20
[0 18 112 0 4 0 0 0 0 10 0 0 0 0 68 0 0 0 0 0]
time="2017-08-08T10:22:18Z" level=error msg="Mode: ReadOnly"
time="2017-08-08T10:22:18Z" level=error msg="Mode: ReadOnly"
time="2017-08-08T10:22:18Z" level=error msg="Mode: ReadOnly"
time="2017-08-08T10:22:18Z" level=warning msg="check condition"
time="2017-08-08T10:22:18Z" level=error msg="Mode: ReadOnly"
time="2017-08-08T10:22:18Z" level=error msg="Mode: ReadOnly"
time="2017-08-08T10:22:18Z" level=error msg="Mode: ReadOnly"
time="2017-08-08T10:22:18Z" level=warning msg="check condition"
```<issue_closed>
Status: Issue closed |
ethz-asl/catkin_boost_python_buildtool | 577304765 | Title: CMAKE_MODULE_PATH
Question:
username_0: When I was building the minkindr_python, there are some errors and I couldn't handle them. The errors are as follows:
Errors << minkindr_python:cmake /home/michael/dslam_ws/logs/minkindr_python/build.cmake.021.log
CMake Error at /home/michael/dslam_ws/devel/share/catkin_simple/cmake/catkin_simple-extras.cmake:38 (find_package):
By not providing "Findcatkin_boost_python_buildtool.cmake" in
CMAKE_MODULE_PATH this project has asked CMake to find a package
configuration file provided by "catkin_boost_python_buildtool", but CMake
did not find one.
Could not find a package configuration file provided by
"catkin_boost_python_buildtool" with any of the following names:
catkin_boost_python_buildtoolConfig.cmake
catkin_boost_python_buildtool-config.cmake
Add the installation prefix of "catkin_boost_python_buildtool" to
CMAKE_PREFIX_PATH or set "catkin_boost_python_buildtool_DIR" to a directory
containing one of the above files. If "catkin_boost_python_buildtool"
provides a separate development package or SDK, be sure it has been
installed.
Call Stack (most recent call first):
CMakeLists.txt:5 (catkin_simple)
Answers:
username_1: Could it be that you don't have this repo cloned into your workspace? Did you follow some instructions that might be missing this information? If so please tell us so we can fix the doc there -- assuming it is
Status: Issue closed
username_1: Closing this because it is a duplicate of https://github.com/ethz-asl/minkindr/issues/70 .
username_0: Sorry for the late reply. I have already fixed it as I didn't get the repo in my computer due to some network problems. Thanks you. |
StylishThemes/GitHub-Dark | 422002734 | Title: New/edit file page has mixed backgrounds
Question:
username_0: The `.file-box` class is where the background color is coming from. It's using the value from one of the syntax themes.
* **Browser**: chrome & FF
* **Operating System**: win10
* **Link to page with the issue**: https://github.com/StylishThemes/GitHub-Dark/new/master
* **Syntax theme**: Monokai
* **Screenshot**:

Answers:
username_1: lol you introduced the bug [by yourself](https://github.com/StylishThemes/GitHub-Dark/blame/037f1a2b63e0c5daa9fce6ff7827ed80e181fb7b/github-dark.user.css#L5444) with 1.21.14 :-)
Just wanted to file a bug that `.file-box` should have transparent background and found this already.

username_0: That's just because I did the release, the selector was added in 09ca5032e6bac947b4ceb614a9c5cae8f608d36c by @username_2 so it's their fault 😛
Status: Issue closed
username_2: Made that selector more specific now and did a tweak to remove a ugly margin above codemirror. |
nagrobin1/-Intro-To-IOS-Project-1--Review-My-App | 123774219 | Title: My app is complete. Please review . /cc @codepathreview
Question:
username_0: My app is complete. Please review . /cc @codepathreview
Answers:
username_1: Great work Robbin! I love the app icon and all the optional features you tackled! :)
This pre-work is a preview of our weekly project process. Generally, weekly projects take about 5 hours to complete the required features and an additional 5 hours to complete the optional features. In general, we've seen that the more hours you log, the quicker you improve your proficiency with iOS.
The purpose of this project was to begin to explore Xcode and to get a broad overview of iOS development using Swift. For example, in this project, we explored the following concepts:
Code styling in Swift. You can find some code styling guides here:
[Ray Wenderlich Swift Style Guide](https://github.com/raywenderlich/swift-style-guide)
[Github Swift Style Guide](https://github.com/github/swift-style-guide)
- Views are created in Storyboard, Interface Builder, or programmatically, but they have the same goal: instantiate, initialize, and layout view objects. We use IBOutlets to give names to view objects, similar to giving unique ids to divs in HTML.
- We registered for touch events, which can be done programmatically or via IBActions.
- We explored NSUserDefaults, one of the four persistence strategies in iOS.
- View controllers have a set of methods that are called when it loads, appears, or disappears. These are called view controller lifecycle methods.
Do your views look good on iPhone 4, 5, and 6? We will cover in class how to use Auto Layout to robustly design your views for different screen sizes and OS versions.
After this assignment, you should understand the purpose of IBOutlets and IBActions as well as the basics of designing views and programmatically interacting with the views from the controller.
<NAME>
CodePath |
projectblacklight/blacklight | 505470573 | Title: Update quickstart documentation to use docker
Answers:
username_1: Am I alone in thinking this would be better in the `README`? Maybe take away the label "Quick Start" and replace it with "Run a demo Blacklight application in Docker"? I have a little bit of a worry that "Quick Start" gives a false expectation that running Blacklight will be "easy and quick" but maybe I'm crazy. Also, having this information in the README allows for documentation to be more accurate per the version/commit it is on.
username_2: @username_0 Currently, I thought the plan was to use docker for 1) CI for Blacklight, and 2) local development of Blacklight proper (not of Blacklight apps). If that is true, I don't think we want to amend the Quickstart. Unless we want Blacklight to generate docker-compose.yml for downstream apps, or we want to add docker-compose knowledge as a requirement for getting a Blacklight app up and running. cc: @username_1
username_1: I think it's okay to have docker-compose be _the_ way of doing a "quick start" with Blacklight. It isn't really a "requirement" so much as a "hey here's one super easy way to get going quick" - people can use whatever they want to fulfill the stack requirements of ruby, rails, bundler and solr (`homebrew` or `vagrant` + solr_wrapper for example). Seems like docker-compose would be okay, just my $.02.
cyang-kth/fmm | 935732650 | Title: Encountered installation issue for mac
Question:
username_0: Hi Can,
I tried to install this package on my Mac, but ran into the following issue: when I ran `make -j4`, I got a bunch of errors like **"error: constexpr function's return type 'void' is not a literal type"**.
I read the Q&A for make errors, but I'm not sure if I went wrong at this step. I checked `which python`; it shows **/Applications/Anaconda/anaconda3/bin/python**. I also checked `cmake ..`; it shows:
"-- CMAKE version 3.21.0-rc2
-- Set CMP0074 state to NEW
-- Set CMP0086 state to NEW
-- Set CMP0078 state to NEW
-- Set CONDA_PREFIX /Applications/Anaconda/anaconda3
-- GDAL headers found at /usr/local/include
-- GDAL library found at /usr/local/Cellar/gdal/3.3.0_2/lib/libgdal.dylib
-- Boost headers found at /usr/local/include
-- Boost library found at Boost::serialization
-- Boost library version 1_76
-- OpenMP_HEADERS found at /Applications/Anaconda/anaconda3/include
-- OpenMP_CXX_LIBRARIES found at /Applications/Anaconda/anaconda3/lib/libomp.dylib
-- Installation folder /usr/local
-- Not install fmm headers
-- Add python cmake information
-- Swig version is 4.0.2
-- Python header found at /Applications/Anaconda/anaconda3/include/python3.8
-- Python library found at /Applications/Anaconda/anaconda3/lib/libpython3.8.dylib
-- Python packages /Applications/Anaconda/anaconda3/lib/python3.8/site-packages
-- Using swig add library
-- Configuring done
-- Generating done"
I don't know how to figure this out so I need your help :)
Answers:
username_0: I'm still stuck at `make -j4`. Or could this be a problem with my macOS? It's been updated to 11.4.
username_1: The problem might be related to boost: https://github.com/username_1/fmm/issues/162.
username_0: thanks, but how can I deal with this problem?
username_1: It seems that the newer version of boost geometry requires C++14.
https://github.com/mapnik/mapnik/issues/4196
You can either downgrade the boost library, or try to **set C++14 for the compiler by changing this line in the CMakeLists.txt** file.
https://github.com/username_1/fmm/blob/6b43a16ea2b78cbbb03d79eb9fc7a45c4732a8fa/CMakeLists.txt#L30
```
set(CMAKE_CXX_STANDARD 14)
```
username_0: Thanks, Can! That works for me now!
username_1: This issue is closed. C++14 support needs to be added to the CMake file, perhaps by detecting the Boost library version to adjust this parameter.
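For reference, a version-dependent switch along those lines might look like this in `CMakeLists.txt`. This is an untested sketch: the `1.75` cut-off is an assumption (adjust it to the first Boost release whose geometry headers actually require C++14), and `Boost_VERSION_STRING` is only populated by `find_package(Boost)` on CMake >= 3.15.

```cmake
# Untested sketch: choose the C++ standard from the Boost version found by
# find_package(Boost ...). The 1.75 threshold is an assumption.
if(Boost_VERSION_STRING VERSION_GREATER_EQUAL "1.75")
  set(CMAKE_CXX_STANDARD 14)
else()
  set(CMAKE_CXX_STANDARD 11)
endif()
set(CMAKE_CXX_STANDARD_REQUIRED ON)
```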
Status: Issue closed
artipie/artipie | 657882972 | Title: Underscores(_) and - are not allowed in config names
Question:
username_0: Artipie startup command:
```bash
$ docker run --name artipie -it -v $(pwd)/artipie.yaml:/etc/artipie.yml -v $(pwd):/var/artipie -p 8080:80 artipie/artipie:latest
```
Config structure:
```
├── artipie.yaml
├── configs
└── my_helm_repo.yaml
```
__`my_helm_repo.yaml:`__
```yml
repo:
path: "http://localhost:8080/my-helm-repo"
type: helm
storage:
type: fs
path: /var/artipie/data
permissions:
"*":
- "*"
```
__`artipie.yaml:`__
```yml
meta:
storage:
type: fs
path: /var/artipie/configs
layout: flat
```
```bash
$ curl -i -X POST --data-binary "@tomcat-0.4.1.tgz" http://localhost:8080/my_helm_repo
HTTP/1.1 500 Internal Server Error
Content-Length: 62
transfer-encoding: chunked
Request path /my_helm_repo was not matched to /(?:[^/.]+)(/.*)%
```
I think they should be allowed, because they are allowed in file names.
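For what it's worth, the mismatch from the error message can be reproduced outside Artipie. Here is a quick probe of the reported pattern; Python's `re` is used purely for illustration, since this particular expression behaves the same under Java's regex engine. Note this only probes the pattern itself, not Artipie's full route table.

```python
import re

# The pattern quoted in Artipie's error message.
pattern = re.compile(r"/(?:[^/.]+)(/.*)")

# The exact request path from the report does not match in full...
assert pattern.fullmatch("/my_helm_repo") is None
# ...while the same repo name followed by another path segment does:
assert pattern.fullmatch("/my_helm_repo/index.yaml") is not None
print("the pattern requires a second path segment after the repo name")
```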
Answers:
username_1: @username_2[/z](https://www.username_1.com/u/username_2) this job was assigned to you 5days ago. It will be taken away from you soon, unless you close it, see [§8](http://www.zerocracy.com/policy.html#8). Read [this](http://www.yegor256.com/2014/04/13/no-obligations-principle.html) and [this](http://www.yegor256.com/2014/11/24/principles-of-bug-tracking.html), please.
<!-- https://www.username_1.com/footprint/CT2E6TK9B/9dae4510-2384-4d5b-9c20-5551c763d8f4, version: 0.54.5, hash: ${buildNumber} -->
username_2: @username_0 I'm trying to reproduce this problem using a `Pie` instance, so I can do this programmatically, but I'm having no success. I did not find any documentation on how to start a `Pie` (without using docker and all). How do I tell `Pie` where it should look for the yaml config files?
username_0: @username_2, Artipie has a main class called VertxMain; you can run it from an IDE. Alternatively, you can build and run Artipie as a jar.
Status: Issue closed
|
jugeeya/UltimateTrainingModpack | 691463603 | Title: Visual Bug: SDI Crest points always upwards
Question:
username_0: The grey half crest is a visual to indicate the SDI direction.
With "Set SDI" no matter what direction the CPU is SDIing the crest is always pointing upward even tho the CPU is SDIng in the correct direction.
Example 1: https://i.imgur.com/2aQigxx.png
The Wolf CPU started on the red line and SDId "out" but the crest is pointing upwards.
Example 2: https://i.imgur.com/sgqsupt.png
This is what Player controlled SDI out looks like. The Mod CPU somehow circumvents the DI Line as well.
This is important, because you use the indicator to adjust your drifting during aerials according to the opponents SDI (well at least I do haha and Luigi players for sure)
Answers:
username_0: After the DI Indicator got fixed, can this be fixed next? |
throneteki/throneteki | 490744755 | Title: Azor Ahai Reborn participation
Question:
username_0: Hello,
Stannis has Azor Ahai Reborn. An Acolyte initiates a challenge, which makes Stannis participate; then the Acolyte is killed by <NAME>, and Stannis should no longer be participating, though he does. Seems like a bug. See screenshot.
Sergey

Answers:
username_1: Thanks for the report and the screenshot! We have tests that say this should work, but they use explicit challenge removal (e.g. Highgarden). This is likely a bug with removal from the card leaving play (in this case, being killed) instead. Will need to look closer.
Status: Issue closed
spring-projects/spring-integration | 525702072 | Title: Default MessageChannel bean for @ServiceActivator missing!
Question:
username_0: The following code use do work up to `spring-boot 2.1.10`. But as of now the following code does not find the `MessageChannel` bean anymore:
```
@MessageEndpoint
public class ExampleMessageEndpoint {
//creates a DirectChannel named "exampleChannel" implicitly
@ServiceActivator(inputChannel = "exampleChannel")
public String example(String example) {
//...
}
}
@Configuration
@EnableIntegration
public class ExampleConfiguration {
@Bean
public TcpConnectionFactoryFactoryBean factory() throws Exception {
TcpConnectionFactoryFactoryBean f = new TcpConnectionFactoryFactoryBean();
f.setType("server");
f.setPort(port);
f.setUsingNio(true);
//...
return f;
}
//also @Qualifier("exampleChannel") does not work
@Bean
public TcpInboundGateway gateway(TcpConnectionFactoryFactoryBean f, MessageChannel c) throws Exception {
TcpInboundGateway g = new TcpInboundGateway();
g.setConnectionFactory(f.getObject());
g.setRequestChannel(c);
return g;
}
}
```
Error:
```
Parameter 1 of method gateway in ExampleConfiguration required a single bean, but 2 were found:
- nullChannel: defined in null
- errorChannel: defined in null
```
Answers:
username_1: Well, it doesn't even work without the `@Qualifier("exampleChannel")` in Boot 2.1.10:
```
Parameter 0 of method testBean in org.springframework.integration.gh3111.Gh3111Application required a single bean, but 3 were found:
- nullChannel: defined in null
- errorChannel: defined in null
- exampleChannel: a programmatically registered singleton
Action:
Consider marking one of the beans as @Primary, updating the consumer to accept multiple beans, or using @Qualifier to identify the bean that should be consumed
```
But that's indeed not the point.
Technically it should not work even in that older version, because we create a bean for that implicitly while parsing `@ServiceActivator` in the `BeanPostProcessor`.
I think in the latest Spring Framework version the dependency injection candidate is determined during the bean definition parsing phase, not later as it was before.
But that's my guess: I'm debugging both versions now to determine the difference.
Although I'm not sure there is anything we can do other than documenting that implicit channels are not autowiring candidates.
BTW, what doc do you claim in your topic, please?
username_1: Well, I found the reason why we fail now: https://github.com/spring-projects/spring-integration/pull/2769
We defer endpoint creation (together with its implicit channel) to the phase when context is ready already.
So, this is a behavior change for your use-case, made to support some other use-cases whose failures were otherwise much worse than yours...
I need to think more about this, but I'm afraid we would need to postpone the final decision to the next `5.3` version.
One idea is to still parse messaging annotations for potential channel creation the same way we did before, but really defer endpoint creation as it is right now.
Any other thoughts?
Thanks
username_0: Or just remove the "implicit direct channel creation" feature and force the user to provide a `DirectChannel` bean directly?
username_1: That's correct, but as you see that doc doesn't say that this channel is good for autowiring somewhere else.
I don't think it is good to remove such a feature at all: it is there from day first. So, we should try to pursue its goal.
username_1: @username_0 ,
see my comment in the related issue: https://github.com/spring-projects/spring-integration/issues/3130#issuecomment-572789698
Maybe that `@Lazy` can do the trick for your use-case as well:
```
@Bean
public TcpInboundGateway gateway(TcpConnectionFactoryFactoryBean f, @Qualifier("exampleChannel") @Lazy MessageChannel c) throws Exception {
```
BTW, you don't need to inject the `FactoryBean`. You can simply use its target object instead.
So, you won't need that `throws` or `f.getObject()`. That is also how we can overcome the limitation with the bean method reference.
I mean try code like this:
```
@Bean
public TcpInboundGateway gateway(ConnectionFactory f, @Qualifier("exampleChannel") @Lazy MessageChannel c) {
```
Status: Issue closed
type-challenges/type-challenges | 828687349 | Title: 1042 - IsNever
Question:
username_0: This solution seems too simple, but in my opinion, using the standard `Equal` function is the most correct way to create functions of the form `IsNever`, `IsAny`, `IsVoid`, and so on, since `Equal` [extracts the exact internal function of type equality from TS](https://github.com/microsoft/TypeScript/commit/ca3d0d37a7d58692a6daadb2fe6b5dc338cf63e8#diff-d9ab6589e714c71e657f601cf30ff51dfc607fc98419bf72e04f6b0fa92cc4b8), and does not rely on some peculiarities of types (which may hypothetically change in future versions of TS).
[TS Playground](https://www.typescriptlang.org/play?ssl=27&ssc=34&pln=27&pc=1#code/PQKgUABBCMAMAsAmCBaCBJAzgOQKYDdcAnSVFci0gIwE8IALASyIHsaBDCRxgLwFcA1pwAUAASasO3fkICUEAMQBbXABNGfJYr4A7Rix3aALowA2mUqQXWIART65MJg5ajolAB1O4VOoxE4jGg9cDBwCYgAaCAB3JgBjeggjdgFHLh0PPn8gkIgAAwAVfIA6UnQAM2T6UNzQliqiRxZTQkxklgKdCKJ86KajPiJDfKMiBz6IFiMaohjGTFD8ivZzXFLXCAAxFiIIXAAPdk9vAC5N-MujCyg6iABBCABeMLxCIgAebveAPihgYD7A4heJGNQdCBUWrjXCkO4AIWerx6H10qlwFUY3VUfwBQJBYNUEKhEBWazhwVCAGEkVg3sQvnxTKZcYDDgTwUZOiSyYsK<KEY>).
```ts
type IsNever<T> = Equal<never, T>
```
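For readers landing here, the `Equal` helper referenced above (as shipped in the challenge's test utilities) and the resulting `IsNever` can be sketched like this. Note that the naive `T extends never ? true : false` would distribute over the empty union and evaluate to `never` for `IsNever<never>`, which is exactly the kind of type peculiarity `Equal` sidesteps:

```typescript
// Equal as found in the type-challenges test utilities: X and Y are equal
// iff the compiler considers these two generic function types identical.
type Equal<X, Y> =
  (<T>() => T extends X ? 1 : 2) extends (<T>() => T extends Y ? 1 : 2)
    ? true
    : false;

type IsNever<T> = Equal<never, T>;

// Compile-time checks: each assignment only type-checks when the computed
// type matches the annotated boolean literal.
const t1: IsNever<never> = true;
const t2: IsNever<string> = false;
const t3: IsNever<any> = false;
const t4: IsNever<undefined> = false;

console.log(t1, t2, t3, t4);
```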
graphql-compose/graphql-compose | 1182479613 | Title: No way to pass internal and external types into Scalar with `createScalarTC` method
Question:
username_0: <img width="1553" alt="Screenshot 2022-03-27 at 13 30 49" src="https://user-images.githubusercontent.com/14031838/160277373-f053b665-e771-487a-92c9-06d94cc0d481.png">
As far as I can see createScalarTC is not a generic:
<img width="632" alt="Screenshot 2022-03-27 at 13 33 44" src="https://user-images.githubusercontent.com/14031838/160277405-751c2283-e7f7-4717-a340-d5892bb79581.png">
Temporary solution:
<img width="424" alt="Screenshot 2022-03-27 at 13 34 36" src="https://user-images.githubusercontent.com/14031838/160277430-823511ef-7fe7-4f73-805b-e8730b7e72a4.png">
wdjg/coccareerfairapp-server | 315718776 | Title: new company fields
Question:
username_0: - color : String
- type : String (EA is a Games Publisher)
- overview : String (long description of company)
- attending_on : Date (maybe?)
- website : String
- work_auth_desired : Enum or String Array (idk how you like to handle this)
US Citizen
Permanent Resident (U.S.)
EAD - Employment Authorization
Student (F-1) Visa
Employment (H-1) Visa
J-1 Visa
A
- degree_levels_recruited : Enum or String Array
Bachelors
Masters
Doctorate
Post-Doc
Special
Non-Degree
- position_types : Enum or String Array
Professional Full Time Position
Internship Position
Co-op Position
Masters, PhD, & MBA Internship/Co-op Position
Global Internship Position
Undergraduate Research Position
Professional Part Time Position Only
Non-Professional Part-Time/Seasonal Only
There are others, but I don't know how you would want to handle them (or even if you want to). Just look at the "advanced search":
https://gatech-csm.symplicity.com/events/bd49cae15b0a01b96b712e5b37ec0362/employers |
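A sketch of how the proposed fields could look as a model, purely illustrative: the field names come from the list above, the enum members are a subset of the listed values, and the example record (including the color) is made up; the actual persistence wiring (e.g. a Mongoose schema) is left out.

```typescript
// Illustrative model for the proposed company fields. Enum members are a
// subset of the values listed above; extend as needed.
enum DegreeLevel {
  Bachelors = "Bachelors",
  Masters = "Masters",
  Doctorate = "Doctorate",
}

enum PositionType {
  FullTime = "Professional Full Time Position",
  Internship = "Internship Position",
  Coop = "Co-op Position",
}

interface Company {
  color: string;
  type: string;                      // e.g. "Games Publisher"
  overview: string;                  // long description of the company
  attending_on?: Date;               // "maybe?" per the list above
  website: string;
  work_auth_desired: string[];       // or an enum, as discussed
  degree_levels_recruited: DegreeLevel[];
  position_types: PositionType[];
}

// Made-up example record:
const ea: Company = {
  color: "#0a0a0a",
  type: "Games Publisher",
  overview: "Example overview text.",
  website: "https://example.com",
  work_auth_desired: ["US Citizen"],
  degree_levels_recruited: [DegreeLevel.Bachelors],
  position_types: [PositionType.Internship],
};

console.log(ea.type);
```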
jaquadro/StorageDrawers | 829165942 | Title: drawers not being defined correctly
Question:
username_0: https://github.com/TeamPneumatic/pnc-repressurized/issues/759
Answers:
username_1: Note: this should be as simple as adding:
```java
@Override
public boolean allowsMovement(BlockState state, IBlockReader worldIn, BlockPos pos, PathType type) {
return false;
}
```
to your `BlockDrawers` class. The issue is that drawers appear to have a non-opaque collision shape (at least from what I determined when debugging), but don't override this method. So entities think they can pathfind through them, when they can't. This is particularly noticeable with PneumaticCraft drones, which tend to find this problem very frequently!
(You'll notice that many non-solid vanilla blocks also override this method to return false, for this very reason)
username_2: Fixed in the latest 1.16 release
Status: Issue closed
gravitystorm/openstreetmap-carto | 36782266 | Title: Oneway arrows overlap way label
Question:
username_0: As seen in http://www.openstreetmap.org/note/191801 but there's probably other cases.
It would be nice to always have an arrow rendered right next to the label (with RTL handling for added challenge :p). Sounds like it could be tricky to implement though, as we already have so many arrow placement nitpicks.
Status: Issue closed |
scrapy/scrapy | 204878554 | Title: Scrapy ignores proxy credentials when using "proxy" meta key
Question:
username_0: Code `yield Request(link, meta={'proxy': 'http://user:password@ip:port’})` ignores user:password.
Problem is solved by using header "Proxy-Authorization" with base64, but it is better to implement it inside Scrapy.
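Until that lands, the workaround mentioned above can be sketched as follows. The helper name and the commented `Request` wiring are illustrative, but the header value itself is standard HTTP Basic auth:

```python
import base64

def basic_proxy_auth_header(user: str, password: str) -> bytes:
    """Build the value for a Proxy-Authorization header (HTTP Basic auth)."""
    creds = f"{user}:{password}".encode("utf-8")
    return b"Basic " + base64.b64encode(creds)

print(basic_proxy_auth_header("user", "password"))

# Illustrative use with the request from the report (credentials moved out
# of the proxy URL into the header):
#   yield Request(link,
#                 meta={'proxy': 'http://ip:port'},
#                 headers={'Proxy-Authorization': basic_proxy_auth_header('user', 'password')})
```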
Answers:
username_1: For a proxy URL with credentials, `_parse` returns something like:
```
('https', 'username:[email protected]:8888', '10.20.30.40', 8888, '/')
```
```
proxy = request.meta.get('proxy')
if proxy:
_, _, proxyHost, proxyPort, proxyParams = _parse(proxy)
scheme = _parse(request.url)[0]
proxyHost = to_unicode(proxyHost)
omitConnectTunnel = b'noconnect' in proxyParams
if scheme == b'https' and not omitConnectTunnel:
proxyConf = (proxyHost, proxyPort,
request.headers.get(b'Proxy-Authorization', None))
return self._TunnelingAgent(reactor, proxyConf,
contextFactory=self._contextFactory, connectTimeout=timeout,
bindAddress=bindaddress, pool=self._pool)
```
Proxy credentials in the proxy URL are correctly processed by `HttpProxyMiddleware` when the `http(s)_proxy` env vars are being used,
so it makes sense to me to handle them as well when using the "proxy" key directly.
username_2: Oh right, I was thinking about actually performing the requests, but I see now that checking the headers should be enough. Thanks Paul, I'll add tests similar to those :+1:
Status: Issue closed
ueberdosis/tiptap | 1069986647 | Title: Editor breaking plugins that use mark comparison (e.g. prosemirror-codemark)
Question:
username_0: ### What’s the bug you are facing?
I found an issue while using https://github.com/curvenote/prosemirror-codemark that I believe is due to TipTap.
The `prosemirror-codemark` plugin needs a schema mark to compare with to know when "inside code block". In order to work with TipTap, since the schema is created when new Editor() is called, I use getSchema([extensions ...]) to get it beforehand. However, the schema returned has marks that are never equal to any marks of the new schema if compared with prosemirror's markType.isInSet(anotherMark).
more info in repro: https://codesandbox.io/s/tiptap-schema-breaking-mark-isinset-ifyi6?file=/src/App.vue
### How can we reproduce the bug on our side?
1. Open https://codesandbox.io/s/tiptap-schema-breaking-mark-isinset-ifyi6?file=/src/App.vue
2. Click to the left of `code`
3. Move back and forth with left/right arrows
Expected:
Cursor moves around left edge in and out of the code block

Actual:
Cursor jumps inside the code block, skipping a character (same as if the prosemirror-codemark extension was not used)

### Can you provide a CodeSandbox?
https://codesandbox.io/s/tiptap-schema-breaking-mark-isinset-ifyi6?file=/src/App.vue
### What did you expect to happen?
Cursor moves around left edge in and out of the code block

### Anything to add? (optional)
See this comment in `src/App.vue` to fix bug:
```
// UNCOMMNENT here and COMMENT below line to fix bug
```
Also see the `// ISSUE: ...` comment in App.vue for more info.
### Did you update your dependencies?
- [X] Yes, I’ve updated my dependencies to use the latest version of all packages.
### Are you sponsoring us?
- [ ] Yes, I’m a sponsor. 💖
Answers:
username_1: It's much easier: you don't want to create another instance of the schema. Use `this.editor.schema` instead. Working demo: https://codesandbox.io/s/tiptap-schema-breaking-mark-isinset-forked-eswr9?file=/src/App.vue
Status: Issue closed
username_0: Nice! I didn't realize `this` is bound and schema is constructed in addExtensions()! Thanks for the prompt response! |
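The underlying reason, for anyone hitting this later: ProseMirror mark types are compared by object identity, so a `MarkType` from a second `getSchema(...)` call can never equal the one inside the live editor's schema. A minimal stand-in (not the real prosemirror-model classes) showing that behavior:

```typescript
// Stand-in that mimics prosemirror-model's reference-based identity check.
class FakeMarkType {
  constructor(public readonly name: string) {}
  isInSet(set: FakeMarkType[]): FakeMarkType | undefined {
    // Compared with ===, not by name.
    return set.find((t) => t === this);
  }
}

const editorCode = new FakeMarkType("code");   // as from `this.editor.schema`
const detachedCode = new FakeMarkType("code"); // as from a separate getSchema()

console.log(editorCode.isInSet([editorCode]) !== undefined);   // true
console.log(detachedCode.isInSet([editorCode]) !== undefined); // false
```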
cargomedia/puppet-packages | 53887995 | Title: Fix build
Question:
username_0: According to Jenkins the last successful build was 2016-10-16.
We should fix the problems, and make sure the build stays "green".
@username_1 @username_3 wdyt?
Answers:
username_1: @njm is it already done?
username_0: @username_1 you're asking whether I worked on this already? -> no
username_2: Latest build failures:
```
Failures:
1) Package "puppetdb"
Failure/Error: Unable to find matching line from backtrace
SystemExit:
Puppet command failed: Command output contains error: `Error: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install puppetdb' returned 100: Reading package lists...`
# ./spec/spec_helper.rb:75:in `abort'
# ./spec/spec_helper.rb:75:in `rescue in block (3 levels) in <top (required)>'
# ./spec/spec_helper.rb:66:in `block (3 levels) in <top (required)>'
# ./spec/spec_helper.rb:60:in `each'
# ./spec/spec_helper.rb:60:in `block (2 levels) in <top (required)>'
2) Command "timeout --signal=9 30 bash -c "while ! (grep -q 'PuppetDB version' /var/log/puppetdb/puppetdb.log); do sleep 0.5; done"" should return exit status 0
Failure/Error: it { should return_exit_status 0 }
expected Command "timeout --signal=9 30 bash -c "while ! (grep -q 'PuppetDB version' /var/log/puppetdb/puppetdb.log); do sleep 0.5; done"" to return exit status 0
# ./modules/puppet/spec/db/spec.rb:9:in `block (2 levels) in <top (required)>'
3) Port "8080" should be listening
Failure/Error: it { should be_listening }
expected Port "8080" to be listening
# ./modules/puppet/spec/db/spec.rb:13:in `block (2 levels) in <top (required)>'
4) Port "8081" should be listening
Failure/Error: it { should be_listening }
expected Port "8081" to be listening
# ./modules/puppet/spec/db/spec.rb:17:in `block (2 levels) in <top (required)>'
5) Command "s3export" stderr should match "[options] <command> [arguments]"
Failure/Error: its(:stderr) { should match '[options] <command> [arguments]' }
expected "" to match "[options] <command> [arguments]"
# ./modules/s3export_backup/spec/init/spec.rb:5:in `block (2 levels) in <top (required)>'
6) Package "virtualbox-4.3"
Failure/Error: Unable to find matching line from backtrace
SystemExit:
Puppet command failed: Command output contains error: `Error: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install virtualbox-4.3' returned 100: Reading package lists...`
# ./spec/spec_helper.rb:75:in `abort'
# ./spec/spec_helper.rb:75:in `rescue in block (3 levels) in <top (required)>'
# ./spec/spec_helper.rb:66:in `block (3 levels) in <top (required)>'
# ./spec/spec_helper.rb:60:in `each'
# ./spec/spec_helper.rb:60:in `block (2 levels) in <top (required)>'
Finished in 137 minutes 38 seconds (files took 0.3333 seconds to load)
556 examples, 6 failures
Failed examples:
rspec ./modules/puppet/spec/db/spec.rb:4 # Package "puppetdb"
rspec ./modules/puppet/spec/db/spec.rb:9 # Command "timeout --signal=9 30 bash -c "while ! (grep -q 'PuppetDB version' /var/log/puppetdb/puppetdb.log); do sleep 0.5; done"" should return exit status 0
rspec ./modules/puppet/spec/db/spec.rb:13 # Port "8080" should be listening
rspec ./modules/puppet/spec/db/spec.rb:17 # Port "8081" should be listening
rspec ./modules/s3export_backup/spec/init/spec.rb:5 # Command "s3export" stderr should match "[options] <command> [arguments]"
rspec ./modules/virtualbox/spec/default/spec.rb:4 # Package "virtualbox-4.3"
/usr/bin/ruby1.9.1 -I/var/lib/jenkins/workspace/cargomedia-puppet-packages/.bundle/ruby/1.9.1/gems/rspec-core-3.1.7/lib:/var/lib/jenkins/workspace/cargomedia-puppet-packages/.bundle/ruby/1.9.1/gems/rspec-support-3.1.2/lib /var/lib/jenkins/workspace/cargomedia-puppet-packages/.bundle/ruby/1.9.1/gems/rspec-core-3.1.7/exe/rspec --pattern modules/\*/spec/\*/spec.rb failed
vagrant halt --force
==> wheezy: Forcing shutdown of VM...
Build step 'Execute shell' marked build as failure
Sending e-mails to: <EMAIL>
Finished: FAILURE
```
username_2: Depends on #829
username_2: - `test::puppet::db` in https://github.com/cargomedia/puppet-packages/pull/852
- `test:s3export_backup` fixed
- `test:s3export_backup` seems to not pop anymore (maybe just a bit flaky)
username_2: Today tests brought these errors:
```
Failures:
1) bower Package "bower"
Failure/Error: Unable to find matching line from backtrace
SystemExit:
Command failed: Puppet command failed: `Error: Could not update: Execution of '/usr/bin/npm install --global [email protected]' returned 1: npm http GET https://registry.npmjs.org/bower`
# ./spec/lib/puppet_spec.rb:64:in `abort'
# ./spec/lib/puppet_spec.rb:64:in `rescue in block in apply_manifests'
# ./spec/lib/puppet_spec.rb:54:in `block in apply_manifests'
# ./spec/lib/puppet_spec.rb:46:in `each'
# ./spec/lib/puppet_spec.rb:46:in `apply_manifests'
# ./spec/spec_helper.rb:24:in `block (2 levels) in <top (required)>'
2) bower Command "bower --version" exit_status
Failure/Error: Unable to find matching line from backtrace
SystemExit:
Command failed: Puppet command failed: `Error: Could not update: Execution of '/usr/bin/npm install --global [email protected]' returned 1: npm http GET https://registry.npmjs.org/bower`
# ./spec/lib/puppet_spec.rb:64:in `abort'
# ./spec/lib/puppet_spec.rb:64:in `rescue in block in apply_manifests'
# ./spec/lib/puppet_spec.rb:54:in `block in apply_manifests'
# ./spec/lib/puppet_spec.rb:46:in `each'
# ./spec/lib/puppet_spec.rb:46:in `apply_manifests'
# ./spec/spec_helper.rb:24:in `block (2 levels) in <top (required)>'
3) puppet::master puppetfile File "/etc/puppet/modules/mysql/metadata.json" content
Failure/Error: Unable to find matching line from backtrace
SystemExit:
Command failed: Puppet command failed: `Error: /Stage[main]/Puppet::Master::Puppetfile/Exec[librarian update and rsync]: Failed to call refresh: cd /etc/puppet && librarian-puppet update && /usr/local/bin/sync_hiera.sh returned 1 instead of one of [0]`
# ./spec/lib/puppet_spec.rb:64:in `abort'
# ./spec/lib/puppet_spec.rb:64:in `rescue in block in apply_manifests'
# ./spec/lib/puppet_spec.rb:54:in `block in apply_manifests'
# ./spec/lib/puppet_spec.rb:46:in `each'
# ./spec/lib/puppet_spec.rb:46:in `apply_manifests'
# ./spec/spec_helper.rb:24:in `block (2 levels) in <top (required)>'
4) puppet::master puppetfile File "/foobar"
Failure/Error: Unable to find matching line from backtrace
SystemExit:
Command failed: Puppet command failed: `Error: /Stage[main]/Puppet::Master::Puppetfile/Exec[librarian update and rsync]: Failed to call refresh: cd /etc/puppet && librarian-puppet update && /usr/local/bin/sync_hiera.sh returned 1 instead of one of [0]`
# ./spec/lib/puppet_spec.rb:64:in `abort'
# ./spec/lib/puppet_spec.rb:64:in `rescue in block in apply_manifests'
# ./spec/lib/puppet_spec.rb:54:in `block in apply_manifests'
# ./spec/lib/puppet_spec.rb:46:in `each'
# ./spec/lib/puppet_spec.rb:46:in `apply_manifests'
# ./spec/spec_helper.rb:24:in `block (2 levels) in <top (required)>'
5) puppet::master puppetfile File "/usr/local/bin/sync_hiera.sh"
Failure/Error: Unable to find matching line from backtrace
SystemExit:
Command failed: Puppet command failed: `Error: /Stage[main]/Puppet::Master::Puppetfile/Exec[librarian update and rsync]: Failed to call refresh: cd /etc/puppet && librarian-puppet update && /usr/local/bin/sync_hiera.sh returned 1 instead of one of [0]`
# ./spec/lib/puppet_spec.rb:64:in `abort'
# ./spec/lib/puppet_spec.rb:64:in `rescue in block in apply_manifests'
# ./spec/lib/puppet_spec.rb:54:in `block in apply_manifests'
[Truncated]
# ./spec/lib/puppet_spec.rb:64:in `rescue in block in apply_manifests'
# ./spec/lib/puppet_spec.rb:54:in `block in apply_manifests'
# ./spec/lib/puppet_spec.rb:46:in `each'
# ./spec/lib/puppet_spec.rb:46:in `apply_manifests'
# ./spec/spec_helper.rb:24:in `block (2 levels) in <top (required)>'
7) puppet::master puppetfile Cron
Failure/Error: Unable to find matching line from backtrace
SystemExit:
Command failed: Puppet command failed: `Error: /Stage[main]/Puppet::Master::Puppetfile/Exec[librarian update and rsync]: Failed to call refresh: cd /etc/puppet && librarian-puppet update && /usr/local/bin/sync_hiera.sh returned 1 instead of one of [0]`
# ./spec/lib/puppet_spec.rb:64:in `abort'
# ./spec/lib/puppet_spec.rb:64:in `rescue in block in apply_manifests'
# ./spec/lib/puppet_spec.rb:54:in `block in apply_manifests'
# ./spec/lib/puppet_spec.rb:46:in `each'
# ./spec/lib/puppet_spec.rb:46:in `apply_manifests'
# ./spec/spec_helper.rb:24:in `block (2 levels) in <top (required)>'
```
Not sure if this can ever succeed with such random HTTP failures.
username_3: I also had the feeling that the errors are not consistent, unfortunately, except that they seem to be HTTP problems arising mostly during some sort of package install/update... Not sure if we should give another proxy a try (instead of `polipo`), or have the failed tests repeat at a later stage? Maybe after resetting `.proxy-cache` and/or the whole vagrant box?
username_2: Well, there were tests which always or quite often failed on CI. I think we solved all of them now, so hopefully we can get a green build some day or even often :panda_face:
@username_3 @username_0 wdyt?
username_2: As said, let's keep this issue open for a couple more days and observe builds.
I will post outputs everyday to see what kind of failures - if any we have.
username_0: :+1:
username_2: Doh!
19th Feb:
```
Failures:
1) deb_multimedia Package "deb-multimedia-keyring"
Failure/Error: Unable to find matching line from backtrace
SystemExit:
Command failed: Puppet command failed: `Error: apt-key adv --keyserver pgp.mit.edu --recv-keys 65558117 returned 2 instead of one of [0]`
# ./spec/lib/puppet_spec.rb:64:in `abort'
# ./spec/lib/puppet_spec.rb:64:in `rescue in block in apply_manifests'
# ./spec/lib/puppet_spec.rb:54:in `block in apply_manifests'
# ./spec/lib/puppet_spec.rb:46:in `each'
# ./spec/lib/puppet_spec.rb:46:in `apply_manifests'
# ./spec/spec_helper.rb:24:in `block (2 levels) in <top (required)>'
2) ffmpeg File "/usr/local/bin/ffmpeg"
Failure/Error: Unable to find matching line from backtrace
SystemExit:
Command failed: Puppet command failed: `Error: apt-key adv --keyserver pgp.mit.edu --recv-keys 65558117 returned 2 instead of one of [0]`
# ./spec/lib/puppet_spec.rb:64:in `abort'
# ./spec/lib/puppet_spec.rb:64:in `rescue in block in apply_manifests'
# ./spec/lib/puppet_spec.rb:54:in `block in apply_manifests'
# ./spec/lib/puppet_spec.rb:46:in `each'
# ./spec/lib/puppet_spec.rb:46:in `apply_manifests'
# ./spec/spec_helper.rb:24:in `block (2 levels) in <top (required)>'
3) nfs File "/tmp/source/foo"
Failure/Error: Unable to find matching line from backtrace
SystemExit:
Command failed: Puppet command failed: `Error: /Stage[main]/Main/Node[default]/Nfs::Mount[/tmp/mounted]/Mount::Entry[/tmp/mounted]/Exec[/usr/sbin/mount-check.sh /tmp/mounted]: Failed to call refresh: /usr/sbin/mount-check.sh /tmp/mounted returned 1 instead of one of [0]`
# ./spec/lib/puppet_spec.rb:64:in `abort'
# ./spec/lib/puppet_spec.rb:64:in `rescue in block in apply_manifests'
# ./spec/lib/puppet_spec.rb:54:in `block in apply_manifests'
# ./spec/lib/puppet_spec.rb:46:in `each'
# ./spec/lib/puppet_spec.rb:46:in `apply_manifests'
# ./spec/spec_helper.rb:24:in `block (2 levels) in <top (required)>'
4) nfs Service "nfs-common"
Failure/Error: Unable to find matching line from backtrace
SystemExit:
Command failed: Puppet command failed: `Error: /Stage[main]/Main/Node[default]/Nfs::Mount[/tmp/mounted]/Mount::Entry[/tmp/mounted]/Exec[/usr/sbin/mount-check.sh /tmp/mounted]: Failed to call refresh: /usr/sbin/mount-check.sh /tmp/mounted returned 1 instead of one of [0]`
# ./spec/lib/puppet_spec.rb:64:in `abort'
# ./spec/lib/puppet_spec.rb:64:in `rescue in block in apply_manifests'
# ./spec/lib/puppet_spec.rb:54:in `block in apply_manifests'
# ./spec/lib/puppet_spec.rb:46:in `each'
# ./spec/lib/puppet_spec.rb:46:in `apply_manifests'
# ./spec/spec_helper.rb:24:in `block (2 levels) in <top (required)>'
5) nfs Service "nfs-common"
Failure/Error: Unable to find matching line from backtrace
SystemExit:
Command failed: Puppet command failed: `Error: /Stage[main]/Main/Node[default]/Nfs::Mount[/tmp/mounted]/Mount::Entry[/tmp/mounted]/Exec[/usr/sbin/mount-check.sh /tmp/mounted]: Failed to call refresh: /usr/sbin/mount-check.sh /tmp/mounted returned 1 instead of one of [0]`
# ./spec/lib/puppet_spec.rb:64:in `abort'
# ./spec/lib/puppet_spec.rb:64:in `rescue in block in apply_manifests'
[Truncated]
# ./spec/lib/puppet_spec.rb:54:in `block in apply_manifests'
# ./spec/lib/puppet_spec.rb:46:in `each'
# ./spec/lib/puppet_spec.rb:46:in `apply_manifests'
# ./spec/spec_helper.rb:24:in `block (2 levels) in <top (required)>'
29) puppet::db Port "8081"
Failure/Error: Unable to find matching line from backtrace
SystemExit:
Command failed: Puppet command failed: `Error: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install puppetdb' returned 100: Reading package lists...`
# ./spec/lib/puppet_spec.rb:64:in `abort'
# ./spec/lib/puppet_spec.rb:64:in `rescue in block in apply_manifests'
# ./spec/lib/puppet_spec.rb:54:in `block in apply_manifests'
# ./spec/lib/puppet_spec.rb:46:in `each'
# ./spec/lib/puppet_spec.rb:46:in `apply_manifests'
# ./spec/spec_helper.rb:24:in `block (2 levels) in <top (required)>'
Finished in 140 minutes 24 seconds (files took 0.39767 seconds to load)
572 examples, 29 failures
```
username_2: 20th Feb:
```
Failures:
1) puppet::db Package "puppetdb"
Failure/Error: Unable to find matching line from backtrace
SystemExit:
Command failed: Puppet command failed: `Error: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install puppetdb' returned 100: Reading package lists...`
# ./spec/lib/puppet_spec.rb:64:in `abort'
# ./spec/lib/puppet_spec.rb:64:in `rescue in block in apply_manifests'
# ./spec/lib/puppet_spec.rb:54:in `block in apply_manifests'
# ./spec/lib/puppet_spec.rb:46:in `each'
# ./spec/lib/puppet_spec.rb:46:in `apply_manifests'
# ./spec/spec_helper.rb:24:in `block (2 levels) in <top (required)>'
2) puppet::db Service "puppetdb"
Failure/Error: Unable to find matching line from backtrace
SystemExit:
Command failed: Puppet command failed: `Error: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install puppetdb' returned 100: Reading package lists...`
# ./spec/lib/puppet_spec.rb:64:in `abort'
# ./spec/lib/puppet_spec.rb:64:in `rescue in block in apply_manifests'
# ./spec/lib/puppet_spec.rb:54:in `block in apply_manifests'
# ./spec/lib/puppet_spec.rb:46:in `each'
# ./spec/lib/puppet_spec.rb:46:in `apply_manifests'
# ./spec/spec_helper.rb:24:in `block (2 levels) in <top (required)>'
3) puppet::db Service "puppetdb"
Failure/Error: Unable to find matching line from backtrace
SystemExit:
Command failed: Puppet command failed: `Error: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install puppetdb' returned 100: Reading package lists...`
# ./spec/lib/puppet_spec.rb:64:in `abort'
# ./spec/lib/puppet_spec.rb:64:in `rescue in block in apply_manifests'
# ./spec/lib/puppet_spec.rb:54:in `block in apply_manifests'
# ./spec/lib/puppet_spec.rb:46:in `each'
# ./spec/lib/puppet_spec.rb:46:in `apply_manifests'
# ./spec/spec_helper.rb:24:in `block (2 levels) in <top (required)>'
4) puppet::db Command "timeout --signal=9 30 bash -c "while ! (netstat -altp |grep -q 'java'); do sleep 0.5; done"" exit_status
Failure/Error: Unable to find matching line from backtrace
SystemExit:
Command failed: Puppet command failed: `Error: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install puppetdb' returned 100: Reading package lists...`
# ./spec/lib/puppet_spec.rb:64:in `abort'
# ./spec/lib/puppet_spec.rb:64:in `rescue in block in apply_manifests'
# ./spec/lib/puppet_spec.rb:54:in `block in apply_manifests'
# ./spec/lib/puppet_spec.rb:46:in `each'
# ./spec/lib/puppet_spec.rb:46:in `apply_manifests'
# ./spec/spec_helper.rb:24:in `block (2 levels) in <top (required)>'
5) puppet::db Port "8080"
Failure/Error: Unable to find matching line from backtrace
SystemExit:
Command failed: Puppet command failed: `Error: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install puppetdb' returned 100: Reading package lists...`
# ./spec/lib/puppet_spec.rb:64:in `abort'
# ./spec/lib/puppet_spec.rb:64:in `rescue in block in apply_manifests'
# ./spec/lib/puppet_spec.rb:54:in `block in apply_manifests'
# ./spec/lib/puppet_spec.rb:46:in `each'
# ./spec/lib/puppet_spec.rb:46:in `apply_manifests'
# ./spec/spec_helper.rb:24:in `block (2 levels) in <top (required)>'
6) puppet::db Port "8081"
Failure/Error: Unable to find matching line from backtrace
SystemExit:
Command failed: Puppet command failed: `Error: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install puppetdb' returned 100: Reading package lists...`
# ./spec/lib/puppet_spec.rb:64:in `abort'
# ./spec/lib/puppet_spec.rb:64:in `rescue in block in apply_manifests'
# ./spec/lib/puppet_spec.rb:54:in `block in apply_manifests'
# ./spec/lib/puppet_spec.rb:46:in `each'
# ./spec/lib/puppet_spec.rb:46:in `apply_manifests'
# ./spec/spec_helper.rb:24:in `block (2 levels) in <top (required)>'
Finished in 150 minutes 1 second (files took 0.39971 seconds to load)
581 examples, 6 failures
```
username_2: Depends on #869
username_2: Hopefully fixed by #869.
username_2: Yay worked!
http://ci.cargomedia.ch:8080/job/cargomedia-puppet-packages/203/
Status: Issue closed
username_0: :star2:
username_3: Shall we add this proxy bypass also to other failing specs?
I see virtualbox is failing in a quite similar way, for instance (see http://ci.cargomedia.ch:8080/job/cargomedia-puppet-packages/205/console) |
ifzhang/FairMOT | 959095400 | Title: got math domain error while trainining on MOT17 and MOT15 datasets
Question:
username_0: While I tried to train on the mentioned datasets, I got the following error. Please let me know how to solve it.
```
File "/share/samiha/FairMOT/src/lib/trains/mot.py", line 34, in __init__
    self.emb_scale = math.sqrt(2) * math.log(self.nID - 1)
ValueError: math domain error
```
|
matplotlib/matplotlib | 212466176 | Title: need to backport docathon PRs
Question:
username_0: Rather than backporting each individual docathon PR, we will do one large docathon merge at the end of the week. Let's keep a list here of the docathon PRs that were accepted.
Answers:
username_0: #8219
username_1: #8212
username_2: #8215
username_3: https://github.com/matplotlib/matplotlib/pull/8233
username_4: As mentioned on gitter, I feel uncomfortable backporting documentation pull requests. We've identified a bug in the 2.0.0 documentation reference in a ticket (#8235), and if we backport the pull request that fixes this bug, we lose the information allowing us to identify when the bug was discovered.
username_4: (That's without considering the risk of backporting documentation that uses new features).
username_2: #8211
username_2: Would backporting to `2.0.x` instead be fine?
username_3: #8228
username_2: The docathon has long gone, so I'm happy to port these all back to `2.0.x` - if anyone has any objections then shout, otherwise I will do them in a couple of days!
username_2: Great, these are all backported, thanks to everyone who did the original PRs, and sorry for the delay on my end...
Status: Issue closed
username_1: Thanks @username_2. |
facebook/relay | 299543979 | Title: Should we use .graphql files everywhere?
Question:
username_0: It seems it's a must for client-only schema extensions.
It seems Nuclide provides a nice DX for .graphql files. https://nuclide.io/docs/languages/graphql
Status: Issue closed
Answers:
username_1: Hi @username_0, I'm not sure what your specific question is. What do you mean by using them everywhere?
You can use .graphql files to define client-schema extensions.
Keep in mind that client-schema extensions are currently undocumented. |
jackc/pgx | 215602521 | Title: Params inside JSON ignored?
Question:
username_0: Hello,
I must be doing something very stupid, because it seems query params inside json structs are being ignored.
```go
package main
import (
"github.com/username_1/pgx"
"log"
)
type N struct {
Name string `json:"name"`
}
func main() {
conn, err := pgx.Connect(pgx.ConnConfig{
Host: "localhost",
User: "howe",
Database: "howe",
})
if err != nil {
log.Fatal(err)
}
var res N
err = conn.QueryRow(`select '{"name": $1}'::json`, "howe").Scan(&res)
if err != nil {
e := err.(pgx.PgError)
log.Fatalf(`\n%s %s: %s:\n%s: %s`, e.Severity, e.Code, e.Message, e.Where, e.Detail)
}
log.Print("Done.")
}
```
This results in the following error:
```
2017/03/20 22:18:43 \nERROR 22P02: invalid input syntax for type json:\nJSON data, line 1: {"name": $...: Token "$" is invalid.
```
It looks like the $1 parameter is being ignored.
Any clues?
Thanks,
Howe
Answers:
username_1: Placeholder variables can't be inside of PostgreSQL text literals. Use `json_build_object`.
Try: `err = conn.QueryRow("select json_build_object('name', $1::text)", "howe").Scan(&res)`
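A complementary sketch (not from the thread; the helper name here is illustrative): instead of building the JSON server-side with `json_build_object`, you can marshal it client-side and pass the finished document as a single parameter, which also keeps the placeholder out of any text literal:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Mirrors the N struct from the question.
type N struct {
	Name string `json:"name"`
}

// buildPayload marshals the struct client-side, so the whole JSON document
// can be passed as one query parameter (`select $1::json`) rather than trying
// to interpolate $1 inside a JSON text literal, which PostgreSQL treats as
// opaque text.
func buildPayload(name string) (string, error) {
	b, err := json.Marshal(N{Name: name})
	return string(b), err
}

func main() {
	payload, err := buildPayload("howe")
	if err != nil {
		panic(err)
	}
	fmt.Println(payload)
	// With pgx the call would then look like (not run here):
	//   conn.QueryRow(`select $1::json`, payload).Scan(&res)
}
```

The tradeoff: `json_build_object` keeps construction in SQL, while client-side marshaling keeps the Go struct as the single source of truth.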
username_0: Ah, thanks. That worked.
Perhaps a note on the Prepare() docs would be helpful.
Thanks again for the quick answer.
Status: Issue closed
|
kairosdb/kairos-carbon | 54552380 | Title: Support for Carbon protocols over UDP
Question:
username_0: Hi
As the original Carbon implementation supports receiving metrics over TCP and UDP, could you add support for UDP too? It helps when receiving metrics from hundreds of senders.
Cheers
Answers:
username_1: I'm adding it as part of the next release.
username_0: Cool, thanks!
Status: Issue closed
username_1: Just added, one issue I found while testing is that you must make sure your max message size is big enough for the udp packet or it will chop it off and kairos will throw a bunch of exceptions because it cannot parse a full metric. |
RPi-Distro/repo | 230183722 | Title: valgrind / QEMU crash: SETEND instruction unsupported (disInstr(arm): unhandled instruction: 0xF1010200)
Question:
username_0: When valgrind is used on the Raspberry Pi for any kind of debugging, valgrind crashes as follows:
```
==7222==
disInstr(arm): unhandled instruction: 0xF1010200
cond=15(0xF) 27:20=16(0x10) 4:4=0 3:0=0(0x0)
==7222== valgrind: Unrecognised instruction at address 0x48636f4.
==7222== at 0x48636F4: ??? (in /usr/lib/arm-linux-gnueabihf/libarmmem.so)
==7222== Your program just tried to execute an instruction that Valgrind
==7222== did not recognise. There are two possible reasons for this.
==7222== 1. Your program has a bug and erroneously jumped to a non-code
==7222== location. If you are running Memcheck and you just saw a
==7222== warning about a bad jump, it's probably your program's fault.
==7222== 2. The instruction is legitimate but Valgrind doesn't handle it,
==7222== i.e. it's Valgrind's fault. If you think this is the case or
==7222== you are not sure, please let us know and we'll try to fix it.
==7222== Either way, Valgrind will now raise a SIGILL signal which will
==7222== probably kill your program.
==7222==
==7222== Process terminating with default action of signal 4 (SIGILL)
==7222== Illegal opcode at address 0x48636F4
==7222== at 0x48636F4: ??? (in /usr/lib/arm-linux-gnueabihf/libarmmem.so)
==7222==
==7222== HEAP SUMMARY:
==7222== in use at exit: 35,782 bytes in 546 blocks
==7222== total heap usage: 853 allocs, 307 frees, 55,700 bytes allocated
==7222==
==7222== LEAK SUMMARY:
==7222== definitely lost: 0 bytes in 0 blocks
==7222== indirectly lost: 0 bytes in 0 blocks
==7222== possibly lost: 856 bytes in 23 blocks
==7222== still reachable: 34,626 bytes in 520 blocks
==7222== of which reachable via heuristic:
==7222== newarray : 832 bytes in 16 blocks
==7222== suppressed: 0 bytes in 0 blocks
==7222== Rerun with --leak-check=full to see details of leaked memory
==7222==
==7222== For counts of detected and suppressed errors, rerun with: -v
==7222== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
Illegal instruction
```
This is caused by the SETEND instruction, which is impractical to support on valgrind due to severe performance implications as described here:
https://bugs.kde.org/show_bug.cgi?id=322935
It appears that a sub-optimal implementation of memcmp (and possibly other functions) has been implemented in the RPi that break instrumentation and debugging tools. For obvious reasons this renders the RPi severely limited as a teaching tool - the ability to debug effectively is a vital skill.
It is reported that the same bug makes it impossible to run Raspbian under QEMU.
As requested in https://bugs.kde.org/show_bug.cgi?id=322935#c24, please remove the hack from Raspbian so that Raspbian can run in virtualised and debugging environments.
Answers:
username_1: This can be disabled by editing `/etc/ld.so.preload` or removing `raspi-copies-and-fills` . However, this won't be done on the version of Raspbian that we ship, because the performance improvements are too great.
I understand that having to make that modification before using qemu may be a little fiddly, but far from impossible.
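The edit in question can be sketched like this (a hedged example: the path is the one from the valgrind output above, and this operates on a local copy rather than the real file):

```shell
# Sketch of the workaround: comment out the libarmmem preload line.
# This edits a local copy; on a real Raspberry Pi you would edit
# /etc/ld.so.preload itself (as root) and restore it when done.
printf '%s\n' '/usr/lib/arm-linux-gnueabihf/libarmmem.so' > preload.copy
sed -i 's|^/usr/lib/arm-linux-gnueabihf/libarmmem.so|#&|' preload.copy
cat preload.copy   # -> #/usr/lib/arm-linux-gnueabihf/libarmmem.so
```

Commenting the line out (rather than deleting the file) makes it easy to re-enable the optimised routines afterwards.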
username_0: Can you clarify what modification to /etc/ld.so.preload I should do to disable this?
```
pi@towerofpi6:~ $ cat /etc/ld.so.preload
/usr/lib/arm-linux-gnueabihf/libarmmem.so
```
While I totally understand the rationale behind an asm optimised memcpy, when that code breaks the debugger, all you end up with is a faster route to the scene of the crash.
username_0: Trying to move /etc/ld.so.preload out of the way makes no difference, valgrind still crashes as before.
username_0: Correction: Trying to move /etc/ld.so.preload out of the way on a totally different machine makes no difference.
Moving it out of the way on the correct machine makes valgrind pass this point.
username_0: I have found someone to have factored out the setend instruction and replaced it with rev instructions:
https://github.com/rsaxvc/arm-mem/commit/b836e465c2fd0bb006b428abce99e31607072834
username_1: The above fix has been merged into raspi-copies-and-fills. |
ARM-software/armnn | 491821022 | Title: Armnn less accurate than Tflite for deeplab?
Question:
username_0: I compared armnn inference and tflite inference on a deeplab model. It looks like armnn output is always offset by 1 pixel in both x and y direction.
My model is modified from a TensorFlow hosted model: https://www.tensorflow.org/lite/models/segmentation/overview
I flattened the atrous conv layers and removed BATCH_TO_SPACE and SPACE_TO_BATCH.
Input layer: sub_7
Output layer: ResizeBilinear_2
My armnn inference and tflite inference shared the same resize and visualization functions: no preprocessing except resizing to 257×257, and no postprocessing except resizing the mask back to the input image size.
My model and images are here: https://drive.google.com/open?id=1lf5Ff0al7cXNxotGfHuT6Rs4bEFI9cgg
Here is my result:
input/armnn/tflite:

input/armnn/tflite:

Thanks!
Answers:
username_1: Hi @username_0
sorry for the delay on this but this issue should be resolved by the following Armnn patch:
https://review.mlplatform.org/c/ml/armnn/+/2547
It will also require this ComputeLibrary patch for CpuAcc:
https://review.mlplatform.org/c/ml/ComputeLibrary/+/2538
And this ComputeLibrary patch for GpuAcc:
https://review.mlplatform.org/c/ml/ComputeLibrary/+/2569
The results will not be exactly the same as with tflite but the differences should be marginal. This code will be in the upcoming 20.02 release later this month.
Status: Issue closed
|
desktop/desktop | 945859921 | Title: e Ad hoc On-Demand Distance Vector Routing (AODV) and Dynamic Source Routing (DSR
Question:
username_0: ### Describe the feature or problem you’d like to solve
A clear and concise description of what the feature or problem is. If this is a bug report, please use the bug report template instead.
### Proposed solution
How will it benefit Desktop and its users?
### Additional context
Add any other context like screenshots or mockups are helpful, if applicable.<issue_closed>
Status: Issue closed |
jenkinsci/testrail-plugin | 232576023 | Title: Installing this plugin to Jenkins - does not appear in the list and there is no .hpi for import?
Question:
username_0: I would really like to use this plugin, but the instructions say to install from the Jenkins plugin manager, yet it does not appear in the list of plug-ins. I can't see any public .hpi file which could be imported in the Jenkins plug-in manager either.
How can I install this?
Answers:
username_1: I apologize, I last updated those instructions right before I made my first attempts at getting the plugin uploaded. I'm unable to actually upload the plugin for some reason, despite trying for weeks. Until I can get some assistance from others in the Jenkins community the plugin must be installed manually. You'll also have to build it yourself.
I'm sorry for this inconvenience and I will update the instructions momentarily.
Status: Issue closed
username_1: Instructions in ReadMe have been updated.
username_0: Hi - would it be possible to send me a built .hpi? I don't have a Maven setup for building.
username_1: Sure, I can do that. I'll put it in a dropbox or something and leave a link in this issue.
username_0: Great, thanks!
username_1: Here ya go!
https://www.dropbox.com/s/tr1eaygt1j47xpj/testrail.hpi?dl=0
Happy testing!
username_2: Hello, can someone send me testrail.hpi? |
karmapa/ketaka-lite | 146254881 | Title: Please show the version number in "About KETAKA-Lite" in the navigation bar
Question:
username_0: Please show the version number in the "About KETAKA-Lite" item in the navigation bar. Thanks!

Answers:
username_1: fixed in v0.1.77, the first one is electron version and the other in parentheses is app version.
username_0: Test ok in mac in v0.1.79. Thank you!
Status: Issue closed
|
grace-shopper7/GraceShopper | 329286533 | Title: models
Question:
username_0: For bugs, please include the following:
* What is the expected behavior?
* What is the actual behavior?
* What steps reproduce the behavior?
For features, please specify at least minimal requirements, e.g.:
* "As a user, I want a notification badge showing unread count, so I can easily manage my messages"
* "As a developer, I want linting to work properly with JSX, so I can see when there is a mistake"
* "As an admin, I want a management panel for users, so I can delete spurious accounts"
---
*Issue description here…* |
badges/shields | 892009596 | Title: Add DownloadTracker
Question:
username_0: :clipboard: **Description**
Service: DownloadTracker
Description: DownloadTracker is an expressjs server to track downloads of a file.
Example: 
<!--
A clear and concise description of the new badge.
- Which service is this badge for e.g: GitHub, Travis CI
- What sort of information should this badge show?
Provide an example in plain text e.g: "version | v1.01" or as a static badge
(static badge generator can be found at https://shields.io)
-->
:link: **Data**
EndPoint URL: e.g https://example.com/badge/
Project: e.g ProjectName. Use all for total downloads
Version: e.g 1.0. Do not add it if you want total downloads of a project
Template:
- https://example.com/badge/all
- https://example.com/badge/:project/
- https://example.com/badge/:project/:version
Working Example: https://dl.rocketplugins.space/badge/all
<!--
Where can we get the data from?
- Is there a public API?
- Does the API requires an API key?
- Link to the API documentation.
-->
:microphone: **Motivation**
It may be useful as an easier way to get a DownloadTracker badge.
The project is new; it was created only a few days ago.
<!--
Please explain why this feature should be implemented and how it would be used.
- What is the specific use case?
-->
<!-- Love Shields? Please consider donating $10 to sustain our activities:
👉 https://opencollective.com/shields -->
Answers:
username_1: Hello @username_0 ! 👋🏻
Thanks for sharing this suggestion!
The website (I presume [this one](https://dl.rocketplugins.space)) seems to be non-functional at the minute; I land on an error page stating that the database is not connected. The working example you linked has also returned some sort of method invocation error. Looking at the commits on the repository, it appears that this is a personal project in its very early days.
If you really wanted to integrate native Shields.io badges:
* the service would need to be functional and stable.
* the service would need to have proper online resources, explaining how it works, what its purpose is and how to get in touch with support channels.
* there would need to be an API that is non Shields-specific (see #6368 for more details on what is meant by this).
* said API would need to be documented as per our [guidelines](https://github.com/badges/shields/blob/master/CONTRIBUTING.md#badge-guidelines).
* ideally the service should already have users and momentum.
For the time being, I think that leveraging our endpoint badges is the best course of action (https://shields.io/endpoint), and in a few months we can re-evaluate adding the native Shields.io badges if all aforementioned conditions are met. 😉
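For reference, the endpoint-badge approach works by pointing Shields.io at a URL that returns a small JSON document. A minimal sketch of such a response (the field values are illustrative; https://shields.io/endpoint documents the authoritative schema):

```json
{
  "schemaVersion": 1,
  "label": "downloads",
  "message": "150",
  "color": "blue"
}
```

The badge URL would then be of the form `https://img.shields.io/endpoint?url=<url-to-this-json>`, so DownloadTracker would only need to expose one extra route returning this document.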
Status: Issue closed
|
oxyplot/oxyplot | 628025864 | Title: Windows Forms Tracker doesn't show ampersands
Question:
username_0: ### Steps to reproduce
1. Open the "Examples from the book 'Show me the numbers' > Order Count by Order Size" example
2. Interrogate one of the bars with an `&` in the category name.
Platform: OxyPlot.WindowsForms
.NET version: Any
### Expected behaviour
Ampersands should be displayed in the tracker.
### Actual behaviour
Ampersands are not displayed in the tracker (or, at least, the first one is not).

This should be fixed by setting `UseMnemonic = false` on the tracker label: PR to follow. I can't immediately find any other circumstances where such a change would apply.<issue_closed>
Status: Issue closed |
rust-lang/crates.io | 93734674 | Title: Ugly formatting for the description
Question:
username_0: I tried to write a nice description in [my Cargo.toml](https://github.com/username_0/glium/blob/f1bd7fe11276e290b6d12b3b8b43ef5410796b2b/Cargo.toml#L5-L17):
```toml
description = """
Elegant and safe OpenGL wrapper.
Glium is an intermediate layer between OpenGL and your application. You still need to manually handle
the graphics pipeline, but without having to use OpenGL's old and error-prone API.
Its objectives:
- Be safe to use. Many aspects of OpenGL that can trigger a crash if misused are automatically handled by glium.
- Provide an API that enforces good pratices such as RAII or stateless function calls.
- Be compatible with all OpenGL versions that support shaders, providing unified API when things diverge.
- Avoid all OpenGL errors beforehand.
- Produce optimized OpenGL function calls, and allow the user to easily use modern OpenGL techniques.
"""
```
But on crates.io the white spaces and line breaks are ignored, leading to an ugly mess: https://crates.io/crates/glium
I guess adding a `<pre>` or something wouldn't hurt.
Answers:
username_1: Markdown usually requires a double space for a new paragraph, so if we were to do that, this would still end up looking like this, as a note...
username_0: In reality my Cargo.toml has better spacing. For some reason the extra lines seem to have been eaten by github when I copy-pasted it here.
username_2: Currently we don't allow markdown for various security concerns, but if we could mitigate those then we would just use a normal markdown parser.
username_3: What about a simple Markdown parser that, to start with, only formats *italic*, **bold**, __underlined__, ~~strikethrough text~~ and newlines?
I'm not sure how to go about it, but it's one of the things I wanted to work on eventually for experience
username_4: Links and lists are probably also a good idea
username_5: Is this resolved?
username_6: #869 renders the *readme* markdown, this issue is for supporting markdown in the *description* in Cargo.toml. Unless the readme markdown rendering is sufficient for @username_0's needs?
username_5: It seems like the goal was for the page on crates.io to have a nicely-formatted info section, which is achieved by rendering the readme
Status: Issue closed
username_0: What @username_5 said. |
openebs/openebs | 259778394 | Title: Enhance Maya CLI using Cobra/Kingpin
Question:
username_0: FEATURE REQUEST
**What happened**:
There is a need to enhance the CLI of Maya.
Cobra is a library providing a simple interface to create powerful modern CLI interfaces similar to git & go tools as well as a program to generate applications and command files.
**What you expected to happen**:
An awesome CLI that is easy to use.
Answers:
username_1: I had felt Cobra to be complex, the reason being its functional style of coding.
The solution should simplify the code.
Refer to these libraries as well:
- https://github.com/c-bata/kube-prompt
Status: Issue closed
Status: Issue closed
username_2: mayactl now has completely moved to COBRA. vendor package cleanup will be done as part of : https://github.com/openebs/openebs/issues/1152 |
h6ah4i/android-advancedrecyclerview | 105785935 | Title: java.lang.IllegalStateException: already have a wrapped adapter
Question:
username_0: When I repopulate the RecyclerView with new data, it gives me a java.lang.IllegalStateException: already have a wrapped adapter error. I checked the library code and found that if I have already set an adapter in **`RecyclerViewExpandableItemManager`**, it will throw an **`IllegalArgumentException`**.
Actually, I am trying to show data from a web service which is refreshed every so often, so if I have already populated the data once, it will not show the new data.
Can you suggest any solution?
Answers:
username_1: Hi. That is an intended behavior of this library. If you want to refresh all of the list items, I can suggest two approaches.
### Approach 1
Implement a `setData()` method (or something similar) on the adapter, and then call the [`notifyDataSetChanged()`](https://developer.android.com/intl/ja/reference/android/support/v7/widget/RecyclerView.Adapter.html#notifyDataSetChanged()) method.
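A minimal sketch of Approach 1, with the Android specifics stubbed out (the class and method bodies here are illustrative; in a real adapter you would extend `RecyclerView.Adapter`, which provides `notifyDataSetChanged()` itself):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for a RecyclerView adapter: only the
// data-swapping pattern from Approach 1 is shown here.
class MyAdapter {
    private List<String> items = new ArrayList<>();

    // Replace the backing data in place instead of recreating the adapter.
    public void setData(List<String> newItems) {
        items = new ArrayList<>(newItems);
        notifyDataSetChanged(); // provided by RecyclerView.Adapter in real code
    }

    public int getItemCount() {
        return items.size();
    }

    // Stub; the real method lives on RecyclerView.Adapter.
    private void notifyDataSetChanged() {
    }
}

public class Main {
    public static void main(String[] args) {
        MyAdapter adapter = new MyAdapter();
        adapter.setData(List.of("a", "b", "c"));
        System.out.println(adapter.getItemCount());
    }
}
```

The point of the pattern is that the adapter instance (and therefore the wrapped adapter set on the `RecyclerViewExpandableItemManager`) never changes; only its backing list does.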
### Approach 2
Recreate everything. Dispose adapters and `RecyclerViewExpandableItemManager`, then create a new adapter, new wrapped adapter, and a new `RecyclerViewExpandableItemManager` instance.
### Related issue
- [Issue #77 - Changing the data for the expandable recycler view throws ClassCastException on item click](https://github.com/username_1/android-advancedrecyclerview/issues/77)
username_0: Solved the problem. I am editing the library according to my requirements too.
Thanks/Regards
Status: Issue closed
|
ruby/irb | 536690102 | Title: pasting in multiline irb is slow
Question:
username_0: Tried to paste a 31-line snippet to compare a specific benchmark (anon module's #inspect was incredibly slow in < 2.7.0).
What I found is that 2.7.0's irb is unbearably slow to paste even this small snippet.
This is a comparison between Ruby 2.4.3 with pry 0.11.0 and Ruby 2.7.0-preview3 with irb 1.1.0: https://youtu.be/c9ENYX8VVHA .
I did some other tests:
1. ruby-2.4.3 with irb-0.9.6 and ruby-2.7.0-preview3 with irb-1.1.0 `--legacy`: absolutely instantaneous;
1. ruby-2.4.3 with pry-0.11.0: almost instantaneous (I assume parsing for syntax highlighting takes some time); watch video above;
1. ruby-2.7.0 with irb-1.1.0 and irb-1.2.0 (multiline mode): unbearably slow;
Answers:
username_1: I have the same issue, and an additional observation is that the "paste speed" slows even more as the whole payload is gradually pasted.
```
=> irb --version
irb 1.2.1 (2019-12-24)
=> ruby --version
ruby 2.7.0p0 (2019-12-25 revision 647ee6f091) [x86_64-darwin18]
```
I'm using:
- `Mac OS Mojave 10.14.6 (18G87)`
- the default `Terminal` app
- `zsh` as my shell
- `ruby` managed via `rbenv`
Below is code that I see appear progressively slower on the screen with each line, when pasted in a single paste action. It takes ~12 seconds to completely paste into the console.
```
my_hash = {
'id1' => { a: 'foo', b: 'bar' },
'id2' => { a: 'foo', b: 'bar' },
'id3' => { a: 'foo', b: 'bar' },
'id4' => { a: 'foo', b: 'bar' },
'id5' => { a: 'foo', b: 'bar' },
'id6' => { a: 'foo', b: 'bar' },
'id7' => { a: 'foo', b: 'bar' },
'id8' => { a: 'foo', b: 'bar' },
'id10' => { a: 'foo', b: 'bar' },
'id11' => { a: 'foo', b: 'bar' },
'id12' => { a: 'foo', b: 'bar' },
'id13' => { a: 'foo', b: 'bar' },
'id14' => { a: 'foo', b: 'bar' },
'id15' => { a: 'foo', b: 'bar' },
'id16' => { a: 'foo', b: 'bar' },
'id17' => { a: 'foo', b: 'bar' },
'id18' => { a: 'foo', b: 'bar' },
'id19' => { a: 'foo', b: 'bar' },
'id20' => { a: 'foo', b: 'bar' }
}
```
The same full paste operation is immediate using `ruby 2.6.5p114` and `irb`. Using `pry` (instead of `irb`) does not exhibit this issue. It only appears with `irb` since updating to Ruby `2.7.0`.
username_2: I have the same problem... ruby 2.6 is almost instantaneous, 2.7 very slow, seems like each char is inputted individually.
I'm using Linux + Bash + rvm.
username_3: I've managed to reproduce this as well (iTerm2 on 10.14), except for me it alternates between pasting very slowly one character at a time (this bug) or crashing IRB entirely (#46) (and trying to run the paste commands in bash, which is super dangerous). This currently makes Ruby 2.7 completely useless for me, as I can't easily change the IRB version back to 0.9.6 and can't use any 2.7 consoles.
username_4: I believe the issue is known upstream.
username_5: Coloring and auto-indenting are wonderful, but pasting something is very slow; it pastes character by character. When I want to interpret something long, I switch to `pry`.
username_6: Just posting to reaffirm that this is an issue for me as well, and is preventing the company I work for from upgrading to 2.7.0.
username_7: Experiencing similar issues: sometimes strings don't get pasted completely, and irb even crashes, usually with something like `/usr/share/rvm/rubies/ruby-2.7.0/lib/ruby/2.7.0/reline/ansi.rb:76:in `block in cursor_pos': undefined method `pre_match' for nil:NilClass (NoMethodError)`
Workaround from @username_1 seems to help.
username_6: Is this something that will be fixed in any minor versions of 2.7, or with 3.0? It really breaks a lot of the way I do things with the console (copy pasting json into variables to execute on). Is there a way to turn off this syntax highlighting?
username_8: @username_6 https://github.com/swrobel/dotfiles/commit/020485aac88045d8234a096fec11b51fb4a11f73
username_9: 2.6.6 vs 2.7.1 `irb` just on localhost is dreadfully slow.
https://www.youtube.com/watch?v=gFHGwKzHY-4
Even with single line things like `puts "1"`, if I paste those in blocks with a keyboard shortcut and hold it down for autorepeat (as at the start of the video above, but with the multiline string excluded) then IRB under Ruby 2.7 is visibly much slower and, after I let go of the keyboard shortcut, keeps pasting for ages as the terminal is miles behind the keyboard buffer.
It's 2020 with multi-GHz CPUs and we can't input characters into a terminal near-instantaneously? Something ain't right `;-)`
username_10: @username_1's solution worked for me.
By default `rails c` does not load any `.irbrc` other than the one in the current user's home (`~`), so there are two ways:
1. Create the file in the current user's home (`~`):
   - `cd ~` to go to the home directory
   - run `vim .irbrc` to create/open the file
   - add this line: `IRB.conf[:USE_MULTILINE] = false`
   - press `ESC` to enter command mode, then enter `:wq!` to save and exit
2. Tell `application.rb` to load `.irbrc` from the project's root folder:
   - create a `.irbrc` file in the project root containing `IRB.conf[:USE_MULTILINE] = false`
   - tell `application.rb` to use this file instead, using:
```ruby
# load .irbrc file
def load_console(app = self)
  super
  if File.exists?(project_specific_irbrc = File.join(Rails.root, ".irbrc"))
    puts "Loading project specific .irbrc ..."
    load(project_specific_irbrc)
  end
end
```
username_11: I sometimes have to paste the content of a spreadsheet column into IRB (don't ask why ... please, don't :confounded: ). With about 200 rows it takes way too long with syntax highlighting on.
Thanks @username_10 for the workaround! That helped me out for now.
I think that highlighting and multiline are a great addition to IRB and I hope we can find a way to get it up to speed in the future.
username_12: [Truncated]
username_13: FYI, recent Rails can pass this option like so:
```sh
rails console -- --nomultiline
```
See https://github.com/rails/rails/issues/39909#issuecomment-666412792
username_9: I really am rather bemused by the approach here. Yes, I can completely disable all the new stuff in `irb` because it's cripplingly slow and it's helpful to know how. But in that case, why does the "feature" even exist?
Why is nobody talking about ways to fix the feature's performance? We've had formatting and colouring engines for literally decades that ran many orders of magnitude faster; but it's 2020, computers are almost unimaginably powerful and we don't even need something to be coded that _well_, it just has to be coded _to a decent standard_.
This is not in any way an unreasonable expectation, surely? That a 2019 MBP 16" with the fastest CPU option should be slowed down to the point where you can watch it paint individual lines in a _text only interface_ is surely acceptable performance for, well, nobody?
username_14: I have this issue too, like everyone, even when it's a single line (after removing the newlines from the JSON block).
For whatever reason, the suggestions above didn't work for me. So I wanted to offer another [roundabout] solution:
I simply created a json file and posted it to a cloud storage app (I used S3).
Opened the file with `json = URI.parse('<url-to-the-file>').open { |f| f.read }`.
And parsed: `json = MultiJson.load(json)` (use whatever json tool you're used to).
It's slower, but much faster than pasting 😱
Until this gets figured out, this will be my solution.
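A lighter-weight variant of the same idea (the temp file here is only a stand-in for the real export) is to read the payload from a local file instead of a cloud URL, which bypasses the line editor entirely:

```ruby
require 'json'
require 'tempfile'

# Stand-in for the exported spreadsheet data; in practice the file would
# already exist on disk.
file = Tempfile.new(['payload', '.json'])
file.write('[{"id": 1, "some_string": "a"}, {"id": 2, "some_string": "b"}]')
file.close

# Parsing from a file never touches the terminal, so paste speed is irrelevant.
data = JSON.parse(File.read(file.path))
puts data.length  # => 2
```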
username_9: @username_8 that looks very promising! Thanks 👍
username_8: I've released the new version of reline gem including ruby/reline#184 and ruby/reline#186 for speeding up. Please install with `gem install irb reline` and check it.
username_9: @username_8 Yeah, that improves things a lot. Great stuff. Thank you 😄
username_13: …
Duration: 1.208119 seconds
```
username_9: @username_13 Can confirm that this is a fairly pathological case, yeah; pretty nasty on my machine - 47 seconds with improved IRB, 1.2 seconds also with no multiline.
username_15: Even pasting a single-line array with 10 hashes is visibly slow.
```
require 'time'
start = Time.now;nil
[{ id: 1, some_string: 'some_value' },{ id: 2, some_string: 'some_value' },{ id: 3, some_string: 'some_value' },{ id: 4, some_string: 'some_value' },{ id: 5, some_string: 'some_value' },{ id: 6, some_string: 'some_value' },{ id: 7, some_string: 'some_value' },{ id: 8, some_string: 'some_value' },{ id: 9, some_string: 'some_value' },{ id: 10, some_string: 'some_value'}]
puts "Duration: #{Time.now - start} seconds"
```
ruby 2.7.2 with reline 0.1.5 takes around 3.5s
ruby 2.7.2 with reline 0.1.8 takes around 1.2s
ruby 2.7.2 with `--nomultiline` takes around 0.3s
So it improved a lot, but still feels not instant for this case.
And I can confirm what @username_13 wrote: pasting big hashes, like those from a JSON response, is still really slow or locks up irb. In a really bad case with 200KB of JSON, `irb --nomultiline` needs 20s to paste it, while irb with reline 0.1.8 crashes after 2min with
```
Traceback (most recent call last):
30: from /.rvm/gems/ruby-2.7.2/bin/ruby_executable_hooks:15:in `<main>'
29: from /.rvm/gems/ruby-2.7.2/bin/ruby_executable_hooks:15:in `eval'
28: from /.rvm/gems/ruby-2.7.2/bin/irb:23:in `<main>'
27: from /.rvm/gems/ruby-2.7.2/bin/irb:23:in `load'
26: from /.rvm/gems/ruby-2.7.2/gems/irb-1.2.7/exe/irb:11:in `<top (required)>'
25: from /.rvm/gems/ruby-2.7.2/gems/irb-1.2.7/lib/irb.rb:400:in `start'
24: from /.rvm/gems/ruby-2.7.2/gems/irb-1.2.7/lib/irb.rb:471:in `run'
23: from /.rvm/gems/ruby-2.7.2/gems/irb-1.2.7/lib/irb.rb:471:in `catch'
22: from /.rvm/gems/ruby-2.7.2/gems/irb-1.2.7/lib/irb.rb:472:in `block in run'
21: from /.rvm/gems/ruby-2.7.2/gems/irb-1.2.7/lib/irb.rb:537:in `eval_input'
20: from /.rvm/gems/ruby-2.7.2/gems/irb-1.2.7/lib/irb/ruby-lex.rb:150:in `each_top_level_statement'
19: from /.rvm/gems/ruby-2.7.2/gems/irb-1.2.7/lib/irb/ruby-lex.rb:150:in `catch'
18: from /.rvm/gems/ruby-2.7.2/gems/irb-1.2.7/lib/irb/ruby-lex.rb:151:in `block in each_top_level_statement'
17: from /.rvm/gems/ruby-2.7.2/gems/irb-1.2.7/lib/irb/ruby-lex.rb:151:in `loop'
16: from /.rvm/gems/ruby-2.7.2/gems/irb-1.2.7/lib/irb/ruby-lex.rb:154:in `block (2 levels) in each_top_level_statement'
15: from /.rvm/gems/ruby-2.7.2/gems/irb-1.2.7/lib/irb/ruby-lex.rb:182:in `lex'
14: from /.rvm/gems/ruby-2.7.2/gems/irb-1.2.7/lib/irb.rb:518:in `block in eval_input'
13: from /.rvm/gems/ruby-2.7.2/gems/irb-1.2.7/lib/irb.rb:704:in `signal_status'
12: from /.rvm/gems/ruby-2.7.2/gems/irb-1.2.7/lib/irb.rb:519:in `block (2 levels) in eval_input'
11: from /.rvm/gems/ruby-2.7.2/gems/irb-1.2.7/lib/irb/input-method.rb:294:in `gets'
10: from /.rvm/rubies/ruby-2.7.2/lib/ruby/2.7.0/forwardable.rb:235:in `readmultiline'
9: from /.rvm/rubies/ruby-2.7.2/lib/ruby/2.7.0/forwardable.rb:235:in `readmultiline'
8: from /.rvm/gems/ruby-2.7.2/gems/reline-0.1.8/lib/reline.rb:175:in `readmultiline'
7: from /.rvm/gems/ruby-2.7.2/gems/reline-0.1.8/lib/reline.rb:209:in `inner_readline'
6: from /.rvm/gems/ruby-2.7.2/gems/reline-0.1.8/lib/reline/line_editor.rb:115:in `reset'
5: from /.rvm/gems/ruby-2.7.2/gems/reline-0.1.8/lib/reline/ansi.rb:136:in `cursor_pos'
4: from /.rvm/gems/ruby-2.7.2/gems/reline-0.1.8/lib/reline/ansi.rb:136:in `raw'
3: from /.rvm/gems/ruby-2.7.2/gems/reline-0.1.8/lib/reline/ansi.rb:146:in `block in cursor_pos'
2: from /.rvm/gems/ruby-2.7.2/gems/reline-0.1.8/lib/reline/ansi.rb:146:in `reverse_each'
1: from /.rvm/gems/ruby-2.7.2/gems/reline-0.1.8/lib/reline/ansi.rb:147:in `block (2 levels) in cursor_pos'
/.rvm/gems/ruby-2.7.2/gems/reline-0.1.8/lib/reline/ansi.rb:147:in `ungetc': ungetbyte failed (IOError)
```
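For what it's worth, the timings above can also be taken with the stdlib `Benchmark` module instead of diffing `Time.now` by hand (a sketch of the timing harness only; in irb the interesting cost is the paste itself):

```ruby
require 'benchmark'

# Build the same ten-hash array as above. Benchmark.realtime returns the
# wall-clock seconds the block took, so there is no Time.now bookkeeping.
elapsed = Benchmark.realtime do
  Array.new(10) { |i| { id: i + 1, some_string: 'some_value' } }
end
puts format('Duration: %.6f seconds', elapsed)
```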
username_16: `IRB.conf[:USE_MULTILINE] = false` completely kills multiple newlines in a single JSON block. But maybe this is somewhat related to what "use multiline" meant in the first place?
Without `IRB.conf[:USE_MULTILINE] = false` when pasting:
```
2.7.0 :012"> Test
2.7.0 :013">
2.7.0 :014">
2.7.0 :015"> Test
2.7.0 :016 > EOF
=> "Test\n\n\nTest\n"
```
And with `IRB.conf[:USE_MULTILINE] = false` when pasting:
```
2.7.0 :001 > t = <<~EOF
2.7.0 :002"> Test
2.7.0 :003">
2.7.0 :004">
2.7.0 :005"> Test
2.7.0 :006"> EOF
=> "Test\nTest\n"
```
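As a point of comparison (my own check, not from the thread), evaluating the same heredoc in a plain Ruby script, with no line editor involved, keeps the interior blank lines, which suggests the multiline=false result above comes from how pasted blank lines are fed to irb rather than from heredoc semantics:

```ruby
# A squiggly heredoc preserves interior blank lines; only the common leading
# whitespace of the non-blank lines is stripped.
t = <<~EOF
  Test


  Test
EOF
puts t.inspect  # => "Test\n\n\nTest\n"
```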
username_17: This slow pasting issue is fixed in Ruby versions >= 3.0
username_8: The IRB bundled with Ruby 3.0 had many bugs, so I fixed them and released [irb gem 1.3.1](https://rubygems.org/gems/irb) and [reline gem 0.2.1](https://rubygems.org/gems/reline). Please try them by `gem install irb reline`.
username_13: Looking much better with the latest updates!
Retesting [my previous example](https://github.com/ruby/irb/issues/43#issuecomment-713759219) I get:
- Ruby 3.0.0 + irb 1.3.1 + reline 0.2.1: **3.00s**
- Ruby 2.7.2 + irb 1.3.1 + reline 0.2.1: **3.90s**
- Ruby 3.0.0 + `--nomultiline`: **1.48s**
- Ruby 2.6.6: **0.81s**
So, still around 3.5x slower than Ruby 2.6.6 for me, but that's so much better than the previous 33x slower!
username_18: We can't update to Ruby 3 quite yet but just adding `gem "reline", "0.2.2"` to our `Gemfile` helped tremendously.
Thanks for the efforts with this!
username_0: looks like this is fixed, closing it; thanks @username_8 and everyone who pitched in
Status: Issue closed
|
tinymce/tinymce-angular | 404707781 | Title: How to set the Height property?
Question:
username_0: Hi,
Is there a way I can set the `Height` for the editor?
The TinyMCE editor accepts a `height` property in the init() function.
Thanks
Status: Issue closed
Answers:
username_1: You can use the `[init]` input and send in a configuration object with height set.
username_0: @username_1 This is not working! Have you tried it on your side?
username_1: This is working for me.
```ts
<editor [init]="{height: 500}"></editor>
```
username_0: I tried that and it is not working!
username_0: </editor>
`,
})
export class TinyMceTypeComponent extends FieldType implements OnInit {
public config = {};
ngOnInit() {
super.ngOnInit();
this.config = { ...defaultConfig };
}
}
```
username_1: I suspect the ngOnInit is too late, so the editor has already initialised when you set the config? The editor only reads the config when it initialises; it doesn't change reactively to changes in the config.
username_0: How can I fix that :-) ?
username_0: BTW, I debugged the `tinymce-angular` component and the value of `finalInit` is:
```
...
height: 300
setup: ƒ (editor)
...
```
username_0: Also, I noticed that the `tinymce-angular` component calls `initialise()` inside `AfterViewInit` so it should catch the values set in `OnInit`, no?
username_1: Try putting the height setting in the config property directly and not after init and see if it works.
username_0: Also, it doesn't work! I tried that. This is what I have in the sample code above.
username_1: No in the code sample above you set the config values in ngOnInit?
username_0: Correct! Something like this would work?
```
export class TinyMceTypeComponent extends FieldType implements OnInit {
public config = { ...defaultConfig };
ngOnInit() {
super.ngOnInit();
}
}
```
Also, the height is still not taking effect.
username_1: Yes that should work, and does work in my testing. Maybe there's some css in your page changing the editor height?
username_0: I don't see any css rules overriding the height!
username_1: Can you create a simple app where I can reproduce this problem?
username_0: Great! Thanks a lot for your time.
username_2: Where are you setting this inline property?
username_3: ```
You can find the docs here: https://www.tiny.cloud/docs/integrations/angular/#exampleinline
username_2: I had looked at the documentation already. Our developer set it up differently. We solved it by extending ViewOnInit and setting the config then.
cypress-io/cypress | 550296390 | Title: Stubbed Fetch requests not detected by Cypress, despite being sent as XHR
Question:
username_0: ### Current behavior:
The fetch workaround posted in #95 and in [Stubbing window-fetch](https://github.com/cypress-io/cypress-example-recipes/tree/master/examples/stubbing-spying__window-fetch#readme) does not work when loading the polyfill from Cypress. My application does not have any polyfills, so I opted for doing it like in [polyfill-fetch-from-tests-spec.js](https://github.com/cypress-io/cypress-example-recipes/blob/master/examples/stubbing-spying__window-fetch/cypress/integration/polyfill-fetch-from-tests-spec.js).
### Test code to reproduce
spec.js:
```
describe.only('Blah', () => {
  before(() => {
    doStuff();
    cy.visit('http://root:pass@localhost:8080/foobar/index.html');
  });

  it('Bleh', () => {
    cy.server();
    cy.route({
      url: 'test.cgi',
      method: 'POST'
    }).as('test');
    cy.get('[data-cy=send]').click();
    cy.wait('@test');
  });
});
```
support/index.js:
```
Cypress.on('window:before:load', win => {
  var unfetch = require('unfetch');
  delete win.fetch;
  win.fetch = (url, options) =>
    // Since we are using relative url's in the application, we need to parse these correctly
    unfetch(
      `http://localhost:8080${url.charAt(0) === '/' ? url : `/foobar/${url}`}`,
      options
    );
});
```
However Cypress is not picking these up. The requests *are* being sent via XHR though, as seen in Chrome:

I also added a control in my React app just for troubleshooting:
```
//Fetch
fetch('test', {method: 'POST'});
//XHR
var http = new XMLHttpRequest();
http.open('POST', 'test', true);
http.send();
```
Both work, but only the latter is being picked up by Cypress. Investigating the requests in the network tab, the *only* difference I can see is that the `Referer` header is the "original" in the XHR variant, while the polyfilled XHR one has that header set to `http://localhost:8080/__cypress/iframes/integration/guide/spec.js`.
Could this be a bug?
### Versions
Cypress 3.70, Chrome 73, Debian Jessie.
Status: Issue closed
Answers:
username_0: Found the solution here: https://github.com/cypress-io/cypress/issues/6167
username_1: Hey @username_0 that link links back to this issue
username_0: @username_1 oops. Updated the link |
tristen/hoverintent | 841524555 | Title: Clicking and clicking-and-holding should be interpreted as no intention to hover
Question:
username_0: Hello,
When a user clicks or clicks-and-holds on a **hoverintent** element, that should be interpreted as no intention to hover. In the vernacular of your [fantastic] library, the interval should be reset upon any type of clicking.
I would argue that any type of clicking signifies that the user does _not_ intend to hover. Do you agree?
The problem I'm encountering is that, on an element where `click` is trapped, sometimes an overlay I created appears when it shouldn't. What's happening is that the user is clicking on the element (or clicking-and-holding) after a hover is initiated but before the interval has elapsed (and the "in" function is called). |
micronaut-projects/micronaut-redis | 992775534 | Title: Support bulk get, put, and invalidate
Question:
username_0: ### Feature description
I would like to take advantage of bulk operations in Redis (`mset`, `mget`, and `del` for multiple keys) using RedisCache.
Answers:
username_0: Has this been considered before? I have proof of concept code I can push in a branch if that would help. |
lunarway/shuttle | 366316249 | Title: Support for specific versions of plan
Question:
username_0: We need to support using a specific version of a plan.
It could be a specific git commit, git tag or similar.
Answers:
username_1: Yeah, it would be awesome to be able to lock to a specific tag or commit with e.g. a `shuttle.lock` file containing the commit. That way we would have the latest when you specify `https://github.com/lunarway/shuttle-example-go-plan.git` as your plan, but it would require a `shuttle upgrade` to get the latest version of the plan.
username_1: It would be nice to point to a branch of a plan too
username_2: I first iteration of this could be the branch support instead of specific versioning, as this solves most of our problems when changing plans.
We find the automatic "latest" concept of plans powerfull to speed change rollout, but the problem with this approach is clear when testing out new things in the plans and breaking things for every user of the plan.
If we support branches, we can point a project over to a new feature branch of the plan and test out stuff and when ready to deploy to every one, merge it into `master`.
username_2: Is implemented in #35
Status: Issue closed
|
InventivetalentDev/GlowAPI | 1090982152 | Title: GlowAPI won't start with another plugin also in the server
Question:
username_0: ## What steps will reproduce the problem?
1. Having GlowAPI and the plugin I wrote in the plugins folder at the same time (I also have PacketListenerAPI in the plugins folder)
2. Starting the server
3. The error happens; an error also happens when I run the command in my plugin without GlowAPI in the plugins folder
## What were you expecting to happen? What happened instead?
I was expecting GlowAPI to start normally but it showed this error instead.
## What version of the plugin are you using? *Type /version <Plugin Name>*
1.5.2-SNAPSHOT
## What Spigot version are you using? *Type /version*
This server is running Paper version git-Paper-110 (MC: 1.18.1) (Implementing API version 1.18.1-R0.1-SNAPSHOT) (Git: 6852c65)
## What plugins are you using? *Type /plugins*
GlowAPI, PacketListenerApi, TeamGlows (Mine)
## Do you have an error log? Use [pastebin.com](http://pastebin.com). *If you're not sure, upload your whole server log*
https://pastebin.com/GjvTySrC
## Did your client crash? *Upload errors in .minecraft/logs/latest.log as well*
Nope
## Additional information? *(Are you using Bungeecord? Did it work in previous versions? etc.)*
I'm trying to make a plugin and am new, so I could be making many errors. That error doesn't happen if I don't have my plugin in the server, and when I package the plugin it shows this warning: https://pastebin.com/ArYn93ED