spring-projects/spring-data-mongodb | 925011290 | Title: Sorting with ignorecase is rejected despite the documentation
Question:
username_0: In the current documentation https://docs.spring.io/spring-data/data-mongodb/docs/current/reference/html/#core.web.basic I see the following table:

It states that it is possible to change case sensitivity for sorting.
However, when trying to use the "ignorecase" keyword I get the following error:
```
{
"message": "Given sort contained an Order for patient.lastName with ignore case! MongoDB does not support sorting ignoring case currently!"
}
```
The message comes from the Query class: https://github.com/spring-projects/spring-data-mongodb/blob/73a0f0493358dae7040ff3613524ca1450e2a585/spring-data-mongodb/src/main/java/org/springframework/data/mongodb/core/query/Query.java#L208
Either things got changed and there is now a way to sort with ignorecase (I guess a collation with strength 1 or 2 can help), or the documentation was copied from somewhere else without consideration of MongoDB specifics.
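For reference, a minimal sketch of that collation idea against MongoDB directly, shown here with the Node.js driver (the connection string, database, and collection names are assumptions, and this is not the Spring Data API the issue is about):
```typescript
import { MongoClient } from "mongodb";

// Collation strength 2 ("secondary") compares base characters and accents
// but ignores case, which makes the sort effectively case-insensitive.
async function patientsSortedByLastName(uri: string) {
  const client = await MongoClient.connect(uri);
  try {
    return await client
      .db("test")                     // assumed database name
      .collection("patients")         // assumed collection name
      .find({})
      .collation({ locale: "en", strength: 2 })
      .sort({ "patient.lastName": 1 })
      .toArray();
  } finally {
    await client.close();
  }
}
```
Strength 1 ("primary") would additionally ignore accents; either level sidesteps the `ignoreCase` rejection quoted above.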
Please fix the documentation or fix the Query class. |
ucb-bar/hammer | 661127303 | Title: vcs sim flow doesn't include -e configs in dict
Question:
username_0: In a clean environment, if I try to run simulation with this make command:
`$(HAMMER_EXEC) sim -e $(ENV_YML) $(foreach x,$(INPUT_CONFS) , -p $(x)) -p sim_config.yml -p $(src_dir)/rtl_sim_config.lut.yml --obj_dir $(OBJ_DIR)`
VCS runs (!), and then the flow quits with this trace:
```
Traceback (most recent call last):
File "./inst-vlsi.py", line 46, in <module>
InstDriver().main()
File "/tools/projects/aryap/hammer-test/hammer/src/hammer-vlsi/hammer_vlsi/cli_driver.py", line 1264, in main
sys.exit(self.run_main_parsed(vars(parser.parse_args(args))))
File "/tools/projects/aryap/hammer-test/hammer/src/hammer-vlsi/hammer_vlsi/cli_driver.py", line 1171, in run_main_parsed
output_config = action_func(driver, errors.append) # type: Optional[dict]
File "/tools/projects/aryap/hammer-test/hammer/src/hammer-vlsi/hammer_vlsi/cli_driver.py", line 528, in action
self.get_full_config(driver, output))
File "/tools/projects/aryap/hammer-test/hammer/src/hammer-vlsi/hammer_vlsi/cli_driver.py", line 339, in get_full_config
output_full = deepdict(driver.project_config)
File "/tools/projects/aryap/hammer-test/hammer/src/hammer-vlsi/hammer_vlsi/driver.py", line 125, in project_config
return hammer_config.combine_configs(self.project_configs)
File "/tools/projects/aryap/hammer-test/hammer/src/hammer_config/config_src.py", line 933, in combine_configs
final_dict = reduce(combine_meta, settings_ordered, expanded_config) # type: dict
File "/tools/projects/aryap/hammer-test/hammer/src/hammer_config/config_src.py", line 930, in combine_meta
meta_setting + "_meta": lazy_metas[meta_setting + "_meta"]
File "/tools/projects/aryap/hammer-test/hammer/src/hammer_config/config_src.py", line 648, in update_and_expand_meta
MetaDirectiveParams(meta_path=meta_dict.get(_CONFIG_PATH_KEY, "unspecified")))
File "/tools/projects/aryap/hammer-test/hammer/src/hammer_config/config_src.py", line 244, in subst_action
config_dict[key] = perform_subst(value)
File "/tools/projects/aryap/hammer-test/hammer/src/hammer_config/config_src.py", line 241, in perform_subst
newval = subst_str(value, lambda key: config_dict[key])
File "/tools/projects/aryap/hammer-test/hammer/src/hammer_config/config_src.py", line 224, in subst_str
return re.sub(__VARIABLE_EXPANSION_REGEX, lambda x: replacement_func(x.group(1)), input_str)
File "/usr/lib64/python3.6/re.py", line 191, in sub
return _compile(pattern, flags).sub(repl, string, count)
File "/tools/projects/aryap/hammer-test/hammer/src/hammer_config/config_src.py", line 224, in <lambda>
return re.sub(__VARIABLE_EXPANSION_REGEX, lambda x: replacement_func(x.group(1)), input_str)
File "/tools/projects/aryap/hammer-test/hammer/src/hammer_config/config_src.py", line 241, in <lambda>
newval = subst_str(value, lambda key: config_dict[key])
KeyError: 'synopsys.vcs_home'
```
I printed the entire `config_dict` at the source of the error and the key is indeed missing, but it _is_ defined in the $(ENV_YML) file and that must work since VCS itself runs first.
Changing the inclusion of $(ENV_YML) from the -e switch to the -p switch makes the problem go away.
It seems that for this code path, the two config dicts are not merged, but they should be?
Answers:
username_1: Maybe this has to do with the fact that VCS is invoked twice (once to generate the executable and once to run it).
This sounds like a real bug that will need to be fixed. |
Azure/azure-cli | 946130480 | Title: az acr build TypeError: 'NoneType' object is not callable
Question:
username_0: ### **This is autogenerated. Please review and update as needed.**
## Describe the bug
**Command Name**
`az acr build`
**Errors:**
```
The command failed with an unexpected error. Here is the traceback:
'NoneType' object is not callable
Traceback (most recent call last):
File "/opt/az/lib/python3.6/site-packages/knack/cli.py", line 231, in invoke
cmd_result = self.invocation.execute(args)
File "/opt/az/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py", line 657, in execute
raise ex
File "/opt/az/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py", line 720, in _run_jobs_serially
results.append(self._run_job(expanded_arg, cmd_copy))
File "/opt/az/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py", line 691, in _run_job
result = cmd_copy(params)
File "/opt/az/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py", line 328, in __call__
return self.handler(*args, **kwargs)
File "/opt/az/lib/python3.6/site-packages/azure/cli/core/commands/command_operation.py", line 121, in handler
return op(**command_args)
File "/opt/az/lib/python3.6/site-packages/azure/cli/command_modules/acr/build.py", line 109, in acr_build
variant=platform_variant
TypeError: 'NoneType' object is not callable
```
## To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
- _Put any pre-requisite steps here..._
- `az acr build --registry {} --image {} {}`
## Expected Behavior
## Environment Summary
```
Linux-5.4.0-1051-azure-x86_64-with-debian-10.2 (Cloud Shell)
Python 3.6.10
Installer: DEB
azure-cli 2.26.0 *
Extensions:
ai-examples 0.2.5
ssh 0.1.5
```
## Additional Context
Answers:
username_1: route to service team
username_2: The issue was fixed in azure cli [2.26.1](https://docs.microsoft.com/en-us/cli/azure/release-notes-azure-cli?tabs=azure-cli#july-14-2021). Could not upgrade the cli from 2.26.0 to 2.26.1 via `az upgrade`, but `pip install azure-cli` worked without any issue.
username_0: Nice, but it does not work for me ((
see command log below:
Requesting a Cloud Shell.Succeeded.
Connecting terminal...
Welcome to Azure Cloud Shell
Type "az" to use Azure CLI
Type "help" to learn about Cloud Shell
ivan@Azure:~$ az version
{
"azure-cli": "2.26.0",
"azure-cli-core": "2.26.0",
"azure-cli-telemetry": "1.0.6",
"extensions": {
"ai-examples": "0.2.5",
"ssh": "0.1.5"
}
}
ivan@Azure:~$ pip install azure-cli
Defaulting to user installation because normal site-packages is not writeable
Collecting azure-cli
Downloading azure_cli-2.26.1-py3-none-any.whl (2.2 MB)
|████████████████████████████████| 2.2 MB 24.0 MB/s
Collecting azure-mgmt-databoxedge~=0.2.0
Downloading azure_mgmt_databoxedge-0.2.0-py2.py3-none-any.whl (330 kB)
|████████████████████████████████| 330 kB 59.0 MB/s
Collecting azure-functions-devops-build~=0.0.22
Downloading azure_functions_devops_build-0.0.22-py3-none-any.whl (47 kB)
|████████████████████████████████| 47 kB 387 kB/s
Collecting azure-mgmt-compute~=21.0.0
Downloading azure_mgmt_compute-21.0.0-py2.py3-none-any.whl (3.9 MB)
|████████████████████████████████| 3.9 MB 64.3 MB/s
Collecting azure-mgmt-rdbms~=8.1.0b4
Downloading azure_mgmt_rdbms-8.1.0-py2.py3-none-any.whl (636 kB)
|████████████████████████████████| 636 kB 53.6 MB/s
Collecting azure-mgmt-relay~=0.1.0
Downloading azure_mgmt_relay-0.1.0-py2.py3-none-any.whl (36 kB)
Collecting azure-mgmt-security~=0.6.0
Downloading azure_mgmt_security-0.6.0-py2.py3-none-any.whl (229 kB)
|████████████████████████████████| 229 kB 60.9 MB/s
Collecting azure-mgmt-iotcentral~=4.1.0
Downloading azure_mgmt_iotcentral-4.1.0-py2.py3-none-any.whl (18 kB)
Collecting azure-mgmt-datamigration~=4.1.0
Downloading azure_mgmt_datamigration-4.1.0-py2.py3-none-any.whl (132 kB)
|████████████████████████████████| 132 kB 60.1 MB/s
Collecting azure-synapse-artifacts~=0.6.0
Downloading azure_synapse_artifacts-0.6.0-py2.py3-none-any.whl (471 kB)
|████████████████████████████████| 471 kB 45.7 MB/s
Collecting azure-mgmt-sqlvirtualmachine~=0.5.0
Downloading azure_mgmt_sqlvirtualmachine-0.5.0-py2.py3-none-any.whl (34 kB)
Collecting azure-mgmt-imagebuilder~=0.4.0
Downloading azure_mgmt_imagebuilder-0.4.0-py2.py3-none-any.whl (29 kB)
Collecting xmltodict~=0.12
Downloading xmltodict-0.12.0-py2.py3-none-any.whl (9.2 kB)
Collecting azure-mgmt-network~=19.0.0
Downloading azure_mgmt_network-19.0.0-py2.py3-none-any.whl (20.9 MB)
|████████████████████████████████| 20.9 MB 42.0 MB/s
[Truncated]
Requirement already satisfied: pyparsing>=2.0.2 in ./.local/lib/python3.7/site-packages (from packaging~=20.9->azure-cli) (2.4.7)
Requirement already satisfied: bcrypt>=3.1.3 in ./.local/lib/python3.7/site-packages (from paramiko<3.0.0,>=2.0.8->azure-cli-core==2.26.1->azure-cli) (3.2.0)
Requirement already satisfied: deprecated in ./.local/lib/python3.7/site-packages (from PyGithub~=1.38->azure-cli) (1.2.12)
Requirement already satisfied: chardet<5,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests~=2.25.1->azure-cli-core==2.26.1->azure-cli) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests~=2.25.1->azure-cli-core==2.26.1->azure-cli) (2.10)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.5.0->msrest>=0.6.21->azure-keyvault-administration==4.0.0b3->azure-cli) (3.1.0)
Requirement already satisfied: msal-extensions~=0.3.0 in ./.local/lib/python3.7/site-packages (from azure-identity->azure-cli) (0.3.0)
Requirement already satisfied: wrapt<2,>=1.10 in ./.local/lib/python3.7/site-packages (from deprecated->PyGithub~=1.38->azure-cli) (1.12.1)
Requirement already satisfied: MarkupSafe>=2.0 in ./.local/lib/python3.7/site-packages (from jinja2->azure-functions-devops-build~=0.0.22->azure-cli) (2.0.1)
ivan@Azure:~$ az version
{
"azure-cli": "2.26.0",
"azure-cli-core": "2.26.0",
"azure-cli-telemetry": "1.0.6",
"extensions": {
"ai-examples": "0.2.5",
"ssh": "0.1.5"
}
}
ivan@Azure:~$
username_2: I used `pip install azure-cli` in a bash task in my pipeline yaml file, see image below. Not sure if that should work with Cloud Shell. Have you tried `az upgrade`? You may use `az upgrade --yes --debug` to get more info about why az upgrade fails. I even tried `sudo apt-get install --only-upgrade -y azure-cli`, but it did not help until pip did the work.

username_0: I tried all the steps in Cloud Shell; the issue only occurred there.
But now I finally notice that `az version` gets me the new version:
ivan@Azure:~$ az version
{
"azure-cli": "2.26.1",
"azure-cli-core": "2.26.1",
"azure-cli-telemetry": "1.0.6",
"exte
Status: Issue closed
|
jclem/logfmt-elixir | 83565260 | Title: Quoted booleans and numbers should be decoded into strings
Question:
username_0: Currently, `foo="true"` will decode into `%{"foo" => true}`. Because it is quoted, it should decode into `%{"foo" => "true"}`.
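For illustration, here is the rule being described, sketched in TypeScript (this `decode` is a stand-in written only to show the quoted-vs-unquoted distinction, not the library's actual implementation):
```typescript
// Quoted values always stay strings; only unquoted literals get coerced.
function decode(line: string): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const m of line.matchAll(/(\w+)=(?:"([^"]*)"|(\S+))/g)) {
    const [, key, quoted, bare] = m;
    if (typeof quoted === "string") out[key] = quoted;             // quoted: keep the string
    else if (bare === "true" || bare === "false") out[key] = bare === "true";
    else if (!Number.isNaN(Number(bare))) out[key] = Number(bare);
    else out[key] = bare;
  }
  return out;
}

decode('foo="true"'); // => { foo: "true" }  (stays a string)
decode('foo=true');   // => { foo: true }    (coerced to a boolean)
```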
Status: Issue closed
Answers:
username_0: I'm not sure this is correct, actually. More likely that this is a detail that should be left up to whoever is handling the decoded line, and that nothing should be coerced. |
material-components/material-components-ios | 543941691 | Title: [HeaderStackView] Internal issue: b/145205328
Question:
username_0: This was filed as an internal issue. If you are a Googler, please visit [b/145205328](http://b/145205328) for more details.
---
#### Internal data
- Associated internal bug: [b/145205328](http://b/145205328)<issue_closed>
Status: Issue closed |
COVID19Tracking/issues | 777570592 | Title: [WI] Patch 1/2/2021 Pending, Currently hospitalized, and Currently in ICU
Question:
username_0: State: Wisconsin
Describe this issue: On 1/2 Wisconsin updated Pending, Currently hospitalized, and Currently in ICU after our publish shift.
Source: (will upload official screenshot once available)
<img width="1080" alt="Screen Shot 2021-01-02 at 8 17 49 PM" src="https://user-images.githubusercontent.com/66583275/103471694-1de53500-4d38-11eb-915b-7b97df22797a.png">
Answers:
username_0: BEFORE:
<img width="522" alt="Screen Shot 2021-01-02 at 8 19 45 PM" src="https://user-images.githubusercontent.com/66583275/103471716-38b7a980-4d38-11eb-873e-7b3210c3da95.png">
AFTER:
<img width="523" alt="Screen Shot 2021-01-02 at 8 20 14 PM" src="https://user-images.githubusercontent.com/66583275/103471722-3fdeb780-4d38-11eb-8eb6-44221db3065a.png">
Status: Issue closed
|
microbiomedata/nmdc-server | 876700068 | Title: Portal Homepage orientation
Question:
username_0: Need better orientation: links to NMDC website/github/where to send help desk items/social media feeds/newsletters/funding statement & acknowledgement
Priority - High
Urgency - High
Answers:
username_1: Would like to understand this a bit better. I don't think it's a huge amount of work... but needs a strategy |
AnyFlowApp/AnyFlowApp-issues | 214597882 | Title: Cannot-connect bug
Question:
username_0: Version: 1.7.1 build
iOS: 9.3.5, iPhone 6
Just downloaded the version from the App Store, filled in the proxy address, then started the proxy. Found it could not reach the network; a direct connection did not work either.
Turning the proxy off, disabling the rules and re-enabling them, then starting the proxy again made it work.
Question: under iOS 9, are the rules (the default rules) not taking effect? Do they have to be re-checked before they work?
Answers:
username_0: The AnyFlow version is 1.7 build 2
username_0: This problem seems to have always existed.
username_0: There is another way to reproduce it: reset the VPN, then force-quit the app from the background and open it again. It prompts that a configuration profile needs to be installed; after confirming with a fingerprint, the AnyFlow proxy starts automatically. At this point, opening Safari shows that no web page can be opened. Going back in and restarting the proxy then magically makes it work again.
username_0: [归档.zip](https://github.com/AnyFlowApp/AnyFlowApp-issues/files/847334/default.zip)
Log of being unable to connect after resetting the VPN
username_0: Continued testing; another problem appeared:
1. Start the proxy and open Safari; pages open.
2. Restart the proxy; pages can no longer be opened.
3. Restarting the proxy once more makes it work again.
Conclusion:
1. The problem basically appears in an alternating cycle of `"start proxy: online, restart proxy: offline, restart proxy: online"`.
username_0: All of the above tests were performed under the default rules.
username_1: One thing to confirm: does "starting the proxy" here mean starting the VPN, or checking the Proxy option?
username_1: Confirmed: it refers to turning the VPN on and off. |
ronaldbroens/jenkinsapp | 99876799 | Title: Travis build fails because of use xcode 7 beta
Question:
username_0: Xcode 7 is still in development and is sadly not yet supported by Travis CI; see also this issue:
https://github.com/travis-ci/travis-ci/issues/4055
Answers:
username_0: For now I switched to Bitrise - this service is free for very small teams (up to 2 team members) and is OK for now
Azure/azure-sdk-for-js | 955037079 | Title: What's going on with `core-tracing`?
Question:
username_0: What's going on with the `core-tracing` package? Is it ever going to have a non-preview/actual version (e.g. just x.y.z)? It looks like the package has been in a non-GA state for almost 2 years now.
Having a version like `1.0.0-preview.x` is annoying for customers because unless y'all both keep all of the packages that depend on it up to date with the latest version and customers continually take new versions of those packages, `node_modules` will contain all the different versions of `core-tracing`, since `1.0.0-preview.11` can't be used in lieu of `1.0.0-preview.10`, for example.
Answers:
username_1: Hey @username_0 - we appreciate you reaching out about this! I'm sorry to hear about all the disruption. To give you a little bit of history: core-tracing relies on OpenTelemetry APIs. Because OpenTelemetry was in preview for so long, we had to keep our core-tracing package in preview as well, which has been a source of frustration as (if I understand correctly) we did not expect OTel to stay in preview / beta for as long as it has.
The good news though is that [@opentelemetry/api](https://www.npmjs.com/package/@opentelemetry/api) finally GA'd! We still have some work to do but we are finally unblocked from GAing this package.
Having GA packages depend on a preview package is something we don't usually do, and this is an example of why it has not turned out well. I believe it is especially painful for users who depend on more than one Azure package, so I definitely sympathize.
We are actively working on ways to minimize the pain, and I can share our plan with you as soon as it is finalized.
I'll keep this issue open, but we _are_ actively working on reducing this pain and, with OpenTelemetry for JS having finally gone 1.0, hopefully GAing core-tracing.
username_0: @username_1 Thanks for the response. It looks like OpenTelemetry GA'd about 2 months ago. So do you have any idea on an ETA for 1.0.0 for core-tracing?
username_1: I do not have a concrete date at this time, but our plan is to GA it soon now that OpenTelemetry GA'd. We have a few things we need to adjust internally before we get there.
In the meanwhile I was thinking about removing the direct dependency packages have on core-tracing, having them get the interfaces they need transitively through our other core-packages which are not pinned. That will reduce the duplication in node_modules. That in addition to GAing core-tracing I think will put things in a much better spot going forward. What do you think?
username_0: That sounds like a great idea. |
invertase/react-native-firebase | 457768222 | Title: Module 'Firebase' not found on @import Firebase (6.0.0-alpha.25)
Question:
username_0: Wanting to try out `6.0.0-alpha.25` for Analytics only on an existing RN iOS app that is using an existing Podfile.
Using RN 0.59.5
**Steps I've taken:**
1. `yarn add @react-native-firebase/app`
2. `yarn add @react-native-firebase/analytics@alpha`
3. `react-native link @react-native-firebase/analytics`
- I can see the pod file has been updated with:
`pod 'RNFBAnalytics', :path => '../node_modules/@react-native-firebase/analytics'`
4. Followed these steps to add Firebase credentials. https://invertase.io/oss/react-native-firebase/quick-start/ios-firebase-credentials
**Issue I'm having**
Xcode gives an error in `AppDelegate.m`: **Module 'Firebase' not found** on `@import Firebase;`
Recommended in docs:
<img width="947" alt="Screen Shot 2019-06-18 at 6 30 31 PM" src="https://user-images.githubusercontent.com/91259/59730453-4c9fc700-91f7-11e9-86ed-e35501f25ed7.png">
Libraries were not showing up linked in Xcode as I expected, so I linked them manually:
<img width="203" alt="Screen Shot 2019-06-18 at 6 44 12 PM" src="https://user-images.githubusercontent.com/91259/59731028-7823b100-91f9-11e9-9c63-3b7cd75aae8d.png">
<img width="354" alt="Screen Shot 2019-06-18 at 6 44 39 PM" src="https://user-images.githubusercontent.com/91259/59731027-7823b100-91f9-11e9-8e03-40ba82b779b2.png">
Still getting same error. Am I missing something here?
Answers:
username_1: I'm having same issue
username_2: The docs say the supported versions require a minimum of RN 0.60.0.
https://invertase.io/oss/react-native-firebase/v6
username_3: react-native link @react-native-firebase/app
https://www.npmjs.com/package/@react-native-firebase/app
username_0: @username_3 - Yes I did, thanks. Seems I've tried just about every suggested trick with and w/o pods. Gonna try and spin up a new app to see if this will work.
username_4: @username_0 did you find a solution? |
mobxjs/mobx-utils | 1155810155 | Title: Override Model getter with ViewModel
Question:
username_0: Hi, I wonder if it is possible to do something like this:
```
class BookVM extends ViewModel {
get name() {
return this.model.name.replace('a', 'e');
}
}
class Book {
get name() {
return `${this._name} + ${this._author}`
}
}
```
Is this supported?
Answers:
username_0: I just looked at the source code; it isn't implemented. I just created a small PR that allows this behavior.
dichen001/Paper-Reading | 132776273 | Title: upload picture
Question:
username_0: [4]

[6]
<issue_closed>
Status: Issue closed |
node-red/nrlint | 1009746886 | Title: Nrlint does not allow subraction of Dates
Question:
username_0: ### Current Behavior
Using nrlint 1.0.2, the following code shows the error that the left-hand side of an arithmetic operation must be of type 'any', 'number'...
The function does run OK (it subtracts the millisecond values).
```
const d1 = new Date()
const d2 = new Date(d1.getTime() + 1000)
msg.payload = d2 - d1
return msg;
```
### Expected Behavior
No error should be reported
### Steps To Reproduce
See example flow
### Example flow
```
[{"id":"0ae098f0f8d40b2e","type":"inject","z":"84405ff5.25fa6","name":"","props":[{"p":"payload"},{"p":"topic","vt":"str"}],"repeat":"","crontab":"","once":false,"onceDelay":0.1,"topic":"","payload":"","payloadType":"date","x":230,"y":2540,"wires":[["377b3ad15f21e80e"]]},{"id":"377b3ad15f21e80e","type":"function","z":"84405ff5.25fa6","name":"nrlint test","func":"const d1 = new Date()\nconst d2 = new Date(d1.getTime() + 1000)\nmsg.payload = d2 - d1\nreturn msg;","outputs":1,"noerr":2,"initialize":"","finalize":"","libs":[],"x":380,"y":2540,"wires":[["680355ca417c9cf1"]]},{"id":"680355ca417c9cf1","type":"debug","z":"84405ff5.25fa6","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"false","statusVal":"","statusType":"auto","x":560,"y":2540,"wires":[]}]
```
### Environment
- nrlint version: 1.0.2
- Node-RED version: 2.0.6
- Node.js version: 14.17.6
- npm version: 6.14.15
- Platform/OS: Ubuntu
- Browser: Firefox
Answers:
username_1: When I run this through nrlint, I don't get any errors.
I *do* see an error when I edit this code using Monaco - it complains about the types in the sum. Is that what you're referring to? In which case, this isn't an nrlint bug - it's something to do with Monaco and its own built-in JavaScript validation.
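(For anyone landing here with the same warning: a workaround sketch that satisfies the editor's type checker without changing the result is to subtract the millisecond values explicitly.)
```typescript
// Node-RED function node body: `Date - Date` is what the TypeScript-based
// validation objects to; `number - number` is accepted and yields the same
// millisecond difference.
const d1 = new Date();
const d2 = new Date(d1.getTime() + 1000);
msg.payload = d2.getTime() - d1.getTime();
return msg;
```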
username_0: Yes, it was the types in the sum I was referring to, I should have been more explicit. You are correct, it isn't an nrlint issue, it only appears when using Monaco. I will close this.
Status: Issue closed
|
shoo/cushion | 442957838 | Title: Too many template parameters
Question:
username_0: StateTransitor provides flexibility by switching among State, Event, Handler types, container types and behaviors with template parameters.
However, there are too many parameters: even to change the behavior only a little, it can be necessary to specify many of them.
We need a way to easily specify the parameters. Like named parameters.
<issue_closed>
Status: Issue closed |
VATSIM-UK/UK-Sector-File | 774922316 | Title: Separate Cotswold CTA FUA from Permanent Airspace
Question:
username_0: # Summary of issue/change
Separate flexible use airspace in Cotswold CTA from the permanent airspace definition.
# Reference (amendment doc/official source/forum) incl. page number(s)
ENR 6-7
# Affected areas of the sector file (if known)
\ARTCC\High\Cotswold CTA.txt<issue_closed>
Status: Issue closed |
typeorm/typeorm | 344535609 | Title: SQLite keyword should be checked in a case insensitive way
Question:
username_0: **Issue type:**
[ ] question
[x] bug report
[ ] feature request
[ ] documentation issue
**Database system/driver:**
[x] `cordova`
[ ] `mongodb`
[ ] `mssql`
[ ] `mysql` / `mariadb`
[ ] `oracle`
[ ] `postgres`
[ ] `sqlite`
[ ] `sqljs`
[ ] `react-native`
[ ] `expo`
**TypeORM version:**
[x] `latest`
[ ] `@next`
[ ] `0.x.x` (or put your version here)
I have an SQLite database created with another library. The other library created my tables with the "autoincrement" property in lowercase letters. When I now try to create foreign keys on the tables, the autoincrement statement is lost, as typeorm seems to search only for the AUTOINCREMENT keyword in uppercase letters, see [here](https://github.com/typeorm/typeorm/blob/eda2c4bdab2ee9f6b3e7638a74854682a42f4f03/src/driver/sqlite-abstract/AbstractSqliteQueryRunner.ts#L704).
SQLite does in fact accept keywords in any case, so lowercase `autoincrement` is valid DDL; for best compatibility I would suggest that typeorm checks this in a case-insensitive manner.
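A minimal sketch of such a check (the variable name below is an assumption for illustration, not the actual TypeORM code):
```typescript
// `createTableSql` stands in for the stored CREATE TABLE statement that
// AbstractSqliteQueryRunner inspects.
const createTableSql = `CREATE TABLE "post" ("id" integer PRIMARY KEY autoincrement)`;

// Match the keyword regardless of case instead of the uppercase literal only:
const hasAutoincrement = /\bautoincrement\b/i.test(createTableSql); // true
```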
Answers:
username_1: yes we can add that code, please feel free to PR! |
qiniu/android-sdk | 224654168 | Title: Support the v2 automatic domain lookup feature
Question:
username_0: Reference link: https://uc.qbox.me/v2/query?ak=T3sAzrwItclPGkbuV4pwmszxK7Ki46qRXXGBBQz3&bucket=if-pbl
```
// 20170427105119
// https://uc.qbox.me/v2/query?ak=T3sAzrwItclPGkbuV4pwmszxK7Ki46qRXXGBBQz3&bucket=if-pbl
{
"ttl": 86400,
"io": {
"src": {
"main": [
"iovip.qbox.me"
]
}
},
"up": {
"acc": {
"main": [
"upload.qiniup.com"
],
"backup": [
"upload-nb.qiniup.com",
"upload-xs.qiniup.com"
]
},
"old_acc": {
"main": [
"upload.qbox.me"
],
"info": "compatible to non-SNI device"
},
"old_src": {
"main": [
"up.qbox.me"
],
"info": "compatible to non-SNI device"
},
"src": {
"main": [
"up.qiniup.com"
],
"backup": [
"up-nb.qiniup.com",
"up-xs.qiniup.com"
]
}
}
}
```
Answers:
username_0: Added PR: https://github.com/qiniu/android-sdk/pull/255
username_0: Released in version 7.2.4
Status: Issue closed
|
angband/angband | 548474646 | Title: Toggling birth_randarts is broken
Question:
username_0: **Reported by magnate on 20 Sep 2011 22:30 UTC**
1. Create a character with randarts. Find or create an artifact - see that it's random, not one of the standard set. Suicide. Choose to "keep randarts". Exit and restart.
2. At the quickstart screen, press N to go back to the beginning of the birth process. Press = to enter options, and toggle birth_randarts to No. Leave options and finish birth. Create an artifact ... it ought to be a standart, but it isn't, it's a randart.
This doesn't happen if keep_randarts is off. This bug is extant in 3.3.0!!
Answers:
username_0: **Comment by magnate on 21 Sep 2011 10:19 UTC**
This is happening because randarts are loaded from the savefile if birth_keep_randarts is true, and there is as yet no call to re-set them to standarts if birth_randarts is turned off between loading them and starting the game. Not sure why I didn't spot this before, but it doesn't seem to have affected too many people.
username_0: **Comment by magnate on 21 Sep 2011 14:14 UTC**
Fixed in [r9a2a567c] which needs merging to master from http://github.com/magnate/angband/tree/randarts. Not suitable for porting back to 3.3.1 because it messes with savefile loading and saving - and nobody has reported the problem in 3.3.0 so it obviously isn't critical.
username_0: **Comment by magnate on 27 Sep 2011 12:35 UTC**
Finally fixed in [rb7731c694].
Status: Issue closed
|
OpenFeign/feign | 361796873 | Title: SAXParseException: Premature end of file when Status Code is 204
Question:
username_0: Hi There,
I just came across a problem with feign clients JaxbDecoder. While trying to decode it checks first the response status and if it's 404 then it returns a empty value of target type. This should be also the case for status code 204, since it indicates by design a response with no body.
The current implementation missed the case and cause a SaxParseException
```
[org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 1; Premature end of file.]] with root cause
org.xml.sax.SAXParseException: Premature end of file.
```
Thanks.
Answers:
username_1: I believe the root cause of this is that `SynchronousMethodHandler` assumes that all `2xx` responses should be decoded. It may be more appropriate for the logic there to explicitly skip decoding on `204 No Content`. Thoughts?
Status: Issue closed
|
DS4PS/course_website | 383388835 | Title: Lab 12 - Trouble Plotting Total Injuries or Fatalities by Hour of the Day
Question:
username_0: Why won't this plot??
```{r}
d2 <-
dat %>%
filter( as.numeric(hour) <= 24 ) %>%
group_by( hour ) %>%
summarize( harm = sum( Totalinjuries > 0 | Totalfatalities > 0 ) )
plot( as.numeric(d2$hour), d2$harm, pch=19, type="b" cex=2, bty="n",
xlab="Time of day in Hours", ylab="Number of Injuries or Fatalities",
main="Total Injuries or Fatalities by Time of Day")
```
Answers:
username_1: You are missing a comma after `type="b"` :-(
```r
plot( as.numeric(d2$hour), d2$harm, pch=19, type="b", cex=2, bty="n",
xlab="Time of day in Hours", ylab="Number of Injuries or Fatalities",
main="Total Injuries or Fatalities by Time of Day")
``` |
CpanelInc/backup-transport-dropbox | 349306546 | Title: WebService::Dropbox install problem.
Question:
username_0: Could you check it please?
[root@1547odhuo ~]# sudo cpan WebService::Dropbox
CPAN: Storable loaded ok (v2.20)
Going to read '/home/.cpan/Metadata'
Database was generated on Sun, 13 Nov 2016 23:41:02 GMT
CPAN: LWP::UserAgent loaded ok (v5.833)
CPAN: Time::HiRes loaded ok (v1.9721)
Fetching with LWP:
http://mirror.funkfreundelandshut.de/cpan//authors/01mailrc.txt.gz
CPAN: YAML loaded ok (v1.18)
Going to read '/home/.cpan/sources/authors/01mailrc.txt.gz'
............................................................................DONE
Fetching with LWP:
http://mirror.funkfreundelandshut.de/cpan//modules/02packages.details.txt.gz
Going to read '/home/.cpan/sources/modules/02packages.details.txt.gz'
Database was generated on Thu, 09 Aug 2018 10:41:05 GMT
.............
New CPAN.pm version (v2.16) available.
[Currently running version is v1.9402]
You might want to try
install CPAN
reload cpan
to both upgrade CPAN.pm and run the new version without leaving
the current session.
...............................................................DONE
Fetching with LWP:
http://mirror.funkfreundelandshut.de/cpan//modules/03modlist.data.gz
Going to read '/home/.cpan/sources/modules/03modlist.data.gz'
DONE
Going to write /home/.cpan/Metadata
Running install for module 'WebService::Dropbox'
Running make for A/AS/ASKADNA/WebService-Dropbox-2.07.tar.gz
Fetching with LWP:
http://mirror.funkfreundelandshut.de/cpan//authors/id/A/AS/ASKADNA/WebService-Dropbox-2.07.tar.gz
CPAN: Digest::SHA loaded ok (v5.47)
Fetching with LWP:
http://mirror.funkfreundelandshut.de/cpan//authors/id/A/AS/ASKADNA/CHECKSUMS
Checksum for /home/.cpan/sources/authors/id/A/AS/ASKADNA/WebService-Dropbox-2.07.tar.gz ok
CPAN: Archive::Tar loaded ok (v1.58)
WebService-Dropbox-2.07/Build.PL
WebService-Dropbox-2.07/Changes
WebService-Dropbox-2.07/HOW_TO_DEVELOPMENT.md
WebService-Dropbox-2.07/LICENSE
WebService-Dropbox-2.07/META.json
WebService-Dropbox-2.07/README.md
WebService-Dropbox-2.07/cpanfile
WebService-Dropbox-2.07/example/cli/README.md
WebService-Dropbox-2.07/example/cli/app.pl
WebService-Dropbox-2.07/example/cli/cpanfile
WebService-Dropbox-2.07/example/cli/cpanfile.snapshot
WebService-Dropbox-2.07/example/mig/mig.pl
WebService-Dropbox-2.07/example/web/README.md
WebService-Dropbox-2.07/example/web/app.psgi
WebService-Dropbox-2.07/example/web/cpanfile
WebService-Dropbox-2.07/lib/WebService/Dropbox.pm
WebService-Dropbox-2.07/lib/WebService/Dropbox/Auth.pm
[Truncated]
/usr/bin/perl Build.PL -- NOT OK
Running Build test
Make had some problems, won't test
Running Build install
Make had some problems, won't install
Running make for A/AS/ASKADNA/WebService-Dropbox-2.07.tar.gz
Warning: Prerequisite 'Module::Build::Tiny => 0.035' for 'ASKADNA/WebService-Dropbox-2.07.tar.gz' failed when processing 'LEONT/Module-Build-Tiny-0.039.tar.gz' with 'writemakefile => NO -- No 'Build' created
'. Continuing, but chances to succeed are limited.
CPAN.pm: Going to build A/AS/ASKADNA/WebService-Dropbox-2.07.tar.gz
**Can't locate Module/Build/Tiny.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at Build.PL line 9.
BEGIN failed--compilation aborted at Build.PL line 9.
No 'Build' created ASKADNA/WebService-Dropbox-2.07.tar.gz
/usr/bin/perl Build.PL -- NOT OK
Running Build test
Make had some problems, won't test
Running Build install
Make had some problems, won't install
CPAN: Module::Build loaded ok (v0.35)**
Answers:
username_1: Hi,
I see you are already running as the root user. Do you still face issues when not passing 'sudo' while using root?
If you are still facing issues, could you try installing the packages `cpanel-perl-526-Module-Build-Tiny` or `perl-Module-Build-Tiny` via yum and see if it corrects your issue?
Thanks,
username_0: Thank you for your answer. I tried without sudo but had the same problem. Then I tried `cpanel-perl-526-Module-Build-Tiny` or `perl-Module-Build-Tiny` via yum, but nothing worked... see the following error. Thank you.
root@1547odhuo ~]# yum install cpanel-perl-526-Module-Build-Tiny
Loaded plugins: fastestmirror, security, universal-hooks
Setting up Install Process
Loading mirror speeds from cached hostfile
* EA4: 172.16.31.10
* cpanel-addons-production-feed: 172.16.31.10
* base: mirror.hosting.com.tr
* extras: mirror.hosting.com.tr
* updates: mirror.hosting.com.tr
Nothing to do
[root@1547odhuo ~]# yum install perl-Module-Build-Tiny
Loaded plugins: fastestmirror, security, universal-hooks
Setting up Install Process
Loading mirror speeds from cached hostfile
* EA4: 172.16.31.10
* cpanel-addons-production-feed: 172.16.31.10
* base: mirror.hosting.com.tr
* extras: mirror.hosting.com.tr
* updates: mirror.hosting.com.tr
No package perl-Module-Build-Tiny available.
Error: Nothing to do
[root@1547odhuo ~]#
username_0: Hi again, I installed App::ModuleBuildTiny (0.023) from cPanel Home » Software » Install a Perl Module.
Now everything is OK, thank you.
Status: Issue closed
username_1: Glad to hear that! Thanks for reporting back.
username_2: Thank you, that solved my issue too. |
yamixa-gz/entry-task | 802390921 | Title: Remove the overflow
Question:
username_0: https://github.com/username_1/entry-task/blob/f4efa8d24f6aa8c008931ef948d7ace8eb237cce/src/scss/App.scss#L6
The layout must not spill outside the block's boundaries even without this property.
Answers:
username_1: fixed
https://github.com/username_1/entry-task/blob/1209b71879450b5f70d0d6b7fef550d2029328b9/src/scss/App.scss#L6
Status: Issue closed
|
PaddlePaddle/Paddle | 294195359 | Title: 'VarDesc' needs the support of holding multiple `LoDTensorsDesc`
Question:
username_0: So far our `VarDesc` can only hold one LoDTensorDesc. Although a `LoDTensorArray` has more than one `LoDTensor` at runtime, all the `LoDTensor`s share the same shape and data type, so a single `LoDTensorDesc` is enough to describe them.
However, since we are now trying to implement readers in C++, the support of multiple LoDTensorDesc in one `VarDesc` becomes necessary. In our design, a reader is held by a Variable, and of course, it can yield more than one LoDTensor at once. These LoDTensors are likely to have distinct shapes, data types and LoDs. To describe all of them at compile time, our `VarDesc` must be able to hold more than one `LoDTensorDesc`.<issue_closed>
Status: Issue closed |
bazelbuild/bazel | 232703500 | Title: android_binary error message mentions non-existent attribute
Question:
username_0: When building an android_binary with no manifest attribute, the build fails with a message like this:
```
ERROR: /usr/local/google/home/ajmichael/bazel/a/examples/android/java/bazel/BUILD:12:1: in manifest attribute of android_binary rule //examples/android/java/bazel:hello_world: a resources or manifest attribute is mandatory.
```
However, there is no `resources` attribute on `android_binary`.<issue_closed>
Status: Issue closed |
denolfe/zsh-travis | 151640625 | Title: git-trav:3: command not found: __open
Question:
username_0: Hello,
I am trying to use zsh-travis with antigen and I am having the following error:
git-trav:3: command not found: __open
I am not sure if it might be related that I already used oh-my-zsh before I installed antigen. Perhaps I should try a clean install with antigen first and install oh-my-zsh on top of antigen.
Answers:
username_1: Fixed by d6b9b7c
Status: Issue closed
|
php-coder/mystamps-country | 533956438 | Title: Make everything configurable with env variables
Question:
username_0: To comply with the twelve factors (see https://www.12factor.net/config), let's make everything configurable with env variables.
The following line should be modified to not hard-code a host and port:
https://github.com/username_0/mystamps-country/blob/85bf3df53d79e7a36dedb9d499bf62142cc61f97/main.go#L33
Answers:
username_0: I looked at https://github.com/kelseyhightower/envconfig and I've decided to not use it for the following reasons:
- there should be a good reason for introducing a new dependency
- our requirements are small at this moment and they can be solved with a little custom code (+ https://golang.org/pkg/os/#Expand)
- while this library provides some benefits (variables can have a static prefix, automatic transformation from string to a particular type, a usage helper), we won't benefit much from them, as they aren't required in our case
- the library has open issues and pull requests with no response for some time and it seems unmaintained
username_0: One more project to consider: https://github.com/ardanlabs/conf |
alibaba-fusion/next | 415040318 | Title: [Select]0.x
Question:
username_0: - [ ] I have searched the [issues](https://github.com/alibaba-fusion/next/issues) of this repository and believe that this is not a duplicate.
### Version
### Component
Select
### Environment
win7,360browser 10.0
### Reproduction link
[https://alibaba.github.io/ice/0.x/component/select](https://alibaba.github.io/ice/0.x/component/select)
### Steps to reproduce
In a multi-select Select, clicking an already-selected option again removes that option, but its checkmark (√) remains.
Status: Issue closed
Answers:
username_1: Please report 0.x issues on the internal GitLab (internal users) or give feedback to the ice team (external users) |
collective/Collective | 594064419 | Title: Looking for help on icalendar PR
Question:
username_0: Hi there,
Very short: I'm looking for someone who would like to review my small PR on icalendar: https://github.com/collective/icalendar/pull/299
I believe this is a helpful fix and could help not only me :)
Answers:
username_1: I merged the PR.
Status: Issue closed
username_0: Thanks for your help! :)
shoumodip/ido.nvim | 803021283 | Title: Bug where `minimal_mode` deletes current buffer when closing window
Question:
username_0: # minimal config
```lua
require "ido"
ido_decorations['separator'] = ' | '
ido_decorations['matchstart'] = '{ '
ido_decorations['matchend'] = ' }'
ido_decorations['marker'] = ' -> '
ido_decorations['moreitems'] = '...'
ido_limit_lines = true
ido_minimal_mode = true
ido_overlap_statusline = true
```
# Steps to reproduce
1. Open ido file browser using `<leader>.`
2. close the ido window by pressing `<esc>`
3. second buffer/window is closed in addition to the `ido prompt buffer`
# screen recording
https://user-images.githubusercontent.com/1220084/107155846-032b5f00-697b-11eb-860d-2a93a1193a18.mov
# hypothesis
Probably a bug in the tracking of the buffer/window numbers that are terminated.
Answers:
username_1: Check it out now.
And FYI it was actually a bug in the `ido_close_window` function. It does not check whether the minimal mode is active or not. It was originally intended for the non-minimal version of ido and needs to delete the ido buffer when it closes. However in minimal mode, there is no "ido buffer"; rendering happens in the echo area instead. Hence it just deletes the current buffer. Thought you would like to know. Cheers :)
username_0: @username_1 that resolves the issue thx !
Status: Issue closed
|
fprime-community/fpp | 1087118407 | Title: Bug In JAR Wrapper Shell
Question:
username_0: The JAR shell wrappers that wrap the FPP tools have an error in them:
```
java -jar /Users/mstarch/code/fprime-infra/fpp/fpp-tools/fpp-locate-defs.jar $@
```
Should read as the following (with quotes around `"$@"`:
```
java -jar /Users/mstarch/code/fprime-infra/fpp/fpp-tools/fpp-locate-defs.jar "$@"
```
This informs bash to pass all arguments maintaining the escaping provided by the above call. Without this, any escaping done in the caller is ignored. This is specifically painful for users with paths that require escaping: e.g. `/home/user/my code/fprime` will be broken into two partial paths by the existing code, but is handled properly by the suggested code.
Answers:
username_0: Note: there is a symmetric error in the CMake integration that was masked by this error.
username_1: On the FPP side, this is not a bug/oversight. It is a conscious decision not to support path names with spaces in them.
If we want to support path names with spaces, then I think we will need a fair amount of specification, design, implementation, and testing that has not been attempted. For example:
1. On the input side:
    i. What input should fpp-depend see if a user writes `fpp-depend a\ b.fpp c\ d.fpp`?
    ii. Same question for `fpp-depend "a b.fpp" "c d.fpp"`?
    iii. Same question for `fpp-depend 'a b.fpp' 'c d.fpp'`?
2. On the output side:
    i. What should the tools do if a location file has a space in it? e.g., `a b.fpp`? For example, should fpp-depend write out the file as `a b.fpp` or `a\ b.fpp`? If the former, is every tool that uses the output of fpp-depend required to escape spaces before passing them to other tools? Are we doing that? Are we testing the use cases?
As far as I know, we are not using file names with spaces in F Prime. This suggests: (a) we don't need to handle it and (b) we are not testing the relevant use cases. I am not in favor of adding quotation marks to one or two places and hoping/guessing how the rest of it all works.
username_1: Note that if we put in the quotation marks as suggested, then case (1)(i) should work as one might expect, though I haven't tested it. I don't know what would happen in cases (1)(ii) and (1)(iii). Maybe case (1)(iii) would work reasonably. Case (1)(ii) almost certainly would not.
username_0: Within the F´ codebase, I agree, users cannot and should not use files with spaces in them. However, this is an issue of paths. If a user places their fprime code in a folder called "my documents" it should work just as well as using "my_documents".
e.g. `/home/myuser/my documents/fprime` can be used by our users and we must support it. We've seen users do this before and thus far it has worked.
username_1: I would discourage users from doing that, for the reasons I stated above. Basically, putting spaces in paths is like throwing sand in the works of any shell script. One can make shell scripts robust to this sand-throwing, but we have not attempted to do that in any rigorous or systematic way. For example, I doubt that even the FPP installer script will work if FPP is checked out into a path, because of this:
https://github.com/fprime-community/fpp/blob/ee355fc99eb8040157c62e69f58ac6a8435cd981/compiler/install#L24-L25
I'm pretty certain that the FPP unit test scripts will fail too -- although maybe most users won't care.
If we really want to support paths with spaces, then we need to answer all the questions I posed above and test all the use cases, e.g., FPP checked out in a path with a space, locs files containing spaces, etc.
username_1: We have to weigh the cost of specifying, implementing, and testing proper support for spaces-in-paths vs. just telling users not to do it. Is there any need to do it, or is it a nice-to-have?
If we don't support it rigorously, then it won't work. Basically the default for any kind of shell script is that it won't work. Making it work right takes effort.
username_1: Also, what's the reason for this distinction? Since we manage dependencies as full paths, it seems like we should either support spaces-in-paths (including filenames) or not. I don't know why we'd need or want to treat paths "inside" F Prime any differently than paths "outside" F Prime.
My assumption when designing and implementing FPP was no spaces in paths, period.
username_1: It may not take that much work to support spaces in paths, but we should assume it doesn't work unless and until we put in the effort to make it work.
username_1: Embedded `"` and `'` in path names also clash with standard shell programming assumptions. ':' and ',' cause problems too. There may be others. My suggestion, for simplicity and robustness, is to disallow all these problematic characters. There is an easy workaround, e.g., rename `my project` to `my_project`.
Status: Issue closed
username_1: Closing this in light of #105, which fixes the immediate issue.
username_1: Continuing the discussion in #109. |
UCATLAS/xAODAnaHelpers | 318825093 | Title: Interrupted by shell prompt when running locally
Question:
username_0: When running locally using the direct driver, `xAH_run` is repeatedly interrupted by the shell prompt. Pressing `CTRL-D` at this prompt a couple of times gives the error:
`tput: No value for $TERM and no -T specified`, then after another couple of presses `xAH_run` continues further before being interrupted again the same way.
Answers:
username_1: Hi,
Are you using an interactive bash shell or not? https://askubuntu.com/a/592839
username_0: This is from within a ZSH script. The real problem is the dropping to a prompt, not the tput error.
This happens when SampleHandler is querying Rucio, which uses `sh::exec_read`, which calls `gSystem->Exec`, which (on *NIX) uses the C standard `system` function, which calls `/bin/sh -c`.
It appears the command is being incorrectly interpreted, causing the prompt.
username_1: Ok, so this is an EL driver issue. Can you report to PATHelp?
username_0: Nevermind, solved. Turns out SampleHandler calls `lsetup 'rucio -w'` and this wrapper version for some reason sources `.bash_profile`.
I have (had) a `.bash_profile` that then starts `zsh`, so I was getting a prompt.
Status: Issue closed
|
AdobeDocs/experience-manager-cloud-manager.en | 791894274 | Title: Description of Program should be more detailed
Question:
username_0: Issue in ./help/using/first-time-login.md
The new video introducing programs appears to be centred around Cloud Service, which gives the impression that creating a new Program is a button-click away. However, in AMS a Program is not tied just to logical infrastructure but, in effect, to physical infrastructure. Could we attempt to enhance the clarity here? It creates the idea that the number of Programs on AMS is elastic, which is not strictly the case. Thank you.
Answers:
username_1: Thanks for highlighting this - we will investigate.
username_1: Tracked with CQDOC-17356. |
xxarles/OpusFactorium | 879126112 | Title: Making GlobalVariables the central repo of information
Question:
username_0: This will update GlobalVariables so that it encompasses all the components on the screen. It must keep track of the TilePosition of all components and have the proper update functions so that, once it is working, it can update the positions and start the animations.
Answers:
username_0: Created the script; adding current parts is done directly in it
Status: Issue closed
|
FolkertVanVerseveld/aoe | 351746095 | Title: Misaligned font text
Question:
username_0: Most small label text items are misaligned vertically, but it is a real mystery to me why this is happening. It looks like libfreetype does not like the fonts, but we can't do much about that if that's the case...
Anyway, it is a known issue (see the bugs file), but it would be nice if we could find a good and permanent workaround...
Answers:
username_0: For illustration purposes, here is the original main menu:

And this is our replica:

Ignore the missing trademark and special symbols in the second picture,
because that is a separate issue.
Status: Issue closed
username_0: Finally fixed by prerendering all TTF data, because libfreetype sucks...
 |
exercism/exercism | 348899540 | Title: Translation to PT-BR
Question:
username_0: Hello
I was talking to some friends (@woliveiras and @laurenmariaferreira) and we would love to translate exercism to pt-br. We want to reach a larger audience that does not speak English (:
I'm opening this issue to ask if I can do this and, if the answer is positive, to gather some help
Thank you
Answers:
username_1: i can help with the translation 👊
username_2: I'd be glad to help to translate if needed :)
username_3: Me too. :)
username_4: Katrina, Nicole and myself are going to have a proper chat about this and come back with some structured thoughts. In all likelihood that won't be for a few days though. In the meantime thank you for all offering to help out. You're all awesome :)
username_5: I can help to translate 😃
username_6: count on me!
username_7: I'll be able to help 😄
username_8: I can help too! :)
username_9: I make myself available too
username_0: @username_8 @username_6 @username_3 @username_5 @username_9
While @username_4 talks with the staff about this, we can organize ourselves via Telegram.
Please, contact me on twitter ( `@_rachcl` ) so I can pass the link through DM (:
username_10: I can help 🐗
username_11: I'm on it too!
username_12: I can help too :)
username_0: Hello
I'm just pinging on this issue (:
We are still very excited to do this and we are waiting to hear back from you.
Thank you
username_4: @username_0 @username_11 (and everyone else) Thanks for your enthusiasm and nudges. I want to just emphasise again that achieving this will be hugely complicated. There would be well over 10,000 files that would need translating, and then a plan for how we can move forward without being blocked by translations for future changes. It's going to take us a bit of time to work out if this is a project we feel is something we have the resource to cope with or manage on an ongoing basis atm. We will get back to you with some proper thoughts but ask for your patience as we work through some of the more urgent bugs and issues first. Thanks :)
username_11: Great @username_4, thanks for the feedback, we'll sit tight and wait for new instructions! :)
username_4: Hey everyone. So we've spent some time discussing this today.
I think the first key thing is to be clear that right now, Exercism provides mentoring only in English. Until we can keep up with the amount of submissions we're getting in English, we're not going to expand mentoring to other languages, because we simply won't cope with the demand. In the long-run this will change, but not in the next 6 months.
So with that in mind, the key thing we need is to get a better understanding of which parts of Exercism would improve because of translation, and why. If (for now) English is a prerequisite to be mentored, what value will translating parts of the website into a different language give our users, and which bits are the key pain points?
If you could explain that to us a bit more we will hopefully be able to find a way forward :)
username_13: Another Brazilian here. Would love to help translate exercism as well.
@username_4, I guess we don't need to focus on the mentoring part right now. Lots of Brazilians can read English pretty well, but feel intimidated by a whole English-only website. To narrow it down even further, I'd say we start by only translating the common pieces of advice/tutorials. It'll help those who at least want to get started. Another thing to think about is that the majority of universities here use just a few languages in their 101 courses (mainly Java, Python, and C/C++).
username_0: @username_4 We were talking with some of the people who commented on this topic about our motivation to translate exercism to PT-BR (since you already have a very high demand from students right now).
Brazil is a big country and many people don't know a second language. Most people that want to work in IT don't know English (yet), and have difficulty accessing good study material. Initiatives like exercism and FreeCodeCamp don't exist in Portuguese, which makes things more difficult for us.
I know that exercism's objective isn't to teach people how to code, but I think you are excellent at making a person more comfortable with a programming language they are learning (and, consequently, at helping them feel more prepared to apply for a new job opportunity).
We have a group in Brazil called Training Center that has a lot of people willing to share knowledge, and I believe that if we translate exercism, a lot of these people could mentor too (but I know that student demand is likely to be higher than the number of mentors)
I think the most crucial point to translate is the exercise READMEs and - the most complicated part - it would be interesting if the review of the exercises could be done in Portuguese (maybe signaling that the person who did the exercise did it in PT? And we have to think of a way to differentiate a PT exercise from an EN one when you download. Maybe another CLI?)
The most difficult point, in my opinion, is knowing there are more than 10,000 files to translate, but if we accept that the PT version will be partially translated for a while, we can translate gradually without impacting the development of the rest of exercism (:
username_0: And for the Portuguese speakers, we are organizing ourselves on Telegram, but we have not translated anything yet, because we need to know first if we can (:
(you can send me a DM on twitter `_rachcl` and if we have authorization to translate, I'll pass you the group link)
username_9: @username_0 I make these words my own.
My own example: when I started in IT I could barely read some words in English, and I was limited to learning from low-quality content and in the wrong way.
I believe that making this great content more accessible to Portuguese-speaking newcomers or mentors in any programming language aligns with the purpose of Exercism.
username_11: Yes, I agree! We Brazilians live in a different environment, and the majority (our junior developers) are scared off by content in other languages. Imagine yourself: it is hard enough to start coding; what if all of the content you need is in a foreign language – and believe me, we don't have a default second language here.
If we can help our beginner developers to start and/or make their first step easier somehow – like giving them accessible content – this will just help our community grow.
username_4: The question I'm still unsure of is: if the mentoring is in English, then regardless of whether the site is in English, will the beginner still find the product scary and not use it? Are we not lulling them into a false sense of security?
username_13: Hi, @username_4. I guess one possibility is to focus on a limited part of exercism. As a starting point we could limit the user to the mentoring part or tell him that we only have mentoring in English. All in all, there's this huge part of exercism that would still be available to any portuguese speaker.
username_14: I'm from Portugal, not Brasil, and I'd be happy to give you guys some help if you ever decide to translate to PT-PT. The website is great and I can see it being widely used in all of Portugal's universities.
Once I get more versed in exercism, I'd love to be able to become a mentor and help some people in my own language (PT-PT or PT-BR, or even english, if necessary).
For those that don't know, PT-PT and PT-BR are just slightly different, but seeing that the effort for PT-BR is being made, tweaking it to PT-PT is a very quick effort.
Let me know your thoughts!
username_15: just throwing in some info for this discussion. We've been using OmegaT (https://omegat.org) for managing our translations (and translation memory) for a few years. It's worth considering as part of how you manage the translations (and the memories can be stored in git for future/multiple translators).
username_16: I'd like to help translating exercism into french also :)
On another website they're using [pootle](https://pootle.translatehouse.org/index.html) to manage the translation by the community. Maybe it's worth looking.
username_17: Hi @username_4 !
I'm an American living in Tokyo and, wondering if there was a way I could contribute to a Japanese translation, found this issue.
I'm actually just now on v2 and learning how the teams edition relates to the personal edition. My understanding is that the teams edition does not include mentoring and that solutions are shared only with team members. (Team members are therefore responsible for mentoring each other.)
Reflecting on what I've learned watching developers who don't speak or read English using tools and services in English, I'm thinking the following:
If translating the product is to be done incrementally, the best place to start is the exercise READMEs. The translations should only be made available on the teams edition. This way companies, communities, etc. could create a group, read the exercise descriptions in their native language and receive mentoring in their native language from other people on their team / in their community.
I think having a translated landing page for the teams edition that basically lets the user know that the translation is a work in progress and how they are expected to interact with the product (at least for the time being) would also be helpful.
It's pretty common practice for people here to write Japanese language blog posts explaining how to get started using popular English language services (Here's one for v1 of exercism.io of which I am **not** the author: https://qiita.com/Saayaman/items/4d1cc2b77fe48704ddf0 [all credit goes to the original author]).
Lastly, I have to say that I do have some mixed feelings about doing translations. While having services like Stack Overflow in Japanese has given non-English speaking developers a place to gather and share information, having separate communities like this can also create something like an information silo. That said, coming up with a common language for the entire world to share information in is probably out of scope for this issue :sweat_smile:
While I've spoken mostly on the topic of developers here in Tokyo, I hope that this is useful information re: translation in general.
If the project does get moving, I'd be happy to rally a team together to start on the JP translation.
username_18: *(I have personally wondered about the language-based fragmentation on the internet and thought that maybe the silo view is Americentric. With the [.рф TLD](https://en.wikipedia.org/wiki/Internationalized_domain_name), did we create a silo or give an existing silo a name that some people can't pronounce?)*
username_17: It took me a little while to remember why I said this because it was probably a little off-topic and the context is missing.
At the time I was working at a Japanese company at which I was the only non-Japanese person on the development team and the only member comfortable with using English at work (I'm not just comfortable, it's my native language). I was in charge of on-boarding and education among other things, and I spent a lot of time searching for something like exercism.io, but in Japanese. There are of course services, but in my opinion we just don't have the variety and volume of educational resources that the English-speaking (native or otherwise) development community has. Not all, but much of the material available is translated and that means it's sometimes out of date as well.
One obvious solution is that developers here (wherever they might be from) develop these resources, but there's sort of a catch-22 because in order to know enough about a non-local technology (i.e. not originally made here) to teach it as well as a local would there needs to be again a sort of interpreter in-between the two communities. The most obvious example of a local scene here is of course Ruby and it continues to be a really active community because we don't have the bottleneck that occurs when we're trying to interpret from primary sources. (I say "we" here as I don't have to wait for translations, but I do often need to create them for team-members).
OTOH, I hope I'm only seeing what's in my field of vision and simply over-estimating the volume of English language resources in comparison to other languages. In a way, I hope I am!
Part of me feels very uncomfortable telling people to "just learn English", but it does seem to be a huge advantage for developers and creates tons of opportunities to be a part of an international OSS community. I am fluent in Japanese and I promise I'm not trying to get my co-workers to speak English just because I'm lazy. :smile: There's also a pretty big push here coming from bilingual Japanese developers to get their team comfortable with English and reading primary sources. In a way, every time I write a translation for someone, I prevent this.
Whether we should have one global community or lots of different "centers" that interact with each other / whether there should be a de facto lingua franca and whether it should be English I don't know. If you are interested in this subject please contact me privately as it's a subject I'm always happy to discuss and I'm probably drifting way off-topic for this specific issue. Personally I love the diversity of the open source community and I'm happy to be a part of any project that can help us all share ideas, work together and learn from each other as equals (i.e. not advantaged or disadvantaged by being a native or non-native speaker of X)
username_19: I can help translate English ->> German
username_20: I can Help English <-> Arabic
Ping me when you need anything
username_21: I can help translate to Spanish. Let me know if you need me.
username_22: Hello 👋
With the launch of Exercism v3, we are **closing all issues in this repository** to help give us a clean slate to detect new problems. If this issue is still relevant to Exercism v3 (e.g. it's a feature that we haven't implemented in v3, or a bug that still exists), please reopen it and we will review it and post an update on it as soon as we get chance.
Thanks for helping make Exercism better, and we hope you enjoy v3 🙂
Status: Issue closed
|
WSWCWaterDataExchange/MappingStatesDataToWaDE2.0 | 551219906 | Title: Import California water rights data
Question:
username_0: Here is the CA water rights database access portal. It does not seem to support a mass download, but we are awaiting a response from <NAME>.
https://www.waterboards.ca.gov/waterrights/water_issues/programs/ewrims/
Here is a cool dashboard for CA water rights. We need to check if the Tableau workbook includes the Excel sheet that contains all the data
https://public.tableau.com/profile/rafael.maestu#!/vizhome/WaterRightsTypesbyWatershedHUC6SENIORRIGHTS/HUC6Dashboard
Status: Issue closed
Answers:
username_1: This document "EWRIMS MASTER FLAT FILE METADATA DRAFT 1-17-20.pdf" says, "Data dictionary [is] pending", and some of the terms, e.g. LOCATION_METHOD for Sites.CoordinateMethodCV, have been entered into 'sites.csv' 'as they are'. If these terms need mapping to different values by a dictionary, then we may need to revise the Sites table later.
Facepunch/garrysmod-issues | 304358670 | Title: Nextbot BodyMoveXY() sometimes Crashing Server when using ACT_HL2MP_SWIM_ ACTs upon being shot.
Question:
username_0: It doesn't ALWAYS crash, but it's very common. I've noticed some server instances where it won't really crash, but I can usually get it to crash if I keep retrying it. Especially if they run into a corner or something.
Nextbot code straight from the wiki (which also crashes randomly), with swim animations:
```
AddCSLuaFile()
ENT.Base = "base_nextbot"
ENT.Spawnable = true
function ENT:Initialize()
self:SetModel( "models/player/charple.mdl" )
self.LoseTargetDist = 2000 -- How far the enemy has to be before we lose them
self.SearchRadius = 1000 -- How far to search for enemies
self:SetHealth(1000);
end
----------------------------------------------------
-- ENT:Get/SetEnemy()
-- Simple functions used in keeping our enemy saved
----------------------------------------------------
function ENT:SetEnemy( ent )
self.Enemy = ent
end
function ENT:GetEnemy()
return self.Enemy
end
----------------------------------------------------
-- ENT:HaveEnemy()
-- Returns true if we have a enemy
----------------------------------------------------
function ENT:HaveEnemy()
-- If our current enemy is valid
if ( self:GetEnemy() and IsValid( self:GetEnemy() ) ) then
-- If the enemy is too far
if ( self:GetRangeTo( self:GetEnemy():GetPos() ) > self.LoseTargetDist ) then
-- If the enemy is lost then call FindEnemy() to look for a new one
-- FindEnemy() will return true if an enemy is found, making this function return true
return self:FindEnemy()
-- If the enemy is dead( we have to check if its a player before we use Alive() )
elseif ( self:GetEnemy():IsPlayer() and !self:GetEnemy():Alive() ) then
return self:FindEnemy() -- Return false if the search finds nothing
end
-- The enemy is neither too far nor too dead so we can return true
return true
else
-- The enemy isn't valid so lets look for a new one
return self:FindEnemy()
end
end
----------------------------------------------------
-- ENT:FindEnemy()
-- Returns true and sets our enemy if we find one
----------------------------------------------------
function ENT:FindEnemy()
[Truncated]
self:HandleStuck()
return "stuck"
end
coroutine.yield()
end
return "ok"
end
list.Set( "NPC", "simple_nextbot", {
Name = "<NAME>",
Class = "simple_nextbot",
Category = "NextBot"
} )
```
Am I missing something here? Or is this indeed a bug? Keep in mind, I've had the NPC with swim animations for years now on my server, with nowhere near as many crashes.
Answers:
username_1: Cannot reproduce it and the dumps link is dead.
username_0: Oh sorry. I fixed the link to the dumps.
It's a very odd bug, but I should note I haven't had ANY issues since I've swapped out the swim animations with normal run ones. |
cursive-ide/cursive | 126995600 | Title: "Send X to REPL" not working
Question:
username_0: What I did:
1. Highlighted the text (+ 2 2) in my editor
2. Right clicked.
3. Selected REPL
4. Clicked Send '(+ 2 2) to REPL
What error I got in the REPL:
CompilerException java.lang.RuntimeException: Unable to resolve symbol: +
Answers:
username_1: This is not a bug. When you send forms from the editor to the REPL, by default they are executed in the namespace of the file from which you sent them, not the current namespace in the REPL. If you have not loaded that file into the REPL, then the namespace will not have been created and the symbols from `clojure.core` will not have been referred into it. You can fix this problem by doing *Tools→REPL→Load File in REPL* before sending the form to the REPL.
Status: Issue closed
username_0: Gotcha. Thanks. |
eslint/eslint | 164128134 | Title: Rule idea: no useless template literals
Question:
username_0: **When does this rule warn? Please describe and show example code:**
```javascript
// These are bad:
const foo = `${otherVar}`;
const bar = `${someFn(1, 2, 3)}`;
// They should be:
const foo = otherVar;
const bar = someFn(1, 2, 3);
```
**Is this rule preventing an error or is it stylistic?**
Stylistic.
**Why is this rule a candidate for inclusion instead of creating a custom rule?**
I noticed this pattern a bunch in a recent code review. It can easily happen if you're modifying existing code that uses template literals and don't notice that your literals have become trivial.
**Are you willing to create the rule yourself?**
Possibly. (I've implemented tslint rules but never eslint rules.)
Answers:
username_0: Worth noting that one side effect of the "useless" template literals is that they convert whatever they contain to a string. So if `someFn(1, 2, 3)` returns a number in the example above, then the two `bar`s will have different types (string and number).
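If anyone wants a starting point, here's a rough sketch of what the rule could look like (untested; the message wording and whether to also flag the string-coercion case are open choices):
```js
// ESLint rule sketch: flag template literals that wrap exactly one expression
// with no surrounding literal text, e.g. `${foo}`.
module.exports = {
  meta: {
    type: "suggestion",
    docs: { description: "disallow template literals that only wrap a single expression" },
    schema: []
  },
  create(context) {
    return {
      TemplateLiteral(node) {
        // Exactly one embedded expression and only empty "quasis" around it.
        const onlyOneExpression = node.expressions.length === 1;
        const noLiteralText = node.quasis.every(q => q.value.cooked === "");
        if (onlyOneExpression && noLiteralText) {
          context.report({
            node,
            message: "Template literal only wraps a single expression; note that removing it also removes the implicit string coercion."
          });
        }
      }
    };
  }
};
```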
username_1: My personal take is that this might not merit inclusion as a core rule unless we can see evidence that this is a serious problem (or an important component of a popular style guide). I think this would be great as a custom rule or a plugin rule. But that's just my take-- we can see what the rest of the team thinks.
username_2: As you mentioned they are not equivalent, since the template ensures you get a string.
username_2: Thanks for your interest in improving eslint. Unfortunately, it looks like consensus couldn't be reached on this issue and so I'm closing it. While we wish we'd be able to accommodate everyone's requests, we do need to prioritize. We've found that issues failing to reach consensus after 21 days tend never to reach consensus, and as such, we close those issues. This doesn't mean the idea isn't interesting, just that it's not something the team can commit to.
Status: Issue closed
|
dret/exercise | 59147562 | Title: EDF time series
Question:
username_0: Currently EDF is based on time stamps and assumes a (mostly) uniform distribution of the data points. This works reasonably well for sensor-based data such as heart rate. However, it is a poor fit for data where we have long and potentially uneven spans between data points, such as exercise classification (ran for 50min, walked for 4hrs, slept for 8hrs). If our goal is to cover this kind of data with EDF as well, then we probably need to adapt the format to also allow time ranges.
the-tcpdump-group/tcpdump | 79117995 | Title: mistagged release
Question:
username_0: https://github.com/the-tcpdump-group/tcpdump/releases
It looks like there was a typo, and the 4.7.4 release's tag is `tcpdump-1.7.4` instead!
Answers:
username_1: I'd like to see this fixed, too as my build scripts checkout the sources by release tag.
Status: Issue closed
username_2: The commit that was previously tagged 1.7.4 is now tagged 4.7.4, thank you for pointing the typo out.
username_3: I pushed this again with a pgp signed tag.
username_2: Thank you. |
adobe/aem-project-archetype | 806942347 | Title: Missing CIF config dependency in archetype v.25
Question:
username_0: ### Expected Behaviour
Project generated with archetype successfully deploys and works in AEM Cloud Service
### Actual Behaviour
Project builds and deploys successfully but pages fail to open in AEM.
### Reproduce Scenario (including but not limited to)
Generate a project with includeCommerce set to "y", deploy to AEM and open any page
#### Steps to Reproduce
1. Generate project from archetype - set includeCommerce to "y"
2. Run `mvn clean install -PautoInstallSinglePackage`
3. Navigate to site in AEM and select a page to edit
#### Platform and Version
Adobe Experience Manager 2021.1.4830.20210128T075814Z-201217
CIF Add-On 2021.02.01
#### Sample Code that illustrates the problem
#### Logs taken while reproducing problem
Caused by: java.lang.NullPointerException: null
at com.adobe.cq.commerce.core.components.internal.models.v1.storeconfigexporter.StoreConfigExporterImpl.initModel(StoreConfigExporterImpl.java:72) [com.adobe.commerce.cif.core-cif-components-core:1.7.0]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.apache.sling.models.impl.ModelAdapterFactory.invokePostConstruct(ModelAdapterFactory.java:962) [org.apache.sling.models.impl:1.4.16]
at org.apache.sling.models.impl.ModelAdapterFactory.createObject(ModelAdapterFactory.java:761) [org.apache.sling.models.impl:1.4.16]
... 186 common frames omitted
11.02.2021 19:38:06.561 *INFO* [[0:0:0:0:0:0:0:1] [1613093886529] GET /libs/wcm/core/content/components.1613093603547.json HTTP/1.1] com.day.cq.wcm.core.impl.components.ComponentServlet provided components.
#### Resolution notes
We traced the issue to the fact that the following dependency was not included:
```
<dependency>
<groupId>com.adobe.commerce.cif</groupId>
<artifactId>core-cif-components-config</artifactId>
<type>zip</type>
<version>${core.cif.components.version}</version>
</dependency>
```
Once this was added to the main pom.xml and the pom.xml in ui.apps and ui.all and embedded into the final package the error was resolved
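For anyone hitting the same problem, the embedding step looks roughly like this in the `filevault-package-maven-plugin` configuration of ui.apps/ui.all (a sketch; the `<target>` install path is project-specific and shown here only as an assumption):
```xml
<plugin>
  <groupId>org.apache.jackrabbit</groupId>
  <artifactId>filevault-package-maven-plugin</artifactId>
  <configuration>
    <embeddeds>
      <embedded>
        <groupId>com.adobe.commerce.cif</groupId>
        <artifactId>core-cif-components-config</artifactId>
        <type>zip</type>
        <!-- Hypothetical install location; use your project's container path -->
        <target>/apps/myproject-vendor-packages/application/install</target>
      </embedded>
    </embeddeds>
  </configuration>
</plugin>
```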
Answers:
username_1: @username_0 this was fixed by https://github.com/adobe/aem-project-archetype/pull/587 which will be part of the next release.
Status: Issue closed
|
tchajed/coq-tricks | 743425162 | Title: Trick suggestion
Question:
username_0: I found the `exploit` tactic useful in case where I have `H : P -> Q` in the context or as a lemma and would like to have `Q` in the context with the assumption `P` left as a subgoal. The tactic is defined in: https://github.com/AbsInt/CompCert/blob/27beb944ff6ff18ea612c116e414eb40ce1320a6/lib/Coqlib.v#L57 |
ukwa/w3act | 1016520316 | Title: W3ACT sessions expire in Firefox/Chrome
Question:
username_0: Since the latest roll-out, I've been noticing a lot of sessions that expire, which wasn't the case before.
This is the message that will pop up soon after browsing in QA ACT: https://www.webarchive.org.uk/act/wayback/archive/20210413112946/https://collectivewisdomproject.org.uk/about/

I've tested the issue in three set-ups
1. **Chrome** (not private mode) = issue seems to affect Chrome
2. **Firefox** (private mode) = seems fine, sessions don't expire
3. **Firefox** (normal mode) = issue also seems to be affecting Firefox
After the session times out, and you're asked to log back in, this happens: https://www.webarchive.org.uk/act/wayback/archive/20210413112946/https://collectivewisdomproject.org.uk/about/

**Testing Firefox (normal mode, not private)**............................. sessions expire after 1 minute, server is GMT

1. **visit target record:** https://www.webarchive.org.uk/act/targets/29155

2. **find instances:** https://www.webarchive.org.uk/act/wayback/archive/*/https://www.scottishpower.co.uk/

3. **click on instance(2018 instance):** https://www.webarchive.org.uk/act/wayback/archive/20180401020323/https://www.scottishpower.co.uk/

4. **click on a link:** https://www.webarchive.org.uk/act/wayback/archive/20180401020323mp_/https://www.scottishpower.co.uk/cancer-research-uk/

5. **Looking at another instance(2021 instance), without re-logging into ACT:** https://www.webarchive.org.uk/act/wayback/archive/20210501104047/https://www.scottishpower.co.uk/

6.visiting the same 2021 instance in **Chrome (not private mode)**: https://www.webarchive.org.uk/act/wayback/archive/20210501104047/https://www.scottishpower.co.uk/

7.Chrome prompts login

8. Going back to **Firefox (normal mode)**, logging back into ACT and visiting the same 2021 instance: https://www.webarchive.org.uk/act/wayback/archive/20210501104047/https://www.scottishpower.co.uk/

[Truncated]
4. trying a different 2021 instance in **Firefox (private mode)**: https://www.webarchive.org.uk/act/wayback/archive/20210927080230/https://www.scottishpower.co.uk/

5. trying the same 2021 instance as above, but in **Firefox (normal mode**): https://www.webarchive.org.uk/act/wayback/archive/20210927080230/https://www.scottishpower.co.uk/

**Possible issues:**
1. Firefox (private mode): the last time cookies were accessed seems OK, but it's showing GMT, not BST. Could this hour discrepancy cause sessions to end within 1 minute?

2. Sessions seem **not** to time out in private/incognito mode, only in normal viewing mode, for both Chrome and Firefox. The page also renders better in Firefox private mode (compared to normal mode), maybe because cookies for resources aren't expiring quickly?
Answers:
username_1: This is very odd.
From my investigations, it doesn't seem to be anything to do with time. I can keep browsing around for a while, and leaving the session and then continuing also works, unless I go to a particular page - that Scottish Power homepage. At some point, for reasons I don't understand, the `PLAY_SESSION` cookie that authenticates the user gets dropped.
This seems to be correlated with loading https://www.webarchive.org.uk/act/wayback/archive/20180401020323mp_/https://www.scottishpower.co.uk/ which looks a bit like this:

It's hard to tell what's going on, because so much happens concurrently in the browser, but it seems like the original website is setting a `JSESSIONID` and for some very odd reason I don't understand, this is causing the `PLAY_SESSION` cookie to get invalidated.
The only other thing I could think of was some kind of cookie overload, i.e. does your normal browser session have lots of cookies associated with archived websites, and is this causing some kind of blockage? E.g. if you clear all cookies for www.webarchive.org.uk, does it seem to work better? Is that the aspect of Private Mode that is helping?
One of the issues I've seen is that sometimes the cookies gets lost after a page has loaded, and you don't notice anything is wrong until you try to go to a new page. This makes working out what's happening more difficult, and I think that's why it seems to behave like a timeout.
Anyway, before going to far down this route, it'd be good to verify whether you've seen this for other archived websites? If it's hitting other sites that'll help triangulate what's going on.
username_0: Thanks for looking into it, Andy. I'm not sure what could be causing it either, but I have come across the issue when browsing other archived instances across different domains. I'll also clear my cookies and see if that helps. In the meantime, I'll monitor it and note down anything that stands out.
It's not too much of an issue, as I can still do my work; I wasn't sure if other ACT users are also being affected.
Webarchive cookies:

username_0: **Using Firefox private mode:**
Trying to access: https://www.webarchive.org.uk/act/wayback/archive/20211001100547/https://www.unrefugees.org.uk/

Session times out visiting link: https://www.webarchive.org.uk/act/wayback/archive/20211001100547mp_/https://www.unrefugees.org.uk/learn-more/news-and-stories/

I log back into ACT after being prompted:

Cookies for https://www.unrefugees.org.uk/ are present

I then try to visit the same instance: https://www.webarchive.org.uk/act/wayback/archive/20211001100547/https://www.unrefugees.org.uk/

Cookies for https://www.unrefugees.org.uk/ have disappeared:

username_0: Closed Firefox private, re-opened Firefox private so everything was flushed
https://www.webarchive.org.uk/act/wayback/archive/20211001100547/https://www.unrefugees.org.uk/

Visited the link that logged me out before, works fine now: https://www.webarchive.org.uk/act/wayback/archive/20211001101313/https://www.unrefugees.org.uk/learn-more/news-and-stories/

username_1: Hey @ikreymer if you get a chance could you take a look at this and see if you think we're on the right track? Unfortunately, it's going to be hard to test as this is specifically about running behind an authenticated service.
username_1: Hi @username_0, on DEV I've modified QA Wayback so the authentication cookie is returned as if it were a new cookie with every single response. I'm hoping this means the browser will consider it 'fresh' and not discard it. Please try visiting https://dev.webarchive.org.uk/act/wayback/archive/*/https://www.scottishpower.co.uk/ etc. and see if it seems better...
Status: Issue closed
username_1: This is fixed, pending rollout. |
adam-paterson/watson-tone-analyzer | 334294480 | Title: Installation issue
Question:
username_0: When running the composer command:
Could not find package username_1/watson-tone-analyzer at any version for your minimum-stability (stable). Check the package spelling or your minimum-stability
Answers:
username_1: Hi @username_0,
Thanks for your interest in the package. Sadly at this moment in time it's still a work in progress and there isn't a stable release available.
Having said that I can probably publish an alpha release up for you to test in your application. I'll notify you when this is done.
username_0: Thank you Adam.
I've managed to do what I wanted using GuzzleHttp.
But would be more than happy to use your package once finished.
Status: Issue closed
|
f-miyu/Plugin.CloudFirestore | 869744599 | Title: Question about IDocumentReference data type
Question:
username_0: I would like to ask how this works.
Does this work like a join query in SQL?
Here is my scenario.
I have a Feed class which contains Title, Body, PostedBy
PostedBy is a DocumentReference type pointing to my Users collection, which contains basic user information (display name, birthdate, etc.).
What I want to achieve is upon calling the collection "Feed" I want it to also return the Name of the user.
I just don't know how to work this out.
Can someone guide me or help me on this case? Will be much appreciated.
Answers:
username_1: Firestore doesn't support join query. You need to get each user document with the DocumentReference. |
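For example, something along these lines (a hedged sketch; the method names follow recent Plugin.CloudFirestore releases and should be treated as assumptions, and the model/property names come from the question):
```csharp
using Plugin.CloudFirestore;

// Load a feed document, then make a second round-trip to resolve PostedBy.
var feedSnapshot = await CrossCloudFirestore.Current.Instance
    .Collection("Feed")
    .Document("someFeedId")   // hypothetical document id
    .GetAsync();
var feed = feedSnapshot.ToObject<Feed>();

// PostedBy is the IDocumentReference; fetch the referenced user document.
var userSnapshot = await feed.PostedBy.GetAsync();
var user = userSnapshot.ToObject<User>();
Console.WriteLine(user.DisplayName);
```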
markdownlint/markdownlint | 673477176 | Title: Add setting to disable extension snippets
Question:
username_0: ## Describe the Enhancement:
Add a setting to disable all the snippets added by this extension.
The problem is that snippets like `markdownlint-capture`, `markdownlint-disable`, `markdownlint-enable`, ... are so long that many words can be formed from the letters of these snippets, so Intellisense is constantly popping up while you type.
VSC matches snippets against the typed letters in _any position_ of the snippet label, not just the first letters. For example, typing `male` suggests all the previous snippets ("[MA]rkdown[L]int-[E]nable").
The proposed setting could be something like:
`"markdownlint.addSnippets": false`.
This would prevent the snippets window from **constantly popping up** while typing text, which is really annoying.
### Impacted Rules:
None
## Describe the Need:
## Current Alternative
Disable intellisense. But this also disables my own markdown snippets or forces me to use `Ctrl + space` every time I want to use my snippets.
## Can We Help You Implement This?:
I don't have the skills to implement this.
Thanks for this great extension.
Raúl
Answers:
username_1: This seems like feedback for the VS Code markdownlint extension - did you mean to open it at https://github.com/username_1/vscode-markdownlint instead?
On a related note, I don't think this is something the extension has control over. Once it publishes a set of snippets (which it does statically in its configuration), I don't know offhand of any way to temporarily unpublish them. You might look for a setting in Code itself to disable snippets, although that would probably apply globally.
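For example, this user-level `settings.json` entry hides all snippet suggestions (with the caveat that it applies to every extension's snippets, not just this one):
```json
{
  "editor.snippetSuggestions": "none"
}
```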
Status: Issue closed
username_0: Sorry. You are right. I posted in the wrong repository.
Thanks for your response. |
aws/aws-sdk-php | 377270873 | Title: Memory leak occurs when SqsClient is continuously called
Question:
username_0: Executing the following code causes memory usage to grow higher and higher as requests are made.
```php
use Aws\Sqs\SqsClient;

$client = new SqsClient([
    'profile' => 'default',
    'region' => 'ap-northeast-1',
    'version' => '2012-11-05'
]);

while (true) {
    $receiveMessage = $client->receiveMessage([
        'AttributeNames' => ['SentTimestamp'],
        'MaxNumberOfMessages' => 1,
        'MessageAttributeNames' => ['All'],
        'QueueUrl' => 'http://192.168.20.31:4576/queue/test', // REQUIRED
        'Endpoint' => 'http://192.168.20.31:4576',
        'WaitTimeSeconds' => 0,
    ]);
    echo memory_get_usage();
}
```
I suspect that something obtained by the Guzzle sync method keeps a reference to promises that have not yet been scheduled.
Answers:
username_1: Hi @username_0, thanks for reaching out to us about this. This behavior sounds similar to #1645 where we found that the SDK (or one of its dependencies) does appear to have cyclic references that have to be cleaned up by PHP's garbage collector, however garbage collection does occur naturally after a certain amount of references have built up.
When running the code you provided I see that memory usage builds up from 3.29MB to about 8.19MB across ~500 iterations of the loop before PHP's garbage collection begins naturally. This is a result of the SDK favoring a faster runtime over a lighter memory footprint, however you can call `gc_collect_cycles()` manually to override this behavior. Including this function at the end of the loop resulted in the memory usage remaining at a consistent 3.29MB instead of slowly increasing to ~8MB before garbage collection begins automatically.
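Concretely, that would look like this in the loop from your report (a sketch):
```php
while (true) {
    $receiveMessage = $client->receiveMessage([
        'QueueUrl' => 'http://192.168.20.31:4576/queue/test',
        'MaxNumberOfMessages' => 1,
        'WaitTimeSeconds' => 0,
    ]);
    echo memory_get_usage(), "\n";
    // Force collection of the SDK's cyclic references on every iteration.
    gc_collect_cycles();
}
```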
username_0: Hi @username_1, thanks for your reply.
Good to know that we can manually invoke garbage collection in PHP.
That's very helpful, so I would like to try this suggestion.
Thank you very much.
username_0: I was able to confirm that the memory does not continue to increase by using `gc_collect_cycles()`.
Thank you very much.
Status: Issue closed
username_2: Is this being caused by this following issue:
https://forums.aws.amazon.com/message.jspa?messageID=695298
The short of it being something with cURL SSL verification being leaky.
Also this: https://github.com/aws/aws-sdk-php/issues/1273 |
aws/aws-sdk-java-v2 | 1091951264 | Title: user.permissionsBoundary returns NULL while retrieving information from AWS using Java SDK
Question:
username_0: ### Describe the issue
I am using AWS Java SDK v2 to list users using the code defined [https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/javav2/example_code/iam/src/main/java/com/example/iam/ListUsers.java](here) on the AWS GitHub repo.
```java
public static void listAllUsers(IamClient iam) {
    try {
        boolean done = false;
        String newMarker = null;
        while (!done) {
            ListUsersResponse response;
            ListUsersRequest request;
            if (newMarker == null) {
                request = ListUsersRequest.builder().build();
            } else {
                request = ListUsersRequest.builder()
                        .marker(newMarker).build();
            }
            response = iam.listUsers(request);
            for (User user : response.users()) {
                System.out.format("\n Retrieved user %s", user.userName());
                System.out.println("\nPermission Boundary: " + user.permissionsBoundary());
            }
            if (!response.isTruncated()) {
                done = true;
            } else {
                newMarker = response.marker();
            }
        }
    } catch (IamException e) {
        System.err.println(e);
        System.exit(1);
    }
}
```
It returns NULL for user.permissionsBoundary(). Here is the output for print statements in the above code.
```
Retrieved user jamshaid
Permission Boundary: null

Retrieved user luminadmin
Permission Boundary: null

Retrieved user test
Permission Boundary: null
```
When I run the following command in AWS CloudShell on the AWS console, it returns the PermissionBoundary for the users that have one defined.
`aws iam get-user --user-name test `
[Truncated]
### Steps to Reproduce
Assign a Permission Boundary to a user. Retrieve the permission boundary and user information using the code provided in this SDK example, and you will find that the Permission Boundary is returned as NULL.
### Current behavior
It returns NULL for the Permission Boundary
### AWS Java SDK version used
v2
### JDK version used
openjdk version "1.8.0_312" OpenJDK Runtime Environment (build 1.8.0_312-8u312-b07-0ubuntu1~21.10-b07) OpenJDK 64-Bit Server VM (build 25.312-b07, mixed mode)
### Operating System and version
Ubuntu 21.10
Answers:
username_1: Hi @username_0,
This has already been answered on [StackOverflow](https://stackoverflow.com/a/70551179).
To reiterate,
In the console you are using `get-user` and not `list-users`, which is the reason why the command is returning all the information about the user, PermissionsBoundary within it.
The output for `aws iam list-users` would match the output you are currently getting for your sample code.
You can use the [`getUser`](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/iam/IamClient.html#getUser-software.amazon.awssdk.services.iam.model.GetUserRequest-) method to return `PermissionsBoundary` in the response.
Example:
```
GetUserRequest request = GetUserRequest.builder()
.userName("test")
.build();
GetUserResponse response = iam.getUser(request);
User user = response.user();
System.out.println("\nPermission Boundary: " + user.permissionsBoundary());
```
Status: Issue closed
|
Jozwiaczek/smart-gate | 810359312 | Title: [Bug report]: Confirm password field error text "must match"
Question:
username_0: **Describe the bug**
Error message "The password fields must match." does not show when the fields mismatch
**To Reproduce**
Steps to reproduce the behavior:
1. Type '<PASSWORD>' into password filed
2. Type '<PASSWORD>' into confirm password filed
3. Remove '7' from password filed
**Expected behavior**
The website should display the message "The password fields must match."
**Screenshots**
 |
AVEgame/AVE | 176249334 | Title: Random numbers
Question:
username_0: We should be able to use random numbers in games. Need to think about syntax
Answers:
username_1: In what context?
username_0: 1. Picking a destination room at random
Maybe `Go somewhere => R(room1,room2)` randomly picks room1 or room2 to go to?
Maybe `Go somewhere => R(room1,room2)(5,1)` goes randomly with probabilities 5/6 and 1/6.
2. Do you think there is a need for a random number (0to1) for use after `?`s? eg. `Go to secret room => secret ? R > 0.9`
3. Anywhere else we need random numbers?
username_0: Going with `__R__` for random
username_0: Done for javascript / website in daef407ad3a99db735d1bcb86bbd09863fa97926
Status: Issue closed
|
FormidableLabs/react-swipeable | 1164089708 | Title: Setup codesandbox examples
Question:
username_0: Move/relocate/repackages all examples into a new codesandbox setup so each example is it's own codesandbox that people can fork to iterate from.
Similar to `use-gesture`'s setup 😸
- https://github.com/pmndrs/use-gesture/tree/main/demo/src/sandboxes
- https://github.com/pmndrs/use-gesture/tree/main/.codesandbox |
hackingmaterials/atomate | 418166621 | Title: Gibbs Workflow -- Phonopy QHA refactor error
Question:
username_0: The phonopy QHA code was refactored in July 2018 and the variable "_max_t_index" no longer exists. It is currently used in _atomate/vasp/analysis/phonopy.py_:
```
max_t_index = phonopy_qha._qha._max_t_index
G = phonopy_qha.get_gibbs_temperature()[:max_t_index]
T = phonopy_qha._qha._temperatures[:max_t_index]
```
This needs to be updated for the current version of phonopy to be used with the Gibbs workflow (we should be able to use `len` instead).
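A hedged sketch of the `len`-based fix (attribute names are taken from the snippet above and may differ in the refactored phonopy API):
```python
# Replace the removed _max_t_index with a length-derived index.
temperatures = phonopy_qha._qha._temperatures
max_t_index = len(temperatures) - 1
G = phonopy_qha.get_gibbs_temperature()[:max_t_index]
T = temperatures[:max_t_index]
```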
Answers:
username_1: As far as I can tell, the whole `PhonopyQHA` interface has been removed/simplified and replaced with just a `QHA` object, the method signatures look similar though. In other words, this now gives the `._qha` object directly, and `._max_t_index` has become `._t_max`.
Not sure if there would be a better way of writing this analysis step without using 'protected' attributes?
username_0: @username_1 the PhonopyQHA interface is just for the API -- it has always just created a QHA object actually
username_1: Right, which is I guess why it was removed?
username_0: It hasn't been removed though?
username_1: Ah my mistake, I must have been looking at an old commit.
username_0: Haha it's alright, I thought the same thing. This is where it is defined FYI:
https://github.com/atztogo/phonopy/blob/b7cff9090ae17be41b209f6b89a6f49cd83c36ab/phonopy/api_qha.py#L38
username_2: I am trying to run a QHA analysis with phonopy and just got this error. Do I have an old version of the code or is this still an issue?
username_0: Hi @username_2, sorry for the late reply. I think this bug has *not* been fixed actually -- I don't see any commits to the code since 2017.
I ran the Gibbs workflow so long ago I'm not actually sure what edits I made to get things to work... this was also at the beginning of me starting as a grad student so I didn't do a great job contribution-wise. If my old branch might be of any use, here it is:
https://github.com/username_0/atomate/tree/gibbs
It's possible I'll revisit if/when I do phonopy stuff again, but if you are able to make a pull request with the fix that would be much appreciated :)
username_3: Basic fix from PR #751
still need to check the results in detail and add tests.
username_0: Thanks @username_3 for addressing this!! |
jchris/sofa | 2212074 | Title: Making rewrites work
Question:
username_0: In order to make rewrites work, the values of the arguments in the query part of rewrites.json have to be enclosed in `"`:
[
{
"from" : "",
"to" : "_list/index/recent-posts",
"query" : {
"descending" : "true", <---
"limit" : "10" <---
}
…
Status: Issue closed
Answers:
username_0: Cleaning up old issues. |
bcgov/ols-router | 335019719 | Title: Add support for soft road restrictions
Question:
username_0: @username_0 commented on [Mon Jun 04 2018](https://github.com/bcgov/api-specs/issues/340)
A soft road restriction is a restriction that can be mitigated by special activities. An example is allowing a wide load on a road segment by closing opposite direction and using a flag-person. MoTI must approve all mitigations.
A hard road restriction can't be mitigated (e.g., a bridge height or narrow width due to a large boulder on the shoulder).
Add a new resource that finds a route that minimizes soft road restriction violations. The result should include a list of the road segments with violated restrictions. This may require multi-criteria optimization.
GoogleChrome/workbox | 298815778 | Title: Consider a `postMessage` plugin as an alternative to `BroadcastChannel`
Question:
username_0: **Library Affected**:
workbox-build, workbox-sw
**Browser & Platform**:
Any besides Chrome, FF
**Issue or Feature Request Description**:
Hi!
Workbox has been fantastic. I have been working on an Edge-specific project using the MS Edge Windows Insiders builds. These builds do _not_ support the `BroadcastChannel` API that FF and Chrome use.
Instead, one can communicate with the Host page via `postMessage`. Also, `@angular/service-worker` uses `postMessage` as a built-in. We decided against `@angular/service-worker` for the configurability of `workbox`, but bumped into this in our transition.
It'd be great to have `workbox` have a default plugin that enables various `postMessage` communications by default. That way, we can have Cache update (via `'staleWhileRevalidate'`) notifications go to host page, without having to write our own `plugin`.
Others may benefit from this as well.
Thanks!
Answers:
username_1: We can consider it. In the meantime, if you want equivalent functionality, you can add your own [`cacheDidUpdate`](https://developers.google.com/web/tools/workbox/reference-docs/latest/module-workbox-runtime-caching.RequestWrapper#.cacheDidUpdate) plugin to your handler:
```js
const postMessagePlugin = {
cacheDidUpdate: async ({cacheName, url, oldResponse, newResponse}) => {
// Use whatever logic you want to determine whether the responses differ.
if (oldResponse && (oldResponse.headers.get('etag') !== newResponse.headers.get('etag'))) {
const clients = await self.clients.matchAll();
for (const client of clients) {
// Use whatever message body makes the most sense.
// Note that `Response` objects can't be serialized.
client.postMessage({url, cacheName});
}
}
},
};
```
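To wire this up end to end, you can pass the plugin to a strategy and listen on the page. The strategy wiring below assumes the Workbox v3-era API referenced above; the client-side listener is the standard service worker messaging API:
```js
// In the service worker: attach the plugin to a stale-while-revalidate route.
workbox.routing.registerRoute(
  /\/api\//, // hypothetical route
  workbox.strategies.staleWhileRevalidate({plugins: [postMessagePlugin]})
);

// In the page: react to cache-update notifications.
navigator.serviceWorker.addEventListener('message', (event) => {
  const {url, cacheName} = event.data;
  console.log(`Cache ${cacheName} updated its entry for ${url}`);
});
```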
username_0: Great! Thank you!!
username_2: Jeff it would be good to add this to WebFundamentals somewhere. Maybe under "[advanced recipes](https://developers.google.com/web/tools/workbox/guides/advanced-recipes)" or we could create a "custom plugins" page. What do you think?
username_1: I'm going to close this now that we have the recipe merged in our docs.
If there's strong demand to update the "official" `workbox-broadcast-cache-update` plugin to add support for `postMessage()`, then we can reconsider.
Status: Issue closed
|
xephonhq/xephon-b | 306655352 | Title: [runner] Server for remote control
Question:
username_0: Currently Xephon-B just run and stop, however, it could be a server program for
- dynamic control during benchmark
- run multiple workload at same time (just create multiple managers)
- reduce warm up time if dataset is loaded from disk
- detect memory and goroutine leak in current runner |
home-assistant/core | 1140438312 | Title: 2022.2.7 broke my hue integration
Question:
username_0: ### The problem
2022.2.7 broke my Hue integration; when I rolled back to 2022.2.6, everything worked as expected.
### What version of Home Assistant Core has the issue?
2022.2.7
### What was the last working version of Home Assistant Core?
2022.2.6
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
Philips Hue
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/hue/
### Diagnostics information
- Setup of hue was made in the ui
- 2022.2.7 hue integratio broken
- rolled back to 2022.2.6, hue integration works
- may be related to:
https://www.home-assistant.io/blog/2022/02/02/release-20222/#release-202227---february-15
Bump aiohue to version 4.1.2 ([@username_1](https://github.com/username_1) - [#66609](https://github.com/home-assistant/core/pull/66609)) ([hue docs](https://www.home-assistant.io/integrations/hue/))
### Example YAML snippet
_No response_
### Anything in the logs that might be useful for us?
_No response_
### Additional information
_No response_
Answers:
username_1: See here: https://github.com/home-assistant/core/issues/66636
For now, just enable any disabled motion sensors and you're all set.
The fix will be in .8 bugfix release
Status: Issue closed
|
john/drive.vote | 353831234 | Title: In scheduled ride form, pre-populate city_for_pickup from Ride Zone
Question:
username_0: And if the user edits city_for_pickup, simultaneously update the city_for_pickup input near 'where_to_be_dropped_off'.
Also, verify that we want the same input name in both places; I expected to see a city_for_dropoff field, or something like that. Two form elements with the same name is invalid HTML and could cause bugs.
Answers:
username_1: Should city_for_dropoff be pre-populated also or only when city_for_pickup is edited?
Status: Issue closed
username_0: This actually turned out to be confusing behavior, filing a ticket to revert. Entirely my bad call. |
mmistakes/minimal-mistakes | 416562860 | Title: Footnote links no longer working properly
Question:
username_0: ## Environment
I am using Minimal Mistakes 4.15.2 and Jekyll 3.8.5, recently upgraded from 4.12.0 and 3.8.3 respectively. For more information see the attached Gemfile.lock files for the old and new environments.
[Gemfile.lock-old.txt](https://github.com/username_1/minimal-mistakes/files/2923624/Gemfile.lock-old.txt)
[Gemfile.lock-new.txt](https://github.com/username_1/minimal-mistakes/files/2923625/Gemfile.lock-new.txt)
## Expected behavior
In pages generated under the old environment Markdown footnote links behaved as expected: clicking on a footnote numeral link in text would cause the page to scroll to the footnote in question. Once there, clicking on the reverse link arrow would cause the scroll back to the original place where the footnote was referenced.
In pages generated under the new environment this behavior is broken: clicking on either the forward link or the reverse link causes the page to scroll to the top.
I have tested this under Safari 12.0.3, Firefox 65.01, and Chrome 72.0.3626.121 on MacOS Mojave. All show the same incorrect behavior.
## Steps to reproduce the behavior
You can compare the behavior of the old and new generated pages by looking at the following two pages, the first one on my production server generated under MM 4.12.0, and the second on my staging server generated under MM 4.15.2; the underlying Markdown for the pages is unchanged from one to the other:
OLD: https://civilityandtruth.com/2018/03/22/seven-answers-social-democracy/
NEW: http://username_0.net/2018/03/22/seven-answers-social-democracy/
## Other
If and when I have time I will try to create a minimal reproducible test case. The generated HTML for the pages themselves doesn't seem to differ in the HTML code for the footnotes, so it's possible the problem is in the included CSS files.
Answers:
username_0: Update: The problem appears to be somewhere in main.min.js. If I modify assets/js on my staging server to replace the new MM 4.15.2 version of main.min.js with the old MM 4.12.0 version, then the problem goes away (after clearing the browser cache and reloading the page in question). Similarly, if I replace the old 4.12.0 version of main.min.js on my production server with the new 4.15.2 version, the problem occurs (again, after clearing the browser cache and reloading the page).
If I have time later I'll try to narrow down which line(s) in main.min.js appear to be causing the problem.
username_0: Update 2: I've narrowed this down further, by forcing use of a particular version of MM in the Gemfile on my staging server (username_0.net), doing a bundle update, regenerating the entire site from scratch, and then using the test page linked to above. (I don't use incremental update because in the past I've had problems with one of the plugins when I do that.)
I've determined that 4.14.2 is the last version for which footnote links work properly. Apparently one of the changes introduced in main.min.js for 4.15.0 broke footnote links.
I don't have time right now to debug this further. For now I'm going to stay on 4.14.2 until/unless someone finds a fix for this.
P.S. That means the test page above will *not* show the incorrect behavior. You can see it yourself by creating a page using Markdown footnote links and then generating it both in 4.14.2 and 4.15.0.
username_1: I haven’t tested this yet, but seeing how it seems to have broken in 4.15 and up, it is likely related to #2019 and #2023.
Ccing @username_2 as they did this work and might have some insight as to the issue.
username_1: I've confirmed that this is definitely a bug.
Perhaps dropping the custom hash rewriting and scrollspy implementation and going with Gumshoe as noted in #2050 would be a better solution. It seems that it doesn't break footnote links.
username_1: I put up a proof of concept that fixes the issue by using Smooth Scroll + Gumshoe. It still needs some work, as the scrollspy `.active` classes trigger as you scroll to a header but turn off as you scroll past it.
To test out the PR branch you can replace `gem "minimal-mistakes-jekyll"` in your `Gemfile` with the following:
```ruby
gem 'minimal-mistakes-jekyll', :git => 'https://github.com/username_1/minimal-mistakes.git', :branch => 'smooth-scroll-gumshoe'
```
username_0: The change you made appears to fix the problem with footnotes, at least on the browsers I tested: Chrome, Firefox, and Safari for MacOS (for versions see above) and Safari on iOS 12.1.4.
However I will note that on Chrome scrolling to the footnote and from the footnote back to the original test seems slower than normal with the fix; there is a noticeable lag until scrolling starts. (I didn't notice any substantial difference with the other browsers.)
Thanks for working on this!
username_1: The delay is part of the new Smooth Scroll script. It has different easing settings that I haven't tried adjusting yet to fine tune the scroll speed.
username_2: Sorry I haven't had a chance to look this over yet (a busy week) but thanks @username_1 for taking the lead here! Relying more on off-the-shelf packages seems like a generally good idea, so I'm happy to see this in the works. If you let me know when you tweak the settings, I'm happy to take the second test drive.
username_1: Sounds good @username_2. I'm struggling with getting Gumshoe's scroll spy to work correctly. So if you have any suggestions please let me know.
Seems like it was designed to have the anchor `id` on a `<div>` element wrapped around each TOC section. So what happens is the scrollspy hits a header with an `id` that matches the TOC link, activates the `active` class, but turns it off as soon as you scroll by the header.
I'm sure there's a way to keep it active until you hit the next `id`, but haven't figured that part out. Guessing we'd need to tap into Gumshoe's event API somehow.
username_2: @username_1 Ah, that is unfortunate. It indeed seems that [the Gumshoe code](https://github.com/cferdinandi/gumshoe/blob/master/src/js/gumshoe/gumshoe.js#L128-L138) doesn't handle this case of headers implicitly defining sections. Surprising! I've opened an issue on the Gumshoe Github: https://github.com/cferdinandi/gumshoe/issues/97
username_1: Thanks for opening the issue! I've subscribed to it and will follow along with any developments to Gumshoe.
username_1: Updated Gumshoe to `5.1.0` in #2082, which supports headers. So now the TOC correctly highlights as you scroll. If you can test it out before I merge into `master` that would be much appreciated.
username_0: I don't know if you were asking me or username_2 to test, but anyway: I did a bundle update and am now at "minimal-mistakes-jekyll 4.15.2 from https://github.com/username_1/minimal-mistakes.git (at smooth-scroll-gumshoe@28ee259)". I regenerated my staging server site at username_0.net and checked the footnote functionality; it appears to be working correctly, and scrolling is nice and responsive on Chrome, Firefox, and Safari.
I don't have any blog posts with a TOC, so can't test that.
username_2: @username_1 I don't see Gumshoe 5.1.0 on https://github.com/username_1/minimal-mistakes/pull/2082 or https://github.com/username_1/minimal-mistakes/tree/smooth-scroll-gumshoe. Perhaps you didn't push, or I'm looking in the wrong place? A commit ID would also suffice.
username_1: @username_2 Ooops! My push silently failed and I never bothered to check. Should be there now https://github.com/username_1/minimal-mistakes/pull/2082/commits/1d4923578a53d2c67d6b46cb786556e849c45406
Thanks for testing @username_0!
username_2: Thanks! I just did a quick test of my [(very long) single-page site](https://username_2.github.io/coffeescript-for-python/) (local build -- public version hasn't been updated), and here's my impression:
* Long-distance scrolling is slow and stuttery (on Chrome on Windows). Clicking on a distanct TOC item took 10+ seconds to scroll. It seems there's a limit to speed, but I'd rather have a limit on time (0.5 seconds?) like we used to have.
* Scrolling to a new section no longer changes the URL to refer to the current section. Is this intentional? It's a change from previous behavior. On the other hand, clicking on a TOC link *does* change the URL, which is a nice improvement from old versions.
username_1: @username_2 Thanks for testing.
I believe there's some easing going on with the speed of the scroll. I went with the default of `300ms` but apparently you can [adjust that](https://github.com/cferdinandi/smooth-scroll#scroll-speed).
And re: the URL bar hash no longer changing on scroll. As far as I know the new Smooth Scroll script doesn't have that feature. I'm not so concerned about losing that. In fact the old way kind of bothered me with how it added `/#` when you scrolled to the top.
username_2: @username_1 Thanks for the link. As I guessed, the `speed` setting is by default an amount of time to scroll 1000px. My document is around 40,000px, so it can take 12 seconds to scroll.
If we add the `speedAsDuration: true` option, though, then `speed` should be the total time for an animation. The default for [jquery-smooth-scroll](https://github.com/kswedberg/jquery-smooth-scroll) is 400ms, so perhaps we should use the following settings?
```
speed: 400,
speedAsDuration: true
```
Alternatively, we can set `durationMax: 500` or something like that. Then short scrolls will be faster, but long scrolls won't be too long.
I tried this on my local copy, and both approaches work: my scrolls are no longer several seconds. Unfortunately, I'm still getting a very jerky motion: I only get a couple of screen refreshes per second.
username_2: I just tried disabling Gumshoe, and the scroll is nice and smooth again. In the code I wrote, I disabled Gumshoe during smooth scrolls, exactly for performance. Maybe we'll need to do this manually again.
The problem seems to be that Gumshoe's [`scrollHandler`](_site/assets/js/main.min.js) uses `window.requestAnimationFrame` instead of `debounce`, which means that it is running on every frame of the scroll animation, instead of realizing that scrolling is still happening and waiting for that to stop.
username_2: I tried to reproduce the problem outside this theme, but it doesn't seem to happen with just Gumshoe + SmoothScroll. A performance analysis with Chrome suggests that it's entirely caused by FontAwesome's `data-search-pseudo-elements` feature. Do you use that in this theme? If not, I think removing it would remove the bad performance I'm seeing.
username_1: @username_2 I'm cool with changing the speed to more closely match jquery-smooth-scroll defaults.
Are you able to push commits to my pull request #2082 and add your code that disables Gumshoe during smooth scrolls?
username_1: @username_2 I don't personally use that feature of FA, but someone recently asked for it. I only added it because it didn't seem to have any adverse impact on performance.
Might have to reconsider that one as I don't think majority of the theme's users care or use FA icons as pseudo elements.
username_1: @username_2 I removed that attribute from the FA script and things look better on my end too. Think I'll leave it out. It's easy enough to override the theme's `_includes/script.html` file if someone really needs to add it back.
As far as I'm concerned it's an edge case and I'd rather improve performance for the majority of users who are using the theme unmodified.
Thanks for debugging this one for me!
username_2: Cool, thanks! I just tested b9c0461eb3af8ec8e435882cdbe747839937b72f and it looks good to me!
username_3: Here you have another online example with the smooth-scroll-gumshoe branch (at b9c0461eb3af8ec8e435882cdbe747839937b72f) with TOC and footnotes: https://gnss-sdr.org/docs/tutorials/gnss-signals
From what I tested (those features are widely used on that website), it works great, with some slight adjustment to the scroll speed. Repo at https://github.com/gnss-sdr/geniuss-place
Status: Issue closed
|
goharbor/harbor | 1119180630 | Title: Tag Retention Policy with all projects
Question:
username_0: Hi,
I would like to create a tag retention rule for all projects, actually I have 2 rules for some projects but I did it manually.
- For the repositories matching **, retain the most recently pushed 5 artifacts with tags excluding v* with untagged
- For the repositories matching **, retain the most recently pushed 10 artifacts with tags matching v* with untagged
But I want this for all project or an automatic deployment of the rules for new projects.
I read the doc and saw the API swagger but I'm still stuck on this parameters, has anyone managed to do this?
The version of Harbor is not a problem but currently I am in 2.0.2
Thank you ^^
Answers:
username_1: We only support project-level retention currently, not system-level.
Status: Issue closed
|
smanders/externpro | 1075824624 | Title: coverage target(s) only created once
Question:
username_0: we have a project that ends up calling `xpSetFlagsGccDebug()` twice, and a check should be added so that the custom targets in this macro are only created once
see discussion https://isrhub.usurf.usu.edu/jcoppin/TestPlugins/pull/1#discussion_r96501
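A minimal guard along these lines should do it (a sketch; the `coverage` target name and command are illustrative, not necessarily what externpro uses):
```cmake
macro(xpSetFlagsGccDebug)
  # ... existing debug-flag logic ...
  # Only create the custom targets the first time this macro is invoked.
  if(NOT TARGET coverage)
    add_custom_target(coverage
      COMMAND ${CMAKE_COMMAND} -E echo "run coverage tooling here"
      COMMENT "Generating coverage report"
      )
  endif()
endmacro()
```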
Status: Issue closed
Answers:
username_0: completed with commit referenced above |
jlippold/tweakCompatible | 449968477 | Title: `CallBlocker` notworking on iOS 12.1
Question:
username_0: ```
{
"packageId": "com.imkpatil.callblocker",
"action": "notworking",
"userInfo": {
"arch32": false,
"packageId": "com.imkpatil.callblocker",
"deviceId": "iPhone10,5",
"url": "http://cydia.saurik.com/package/com.imkpatil.callblocker/",
"iOSVersion": "12.1",
"packageVersionIndexed": true,
"packageName": "CallBlocker",
"category": "Tweaks",
"repository": "Packix",
"name": "CallBlocker",
"installed": "1.2",
"packageIndexed": true,
"packageStatusExplaination": "This package version has been marked as Working based on feedback from users in the community. The current positive rating is 100% with 1 working reports.",
"id": "com.imkpatil.callblocker",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.5",
"shortDescription": "Block annoying Callers!",
"latest": "1.2",
"author": "<NAME>",
"packageStatus": "Working"
},
"base64": "<KEY>
"chosenStatus": "notworking",
"notes": ""
}
``` |
cronvel/terminal-kit | 797555883 | Title: Update Typings
Question:
username_0: Is it possible to update the typings for this project?
The latest ones available were updated 10 months ago and seem to be very outdated.
Thank ya
Answers:
username_1: @username_0 I don't use typescript, so typings have to be maintained by the community. I'm open to a PR ;)
Status: Issue closed
|
ipfs/notes | 596365070 | Title: Equiping the Hydra-Booster with the BFR (Accelarate the .Provide for really large files -> millions of records)
Question:
username_0: Our baby hydra -- https://github.com/libp2p/hydra-booster -- is growing up to become a super useful type of node that can accelerate significantly the .FindPeers and .FindProviders in the IPFS network.
**What is missing to complete the full picture, is the ability o accelerate the .Provide queries** as well, so that nodes that are storing lot of data can tell the network that they storing without incurring in a huge time and bandwidth cost.
The particular challenge with providing a large file, is that you need to provide one record for each block (each IPLD node) to support random access to the file. Just for reference, a 100GB file transforms roughly into a 1M blocks when adding to IPFS. That means that 1M different records have to be put in the DHT in several different locations.
What makes things worse is that we end up crawling the DHT multiple times to keep finding nodes that match the "XOR metric closest peers to CID", sometimes resulting in having to dial the same peer multiple times. This is highly inefficient.
A way to improve this (that has been proposed) is to have nodes with very large routing tables, so that the number of hops from the provider to the node that will be hosting the provider record is 1~3 hops max. This does improve things but is still not ideal, especially for services that provide IPFS pinning, as they will have to dial a ton of times to the network to put those records.
So, a question arises: **What if they were already all over the network?** That's where the hydra-booster comes in. A hydra node is already everywhere (due to its benevolent sybil attack) and can answer .FindProviders queries from multiple locations simultaneously.
If we pair a Pinning Service with a Hydra node, the pinning service would only have to tell the hydra node of its records and then let the hydra node do its job. That's it. For this, what we would need is a **special flag that would tell an ipfs node to put all the provider records in a hydra node**.
This would be the first step; the second would be to add a [Thermal Dissipation load balancing strategy](https://github.com/libp2p/go-libp2p-kad-dht/issues/345#issuecomment-564608479) that replicates records to the closest peers. What this enables is for the Hydra Nodes to replicate the record to the closest peers of each of its sybils (i.e. each hydra head), so that nodes in that neighborhood have copies of the record as well, increasing redundancy and resilience to churn.
Answers:
username_1: Maybe do this without a special flag, but instead have the hydra send a special flag and a unique ID for the whole hydra?
This would allow a client to mark this connection as more important, avoiding that it will be closed soon after the first provide.
And if there's a larger queue of stuff to provide, the node could ask the hydra for a list of the node IDs for their heads, with a special query.
This would allow addressing a hydra with just one persistent connection for the whole provide process across all of its heads. If there are enough hydras in the network, this would drastically reduce the number of connections that need to be established and terminated, and crawling the DHT would get much quicker too. This is without losing the fallback of connecting to random nodes, which keeps the DHT healthy and worth all the trouble, as well as avoiding building centralized infrastructure.
If you're a pinning service and you run some hydras, you can just add a random node-id of each hydra to your bootstrap or your persistent connection list, and if the heads of the hydras are distributed enough and there are enough of them, you would end up with the same result, but without having to rely on a single hydra doing the node's job for it. |
harmony-one/go-sdk | 533226902 | Title: make all the commands use RunE
Question:
username_0: I want that when the CLI errors, whatever error is presented gets wrapped with the version and commit (this skips the need to ask for the version separately).
See the bottom of root.go, which has:
```go
fmt.Println(errors.Wrapf(err, VersionWrapDump).Error())
```
but this introduces some redundancy.
Refactor all the Run callbacks on the cmds to be RunE instead, so that every error eventually flows back to just one place, as sketched below.<issue_closed>
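A minimal sketch of the shape this refactor takes with cobra (the command name and `doWork` are hypothetical; `VersionWrapDump` stands in for the format string from root.go):
```go
package main

import (
	"fmt"
	"os"

	"github.com/pkg/errors"
	"github.com/spf13/cobra"
)

// VersionWrapDump stands in for the version/commit format string in root.go.
var VersionWrapDump = "hmy version/commit info"

var rootCmd = &cobra.Command{
	Use: "hmy", // hypothetical command name
	RunE: func(cmd *cobra.Command, args []string) error {
		// do the work and bubble the error up instead of printing it here
		return doWork(args)
	},
}

// doWork is a placeholder for the real command logic.
func doWork(args []string) error { return nil }

func main() {
	if err := rootCmd.Execute(); err != nil {
		// the single place where every error gets wrapped with version/commit
		fmt.Println(errors.Wrapf(err, VersionWrapDump).Error())
		os.Exit(1)
	}
}
```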
Status: Issue closed |
xiao8579/QuickCapture-Testing | 847141508 | Title: Additional Camera Options
Question:
username_0: **Describe the enhancement**
Would like to be able to choose a different camera than just the phone I'm using. If I'm connected to a 360 camera through WIFI, I would like to be able to choose that camera. Mostly use GoPro(Fusion,Max) to take photos but other competitors like Insta360 cameras would be nice.
**Describe the user impact**
Enable us to keep all collection within ArcGIS platform.
**Describe alternatives you've considered**
Collecting photos separately and bringing them into arcgis with photo to point tool.
**Additional context**
- Similar to a request I made in a Field Maps holistic (https://github.com/doug-m/field-maps-testing/issues/6)
- Briefly spoke with Ismael in zoom session for holistic, noted that we could talk more in the future
- We take 360 photos every 1second/2seconds to capture park district trails. Would like to be able to bring these automatically into ArcGIS platform and possibly connect them to the new orientated imagery viewer. |
elanthia-online/illthorn | 777559780 | Title: Feature Request - Logging in directly from the Illthorn Client
Question:
username_0: Since we know explicitly how to connect games and clients, I would like to see as a future feature the ability to log in directly from Illthorn.
Answers:
username_1: There is the undocumented and still rough `:launch` command I started on
arcnmx/nixexprs | 1000418856 | Title: wireplumber module improvements
Question:
username_0: - [ ] wrap wireplumber/wpexec/wpctl with the appropriate env vars
- [ ] fix `scripts` interface, also move it under `lua` attr
- [ ] support `.source` and `.text` like typical file types
- [ ] decide once and for all whether to use config dir env var or `-c`
- leaning toward `-c` unless mechanisms for adding more files to the config dir are added (don't seem useful?)
- [ ] enable use with stock config (and/or allow copying into /etc)<issue_closed>
Status: Issue closed |
gggeek/phpxmlrpc | 812723347 | Title: Switch to PSR Logger
Question:
username_0: The logger [used here](https://github.com/username_1/phpxmlrpc/blob/master/src/Helper/Logger.php) is impossible to replace/override, because of the way it's implemented
```
Logger::instance()->debugMessage($message)
```
Also, its debug = 2 mode just echoes/prints the debug message to the output, which is not of much use if phpxmlrpc is used, for example, in a queue consumer.
replacing it with [PSR Logger](https://github.com/php-fig/log/blob/master/Psr/Log/LoggerInterface.php) would be much more flexible and allow users to use mainstream php loggers like Monolog.
this would also eliminate the `PhpXmlRpc\Client::$debug` variable by using proper PSR Logger loglevel methods.
Answers:
username_1: You are right that atm it is not always easy or even possible to swap out bits of the library, such as the logger, with alternative implementations.
The reason is simple: the library design predates by a wide margin the spread of DI patterns in the php ecosystem, or for that matter, PSR/Log.
In the latest releases however, some work has already been done in order to introduce DI patterns, that would allow this.
Specifically, the 'top-level' classes all implement `setLogger` and `getLogger` methods.
Which means that you can already either subclass `PhpXmlRpc\Client` and take over `getLogger`, or keep the existing Client class and add in your code a `setLogger()` call to inject a new logger to your client instances. Of course it is up to you to create a wrapper that implements the phpxmlrpc "logger api" on top of a psr logger.
I noticed that classes Charset, Http and XMLParser do not yet follow this DI pattern, so there is room for improvement there.
On the other hand, the question also stands: when injecting a logger into, say, the client, should the client propagate it its Requests? And should the Requests propagate it to their XMLParser?
Last words of feedback: this library prizes stability and backwards compatibility over anything else. So, while I am willing to make it easy or at least possible to use psr\log and DI patterns, this will not be done at the cost of BC, at least not until there is a new major version in the pipe.
WDYT?
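For reference, a sketch of such a wrapper, under the assumption that the phpxmlrpc logger API is essentially `debugMessage()` plus an error-logging method, and that the `setLogger()` support from master is available (exact method signatures should be checked against the master branch):
```php
<?php

use Psr\Log\LoggerInterface;

// Sketch of an adapter exposing the method(s) phpxmlrpc calls internally
// and forwarding them to any PSR-3 logger (e.g. Monolog).
class PsrLoggerAdapter
{
    private $logger;

    public function __construct(LoggerInterface $logger)
    {
        $this->logger = $logger;
    }

    public function debugMessage($message, $encoding = null)
    {
        $this->logger->debug($message);
    }

    public function errorLog($message)
    {
        $this->logger->error($message);
    }
}

// injection, using the setLogger() support described above:
$client = new \PhpXmlRpc\Client('https://example.com/xmlrpc');
$client->setLogger(new PsrLoggerAdapter($monolog)); // $monolog: any PSR-3 logger
```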
username_1: ps. DOH!
I just realized that the code which makes it possible to swap out the logger is only in the master branch.
Would you be willing to test if it fits your requirements?
The current plan is to release it for version 4.6.0 - but I also would like to push in a few more new features in that version, and since these are busy times for me, it might not be tagged and released for a couple of months...
username_1: ping
username_0: absolutely yes. when instantiating/servicing the client, you inject logger once and expect everything gets logged into that one instance.
i will dig into it during upcoming days.
username_1: Care to explain how you are instantiating your Client objects? Are you making it a Service (singleton) via your framework's config, or are you creating a new instance manually?
username_0: i am making the client a service (multiple instances actually). but in both service & manual instance, i assume that `Client` and `Server` are the "entry points" which should get injected the logger and propagate it into other classes like `Value`, `Request`, `Response`.
looking at it, some classes which are used as "static property singletons" like the logger should also be serviced and injected, in particular `Encoder` and perhaps everything in `Helper\`.
there are also files in `lib/` which make static calls mostly to `Server`, but they are all marked as "deprecated". so perhaps get rid of them together with most (ideally, all) static accesses.
username_1: I'll take a look at what can be done for your scenario, but don't hold your breath for breaking changes and removal of old code...
username_0: i got your point there, but maybe it's time for new major version then. i know this library started long time ago, but using it with today's frameworks by their means is quite cumbersome. it's a useful lib and imho deserves some refactoring, and to maintain BC keep it in two branches, legacy plus current one.
username_1: You are certainly not wrong with that suggestion.
However there a few other things worth pointing out:
- I am maintaining an increasingly large number of open source projects. The time I can afford to dedicate to them is not infinite. Maintaining one extra branch with a mostly different codebase is not a trivial endeavour;
- the "modernize the lib" approach is quite a slippery slope, as there is a ton of things that I would change if shackled from the chains of BC. I even started doing that a little while ago! Just look at all the "phpxmlrpc-ng" repos in my github... Sadly, that project fell by the wayside due to other stuff taking precedence
- I am not sure how popular xmlrpc is nowadays in real life. My gut feeling is that this lib is used mostly by existing projects that adopted it a long time ago. How many users would pick up a new version with a vastly different API?
username_1: Btw, version 4.6.0 has been released, with the changes which were implemented in Feb.
So definitely still relying on a lot of static methods, but it should now be possible to at least introduce usage of a PSR-3 Logger with the help of some glue code... |
mesonbuild/meson | 275028357 | Title: dependency() object returns true but library does not exist
Question:
username_0: systemd checks for a library like this:
```meson
libqrencode = dependency('libqrencode',
                         required : want_qrencode == 'true')
have = libqrencode.found()
```
On a minimal Arch Linux system this does not show anything on the console ("Dependency libqrencode found:"). The library is not installed, however, "have" is true!
When setting "method : 'pkg-config',", the check works.
Diffing the whole meson output between the minimal system and my development system (also Arch Linux based) reveals several missing dependency checks in the output...
Presumably, the minimal system is missing something which breaks the meson dependency check when using auto...
This appeared during investigation/tests around this issue:
https://github.com/systemd/systemd/issues/7367
Answers:
username_1: I can't reproduce this at all. What are the steps for reproducing this?
username_2: I can reproduce this with systemd git checkout:
```console
$ ninja -C build
ninja: Entering directory `build'
[0/1] Regenerating build files.
The Meson build system
Version: 0.43.0
Source dir: /home/fedora/src/systemd
Build dir: /home/fedora/src/systemd/build
Build type: native build
Project name: systemd
Native C compiler: cc (gcc 7.2.1)
Build machine cpu family: x86_64
Build machine cpu: x86_64
Program tools/meson-check-compilation.sh found: YES (/home/fedora/src/systemd/tools/meson-check-compilation.sh)
Program c++ found: YES (/usr/lib64/ccache/c++)
Native C++ compiler: ccache c++ (gcc 7.2.1)
Compiler for C supports argument -Wextra: YES
...
Message: maximum system UID is 999
Message: maximum system GID is 999
Library rt found: YES
Library m found: YES
Library dl found: YES
Library crypt found: YES
Found pkg-config: /usr/bin/pkg-config (1.3.10)
Dependency libapparmor found: NO
Dependency polkit-gobject-1 found: NO
Library acl found: YES
Library pam found: YES
Library pam_misc found: YES
Library gcrypt found: YES
Library gpg-error found: YES
Library bz2 found: YES
Configuring config.h using configuration
```
while on a different machine I get:
```console
+ meson /root/build -D sysvinit-path=/etc/init.d -D default-hierarchy=unified -D man=false
The Meson build system
Version: 0.43.0
Source dir: /root/src
Build dir: /root/build
Build type: native build
Project name: systemd
Native C compiler: cc (gcc 7.2.0)
Build machine cpu family: x86_64
Build machine cpu: x86_64
Message: Activated pre-commit hook
Program tools/meson-check-compilation.sh found: YES (/root/src/tools/meson-check-compilation.sh)
Program c++ found: YES (/usr/sbin/c++)
Native C++ compiler: c++ (gcc 7.2.0)
Compiler for C supports argument -Wextra: YES
...
Message: maximum system UID is 999
Message: maximum system GID is 999
Dependency threads found: YES
Library rt found: YES
Library m found: YES
[Truncated]
+Native dependency libcurl found: YES 7.57.0
+Native dependency libidn found: YES 1.33
+Native dependency libiptc found: YES 1.6.1
+Native dependency libqrencode found: YES 4.0.0
Library gcrypt found: YES
Library gpg-error found: YES
+Native dependency gnutls found: YES 3.5.16
+Native dependency libdw found: YES 0.170
+Native dependency zlib found: YES 1.2.11
Library bz2 found: YES
+Native dependency liblzma found: YES 5.2.3
+Native dependency liblz4 found: YES 1.8.0
+Native dependency xkbcommon found: YES 0.7.2
+Native dependency glib-2.0 found: YES 2.54.0
+Native dependency gobject-2.0 found: YES 2.54.0
+Native dependency gio-2.0 found: YES 2.54.0
+Native dependency dbus-1 found: YES 1.12.2
Configuring config.h using configuration
```
(the lines with + are missing from the first log)
username_1: The reason why the `dependency()` lines are missing is because those are cached. We should print `(cached)` after the line instead. I am not sure if this is the same as the original bug; the dependency is found, we just don't print it.
username_2: So... not sure if this is a bug or an unexpected feature, but after `touch meson.build && ninja -C build`, the lines for the dependencies tests are not printed again. It's not immediately obvious if the tests themselves are being performed.
It seems that they are not. I can reproduce the issue like this:
```bash
sudo dnf build-dep systemd
git clone https://github.com/systemd/systemd
meson -Dman=false build
ninja -C build
sudo dnf remove qrencode-devel
touch meson.build
ninja -C build
```
In the reconfiguration phase, qrencode is not rechecked and the build fails with `../src/journal/journal-qrcode.c:22:10: fatal error: qrencode.h: No such file or directory`.
username_2: So... I don't think caching like this makes sense. If it looks like stuff is being rechecked, it must be rechecked. Caching results between configure runs breaks everything.
username_1: Ah, if that's the bug then, yes. The reason why found-dependencies are cached is because you might set `PKG_CONFIG_PATH`, etc, during configure and not during the actual build (esp. when generating vcxproj files), so rechecking will mean they won't be found.
We also cannot reliably store those env vars because if those variables are then redefined during the build, did the user mean to re-set that or is that just set unintentionally in the environment and we should ignore it?
I do agree that the way we cache dependencies right now is not great, but we have to come up with a consistent and non-surprising way of dealing with this issue.
username_0: @username_2 with "on a different machine" you mean when using mkosi right? I think mkosi is reusing the build directory of the host. So the problem would come down to that mkosi reuses cached configure data from the host...
username_2: I also checked how meson behaves when a dependency was satisfied but stops being satisfied because a higher version is installed:
```python
libqrencode = dependency('libqrencode', version : '<3.4.4', required : true)
```
This is not handled properly either, because the cached value is used. And while one could say that removing or downgrading of dependencies requires explicit user action, *upgrading* dependencies is something that happens all the time.
username_2: I see. I think that's a work-around for a user bug. Essentially, if one wants to use PKG_CONFIG_PATH, they need to make sure that it is set to that value whenever the configure step is performed.
username_2: A related issue, maybe you can enlighten me: what's the procedure to reconfigure anything set from variables, let's say CFLAGS:
```console
$ CFLAGS=... meson configure build
→ does not work, it just prints the configuration
$ CFLAGS=... meson configure build -Dsomething-unrelated=foobar
→ does not work, the new value of $CFLAGS is ignored
```
With autotools I'd say ./configure CFLAGS=... and it'd notice the new value. I hope meson can implement something like this.
username_3: `CFLAGS` et al are converted to Meson options which you can change with `meson configure`.
In Meson we have made a conscious choice to avoid using environment variables as much as possible. Global mutable state is bad enough, but envvars and their setting is, on top of that, mostly invisible and hidden.
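For example, the `CFLAGS` you pass at first configuration become the built-in `c_args` option, which you can then change explicitly (option name assumed for the C language):
```console
$ meson configure build -Dc_args="-O2 -g"
```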
username_4: I have a fix that might solve that. Basically it checks the versions for the cached dependencies as well. But I haven't polished it up for merging yet.
https://github.com/mesonbuild/meson/pull/2581
username_1: Not necessarily. For instance, with vcxproj files, you configure in `cmd.exe`, and your Visual Studio instance will not have the same variables set. To be fair, the ideal way to solve this would be to always cache environment variables unless the user manually calls `ninja reconfigure`. I'd like to hear @username_3's thoughts on this.
username_3: I've said it before and I'll say it again: caching environment variables is a disaster area waiting to punch you in the face when you least expect it. It is very difficult to make work and be intuitive to most developers (people with 10+ years of command line dev work with Autotools et al are a minority, not a majority), especially when considering cross platform development.
Printing `(cached)` in the log is good, it makes it clear where the data is coming from.
One thing which was voiced some time in the past was to store the path and timestamp of the `.pc` file where the dependency comes from and rechecking that it is still valid when using a cached dependency. Maybe something like that would be a better solution instead?
username_1: Caching in general is a 'disaster area', especially caching with no obvious way to override it, but we still have to do it. Our dependency caching mechanism is actually already quite complicated and error-prone, so I'm thinking of ways to simplify it. Perhaps refactoring is a better option.
Checking the timestamp of the `.pc` file would solve it for pkg-config dependencies, but we'd need something else for other dependency types.
username_3: If we could never cache any dependency information but instead just always get it from the system it would be totally awesome and would reduce complexity by a fair bit. But is this something we can actually do, especially on Windows?
username_1: Yes, if we cache the environment and print it on reconfigure ;)
username_4: Sorry for hijacking the thread for a question. Would it be correct to use the vs2017 compiler just because you've opened a 2017 dev console?
What would be the correct behavior of `--backend=vs2015` in a 2017 dev console?
username_1: `--backend=vs*` control the format of the vcxproj files, and the current environment and detected compiler are used for running configuration tests. The two are independent right now, but we may decide to change that. |
regio-osm/housenumberfrontend | 312339607 | Title: if official housenumbers uploadable to osm: let user select which housenumbers in which streets
Question:
username_0: If an official housenumber list with geocoordinates is available and the addresses can be uploaded to osm: let the user select which addresses should be transferred to josm.
Until now (2018-04), only a fixed maximum of 500 addresses, sorted by street name and housenumber, are transferable.
gogap/aop | 588375692 | Title: not easy enough to use
Question:
username_0: Currently when we use aop, we have to define **beanFactory/aspect/pointcut**, and then handle _AddAdvice_ explicitly.
I think there should be a solution (feature) added to aop, so that we can use aop in golang more **easily**.
Answers:
username_1: The purpose of this project is to learn golang's `reflect` feature, and it is currently at the prototype stage.
If you have a good idea to improve this project, you could send me a PR or some pseudocode, thanks.
username_2: @username_0 Take a look at my solution [proxyz](https://github.com/username_2/proxyz), it may meet your needs. |
electrofun-smart/my-smart-home-java-mqtt | 711403938 | Title: Some classes have strange .old extensions when looking for the smart-home-key.json
Question:
username_0: Some of the classes have a strange .old extension in the filename they use for the service account key. As far as I can tell there are four such classes. If helpful, I will submit a PR removing these, as doing so solved the QUERY intent issues.
Answers:
username_1: Thanks for finding it. Yes, please do so.
Status: Issue closed
|
NewHuLe/AppUpdate | 549902823 | Title: The total length of the downloaded file comes out negative
Question:
username_0: int status = cursor.getInt(cursor.getColumnIndex(DownloadManager.COLUMN_STATUS));
long totalSize = cursor.getInt(cursor.getColumnIndex(DownloadManager.COLUMN_TOTAL_SIZE_BYTES));
long currentSize = cursor.getInt(cursor.getColumnIndex(DownloadManager.COLUMN_BYTES_DOWNLOADED_SO_FAR));
// 当前进度
int mProgress;
if (totalSize != 0) {
mProgress = (int) ((currentSize * 100) / totalSize);
} else {
mProgress = 0;
}
In your demo I only changed the layout; nothing else was touched. Your original code also uses int in the calculation, so I wonder whether it's related to the NDK CPU architecture configured in the project.
Answers:
username_1: @username_0 You need to be careful when adapting the demo; I think this problem is caused by using the int type in your progress calculation.
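In concrete terms (translating the answer into code): `Cursor.getInt()` truncates `COLUMN_TOTAL_SIZE_BYTES` to 32 bits, so files over ~2 GB wrap around to a negative value; reading with `getLong()` avoids that:
```java
// Read the byte counts as long so that files over ~2 GB do not overflow
// into a negative value (getInt() truncates the column to 32 bits).
long totalSize = cursor.getLong(
        cursor.getColumnIndex(DownloadManager.COLUMN_TOTAL_SIZE_BYTES));
long currentSize = cursor.getLong(
        cursor.getColumnIndex(DownloadManager.COLUMN_BYTES_DOWNLOADED_SO_FAR));

int mProgress = 0;
if (totalSize > 0) {
    mProgress = (int) ((currentSize * 100) / totalSize);
}
```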
Status: Issue closed
hellodigua/vue-danmaku | 791136347 | Title: When danmus is empty and loop is false, adding data to danmus shows no danmaku, but with loop set to true it appears; also, the slot cannot read the content value
Question:
username_0: `<template>`
` <div class="barrage-container">`
` <vue-danmaku ref="danmaku" :danmus="danmus" :config="config">`
<!-- 弹幕插槽(vue 2.6.0 以下请使用 slot-scope语法) -->
` <template v-slot:dm="{ index, danmu }">`
` <span class="danmu-item">111{{index}}{{ danmu.content }}</span>`
` </template>`
` </vue-danmaku>`
` </div>`
`</template>`
`<script>`
`import vueDanmaku from 'vue-danmaku'`
`import { getBarrageList } from '@/api/index'`
`export default {`
` components: {`
`vueDanmaku`
`},`
` data() {`
` return {`
` danmus: [],`
` config: {`
` slot: true,`
` channels: 5,`
` loop: false,`
` speed: 5`
` },`
` barrageList: [],`
` timer: null`
` }`
` },`
` mounted() {`
` this.getBarrageList()`
` },`
` methods: {`
` getBarrageList() {`
` this.danmus = [{ content: 222 }]`
` this.$nextTick(() => {`
` this.$refs.danmaku.play()`
` })`
` }`
` }`
`}`
`</script>`
Status: Issue closed
Answers:
username_1: First problem: when loop is false and I add data to danmus, no danmaku is displayed.
This is a bug, and it has been fixed in 0.3.4.
Status: Issue closed
username_1: This is because danmu data did not yet support asynchronous configuration; that capability was added in 0.3.5.
You can reinstall with 0.3.5 and the problem should be gone.
username_0: Can't find package vue-danmaku@^0.3.5
username_0: OK, thanks. The network at home is poor; I'll try it at the office tomorrow.
username_1: @username_0 Updated to 0.3.6; this one should work.
aws/aws-app-mesh-controller-for-k8s | 641463352 | Title: Add optional limits to injected Containers
Question:
username_0: We deploy ResourceQuotas in each of our application namespaces, in order to force teams to put memory/CPU requests on all of their pods. As a result, our injected pods cannot start, because the `proxyinit` initContainer does not specify any resources requests or limits. We request that the init container has resources and limit specified.
Answers:
username_1: we should expose configurability for cpu/memory limits as well (in addition to requests).
username_2: Optional resource limits for init and sidecar containers added here #326
Status: Issue closed
|
kubernetes/kubernetes | 567389474 | Title: kubectl cluster-info dump does not include initContainers logs
Question:
username_0: **What happened**:
Ran kubectl cluster-info dump to collect all logs
For logs with initContainer, the logs for initContainers are not included in the dump.
**What you expected to happen**:
Get all logs for running pods, including logs for initContainers even if they are already done.
**How to reproduce it (as minimally and precisely as possible)**:
kubectl cluster-info dump, when there is any pod containing initContainer
**Environment**:
- Kubernetes version (use `kubectl version`): 1.16.2
(rest is irrelevant)
Answers:
username_0: /sig cli
Status: Issue closed
username_0: fixed on: https://github.com/kubernetes/kubernetes/pull/88324 |
linebender/piet | 700723108 | Title: Web example text not showing up
Question:
username_0: Details shown in this zulip chat: [link](https://xi.zulipchat.com/#narrow/stream/255910-druid-help/topic/text.20not.20showing.20up/near/209961417)
Nothing more to add. Will follow up with a check using 0.6
Answers:
username_0: checked out to version 0.6:

username_1: Hi... I have recently had the same issue as you with druid and piet on WebAssembly, and I'm also interested in helping get this fixed. Is there anything I could do to help?
Also, could you please clarify what you mean by "checked out version 0.6"? I believe piet goes up to 0.2, and I'm having this exact problem with druid 0.6.
Thank you!
username_2: Yes, this is broken as of the recent text work. Help would definitely be welcome; it would involve digging into the text stuff in `piet-web/src/text`. Web text doesn't need to have parity with text on other platforms, but text should definitely show up!
username_1: Yes, I read something about it on Zulip, but I feel like I'm missing some context.
In particular, that remark about 0.6 working sounds like a clue, but I'm not sure I quite understand... druid 0.6 does not work for me, and there is no piet >= 0.6, right?
If you could help figure that out, I could try doing the bisection too. Alternatively, I'm available to team up with @username_0, or continue on from his work if he's no longer interested in it. Whatever comes easier to you!
username_1: Nice! Thank you for the pointer. I'll start to look into this as soon as I get some time 👍
Status: Issue closed
|
honest-cash/honestcash | 393774630 | Title: Notes on editing / deleting posts
Question:
username_0: I made a test post here: https://honest.cash/marklundeberg/test-post-123456789-270
I'm unable to delete it.
I *am* able to edit it to have a blank body and extremely short title, things which would normally be blocked for new posts.
Answers:
username_1: Thanks a lot for reporting it. We'll address the issue shortly.
username_2: @username_0 it took long to address this, sorry for that. with the PR #42 you will now be able to delete your posts.
After the PR gets merged you can navigate to /posts using your profile badge dropdown menu and clicking on my posts. the rest should be self-describing.
Thank you for reporting this. If you have more suggestions/bug reports, feel free to create an issue and we'd be happy to help.
Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2 | 582407966 | Title: Getting access token fails with "code already redeemed" exception, though code hasn't been used yet
Question:
username_0: ### Versions
ASP.NET Core 3.1, MSAL.NET 4.9.0
Thank you for your help!
All the best,
Akos
Answers:
username_1: Hi @username_0, are you running the sample as is (just updating the config file)? Or do you have extra customization?
Another question is, how did you register the application on Azure Portal? Manually or via Powershell?
Thanks
username_0: Hi @username_1,
Please ignore, we can close this issue. After posting (of course when else) I discovered that the Scope in the settings was incorrect. Nonetheless, the error message was terrible at telling me what is the problem, but that's not the samples repo's fault.
For future reference, if anyone sees the "code already redeemed" error, that does not necessarily mean that the code was actually used before. In this case it was because the scope was incorrect.
Anyhow, thank you for your time!
Status: Issue closed
username_1: @username_0 This is indeed very weird. Would you mind telling what scope were you using?
I will try to reproduce it here and send the results to the team responsible for the error messages. They might have a bug.
username_0: Sure. I was trying to use an Azure FHIR API.
Depending on the Active Directory App and the Azure FHIR API setup:
<ins>The correct scope:</ins> https://azurehealthcareapis.com/user_impersonation
<ins>The one I tried incorrectly:</ins> https://the-name-of-your-fhir-api.azurehealthcareapis.com/user_impersonation
p.s.: Sorry for the delay |
flutter/flutter | 537472781 | Title: NATIVE SIDE [JAVA] : Passing registrar from the MainActivity to external Class.
Question:
username_0: Hi guys! How can I pass the **registrar** from the MainActivity to an external class? I need it to get the **context** so that I can use it as the `Intent`'s first parameter.
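For reference, with the old v1 Android embedding you can hand the `Registrar` (or just its context) to the class's constructor. A minimal sketch with hypothetical class names:
```java
import android.content.Context;
import android.content.Intent;

import io.flutter.plugin.common.PluginRegistry;

// Hypothetical helper class that receives the registrar from MainActivity.
public class ExternalHelper {
    private final Context context;

    public ExternalHelper(PluginRegistry.Registrar registrar) {
        // registrar.context() returns the active Application/Activity context
        this.context = registrar.context();
    }

    public void openScreen() {
        // SomeActivity is a placeholder for whatever Activity you want to start
        Intent intent = new Intent(context, SomeActivity.class);
        intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
        context.startActivity(intent);
    }
}
```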
Answers:
username_1: Hi @username_0
This platform is not meant for assistance on personal code.
Please see https://flutter.dev/community for resources for asking questions like this,
you may also get some help if you post it on Stack Overflow.
Closing, as this isn't an issue with Flutter itself,
if you disagree please write in the comments and I will reopen it
Thank you
Status: Issue closed
|
department-of-veterans-affairs/va.gov-team | 757140041 | Title: COVID-19 Redirect Request
Question:
username_0: ### Instructions
- Requests for URL changes/redirects need to be submitted **AT LEAST** 2 weeks in advance. Some changes/redirects take significant amount of time and coordination to implement, so start the process as soon as you know you will need one.
- This issue will be used from initial request through implementation of the redirect to ensure all individuals working on this are notified of status updates. Please do not create multiple issues to track different steps.
- It is your responsibility to notify VA stakeholders of pending redirect.
### Description
We need to secure a vanity url and variations for the covid-19 vaccine page. Worked with Mikki on this already, as it is an urgent need.
### Requestor team info
Product: @batemapf
Content: @lalexanderson-dsva @username_0
### Implementation date
Date new URL(s) will be live: Monday, 12/7
Please indicate when the change/redirect(s) needs to be implemented"
- [X] On the same date the new URLs launch
- [ ] Can happen within 1-5 business days after new URLs launch
- [ ] Other - Please indicate timing :
**The requesting team is responsible for notifying the group working on this issue if the target date changes. They are also responsible for ensuring the destination URLs are implemented correctly and live at the time the redirects are deployed.**
### Redirects needed
Current URL | Redirect Destination or New URL
--- | ---
*www.va.gov/health-care/covid-19-vaccine* (will be up Monday) | *www.va.gov/covid-19-vaccine, www.va.gov/covid-vaccine, www.va.gov/coronavirus-vaccine, www.va.gov/covid19-vaccine*
### Definition of done
- [ ] Above information is provided and issue is tagged and assigned appropriately - *@ requesting team*
- [ ] All appropriate VA stakeholders are notified of pending redirect - *@ requesting team*
- [ ] Request is vetted and documented and implementation plan is clear - *@ Content & IA team*
- [ ] Request is assigned to appropriate team for implementation - *@ Content & IA team*
- [ ] Implementation team completes work - *@ Implementation team*
- [ ] Implementation pushed live and redirect is validated in production - *@ all*
- [ ] Ticket is closed - *@ requesting team*
Answers:
username_1: Updated the table to clarify which is the destination url and which are the ones to redirect.
These redirects are ready to go and can be implemented at any time. They do not necessarily need to wait until the page is live, but they will redirect to a 404 until it is live (which is no different than if you try to go to them right now).
username_1: @brianalloyd @username_2 @ncksllvn In the interest of time, I'm tagging you on this. These redirects for for the vaccine page going up and can be implemented at any time - even before the page goes live on monday.
username_2: I'm picking this up now
username_2: PR is up: https://github.com/department-of-veterans-affairs/devops/pull/8128
username_2: @username_1 just want to confirm, I can merge + deploy this asap?
username_1: Yes, this can be deployed. The destination page is not live yet, but should be shortly.
username_3: @username_2 @brianalloyd Can this ticket be closed out? VSP work is complete.
username_2: Yupp! 🙂 PR was merged + deployed + validated: https://github.com/department-of-veterans-affairs/devops/pull/8128
Status: Issue closed
|
visgl/deck.gl | 893444000 | Title: yarn test fails with error from Terser
Question:
username_0: #### Description
Running `yarn test` leads to the following error message:
```
1..5272
# tests 5272
# pass 5272
# ok
browser-driver: Browser Test completed in 385.7s.
browser-driver: Browser Test successful: All tests passed
Automatically collecting metrics for deck.gl-monorepo
| Version | Dist | Bundle Size | Compressed | Imports |
| --- | --- | --- | --- | --- |
ERROR in size.js from Terser
Unexpected token: punc (.) [size.js:50278,38]
wc: *: open: No such file or directory
gzip: can't stat: * (*): No such file or directory
wc: *.gz: open: No such file or directory
rm: *.gz: No such file or directory
| 8.5.0-alpha.5 | es5 | KB | KB | *
ERROR in size.js from Terser
Unexpected token: punc (.) [size.js:31584,38]
wc: *: open: No such file or directory
gzip: can't stat: * (*): No such file or directory
wc: *.gz: open: No such file or directory
rm: *.gz: No such file or directory
| 8.5.0-alpha.5 | esm | KB | KB | *
ERROR in size.js from Terser
Unexpected token: punc (.) [size.js:30169,38]
wc: *: open: No such file or directory
gzip: can't stat: * (*): No such file or directory
wc: *.gz: open: No such file or directory
rm: *.gz: No such file or directory
| 8.5.0-alpha.5 | es6 | KB | KB | *
✨ Done in 468.60s.
```
All the tests pass OK and running `yarn test fast` and `yarn test ci` is without problems.
#### Expected Behavior
`yarn test` completes without an error
#### Repro Steps
<!-- Steps to reproduce the behavior. -->
- Check our master branch into clean directory
- run `yarn`
- run `yarn test`
#### Environment
- Framework Version: 8.5.0-alpha.5
- Browser Version: -- (invoked using CLI)
- OS: Mac OS X 10.14.4, node
#### Things tried
- Reinstalling node & npm & yarn
- Manually upgrading `webpack` to `4.46.0` and `terser-webpack-plugin` to `1.4.5` (latest versions)
Answers:
username_1: Yes metrics reporting emits error and it should be fixed. But it doesn't make `yarn test` fail.
username_1: Tracking with https://github.com/uber-web/ocular/issues/362
Status: Issue closed
|
flexion/ef-cms | 1041614968 | Title: BUG: Filed and Served Stamp doesn't appear on the page
Question:
username_0: **Describe the Bug**
Looks like some scanned documents are causing the file and served stamp to fall outside of the page. Interestingly, you can see that it was added in the page preview pane (Chrome PDF Viewer). So, it's getting added, but the location is off.
**Business Impact/Reason for Severity**
It's important to be consistent for indicating when a document has been filed and served.
**In which environment did you see this bug?**
Prod / Test
**Who were you logged in as?**
Docket Clerk
**What were you doing when you discovered this bug? (Using the application, demoing, smoke tests, testing other functionality, etc.)**
Support Ticket
**To Reproduce**
Steps to reproduce the behavior:
1. Login as a docket clerk
2. File and serve a copy of the PDF available on test in the documents bucket `537e6179-210e-42da-81bc-0ebd7ae23b50`
3. Observe that the Filed And Served with Date doesn't get added to the right place at the bottom of the page. It might be falling off the page.
**Expected Behavior**
Filed & Served should consistently appear at the bottom of documents
**Actual Behavior**
Filed & Served stamp does not appear visible, except for in the preview pane
**Screenshots**


<img width="1789" alt="Screen Shot 2021-11-01 at 10 52 59 AM" src="https://user-images.githubusercontent.com/5023502/139739829-25ddb9a3-a3c6-475b-be48-c22fa12361cf.png">
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Cause of Bug, If Known**
**Process for Logging a Bug:**
* Complete the above information
* Add a severity tag (Critical, High Severity, Medium Severity or Low Severity). See below for priority definition.
[Truncated]
Process: If the unexpected results are new use cases that have been identified, but not yet built, new acceptance criteria and test cases should be captured in a new user story and prioritized by the product owner.
If the Court is not able to reproduce the bug, add the “Unable to reproduce” tag. This will provide visibility into the type of support that may be needed by the Court. In the event that the Court cannot reproduce the bug, the Court will work with Flexion to communicate what type of troubleshooting help may be needed.
## Definition of Done (Updated 4-14-21)
**Product Owner**
- [ ] Bug fix has been validated in the Court's test environment
**Engineering**
- [ ] Automated test scripts have been written
- [ ] Field level and page level validation errors (front-end and server-side) integrated and functioning
- [ ] Verify that language for docket record for internal users and external users is identical
- [ ] New screens have been added to pa11y scripts
- [ ] All new functionality verified to work with keyboard and macOS voiceover https://www.apple.com/voiceover/info/guide/_1124.html
- [ ] READMEs, other appropriate docs, JSDocs and swagger/APIs fully updated
- [ ] UI should be touch optimized and responsive for external only (functions on supported mobile devices and optimized for screen sizes as required)
- [ ] Interactors should validate entities before calling persistence methods
- [ ] Code refactored for clarity and to remove any known technical debt
- [ ] Deployed to the Court's test environment
Answers:
username_1: [537e6179-210e-42da-81bc-0ebd7ae23b50.pdf](https://github.com/flexion/ef-cms/files/7505739/537e6179-210e-42da-81bc-0ebd7ae23b50.pdf)
The PDF document in question was revealed to contain read-only form fields.
It is believed that the clerk who scanned the paper document possibly opened the PDF in an editing tool and placed these large "checkbox" fields to cover streaks and imperfections that resulted from some process of printing/scanning/copying.
These edits did indeed have the effect of making the document look much cleaner.
However, they had the unintended effect of interacting with placement of signatures and served date stamps, by obscuring them as the form fields were "above" the layer on which the signatures and stamps were placed.
Strategy is likely to be attempting to either flatten the pdf form or remove the form fields entirely if possible.
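A sketch of what the flattening approach could look like — assuming the `pdf-lib` package, which may or may not be the library actually used here:
```js
import { PDFDocument } from 'pdf-lib';

// Load the scanned PDF, flatten its form fields into the page content,
// and save the result so stamps placed afterwards are no longer obscured.
async function flattenPdfForm(pdfBytes) {
  const pdfDoc = await PDFDocument.load(pdfBytes);
  const form = pdfDoc.getForm();
  form.flatten(); // bakes the read-only "checkbox" fields into the pages
  return pdfDoc.save();
}
```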
username_1: https://github.com/ustaxcourt/ef-cms/pull/1793
username_2: I tested this on 11/16/21 and I could see the Entered and Served stamp for the SDEC document that is in 11883-21. I'd say this is fixed. Thank you!
Status: Issue closed
|
QuantEcon/QuantEcon.py | 221141634 | Title: TEST: add tests for cartesian.py
Question:
username_0: Coverage:
```
cartesian.py 34 13 62%
```
Answers:
username_1: @username_0 What is the status of this issue? We have `cartesian.py` and `gridtools.py`, and `test_gridtools.py` but no `test_cartesian.py`. See also https://github.com/QuantEcon/QuantEcon.py/issues/143#issuecomment-128095418.
username_2: @username_0 I'm trying to work on this issue - would you have time to clarify its status?
username_0: @username_1 and @username_2 thanks for bringing my attention to this. It looks like ``cartesian.py`` and ``gridtools`` should be merged. They contain the same code (just different docstrings and some formatting changes). Looks like this PR failed to remove it (https://github.com/QuantEcon/QuantEcon.py/pull/149). As we have migrated to using ``grid_tools.py`` I will submit a PR to remove ``cartesian`` and update the appropriate docs page.
Status: Issue closed
username_0: This can be closed as `cartesian.py` is now removed. |
ajoberstar/grgit | 596983249 | Title: No repository found
Question:
username_0: CODE: Within the build.gradle file, line 32 states...
```groovy
hqBranch = grgit.branch.current().name
```
This is everything leading up to line 32:
```groovy
plugins {
    id 'org.username_1.grgit' version '2.3.0'
}

allprojects {
    apply plugin: 'java'

    sourceCompatibility = 1.8
    targetCompatibility = 1.8

    repositories {
        mavenLocal()
        maven {
            url 'https://github.com/MegaMek/mavenrepo/raw/master'
        }
        jcenter()
    }
}

subprojects {
    group = 'org.megamek'
    version = '0.47.5'
}

ext {
    hqGitRoot = 'https://github.com/MegaMek/mekhq.git'
    mmGitRoot = 'https://github.com/MegaMek/megamek.git'
    mmlGitRoot = 'https://github.com/MegaMek/megameklab.git'

    // Work on MML or MHQ sometimes requires changes in MM as well. The maven publishing tasks use
    // these properties to append the branch name to the artifact id if the repo is not in the master
    // branch, making it available separately to the child project.
    hqBranch = grgit.branch.current().name
```
I understand it can't get 'branch' property on the grgit object because accessing grgit caused an NPE. I'm guessing the problem is because the expected org.username_1.grgit (version '2.3.0') plugin listed on line 2 isn't part of the build package, nor did it pull it from the web.
QUESTION 1: Does grgit need to be defined somewhere in this build file, or will the grgit object be defined once the plugin is properly available?
QUESTION 2: How do I install org.username_1.grgit plugin for gradle, or how do I correct the build.gradle file to correctly access the git repository?
Thanks in advance for your forthcoming wisdom. :)
Answers:
username_1: Did you clone the repo or download as a zip? That message usually indicates that you aren't in a git repo.
username_0: I downloaded the build as a zip & extracted all. I installed gradle. I am not familiar with the term "clone the repo".
username_0: I chose the "Source code (zip)" option of listed assets for the v0.47.5 Development Snapshot
username_0: I'll research how to clone the git repository and give that a try. If you have any other tips, I'm all ears. I'm willing to mark this as closed if you think that's my only problem.
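"Cloning the repo" just means checking the project out with git rather than downloading a zip; that keeps the `.git` directory that grgit needs to read the current branch. Roughly, assuming the project ships a Gradle wrapper:
```bash
git clone https://github.com/MegaMek/mekhq.git
cd mekhq
./gradlew build
```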
Status: Issue closed
|
ruflin/Elastica | 317184408 | Title: Class for Elasticsearch Task Management API
Question:
username_0: Elastica doesn't provide a class for Task Management API
At a minimum, the class should provide
- getting the list of currently running tasks (GET _tasks)
- getting task by id (GET _tasks/node_id:task_id )
- providing a possibility to check if a task is completed
Future improvements should be easily possible, to include other features from Task Management API, e.g.
- specifying filters and options when getting the list
- specifying options when getting a specific task
- canceling a task
Answers:
username_1: +1 on having support for the task API. Interested to contribute it?
Side note: Elastica can already be used to access any API endpoint by just using raw queries or the underlying php-elasticsearch client; it's just not with nice objects.
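To illustrate that side note, something along these lines should already work via raw requests (the response handling is a sketch, and the task id is just an example):
```php
<?php

use Elastica\Client;
use Elastica\Request;

$client = new Client(['host' => 'localhost', 'port' => 9200]);

// list currently running tasks via the raw endpoint
$response = $client->request('_tasks', Request::GET);
$tasks = $response->getData();

// fetch a single task by node_id:task_id (example id) and check completion
$task = $client->request('_tasks/oTUltX4IQMOUUVeiohTt8A:12345', Request::GET)->getData();
$completed = isset($task['completed']) ? $task['completed'] : false;
```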
username_0: Yes, starting to work on that.
Thanks for the helpful side note!
username_1: Great to hear. Feel free to open a PR early if you need some feedback. |
artic92/configurations | 618596514 | Title: backup keepassxc binaries
Question:
username_0: It's better to store an executable version of keepassxc to retain access even in a scenario where no internet is available. Moreover, it would also be advisable to clone the source code, so as to build it on demand.
thewca/wca-regulations | 545990507 | Title: Minor (but important?) fixes
Question:
username_0: 12a2 "n may be also be omitted" should be changed to "n may also be omitted"
a2d1 What happens if there isn't a signature from the scrambler/scramble checker? Is that intentionally not stated because this has just been introduced? I can't quite tell if this really is mandatory or not.
a5a This means that if someone other than these people starts talking to a competitor they cannot do anything to ask them to stop other than ask the judge to do so. I recommend adding a point about allowing some involuntary remark that does not aid the competitor in any way at the discretion of the judge.
a6h it should say that no one except for a WCA delegate is allowed to even touch the puzzle in case of a dispute. I heard people say that, but it seems like it's not actually in the regs, you just can't align the puzzle.
a7b1 replace the / from "before/starting" with a space
a7c2 Just a question, is this mandatory? Cause I've never seen this enforced.
a7c4 I have seen multiple instances of delegates allowing a signature to be made after the fact, so the reg should probably be updated to include sportsmanship and what not.
b1b should it say that the blindfold must not be see through?
b4 should it instead say that the memo phase starts and a sight blocker is placed AFTER the first move is applied?
f3a but they can rotate the gears/clocks?
h1a3 shouldn't it be "arranged in A shape"?
Answers:
username_1: Hi username_0,
Thank you for the issues. I will answer you as much as I can.
`12a2` Good catch!
`A2d1` Because there is a "must" in the sentence. It is mandatory to do so. However, at this moment, a missing signature is not punished, but the WCA Delegate should do their best to prevent this. You can refer to the [Guideline](https://www.worldcubeassociation.org/regulations/guidelines.html#A2d1+) for more information.
`A5a` If someone is talking to the competitors during their solves, the judge should stop that. Or the competitors can just ask for extra attempts.
`A6h` Actually, even the WCA Delegate should not touch the puzzle before it's resolved, so it is not written in the Regulations.
`A7b1` The before or starting is actually correct, because the inspection phase and the "starting" (hand position, touching puzzle...etc) of the solve are written in the same penalty area.
`A7c2` It is mandatory, and it is enforced. You must only sign it AFTER the solve.
`A7c4` It is allowed to sign after the attempt but it must be done before the end of the competition. Please refer to [A7c4+](https://www.worldcubeassociation.org/regulations/guidelines.html#A7c4+)
`B1b` Please see [B4c1](https://www.worldcubeassociation.org/regulations/#B4c1) and [B1b+](https://www.worldcubeassociation.org/regulations/guidelines.html#B1b+)
`B4` I thought there was already a statement for that. We will check again.
`F3a` Of course not, rotating a gear is applying moves, and F3a exists because pins are not considered in a solved state.
`H1a3` I am not a native speaker, so it might need others to clarify this one.
Thank you very much for these questions and suggestions.
username_2: `H1a3` Another good catch by @username_0. It should be "arranged in _a_ shape" not "arranged in shape".
username_0: Sorry for some of the questions, because I found the answers to a few of them after I reread the guidelines. There are still some unresolved issues though.
A6h I still think it should implicitly say that nobody can touch the cube instead of adjust/rotate
A7b1 should still be fixed to something like "before the solve/while starting the solve"
F3a I was unable to find where it says that a gear/clock turn is considered a turn and thus will be penalized by a DNF
username_1: Don't be sorry, discussing is a good thing.
`A6h` It already means no body can touch based on `A6g`, so I think it is fine at this moment.
`A7b1` Personally, I think both are okay and the current one is cleaner. If you strongly think the current one should be modified, you are welcome to add another issue just for this one. Other team members can make decision.
`F3a` If you read `12g3` , `A3c1`, then you will know moving a wheel/gear is considered making a move.
username_0: Oh, thank you, 12g3 is what I was missing.
username_1: For the `B4` question, I found the regulations it's [B4e](https://www.worldcubeassociation.org/regulations/#B4e)
username_0: I am aware of this, but this means that the judge would have to quickly remove the sheet again and inexperienced judges would be even more confused. I think it would be simpler to tell them to put up the sheet after the first move is applied.
username_1: I think if we need to put it in, it would be better in the Guideline. Could you suggest a guideline for this?
username_0: "ADDITION The judge should wait for the competitor to make their first move before putting up the sheet to avoid hindering their attempt if they choose to take off their blindfold and look at the puzzle again."
Status: Issue closed
|
decred/dcrdex | 587029666 | Title: server/db: store match secret
Question:
username_0: Related to #209, see [previous conversation](https://github.com/decred/dcrdex/pull/209#discussion_r392240227).
The secret is part of the `init` and `audit` serializations, so should be stored.
Answers:
username_1: OK. For the record, the reason the secret is now relayed by the server is to facilitate SPV clients whose wallets will not have the counterparty's swap contract. Is that right? I'm still not sure how the clients are supposed to audit the contract if they cannot pull the transaction from the network independently of the server. Anyway, @username_2 said that the wallet will have the transaction if the redeem script is imported, so perhaps that is another way to approach this.
username_0: Any SPV implementation would need to figure out how to implement a `FindRedemption` method.
username_1: I got side tracked on contract auditing, sorry, but I'm a little fuzzy on how the client negotiation currently handles both auditing of the contract and secret extraction from the redemption, despite having just reviewed all that.
1. Contract auditing. The participant/taker needs to check the contract script for the secret hash, lock time, and redeeming addresses, and check the contract output amount. The client has to be able to get this info without the server's help, or at least with just the contract txid from the server to spare the client from having to search for it, and this means being able to pull a transaction that the wallet may not index.
2. Secret extraction from the redemption. The participant must be able to pull (and possibly find as you point out) the initiator's redeem txn. But if the server at least provides the redeem txid and the client can pull this redeem txn, then they can extract the secret from the input's signature script. Unlike with the contract, where the client really must be able to get the raw transaction for themselves, this can be skipped by just checking the hash of the secret against the known secret hash from (1.).
This is why I'm fixating on the contract auditing, because that is required to not trust the server to provide the correct secret hash. Sorry if I'm being particularly dense about something we've already discussed, but I have a hard time understanding how the contract auditing is achieved trustlessly if the client can't pull arbitrary txns (or import the redeem script).
username_1: So I think the bit I was missing is that even with `txindex=0`, `gettxout` always works because the UTXO set is always available for query.
username_2: If the server is backed by a full node, it can provide the merkle proof to clients that the transaction is mined in a block header than a SPV client wallet attached.
Status: Issue closed
|
stencila/stencila | 138844163 | Title: Cell expression editor
Question:
username_0: Currently, the interface for cell editing is basic and not suited to long cell expressions. This is perhaps the biggest issue for usability of sheets at present. For example, the same expression...
...in Stencila sheets:

...in Google sheets:

Cell editing features wish list:
- [ ] editing area to expand to fit expression
- [ ] multi-line editing (especially useful for rich text cells)
- [ ] colour coding of cell ids and ranges (like Excel and Google Sheets)
- [ ] insertion of cell ids when selecting another cell, insertion of cell ranges when dragging over other cells (like most other spreadsheet software)<issue_closed>
Status: Issue closed |
haltu/muuri | 288756066 | Title: How to add custom drag handle?
Question:
username_0: Hi,
Thanks for the great library! Wanted to know if we can implement a drag handle instead of having the whole "item" draggable?
Thanks,
Abhi
Answers:
username_1: Yep, there is an option for a handle. Check out [dragStartPredicate](https://github.com/haltu/muuri/blob/master/README.md#dragstartpredicate-)
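For reference, a sketch of the handle setup per the dragStartPredicate docs linked above (the selector name is just an example):
```js
var grid = new Muuri('.grid', {
  dragEnabled: true,
  // only start a drag when the press begins on the item's .drag-handle element
  dragStartPredicate: {
    handle: '.drag-handle'
  }
});
```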
username_0: Thanks, it works.
Status: Issue closed
|
Kurento/bugtracker | 794968945 | Title: RTCPeerConnection is gone (did you enter Offline mode?)
Question:
username_0: On disposing of a participant, I get the following error in the console every time:
"**Unhandled Promise rejection: RTCPeerConnection is gone (did you enter Offline mode?) ; Zone: <root> ; Task: Promise.then ; Value: DOMException: RTCPeerConnection is gone (did you enter Offline mode?) shimPeerConnection/window.RTCPeerConnection.prototype.getStats@http://my-url-to-adapter.js**"
I am facing the issue after updating the kurento-utils.js file from 6.13 to 6.15.
Kindly help me with how I can remove this error.
aws/aws-cli | 543061410 | Title: describe-images with owners timeout bug
Question:
username_0: For some reason, when I filter these images with owners, it hangs for a very long time. Without ownerId, it's fine. This test is on Ubuntu 16.
```
vagrant@openfirehawkserverdev:/vagrant$ aws ec2 describe-images --filters "Name=description,Values=SoftNAS Cloud Platinum - Consumption (For Lower Compute Requirements) - 4.3.0" --region ap-southeast-2 --owners 679593333241
^C
vagrant@openfirehawkserverdev:/vagrant$ aws ec2 describe-images --filters "Name=description,Values=SoftNAS Cloud Platinum - Consumption (For Lower Compute Requirements) - 4.3.0" --region ap-southeast-2
{
"Images": [
{
"ProductCodes": [
{
"ProductCodeId": "da79v2cypw25ufe9ph9sudr1p",
"ProductCodeType": "marketplace"
}
],
"Description": "SoftNAS Cloud Platinum - Consumption (For Lower Compute Requirements) - 4.3.0",
"VirtualizationType": "hvm",
"Hypervisor": "xen",
"ImageOwnerAlias": "aws-marketplace",
"EnaSupport": true,
"SriovNetSupport": "simple",
"ImageId": "ami-048287b6b9f28c85c",
"State": "available",
"BlockDeviceMappings": [
{
"DeviceName": "/dev/sda1",
"Ebs": {
"SnapshotId": "snap-0471ccc1481d39fab",
"DeleteOnTermination": true,
"VolumeType": "gp2",
"VolumeSize": 100,
"Encrypted": false
}
}
],
"Architecture": "x86_64",
"ImageLocation": "aws-marketplace/SoftNAS Cloud Platinum - Consumption (For Lower Compute Requirement-e05f570a-f4b2-4b8f-a6a7-25e2c6ab682d-ami-00e7eb9798aa20cb0.4",
"RootDeviceType": "ebs",
"OwnerId": "679593333241",
"RootDeviceName": "/dev/sda1",
"CreationDate": "2019-09-10T17:55:12.000Z",
"Public": true,
"ImageType": "machine",
"Name": "SoftNAS Cloud Platinum - Consumption (For Lower Compute Requirement-e05f570a-f4b2-4b8f-a6a7-25e2c6ab682d-ami-00e7eb9798aa20cb0.4"
}
]
}
```
Answers:
username_1: I can reproduce it as well. There is not really much we can do from the CLI's perspective: these filters are part of the API request we send to EC2, so they are applied server side and the CLI is just waiting for EC2 to send a response back. I'd recommend reaching out to AWS support or the [EC2 forums](https://forums.aws.amazon.com/forum.jspa?forumID=30) to get more information on why the API is behaving this way.
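For anyone hitting the hang from a script, a hedged boto3 sketch of the same call with a client-side read timeout, so it fails fast instead of blocking while EC2 applies the filters server side (assumes boto3 is installed and credentials are configured):
```python
import boto3
from botocore.config import Config

ec2 = boto3.client(
    "ec2",
    region_name="ap-southeast-2",
    # Fail after 60s instead of hanging; disable retries so it fails once.
    config=Config(read_timeout=60, retries={"max_attempts": 0}),
)

resp = ec2.describe_images(
    Owners=["679593333241"],
    Filters=[{
        "Name": "description",
        "Values": ["SoftNAS Cloud Platinum - Consumption "
                   "(For Lower Compute Requirements) - 4.3.0"],
    }],
)
print([image["ImageId"] for image in resp["Images"]])
```
|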
richardforth/botulizer | 264836019 | Title: handle gzipped files
Question:
username_0: # python /home/**REDACTED**/botulizer.py access_ssl_log.processed.2.gz 500
Scan started on file : access_ssl_log.processed.2.gz
Please wait....
)˱»¬kCM�Àð³(ñBOT.ð(�FåÙRhTblgc�ïé�ýAĩ®÷âT]+�� | [1]
Answers:
username_0: https://docs.python.org/2/library/gzip.html
username_0: Changing the script to take data from STDIN, so that we can zcat the file first and pipe it into botulizer, which is more Unix-like behaviour (see the sketch below).
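A minimal sketch of that STDIN approach (this is not the actual botulizer code; the bot-matching regex and the histogram scaling are illustrative assumptions):
```python
#!/usr/bin/env python
import re
import sys
from collections import Counter

# Illustrative pattern only -- botulizer's real matching rules may differ.
BOT_RE = re.compile(r"([A-Za-z0-9]+bot)", re.IGNORECASE)

counts = Counter()
for line in sys.stdin:          # lines arrive already decompressed, e.g. via zcat
    match = BOT_RE.search(line)
    if match:
        counts[match.group(1)] += 1

for name, n in counts.most_common():
    # Crude histogram: roughly one '=' per three hits, like the output below.
    print("%-15s | %s [%d]" % (name, "=" * max(n // 3, 1), n))
```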
username_0: ```
$ zcat logfile.gz | ./botulizer.py
Scan started, please wait....
mj12bot | ================ [50]
baiduspider | ======= [23]
yandexbot | ===== [17]
Ahrefsbot | == [6]
PinterestBot | = [5]
Semrshbot | = [3]
```
Status: Issue closed
|
alacritty/alacritty | 903662577 | Title: Regex hinting ignoring action and new regex give errors
Question:
username_0: Updated alacritty and found out that it now supports regex hinting, very cool but I have encountered few problems.
1. If I trigger the hint with `Control+Shift+U`, it will open the URL instead of copying it to the clipboard
```yaml
hint:
alphabet: "jfkdls;ahgurieowpq"
enabled:
- regex: "(mailto:|gemini:|gopher:|https:|http:|news:|file:|git:|ssh:|ftp:)\
[^\u0000-\u001F\u007F-\u009F<>\"\\s{-}\\^⟨⟩`]+"
action: Copy
mouse:
enabled: true
mods: Control
binding:
key: U
mods: Control|Shift
```
2. Newly added regexes dont work
```yaml
# FIXME: Regex to find email
- regex: "^[a-zA-Z0-9+_.-]+@[a-zA-Z0-9.-]+"
action: Copy
mouse:
enabled: true
mods: None
binding:
key: E
mods: Control|Shift
```
Hitting `Control+Shift+E` doesn't do anything, even though the regex tests fine on [rustexp](https://rustexp.lpil.uk/)
### System
OS: ArchLinux
Version: alacritty 0.8.0 (a1b13e68)
Linux/BSD: X11, Bspwm, Xcompmgr
Answers:
username_1: Your config's broken. It's `hints`, not `hint`.
username_0: My bad. |
ChuckerTeam/chucker | 624979295 | Title: java.lang.NoSuchMethodError when adding Chucker with compileOnly in library
Question:
username_0: ## :iphone: Tech info
- Kotlin version: 1.3.72
- Chucker version: 3.2.0
## :page_facing_up: Additional context
Note: This only happens when the versions applied in `app` and `cloud` differ, that is, when one uses the no-op version and the other uses the regular one.
In the `cloud` module I cannot decide which version of Chucker to use. The version is decided per application flavour, and those flavours are only available in `app`.
Since I do the same with `implementation com.jakewharton.threetenabp:threetenabp` and `compileOnly org.threeten:threetenbp:$threetenbpVersion` I thought it might be possible to provide the final artifact at a different place (i.e. `:app`).
Answers:
username_1: tl;dr: you can't use the `library` and `library-no-op` artifacts of Chucker and do dependency shading with `compileOnly`.
The reason behind this is that the two artifacts are not binary compatible with each other (that's by design).
The signature of the `ChuckerInterceptor` class is different between the two artifacts:
**library**
https://github.com/ChuckerTeam/chucker/blob/583e395c1a94cbc7ea4ce3226161e6bf4ac4d40b/library/src/main/java/com/chuckerteam/chucker/api/ChuckerInterceptor.kt#L39-L45
**library-no-op**
https://github.com/ChuckerTeam/chucker/blob/583e395c1a94cbc7ea4ce3226161e6bf4ac4d40b/library-no-op/src/main/java/com/chuckerteam/chucker/api/ChuckerInterceptor.kt#L11-L16
(See Change type of a formal parameter [here](https://wiki.eclipse.org/Evolving_Java-based_APIs_2#Evolving_API_classes_-_API_methods_and_constructors))
So your app is failing when you're using `library-no-op` as `compileOnly` and `library` as `implementation` in `app` (or vice versa), as one of the two constructors is missing.
The `library-no-op` artifact is source compatible with `library`, but not vice versa: you can swap `library` with the `-no-op` variant and the code will still compile.
This allows compiling/assembling the `release` variant without compilation failures.
Long story short: you need to use `debugImplementation`/`releaseImplementation`.
I'm closing here.
Anyway @username_0, I'm curious to hear what the use case is behind using `compileOnly`.
Status: Issue closed
username_0: Hey @username_1, thank you very much for clarifying! I had the same intuition when I looked at the source code for both classes, but I didn't know whether this would really result in "binary incompatibility".
My use case is the following:
I'm building an application that is heavily modularised. So I have one module `:cloud` that's responsible for handling calls to the interwebs and that module is also using Chucker. But my application module `:app` is heavily customisable in the sense that I have all kinds of flavours to build the app. There is one for us internal developers, two or more for our partners (developers and non-developers, designers, etc.) and then finally the store version.
So the flavour dimensions are not only `debug`/`release` but also `develop`/`partnerDevelop`/`partnerDesign`/`store`. For `develop` and `partnerDevelop` I want Chucker enabled, but not for the rest of the flavours. These flavours are not available in my library modules, so only in `:app` can I do something like `partnerDevelopImplementation "com.github.ChuckerTeam.Chucker:library:3.2.0"`.
I think I can workaround this by passing in some build info so I can decide in `:cloud` whether to use Chucker or not.
username_1: Please note that `debug`/`release` are not flavours but build types. Those should be available in your Android library modules (also in the `cloud` module), so you should be fine using `debugImplementation` and `releaseImplementation` directly inside `cloud`.
username_0: Okay sorry, I should've clarified that part.
Yes, of course `debug`/`release` are available. But that info is not enough. There will never be a `partnerDesignStoreRelease` version. The only `release` build that's ever going to be made is of the `store` flavour.
So the usual `debug -> library` and `release -> no-op library` does not work, because even with `debug` I sometimes want the no-op library. |
numpy/numpy | 141612597 | Title: `np.multiply` vs. `*`
Question:
username_0: In the course of our work over on [dipy](https://github.com/nipy/dipy/pull/951), we stumbled into the following behavior:
The following line:
https://github.com/nipy/dipy/blob/master/dipy/denoise/shift_twist_convolution.pyx#L68
raises this error (on np 1.11, but not earlier):
```
Traceback (most recent call last):
File "contextual_enhancement.py", line 149, in <module>
sh_order=8, test_mode=True)
File "dipy/denoise/shift_twist_convolution.pyx", line 68, in dipy.denoise.shift_twist_convolution.convolve (dipy/denoise/shift_twist_convolution.c:2147)
TypeError: 'numpy.float64' object cannot be interpreted as an index
```
When changed to this:
https://github.com/stephanmeesters/dipy/blob/enhance_5d_update_pr/dipy/denoise/shift_twist_convolution.pyx#L69
This error no longer occurs. What does this error mean (it's a bit opaque in this context), and how do we avoid it in the future?
Thanks!
Answers:
username_1: What does `output_norm = output * (np.amax(odfs_dsf)/np.amax(output))` do? If you run that function under pure Python (no Cython) what does it do?
username_2: Note that float indexes will not error in Numpy 1.11; we reverted that to allow more time for downstream projects to adjust. There has been a deprecation warning out for ~1.5 years, and the error will return in 1.12.
username_3: The reason should be that `output` is actually a sequence and not an ndarray but you actually
username_3: On second thought, I really get confused and I think <NAME> is right to ask what even happens, also outside Cython.
`output` should be a typed memoryview; how is multiplication of a typed memoryview even *defined* in Cython? My first guess is that it is not, so numpy converts the memoryview to an array.
My guess is that the change does not change the output?! So for some reason the numpy scalar got first shot at trying to repeat the sequence, and then Cython multiplied elementwise? Or did numpy try to repeat the sequence and, when that failed, convert to an ndarray instead?
username_4: Here's a simplified case, without Cython (trying on py34):
```
In [8]: m = memoryview(np.ones(3))
In [9]: m * 4
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-9-a2ec8a2e7818> in <module>()
----> 1 m * 4
TypeError: unsupported operand type(s) for *: 'memoryview' and 'int'
In [10]: m * np.int64(4)
Out[10]: array([ 4., 4., 4.])
In [11]: m * np.float64(4.1)
/home/njs/.user-python3.4-64bit/bin/ipython:1: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
#!/home/njs/.user-python3.4-64bit/bin/python3.4
Out[11]: array([ 4.1, 4.1, 4.1])
```
So it looks like `memoryview.__mul__` just returns `NotImplemented` pretty much always. So then the array scalar's `__rmul__` gets called, and it does an odd thing: it seems to always coerce the memoryview to an array and perform elementwise multiplication (good), but for a float array scalar somehow in the process it triggers the `__index__` warning. Notice that this doesn't affect the final result, though -- the float is not truncated, and it never does sequence repetition (even when passed an integer).
Also note that `ndarray` itself does not hit this warning -- only the scalar does:
```
In [12]: m * np.array(4)
Out[12]: array([ 4., 4., 4.])
In [13]: m * np.array(4.0)
Out[13]: array([ 4., 4., 4.])
```
So the current mystery is, where is that warning being issued in `memoryview * np.float64(4.1)`?
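One way to pin that down (a hedged debugging sketch, not a fix, assuming a numpy version that exposes `np.VisibleDeprecationWarning`): promote the warning to an error so the traceback shows exactly which Python-level operation triggers it:
```python
import traceback
import warnings

import numpy as np

# Turn the deprecation warning into a hard error to locate its source.
warnings.simplefilter("error", np.VisibleDeprecationWarning)

m = memoryview(np.ones(3))
try:
    m * np.float64(4.1)  # now raises instead of warning
except np.VisibleDeprecationWarning:
    traceback.print_exc()  # shows the expression that triggered the warning
```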
username_3: Bah, this is crazy. What used to happen in cases such as this was this:
```
if (other object has no multiply slot):
    as_int = int(scalar)  # on failure this would error, but all numerical types can do this
    try:
        operator.repeat(other, as_int)   # Python-2-era sequence repeat
    except:
        pass
    return np.multiply(other, scalar)  # or something similar.
```
Now we want to make the `int` more strict so that `[1, 2, 3] * np.float64(3.5)` will not work, but then the `np.multiply` fallback for when the other does not even support repeating does not get executed.
My guess: We should test whether the other object claims to have a sequence repeat implemented, and otherwise directly call the numpy multiply/equivalent logic (I am a bit surprised; I somewhat thought we had a special scalar*scalar case, but that is orthogonal).
username_3: It appears we should have been giving spurious warnings for this case all along, since it probably should keep working and was never a sequence repeat here.
username_4: Wow. I didn't know that this `nb_mul`/`sq_repeat` mess could get *worse* :-(
username_3: Well, my guess is, it should be easy enough to fix, just reverse the seq logic....
username_4: Yeah, it sounds like it would be a straightforward fix to make it instead look like
```
if (other object has sq_repeat) and (I am an integer):
    call sq_repeat
else:
    ...
```
I guess during the deprecation period it would instead be
```
if (other object has sq_repeat) and (int(self) succeeds):
    if (index(self) raises an error):
        warn
    call sq_repeat
else:
    ...
```
My broader concern is that it's very, very unclear to me whether this fall-back to `sq_repeat` logic is correct at all. Should we check for `NotImplemented`? Surely it's somehow a bug that `[1, 2] * np.array(2)` and `[1, 2] * np.int64(2)` return different results, *cough* `__numpy_ufunc__` *cough*, etc. etc. I guess we don't need to deal with this broader issue right now -- we should probably make the simple fix just to get rid of the spurious warning first.
username_3: Not quite right I guess. Question is whether we want `[1, 2, 3, 4] * np.float64(3.5)` to raise or convert the list to an array.
As it currently repeats the list, I think it should be an error, so we don't have to care whether the scalar can be made an integer.
Or maybe a better argument: we must *not* care whether or not the scalar is an integer, because:
```
[1, 2, 3] * np.float64(3.)
```
must be identical to
```
[1, 2, 3] * 3
```
*unless* it raises an error. I know there are good reason why one might expect the array multiply logic in the float case, but here it just seems to fall into a dark pit of ugliness if you try to make it work. All fancier array-likes will already know how to multiply with a scalar anyway.
username_4: My knee-jerk response is that `list * numpy scalar` should always convert to an array, because I don't like inconsistencies between 0dim arrays and scalars, and:
```
In [1]: [1, 2, 3] * np.array(3.)
Out[1]: array([ 3., 6., 9.])
In [2]: [1, 2, 3] * np.array(3)
Out[2]: array([3, 6, 9])
```
(And the 0dim array behaviour I think is correct, because we obviously want this for arrays with >=1 dimension, and we'd really like to avoid adding more 0dim special cases.)
Not sure whether I trust this knee-jerk response, though.
I think it's more important that `[1, 2, 3] * np.float64(3.5)` and `[1, 2, 3] * np.float64(3.0)` do the same thing than that `[1, 2, 3] * np.float64(3.0)` and `[1, 2, 3] * np.int64(3)` do the same thing.
Note also that in pure python:
```
In [3]: [1, 2, 3] * 3.0
TypeError: can't multiply sequence by non-int of type 'float'
```
so there's definitely no reason to make `[1, 2, 3] * np.float64(3.0)` repeat except for the backcompat issue.
I guess we don't really have to decide on the final outcome right now -- the likely possibilities are either to make `[1, 2, 3] * np.float64(...)` an error, or else to make it convert-to-array... and even if we do want to make it convert-to-array, passing through an error state in between is probably a good idea, so our next step is the same in either case. I expect it is *very* rare that people are depending on the repeat behavior for numpy floats here, but still better safe than sorry.
What's less obvious is what `[1, 2, 3] * np.int64(...)` should do. I think there's a pretty reasonable argument that we might want to deprecate the repeat interpretation here, just because it's very ambiguous what should happen. But that's probably far enough from the original issue that it should be handled as a separate issue.
So I think that suggests that the immediate fix for this issue is:
```
if (other argument has sq_repeat):
    try:
        i = index(self)
    except:
        i = int(self)
        issue deprecation warning for index
    call sq_repeat
```
sound right?
username_3: Sorry, my first reaction was too far in the other direction. The 0D array is the big issue, I agree, and I doubt there is any good solution; I somehow overlooked how big the issue is. In the end, I am not sure...
But I guess we are stuck with an error on floats for now in any case for starters. And that is already the case for master. And then we could go from there to the other way around.
Since we are using a VisibleDeprecationWarning now, fixing it even in 1.11 could be nice, but maybe it's not a priority.
username_2: I don't want to do anything more for 1.11, 1.12 will be branched soon enough.
username_3: Sure. If someone else complains we can put it into a point release if there is one.
username_0: Thanks everyone for the comments. I think that we have a fix for now, and you all seem to know what's going on... My only (and possibly not-too-useful; I'm definitely out of my depth...) comment is that if this indeed keeps raising an error -- the index error was quite confusing (to me) -- it would be great if it were possible to catch this kind of thing and give users some more direct guidance. If you want to close this issue, that's OK with me, or keep it open if you want to keep it around. As always - thanks for everything that you do!
username_5: Is this still needed?
username_3: Hmmm, good question. Currently `list * float_scalar` is an error, while `list * 0D_array` gives `np.multiply` (array) behaviour.
So, there may still be the open question of making `list * float_scalar` convert to arrays, but since it is an error currently, I am wondering if it might make sense to just leave things as is right now. Are you running into this as an issue?
username_3: Close it, as the last comment said:
```
[1, 2, 3] * np.float64(4.1)
```
raises and mismatches with 0-D arrays. But that is tricky and most of the discussion is about things that are not relevant anymore. We should just open a new issue when we run into it again.
Status: Issue closed
|