repo_name | issue_id | text
---|---|---|
alberliu/gim | 552045645 | Title: This code can't actually run, can it?
Question:
username_0: // internal/logic/mq/consume/consumer.go
```
package consume
import (
"gim/conf"
"gim/logic/db"
"gim/public/imctx"
"time"
"github.com/nsqio/go-nsq"
)
```
Where are these three packages: conf, logic, and public?
Answers:
username_1: Although that code exists, it isn't referenced anywhere, so compilation isn't affected on my end
username_1: That block of code has been deleted; please update and try again
Status: Issue closed
|
rossfuhrman/_why_the_lucky_markov | 372145259 | Title: Tiger’s Vest [! I Hope For Your Success and My Hair is On End About This and Dreams Really Do Come True Earlier, I mentioned that attr_reader adds reader methods, but not undefined.
Question:
username_0: Toot: Tiger’s Vest [! I Hope For Your Success and My Hair is On End About This and Dreams Really Do Come True Earlier, I mentioned that attr_reader adds reader methods, but not undefined.
One comment = 1 upvote. Sometime after this gets 2 upvotes, it will be posted to the main account at https://mastodon.xyz/@_why_toots |
edelstone/tints-and-shades | 506776427 | Title: Make hex code copying work with enter key and/or spacebar
Question:
username_0: Recently we made the hex color table cells tab-able, but the enter key or spacebar don't initiate a copy like a click of the mouse would. It would be good for usability and accessibility if this functionality existed.
Here is a basic example of how to do this with some JavaScript: https://www.w3.org/TR/wai-aria-practices/examples/button/button.html
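A minimal TypeScript sketch of that pattern (the `td.hex` selector and the `copyHex` helper are illustrative assumptions, not this project's actual code):
```ts
// Mirror the W3C button pattern: Enter and Space should trigger the
// same action as a click on the focusable cell.
function copyHex(cell: HTMLElement): void {
  void navigator.clipboard.writeText(cell.textContent ?? "");
}

document.querySelectorAll<HTMLElement>("td.hex").forEach((cell) => {
  cell.addEventListener("click", () => copyHex(cell));
  cell.addEventListener("keydown", (event: KeyboardEvent) => {
    if (event.key === "Enter" || event.key === " ") {
      event.preventDefault(); // keep Space from scrolling the page
      copyHex(cell);
    }
  });
});
```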
Answers:
username_1: I would be happy to work on this
username_0: Sounds good to me. Let's wrap up #11 and you can circle back if you want. Always appreciate the help.
Status: Issue closed
|
andywu9/StudyBuddy | 204442081 | Title: Research calendar API options
Answers:
username_1: Is this for a calendar view on the frontend, or something else? If it is a calendar view you need, there is a library that integrates with Rails that can accomplish this for you; a group in RCOS used it last year and it turned out well. |
elbywan/hyperactiv | 707138943 | Title: Impressive
Question:
username_0: I just want to say thank you!
- The API surface is nice to work with
- serves as a great replacement for MobX
- SOO lightweight
Answers:
username_1: Wow thank you for the kind words! ❤️
username_2: I'd like to second this praise 🙌 |
tlksio/front | 67020139 | Title: Add a "contact us" route and template
Question:
username_0: This page should have the following info:
* Report a bug ( and point to github front issues )
* Ask a general question
* A basic form that sends an email to <EMAIL> ( list to be created )
* Only accessible to registered users to prevent SPAM :stuck_out_tongue_closed_eyes:<issue_closed>
Status: Issue closed |
HumanCellAtlas/data-portal-content | 373111825 | Title: Ingest blurb for Develop Overview page
Question:
username_0: Please add a one or two-sentence blurb about the ingest broker development guide for the Develop Overview page here: https://dev.data.humancellatlas.org/develop/development-guides/development-guides-overview
Content should be put on this page in the Subtitle section at the top, where it says "Insert subtitle here":
https://github.com/HumanCellAtlas/data-portal-content/blob/master/content/develop/development-guides/ingest-broker-development-guide.md<issue_closed>
Status: Issue closed |
WarEmu/WarBugs | 53397631 | Title: [Scenario] Loss of Bolster on scenario death
Question:
username_0: Being killed in a Scenario purges all your buffs, including Bolster and other buffs which should persist through death.
Answers:
username_1: This one looks important! :+1:
I'm trying to poke around looking for the logic that handles death. I found basic logic that sets health to zero and respawns you.. but nothing that looks like logic that would persist buff status.
I'll keep looking.
Status: Issue closed
|
locustio/locust | 510416440 | Title: Installing 0.12.1 requires "pipenv lock --pre"
Question:
username_0: The latest release of `locustio` cannot really be managed using pipenv. It includes a dependency of `geventhttpclient-wheels`, which has only pre-release versions available.
As such, it is not possible to generate a `Pipfile.lock` file without using `pipenv lock --pre`.
Doing this will install pre-release versions of other packages, which is problematic. (In my case, it installed a 5.0dev version of coverage, which uses a different coverage file format).
Answers:
username_0: (It's possible that this was also present in earlier versions: my last lock file that worked correctly indicated that it worked fine with 0.11.0).
username_0: I’m not sure: I think maybe Pipenv would still complain.
username_1: I just tried installing geventhttpclient-wheels using pipenv:
```
pipenv install geventhttpclient-wheels
```
Which failed because I didn't use the `--pre` flag. However the installation succeeded when I ran:
```
pipenv install geventhttpclient-wheels==1.3.1.dev2
```
Wouldn't that indicate that it would work if we pinned the version in locust's setup.py? I haven't used pipenv much, and I'm not sure how to test it.
Status: Issue closed
username_1: I pinned the version in master, and now `pipenv install -e git+https://github.com/locustio/locust.git#egg=locustio` succeeds, so I'm assuming it fixed it.
username_2: I'm not sure this has fixed things. The version of gevent being specified for locust is 1.5a2. This is a pre-release version. When I `pipenv install locustio` I get the same errors as the original poster.
username_3: Same, I get the same errors. @username_1 Can you re-open this?
username_1: That's unfortunate. I could issue a non pre-release version of the geventhttpclient-wheels package, though I'd prefer not to, since I think it'd be better to follow the same version number as the official geventhttpclient package.
Gevent 1.5 fixes a bug which would cause Locust's Web UI to crash on Python 3.8 (#1154), so I don't think we'd want to downgrade to gevent 1.4 (the latest non pre-release version).
Therefore, I don't see a good fix for this at the moment, except maybe documenting it.
username_1: The latest release of `locustio` cannot really be managed using pipenv. It includes a dependency of `geventhttpclient-wheels`, which has only pre-release versions available.
As such, it is not possible to generate a `Pipfile.lock` file without using `pipenv lock --pre`.
Doing this will install pre-release versions of other packages, which is problematic. (In my case, it installed a 5.0dev version of coverage, which uses a different coverage file format).
username_4: @username_1 You reopened this, but say there is no good solution :) I guess we are stuck waiting for a new release on geventhttpclient? Or should we "solve" the ticket by just documenting it?
username_1: @username_4 We currently also pin the gevent version to 1.5a2 (to fix a crash on Python 3.8 (#1154)) which also is a pre-release version, so just releasing a non pre-release version of geventhttpclient-wheels won't resolve it.
I think we should wait with a fix until gevent 1.5 is released. Until then I think it's a good idea to leave this open so that it's more discoverable for people running into the issue.
username_5: I encountered the same problem with locustio version 0.4.5
`Could not find a version that matches gevent==1.5a3,>=0.13`
Is there any way to resolve this problem, or can locustio just not be installed with pipenv until 1.5 is released?
username_6: This has affected me too; I used the workaround, thanks @username_5
username_4: @username_1 Now that there is a gevent 1.5 release, I guess we could build a non-prerelease geventhttpclient-wheels version?
username_1: @username_4 Yes, we could do that. Only problem is that the version numbers in our `geventhttpclient-wheels` package would diverge from `geventhttpclient`, but maybe that's okay.
username_7: @username_5
@username_4
IMO using the pre-release flag is not a good solution, as it will cause ALL dependencies in your project to use the latest `pre-release` versions, which are often unstable and could cause things to break in unpredictable ways.
You are better off adding a pinned version of `geventhttpclient-wheels` to your `Pipfile` as @username_1 [suggested](https://github.com/locustio/locust/issues/1116#issuecomment-545156978).
`pipenv install geventhttpclient-wheels==1.3.1.dev3`
Also try clearing your `pipenv` cache and re-locking.
`pipenv lock --clear`
https://pipenv.pypa.io/en/latest/diagnose/#your-dependencies-could-not-be-resolved
username_4: Fixed (a while back)
Status: Issue closed
|
fujaba/org.fujaba.graphengine | 196218007 | Title: improve visualization
Question:
username_0: improve visualization, possibly using alchemy.js instead of sigma.js - also maybe generate html files instead of json files for another html page to read, since many browsers have problems with external files loaded from the filesystem.
Answers:
username_0: 
username_0: 
username_0: Something crashes, if you try to zoom or shift things - and I removed the click handlers for now.
It's supposed to be rather minimalistic - but I could always improve all kinds of things regarding the visualization.
username_0: I think it looks not quite symmetric because the reachability graph is calculated in depth-first order. Unlike a real search, where you look for a path, here it doesn't matter whether it's breadth-first or depth-first in terms of memory or time for a single check. But it could influence overall performance, because with breadth-first search more states will potentially be known at a lesser depth, making more graphs fail to become a new rg-node in the end. I just have to try it out and compare. Also, with breadth-first order, the graph would certainly look much more symmetrical using alchemy.js
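For illustration, a language-agnostic sketch (written here in TypeScript, with all names invented for the example) of the single switch between the two traversal orders:
```ts
type NodeId = string;

// The only structural difference between depth-first and breadth-first
// exploration is whether the worklist is used as a stack or as a queue.
function explore(
  start: NodeId,
  successors: (n: NodeId) => NodeId[],
  breadthFirst: boolean,
): NodeId[] {
  const seen = new Set<NodeId>([start]);
  const worklist: NodeId[] = [start];
  const order: NodeId[] = [];
  while (worklist.length > 0) {
    const current = breadthFirst ? worklist.shift()! : worklist.pop()!;
    order.push(current);
    for (const next of successors(current)) {
      if (!seen.has(next)) {
        seen.add(next); // states already known are never re-expanded
        worklist.push(next);
      }
    }
  }
  return order; // breadth-first yields shallower states earlier
}
```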
Status: Issue closed
|
geoadmin/mf-geoadmin3 | 335307241 | Title: 3D Improve labels
Question:
username_0: https://github.com/AnalyticalGraphicsInc/cesium/issues/6699
greatly improves our services
Answers:
username_0: pls @procrastinatio check if this is in our master yet
username_1: We're still with Cesium 1.44 (April 2018), so this should not be in our master yet (as it was merged into Cesium's master in July 2018)
username_0: @gjn would it make sense to test latest cesium against the current edge browser from BIT?
We could really improve performance of 3d
username_0: see latest cesium?
username_2: should be included in last cesium build |
trailofbits/polytracker | 766422335 | Title: Where to find real "adult massage" near Luoyang bus station (looking for special services)_qh
Question:
username_0: Where are the bathhouses with special services near Luoyang bus station [add WeChat 107乄719乄09]? [The remainder of the post is unrelated Chinese SEO filler: a promotional article about Modern Brothers singer Liu Yuning, his hit song "Jiang Zhen De" ("Honestly"), and his appearance at Suning's Double 11 gala, ending in keyword-stuffing gibberish.] https://github.com/trailofbits/polytracker/issues/4788?8q5oG <br />https://github.com/trailofbits/polytracker/issues/4728?2GCrh <br />https://github.com/trailofbits/polytracker/issues/4670?2ll8u <br />https://github.com/trailofbits/polytracker/issues/4661?psbun <br />https://github.com/trailofbits/polytracker/issues/4601?xkhvl <br />oxkaadmqkigihhompoydfgdxmxgbwjrnbte |
cndaqiang/cndaqiang.github.io | 479326797 | Title: ScalapackTest
Question:
username_0: ```
Answers:
username_0: ```
username_0: # Reading command-line arguments in Fortran
In Fortran the main program takes no arguments, so obtaining command-line arguments requires calling additional functions.
argc=iargc():
returns the number of command-line arguments
call getarg(i,buffer):
reads the i-th command-line argument into buffer; the command itself is argument 0
For Fortran 2003 and later, use GET_COMMAND_ARGUMENT to obtain the arguments
username_0: # Converting between strings and numbers in Fortran
!1, number to string
write(str1,"(i4.4)")num ! pad to four digits with leading zeros if needed
print*,str1
!2, string to number
read(str1,"(i2)")num
print*,num
username_0: # undefined reference to `show_'
The variable `show` cannot be found |
Hacker0x01/react-datepicker | 399689357 | Title: Invariant Violation: View config not found for name input
Question:
username_0: ### Expected behavior
Invariant Violation: View config not found for name input
### Actual behavior
### Steps to reproduce

Answers:
username_1: I have the same issue...any solution?
username_1: it's not supported on RN... just figured that out. |
C0reFast/c0refast.github.io | 617100311 | Title: Server power management and high-performance mode configuration (Dell) | C0reFast Notebook
Question:
username_0: https://www.ichenfu.com/2020/02/26/cpu-power-management/
This goes back to a test quite a while ago. A colleague borrowed one of our machines for testing and, as usual, ran `cpupower frequency-set -g performance` beforehand to switch the CPU into high-performance mode, so that the system being in power-saving mode wouldn't skew the performance test. But on this machine the command failed: `]# cpupower frequency-set -g performance` / `Setting cpu: 0` / `Error set` |
cli/cli | 614330747 | Title: Issue after installation
Question:
username_0: When GitHub CLI is installed, it doesn't start by itself
Answers:
username_1: hi,
can you please provide some more info? what happens when you run `gh` after install?
username_0: after installation, the UI just asks to push the Finish button; after clicking that button, nothing happens: `gh` doesn't start, and it doesn't ask to restart the system or anything else. I searched the installed apps, and no installation showed up. I am using `gh` on Windows 10
username_2: @username_0 After completing installation on Windows, `gh` will be available for invocation from a terminal application such as Command Prompt or PowerShell on Windows.
For example, after launching PowerShell you can type `gh help` to see usage instructions:

GitHub CLI is only available to be used from terminal like this. It does not have its own graphical application that can be launched from the Windows Start menu.
username_0: Fine, thank you @username_2 for your reply
Status: Issue closed
|
dylanfoster/parch | 200911075 | Title: add support for namespacing a resource
Question:
username_0: ```javascript
Router.map(function () {
this.resource("account", { namespace: "users/:user_id" });
});
```
would generate
```
GET /users/:user_id/accounts
GET /users/:user_id/accounts/:account_id
POST /users/:user_id/accounts
PUT /users/:user_id/accounts/:account_id
DELETE /users/:user_id/accounts/:account_id
```
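For illustration, a toy TypeScript sketch of how such an expansion could work (this is not parch's actual implementation; all names are invented):
```ts
interface ResourceOptions {
  namespace?: string;
}

// Expand a resource declaration into its five CRUD routes, prefixing
// every path with the optional namespace.
function resourceRoutes(name: string, opts: ResourceOptions = {}): string[] {
  const prefix = opts.namespace ? `/${opts.namespace}` : "";
  const base = `${prefix}/${name}s`;
  return [
    `GET ${base}`,
    `GET ${base}/:${name}_id`,
    `POST ${base}`,
    `PUT ${base}/:${name}_id`,
    `DELETE ${base}/:${name}_id`,
  ];
}

// resourceRoutes("account", { namespace: "users/:user_id" }) yields
// exactly the five routes listed above.
```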
Status: Issue closed
Answers:
username_0: resolved with https://github.com/username_0/parch/commit/0548ed5730a7df1be06d22c3476eb97f9acc3edd |
microsoft/msquic | 837103411 | Title: fail to run quicsample on Linux server, fixed by set SO_REUSEADDR and delete IPV6_DONTFRAG
Question:
username_0: ### Describe the bug
I compiled the library on Ubuntu 20.04,and encounter some error when running quicsample as server.
1. exits with ENOPROTOOPT; fixed after deleting the IPV6_DONTFRAG option
2. exits with EADDRINUSE; fixed after setting SO_REUSEADDR on the socket
### Steps to reproduce the behavior
The Linux version is Ubuntu 20.04.
./quicsample -server -cert_file ./server.cert -key_file ./server.key
ListenerStart failed, 0x5c!
./quicsample -server -cert_file ./server.cert -key_file ./server.key
ListenerStart failed, 0x62!
### Expected vs actual behavior
The IPV6_DONTFRAG option seems to be unavailable on Linux.
On servers with multiple cores, the SO_REUSEADDR option is required since we need to create several listening fds.
Answers:
username_1: That's very weird. They've worked on Ubuntu when we tried. I assume you are running it on the same machine you compiled it? Do you have anything special about your set up?
username_1: Did you modify any other code?
username_2: That's very odd, as all of our tests run on Ubuntu 20.04, and on machines with up to 80 cores. A quick Google search shows that the dontfrag issue might occur if your system doesn't support ipv6 at all, or as nick posted above switching to an ipv4 only socket.
Reuseaddr being required is weird, and we use reuseport instead to enable RSS. Reuseport should do everything reuseaddr does and more.
username_0: I did not modify other code, but my test environment is WSL on Windows 10. Maybe this is the difference; I'll make more of an effort to figure this out.
username_2: Are you using WSL1 or WSL2? I wouldn't be surprised if WSL1 has issues like that; it never had fully supported networking anyway. WSL2 definitely should work, and it's what I was using for a lot of the Linux development anyway.
username_0: It's WSL1! I will upgrade to WSL2.
Thank you for your help~
Status: Issue closed
|
developit/snarkdown | 810479647 | Title: in inline html, attribute values are incorrectly parsed as markdown
Question:
username_0: Test case:
```js
snarkdown(`<a href="/foo_bar_baz.html">link</a>`)
```
[JSFiddle link](http://jsfiddle.net/x6ub3dfr/)
expected result:
```html
<a href="/foo_bar_baz.html">link</a>
```
actual result:
```html
<a href="/foo<em>bar</em>baz.html">link</a>
```
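Until a fix lands, one possible user-side workaround is to mask inline HTML tags before the markdown pass and restore them afterwards. A simplistic TypeScript sketch (the regex is an illustration, not a real HTML parser):
```ts
import snarkdown from "snarkdown";

function snarkdownPreservingTags(src: string): string {
  const tags: string[] = [];
  // Swap every tag for a placeholder so its attributes can't be
  // misread as markdown, then restore the tags after rendering.
  const masked = src.replace(/<[^>]+>/g, (tag) => {
    tags.push(tag);
    return `\u0000${tags.length - 1}\u0000`;
  });
  return snarkdown(masked).replace(/\u0000(\d+)\u0000/g, (_, i) =>
    tags[Number(i)],
  );
}

// snarkdownPreservingTags(`<a href="/foo_bar_baz.html">link</a>`)
// -> `<a href="/foo_bar_baz.html">link</a>`
```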
Answers:
username_1: Should be addressed via https://github.com/developit/snarkdown/pull/99.
username_2: Is there a chance that this fix will be added to npm?
username_3: It'd be great if this fix would make it to npm!
username_1: Fixed, among other things in [@bpmn-io/snarkdown](https://www.npmjs.com/package/@bpmn-io/snarkdown). |
Monokai/monokai-pro-vscode | 910173573 | Title: Monokai Pro not rendering Cascadia Code font properly in VSCode
Question:
username_0: Hello,
I'm using Monokai Pro as my theme in VSCode; recently I changed my font to Cascadia Code. After that, I found that with this font, text doesn't render properly when the Monokai Pro extension is enabled. Newly typed text is OK, but existing text looks like this.

With the Monokai Pro extension switched off, there is no problem rendering Cascadia Code.
Wondering if there is any fix for this?
Regards,
Yang
Answers:
username_0: Turns out that, for some reason, it renders text in italic style that should be rendered in regular style.
Status: Issue closed
|
spinnaker/spinnaker | 440780215 | Title: Tags not appearing during manual execution of some spinnaker pipelines even though images are available in their trigger configuration
Question:
username_0: ### Issue Summary:
### Cloud Provider(s):
### Environment:
### Feature Area (if this issue is UI/UX related, please tag `@spinnaker/ui-ux-team`):
### Description:
### Steps to Reproduce:
### Additional Details:
---
**_Please delete the instructions below this line prior to submitting_**
Instructions:
* An issue is not a place to ask questions. Please use [Slack](http://join.spinnaker.io) or [Stack Overflow](http://stackoverflow.com/questions/tagged/spinnaker).
* Before you open an issue, please [check if a similar issue already exists](https://github.com/spinnaker/spinnaker/issues) or has been closed before.
* Make sure you have read through the [Spinnaker FAQ](https://www.spinnaker.io/community/faqs/) and [Halyard FAQ](https://www.spinnaker.io/setup/quickstart/faq/) to provide as much information as possible.
Descriptions:
* Issue Summary: A brief description of what you're seeing.
* Cloud Provider: AWS, GCP, Kubernetes, Azure, Cloud Foundry, etc. Please assign a label from the right so your issue can be properly sorted.
* Environment: As much information about your Spinnaker environment and configuration that might be relevant to the issue. For example: "I am running Spinnaker using the Amazon images to deploy into AWS and GCP."
* Feature Area: Notifications, Pipelines, UI, Jenkins, etc. Please assign a label from the right so your issue can be properly sorted.
* Description: The behavior you expect to see, and the actual behavior.
* Steps to reproduce: Ideally, an isolated way to reproduce the behavior (example: GitHub repository with code isolated to the issue that anyone can clone to observe the problem). If not possible, as much information as possible to see this behavior.
* Additional Details: Additional information such as screenshots and exception logs.
Answers:
username_0: Automatic triggers are not working, and tags are not appearing during manual execution of some Spinnaker pipelines even though images are available in their trigger configuration
Screenshots of the available images are attached

Please assist with a solution |
kubernetes/website | 512907451 | Title: Issue with k8s.io/docs/concepts/overview/components/
Question:
username_0: **This is a Bug Report**
**Problem:**
In the cloud-controller-manager documentation section, the version number is incorrect: it says release 1.6, but the latest release is 1.16. Below is the text from that section
cloud-controller-manager runs controllers that interact with the underlying cloud providers. The cloud-controller-manager binary is an alpha feature introduced in Kubernetes **release 1.6.**
**Proposed Solution:**
I think it should be release 1.16
**Page to Update:**
https://kubernetes.io/docs/concepts/overview/components/#cloud-controller-manager
Kubernetes Version: 1.16
Answers:
username_1: The cloud-controller-manager binary was introduced in Kubernetes release 1.6.
It has existed for a long time and isn't a new feature in 1.16.
So this page is OK and there is no need to update it.
Status: Issue closed
|
docker/for-mac | 625889295 | Title: Docker.VmnetdError error 1.
Question:
username_0: <!--
Please, check https://docs.docker.com/docker-for-mac/troubleshoot/.
Issues without logs and details cannot be debugged, and will be closed.
Issues unrelated to Docker for Mac will be closed. In particular, see
- https://github.com/docker/compose/issues for docker-compose
- https://github.com/docker/machine/issues for docker-machine
- https://github.com/moby/moby/issues for Docker daemon
- https://github.com/docker/docker.github.io/issues for the documentation
-->
<!--
Replace `- [ ]` with `- [x]`, or click after having submitted the issue.
-->
- [ ] I have tried with the latest version of my channel (Stable or Edge)
- [ ] I have uploaded Diagnostics
- Diagnostics ID:
### Expected behavior
### Actual behavior
### Information
- macOS Version:
### Diagnostic logs
```
Docker for Mac: version...
```
### Steps to reproduce the behavior
1. ...
2. ...
Answers:
username_1: No info in this ticket, closing.
Status: Issue closed
|
ossf/scorecard | 816914697 | Title: Reducing GitHub API calls in security policy check
Question:
username_0: The check for security policy code https://github.com/ossf/scorecard/blob/main/checks/security_policy.go#L31-L47 could potentially use `16` API calls.
This can be reduced to `1` call by downloading the archive of the repository and checking for the files locally, similar to this https://github.com/ossf/scorecard/blob/main/checks/frozen_deps.go#L37-L46<issue_closed>
Status: Issue closed |
GothenburgBitFactory/taskwarrior | 297245917 | Title: [TW-1899] give hooks the filter args and the command args
Question:
username_0: _<NAME> on 2017-03-07T21:46:33Z says:_
Right now hooks get the args via, well, the `args` attribute. E.g.,
{code}
args:task something like progr sumfin
command:progress
{code}
While it's not too hard for hooks to parse which part of args is the pre-command filter and which part goes after, it's still tricky because of the possibility of abbreviating the command, and of the keyword appearing as something else on the line. It'd be nice if the hooks got something like
{code}
args:task something like progr sumfin
pre_command_args: something like
post_command_args: sumfin
command:progress
{code}
Answers:
username_0: Migrated metadata:
```
Created: 2017-03-07T21:46:33Z
Modified: 2017-03-08T19:07:57Z
```
username_0: _<NAME> on 2017-03-08T16:21:49Z says:_
It's more complicated than 'before' vs 'after'. How about distinguishing by 'filter' and 'mods'?
username_0: _<NAME> on 2017-03-08T19:07:57Z says:_
Indeed, this quickly converges to the necessity of task sharing (parts of) the parse tree with the external program. 3rd party tools should not need to reimplement task's parsing engine. |
prettier/prettier | 222533551 | Title: Yarn add fails due to incompatible module error
Question:
username_0: This is a React Native app.
`[email protected]: The engine "node" is incompatible with this module. Expected version ">=4.2.0".`
```bash
my-app feature/prettier 🚀 yarn add --dev prettier
yarn add v0.18.1
info No lockfile found.
[1/4] 🔍 Resolving packages...
[2/4] 🚚 Fetching packages...
error [email protected]: The engine "node" is incompatible with this module. Expected version ">=4.2.0".
error Found incompatible module
info Visit https://yarnpkg.com/en/docs/cli/add for documentation about this command.
```
Answers:
username_1: You can use npm to install, which just prints a warning instead of failing.
Should probably be reported to babylon, though 😄
Status: Issue closed
username_0: @username_1 You're right, it seems to be a global `yarn` issue for some reason, rather than `prettier`'s problem. Thanks for pointing it out!
username_2: @username_0 you can also use `yarn add prettier --ignore-engines`
username_0: Thanks @username_2! |
PyPSA/pypsa-eur-sec | 1165488879 | Title: biogas potential overestimated with myopic
Question:
username_0: There is a bug in the biogas potential: one store with 345 TWh is added in every time step, i.e. 'EU biogas-2020', 'EU biogas-2025'. This overestimates the biogas potential. It can easily be fixed by checking whether the EU biogas store exists before adding it
Answers:
username_1: One quick fix to this issue: #231 (Set all stores from previous years to zero)
username_1: This was only an issue in PyPSA-Eur-Sec 0.6.0
username_2: Can we find out which commit caused this bug? Not to assign blame, but just so we can check there were no further bugs introduced.
username_3: I am having a look at the PR and can have a look at the commits as well.
username_3: The error was introduced by the new default values (PyPSA release 0.18), where `lifetime` has a default value of `np.inf` (previous default `np.nan`). When filtering which assets should be duplicated for each investment period in the function `add_build_year_to_new_assets` (see [here](https://github.com/username_1/pypsa-eur-sec/blob/20f29b5bd337efb5b34a3e59f94ed11d5f6b9afb/scripts/add_existing_baseyear.py#L31)), it no longer filters out the biogas/biomass stores.
It should be resolved in PR #217 |
cloudposse/geodesic | 347717432 | Title: Historical command line editing is broken
Question:
username_0: ## what
Editing historical command lines using emacs keys is broken in certain circumstances. My guess is that it has to do with a divergence between the actual length of the prompt when output versus the length of the prompt when queried by the command line editor.
Reproducing this bug is a little tricky. This seems to work for me, but you might have to try some variations.
1. Enter a long command at the command line:
```
echo top level this is a long command
```
1. Press the up-arrow ↑ to recall the command to the command line.
1. Press ctrl-A to move the cursor to the beginning of the command line.
Expected: cursor hovers over "e" in echo.
Observed: cursor hovers over "c" in echo.
## why
Not only does this cause difficulty in editing historical command lines, it results in a dangerous situation where the command visible on the command line is not exactly what will be submitted when you hit return.
Answers:
username_1: What version are you using?
You can set `PROMPT_STYLE=plain` which will probably eliminate your problems
username_0: Using `0.12.6`. Previously using `0.11.0` which did not have this problem.
`PROMPT_STYLE=plain` does not solve the problem.
Commenting out `PROMPT_HOOKS+=("geodesic_prompt")` from `/etc/profile.d/prompt.sh` eliminates the issue (and the nice prompt) but of course does not survive a restart of the container.
username_0: Replacing `/etc/profile.d/prompt.sh` with version from `0.9.18` does not solve the problem, nor does unsetting `PROMPT_HOOKS`.
username_1: This is what I see in iTerm2:

Maybe related to my terminal (why it works).
Can you try just radically simplifying the prompt characters in `/etc/profile.d` and removing all the `tput` statements.
If that works, we can introduce another prompt format.
username_0: I'm using the OS X Terminal. I seem to see some improvement in Terminal if I turn on the "Unicode East Asian Ambiguous characters are wide" option in Terminal -> Preferences -> Advanced, but not complete solution. What do you see when you search for "echo" and then start to edit the line (key sequence `ctrl-R e c h o ctrl-A`)? I see a colon left over from search and the cursor over the space between the colon and the "e" in "echo".

username_1: Okay, I think we'll introduce a new prompt style that uses these glyphs: `0x20E0`, `0x2234`
∴ ⃠
username_0: FWIW, I reproduced the issues in iTerm2 version 3.1.7. The problem goes away in iTerm2 if the option "Use Unicode Version 9 widths" it enabled. Documentation for this feature says "Unicode version 9 offers better formatting for Emoji."
Glyphs `0x20E0` and `0x2234` seem to have the same problem of being double-width and messing up Terminal. Using those in the prompt will not solve this issue.
username_0: I suggest you use these single-width glyphs instead.
✗
BALLOT X
Unicode: U+2717, UTF-8: E2 9C 97
✓
CHECK MARK
Unicode: U+2713, UTF-8: E2 9C 93
⨠
Z NOTATION SCHEMA PIPING
Unicode: U+2A20, UTF-8: E2 A8 A0
username_1: Fixed in https://github.com/cloudposse/geodesic/releases/tag/0.15.0
Status: Issue closed
|
GUDHI/gudhi-devel | 533822215 | Title: [Alpha_complex] MPFR shall be mentioned in the installation instructions as it is required by Alpha_complex
Question:
username_0: Only Alpha complex seems to require MPFR:
```bash
$ for file in `find . -perm -u+x -type f`; do echo $file; ldd $file | grep mpfr; done
./Alpha_complex/utilities/alpha_complex_3d_persistence
libmpfr.so.6 => /usr/lib/x86_64-linux-gnu/libmpfr.so.6 (0x00007ff02257f000)
./Alpha_complex/utilities/alpha_complex_persistence
./Alpha_complex/test/Alpha_complex_test_unit
./Alpha_complex/test/Weighted_periodic_alpha_complex_3d_test_unit
libmpfr.so.6 => /usr/lib/x86_64-linux-gnu/libmpfr.so.6 (0x00007f3b0d10c000)
./Alpha_complex/test/Periodic_alpha_complex_3d_test_unit
libmpfr.so.6 => /usr/lib/x86_64-linux-gnu/libmpfr.so.6 (0x00007f4e0080f000)
./Alpha_complex/test/Alpha_complex_3d_test_unit
libmpfr.so.6 => /usr/lib/x86_64-linux-gnu/libmpfr.so.6 (0x00007f2ebf65e000)
./Alpha_complex/test/Weighted_alpha_complex_3d_test_unit
libmpfr.so.6 => /usr/lib/x86_64-linux-gnu/libmpfr.so.6 (0x00007fed92194000)
./Alpha_complex/example/Alpha_complex_example_from_off
./Alpha_complex/example/Alpha_complex_example_fast_from_off
./Alpha_complex/example/Alpha_complex_example_3d_from_points
libmpfr.so.6 => /usr/lib/x86_64-linux-gnu/libmpfr.so.6 (0x00007fd62f135000)
./Alpha_complex/example/Alpha_complex_example_weighted_3d_from_points
libmpfr.so.6 => /usr/lib/x86_64-linux-gnu/libmpfr.so.6 (0x00007fb5ee7d9000)
./Alpha_complex/example/Alpha_complex_example_from_points
...
``` |
ant-design/ant-design-mobile-rn | 455730700 | Title: How can the Icon component use a custom ttf font library?
Question:
username_0: - [ ] I have searched the [issues](https://github.com/ant-design/ant-design-mobile-rn/issues) of this repository and believe that this is not a duplicate.
### Reproduction link
[http://new-issue.ant.design/?repo=ant-design-mobile-rn](http://new-issue.ant.design/?repo=ant-design-mobile-rn)
### Steps to reproduce
How can the Icon component use a custom ttf font library?
### What is expected?
How can the Icon component use a custom ttf font library?
### What is actually happening?
How can the Icon component use a custom ttf font library?
| Environment | Info |
|---|---|
| antd | 3.1.17 |
| React | react-native 0.59.9 |
| System | How can the Icon component use a custom ttf font library? |
| Browser | How can the Icon component use a custom ttf font library? |
---
How can the Icon component use a custom ttf font library?
Answers:
username_1: Custom fonts are not supported at the moment: https://github.com/ant-design/ant-design-mobile-rn/blob/master/components/icon/index.tsx#L1
If you need to, you can use your own iconfont.
Status: Issue closed
username_0: Can custom images be supported?
username_0: 
The custom icon library we used before no longer works after upgrading the ant-design version
username_1: @username_0 Since 3.0 we use ant-icons across the board, so the previous approach is no longer supported. 😢
username_0: Does ant-icons support adding new vector icons? I couldn't find a way to do it 😿
username_1: It doesn't. You can define your own font icons (for example, downloaded from iconfont) and use the ttf locally in your project; take a look at
https://github.com/oblador/react-native-vector-icons#generating-your-own-icon-set-from-a-css-file |
Jigsaw-Code/outline-server | 493022731 | Title: DNS & Pi-Hole?
Question:
username_0: I’ve done a pretty good noob search for how to alter the DNS that the outline
(shadowsocks) server uses. I’m running outline on a google cloud micro instance running Debian 9 x64 intel and connecting to iOS. I would like to run Pi-Hole on the same machine or in a different instance if necessary and point outline at it for DNS. How do I do this?
I can make Pi-Hole listen on all interfaces and then use its internal IP.
I tried altering resolv.conf to use this IP, but that change is not persistent and does not work. It seems that it does not matter what resolv.conf has in it, because the client always returns OpenDNS. Dang it!
Any help would be greatly appreciated.
For God’s sake Jim...I’m a Doctor...not a Linux Developer!
Status: Issue closed
Answers:
username_0: I am closing this. I hope it helps someone in the future. |
icarus-consulting/Yaapii.Atoms | 295101654 | Title: Increase test coverage of OrTests
Question:
username_0: <!--
IF YOU DON'T FILL OUT THE FOLLOWING INFORMATION YOUR ISSUE MIGHT BE CLOSED WITHOUT INVESTIGATING
-->
### Bug Report or Feature Request (mark with an x)
- [ ] bug report -> please search issues before submitting
- [ ] feature request,
- [x] Improvement
### Expected Behavior
All ctors are tested
### Actual Behavior
not all ctors are tested
### Steps to reproduce the behavior
### The log given by the failure.
### Mention any other details that might be useful
<issue_closed>
Status: Issue closed |
rust-lang/rust-clippy | 435658502 | Title: Tests failing with multiple matching crates for `serde`
Question:
username_0: https://github.com/rust-lang/rust/pull/60053 broke the Clippy tests. Specifically `tests/ui/serde.rs` fails with:
```
error[E0464]: multiple matching crates for `serde`
--> $DIR/serde.rs:4:1
|
LL | extern crate serde;
| ^^^^^^^^^^^^^^^^^^^
|
= note: candidates:
crate `serde`: /home/<user>/code/rust-clippy/target/debug/deps/libserde-3ec33c1c405dbd9a.rlib
crate `serde`: /home/<user>/.rustup/toolchains/master/lib/rustlib/x86_64-unknown-linux-gnu/lib/libserde-4b8bb77bdaea80db.rlib
error[E0463]: can't find crate for `serde`
--> $DIR/serde.rs:4:1
|
LL | extern crate serde;
| ^^^^^^^^^^^^^^^^^^^ can't find crate
error: aborting due to 2 previous errors
For more information about this error, try `rustc --explain E0463`.
```
I'm not really sure how to fix this. Maybe somewhere in compiletest-rs? The error still happens with a clean Rust master installation. Can we somehow ignore the sysroot serde?
cc https://github.com/laumann/compiletest-rs/issues/114, https://github.com/rust-lang/rust/issues/24853
Answers:
username_1: First of all, sorry for the breakage.
I'll also paste the Rust CI [logs](https://api.travis-ci.com/v3/job/194551604/log.txt) (albeit for a diferent case):
```
[01:25:18] +error[E0464]: multiple matching crates for `serde`
[01:25:18] + --> $DIR/used_underscore_binding_macro.rs:11:10
[01:25:18] + |
[01:25:18] +LL | #[derive(Deserialize)]
[01:25:18] + | ^^^^^^^^^^^
[01:25:18] + |
[01:25:18] + = note: candidates:
[01:25:18] + crate `serde`: /checkout/obj/build/x86_64-unknown-linux-gnu/stage2/lib/rustlib/x86_64-unknown-linux-gnu/lib/libserde-bb2ee07afa357acf.rlib
[01:25:18] + crate `serde`: /checkout/obj/build/x86_64-unknown-linux-gnu/stage2-tools/release/deps/libserde-2ef066c730b0470a.rlib
[01:25:18] +
[01:25:18] +error[E0463]: can't find crate for `_serde`
[01:25:18] + --> $DIR/used_underscore_binding_macro.rs:11:10
[01:25:18] + |
[01:25:18] +LL | #[derive(Deserialize)]
[01:25:18] + | ^^^^^^^^^^^ can't find crate
[01:25:18] +
[01:25:18] +error: aborting due to 2 previous errors
[01:25:18] +
[01:25:18] +For more information about this error, try `rustc --explain E0463`.
[01:25:18] +
[01:25:18]
[01:25:18] The actual stderr differed from the expected stderr.
[01:25:18] Actual stderr saved to /checkout/obj/build/x86_64-unknown-linux-gnu/stage2-tools/x86_64-unknown-linux-gnu/release/build/clippy-77007bd62c0dadf4/out/test_build_base/crashes/used_underscore_binding_macro.stderr
[01:25:18] To update references, run this command from build directory:
[01:25:18] tests/ui/update-references.sh '/checkout/obj/build/x86_64-unknown-linux-gnu/stage2-tools/x86_64-unknown-linux-gnu/release/build/clippy-77007bd62c0dadf4/out/test_build_base' 'crashes/used_underscore_binding_macro.rs'
[01:25:18]
[01:25:18] error: 1 errors occurred comparing output.
[01:25:18] status: exit code: 1
[01:25:18] command: "/checkout/obj/build/x86_64-unknown-linux-gnu/stage2-tools-bin/clippy-driver" "tests/ui/crashes/used_underscore_binding_macro.rs" "-L" "/checkout/obj/build/x86_64-unknown-linux-gnu/stage2-tools/x86_64-unknown-linux-gnu/release/build/clippy-77007bd62c0dadf4/out/test_build_base" "--target=x86_64-unknown-linux-gnu" "--error-format" "json" "-C" "prefer-dynamic" "-o" "/checkout/obj/build/x86_64-unknown-linux-gnu/stage2-tools/x86_64-unknown-linux-gnu/release/build/clippy-77007bd62c0dadf4/out/test_build_base/crashes/used_underscore_binding_macro.stage-id" "-L" "/checkout/obj/build/x86_64-unknown-linux-gnu/stage2-tools/release" "-L" "/checkout/obj/build/x86_64-unknown-linux-gnu/stage2-tools/release/deps" "-Dwarnings" "-Zui-testing" "-L" "/checkout/obj/build/x86_64-unknown-linux-gnu/stage2-tools/x86_64-unknown-linux-gnu/release/build/clippy-77007bd62c0dadf4/out/test_build_base/crashes/used_underscore_binding_macro.stage-id.aux" "-A" "unused"
```
As I understand it, this is because we have 2 versions of serde:
- one in the sysroot (compiler uses its own version) at
```
/checkout/obj/build/x86_64-unknown-linux-gnu/stage2/lib/rustlib/x86_64-unknown-linux-gnu/lib/libserde-bb2ee07afa357acf.rlib
```
- one for the tools at
```
/checkout/obj/build/x86_64-unknown-linux-gnu/stage2-tools/release/deps/libserde-2ef066c730b0470a.rlib
```
which is pulled for the compiletest compiler invocation (wrapped in a `clippy_driver`) with a
`-L /checkout/obj/build/x86_64-unknown-linux-gnu/stage2-tools/release/deps`
I don't have a clear idea how to approach this, so to think out loud:
Obviously we can't drop the `-L` because clippy-driver needs its deps to run,
...but also we're interested in the serde used in the local build,
...hence we'd like to ignore sysroot serde somehow.
I don't think we can ever let local deps coalesce to the ones used in the compiler, because they may be semver-incompatible and compiler would like to have its own thread-locals and such.
Could we somehow "blacklist" sysroot crates at the lib-searching level if the `rustc_private` feature isn't enabled? However, if it is enabled (which I believe is the case for Clippy), should we set a precedence? There are both cases where the sysroot ones are preferred and cases where the `-L` ones are preferred...
Could we differentiate sysroot crates on a semantic level (e.g. `use sysroot::serde`)?
I *hope* solving this would also enable using `serde_derive` from crates.io in the compiler (same problem with shadowed sysroot libs via `-L`), which we had to work around by manually expanding the macros in rls-data (not pretty!)
FWIW RLS and Clippy work correctly with Rust patch on crates that pull serde dependency (tested in the RLS). Worst case, I can revert the patch but would really like to see a path forward with this.
username_1: Indeed, with the original invocation of
```
LD_LIBRARY_PATH=$(rustc --print sysroot)/lib "target/debug/clippy-driver" "tests/ui/serde.rs" "-L" "/home/xanewok/repos/rust-clippy/target/debug/test_build_base" "--target=x86_64-unknown-linux-gnu" "-C" "prefer-dynamic" "-o" "/home/xanewok/repos/rust-clippy/target/debug/test_build_base/serde.stage-id" "-L" "target/debug" "-L" "target/debug/deps" "-Dwarnings" "-Zui-testing" "-L" "/home/xanewok/repos/rust-clippy/target/debug/test_build_base/serde.stage-id.aux" "-A" "unused"
```
I get the following error:
```
error[E0464]: multiple matching crates for `serde`
--> tests/ui/serde.rs:4:1
|
LL | extern crate serde;
| ^^^^^^^^^^^^^^^^^^^
|
= note: candidates:
crate `serde`: /home/xanewok/repos/rust-clippy/target/debug/deps/libserde-66800e6ad9385298.rlib
crate `serde`: /home/xanewok/repos/rust/build/x86_64-unknown-linux-gnu/stage2/lib/rustlib/x86_64-unknown-linux-gnu/lib/libserde-bb2ee07afa357acf.rlib
error[E0463]: can't find crate for `serde`
--> tests/ui/serde.rs:4:1
|
LL | extern crate serde;
| ^^^^^^^^^^^^^^^^^^^ can't find crate
```
but if I add the disambiguating `--extern`
```
--extern serde=/home/xanewok/repos/rust-clippy/target/debug/deps/libserde-66800e6ad9385298.rlib
```
this correctly compiles.
Status: Issue closed
|
kalexmills/github-vet-tests-dec2020 | 758533306 | Title: rakutentech/go-nozzle: detector_test.go; 17 LoC
Question:
username_0: [Click here to see the code in its original context.](https://github.com/rakutentech/go-nozzle/blob/f4952a44e07754b06bb79bc844fe7a6eb3140080/detector_test.go#L66-L82)
<details>
<summary>Click here to show the 17 line(s) of Go which triggered the analyzer.</summary>
```go
for _, tc := range cases {
// Send the events
go func() {
eventCh <- tc.Input
}()
select {
case <-detectCh:
if !tc.Expect {
t.Fatalf("expect not to be detected")
}
case <-time.After(1 * time.Second):
if tc.Expect {
t.Fatalf("expect to be detected")
}
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: f4952a44e07754b06bb79bc844fe7a6eb3140080<issue_closed>
Status: Issue closed |
ANGSD/angsd | 338870312 | Title: installing problem
Question:
username_0: Dear,
I got a warning message when installing angsd; the command 'angsd' seems to be correct, but I could not find the command 'realSFS' in the installation directory.
The message was below,
make -C ../ bfgs.o HTSSRC=/angsd/htslib-1.7
HTSSRC defined
make[2]: Entering directory `/angsd/angsd1'
make[2]: `bfgs.o' is up to date.
make[2]: Leaving directory `/angsd/angsd1'
g++ -O3 ngsPSMC.cpp -O3 -o ngsPSMC psmcreader.o header.o main_psmc.o hmm_psmc.o fpsmc.o ../bfgs.o -I/angsd/htslib-1.7 /angsd/htslib-1.7/libhts.a -lz -lm -lbz2 -llzma -lpthread -lcurl
g++ -O3 ibs.cpp -o ibs -lz
g++ -O3 scounts.cpp -lz -o scounts
gcc -O3 msHOT2glf.c -I/angsd/htslib-1.7 -O3 -o msHOT2glf -std=gnu99 /angsd/htslib-1.7/libhts.a -lz -lm -lbz2 -llzma -lpthread -lcurl
msHOT2glf.c: In function ‘print_ind_site’:
msHOT2glf.c:326:15: warning: ignoring return value of ‘bgzf_write’, declared with attribute warn_unused_result [-Wunused-result]
bgzf_write(outfileSAF,&homo,sizeof(double));
^
msHOT2glf.c:327:15: warning: ignoring return value of ‘bgzf_write’, declared with attribute warn_unused_result [-Wunused-result]
bgzf_write(outfileSAF,&het,sizeof(double));
^
msHOT2glf.c: In function ‘test’:
msHOT2glf.c:349:17: warning: ignoring return value of ‘bgzf_write’, declared with attribute warn_unused_result [-Wunused-result]
bgzf_write(outfileSAFPOS,&i,sizeof(int));
^
msHOT2glf.c: In function ‘main’:
msHOT2glf.c:448:15: warning: ignoring return value of ‘bgzf_write’, declared with attribute warn_unused_result [-Wunused-result]
bgzf_write(outfileSAF,buf,8);
^
msHOT2glf.c:449:15: warning: ignoring return value of ‘bgzf_write’, declared with attribute warn_unused_result [-Wunused-result]
bgzf_write(outfileSAFPOS,buf,8);
^
make[1]: Leaving directory `/angsd/angsd1/misc'
Thanks for the help,
Best,
zac |
sushantdhiman/sequelize-benchmark | 207379598 | Title: error “Got a packet bigger than 'max_allowed_packet' bytes” on MySQL bulkCreate test
Question:
username_0: My config is:
* Latest Sequelize master branch @ 15f6ad9
* sequelize-benchmark 1.0.0 from npm
* MySQL server is the Docker config that comes with Sequelize (`docker-compose up mysql-57`)
I’ve run this on my Mac and from a Docker Ubuntu VM.
I’ve also tried:
* Using the master branch of sequelize-benchmark @ bf3a1e2
* Using the v4.0.0-2 branch of Sequelize
All with the same results.
Status: Issue closed
Answers:
username_1: Fixed in https://github.com/username_1/sequelize-benchmark/commit/fdc0239436e3197e6d289c0e68c767a42cc05229
username_0: Thank you! I see you ran the benchmark against my other PR too. |
babel/babel-sublime | 69106753 | Title: Coffeescript?
Question:
username_0: sublime-react syntax highlighting supported cjsx, i.e. coffeescript with jsx. Any plans for babel to do the same? I can't get the deprecated sublime-react syntax highlighting to work anymore.
Answers:
username_1: Sorry, no plans whatsoever.
Side note: I feel that this is something that the coffeescript community should take over. The sublime-react repo is still there; anyone can fork it and re-publish just the coffeescript parts.
Status: Issue closed
|
saltstack/salt | 93100974 | Title: Batch mode broken on 2015.5.2
Question:
username_0: Any chance of a quicker batch job start? (The instant start of the previous implementation was better than a 10-30 s delay.) In addition, I see that the current implementation in version 2015.5.2 starts the first batch in parallel, then runs the next jobs in single mode, and after a few hosts a crash occurs. Batch mode is now unusable, but it is crucial for us.
<pre>
[ERROR ] An un-handled exception was caught by salt's global exception handler:
ValueError: list.remove(x): x not in list
Traceback (most recent call last):
File "/usr/bin/salt", line 10, in
salt_main()
File "/usr/lib/python2.6/site-packages/salt/scripts.py", line 349, in salt_main
client.run()
File "/usr/lib/python2.6/site-packages/salt/cli/salt.py", line 98, in run
for res in batch.run():
File "/usr/lib/python2.6/site-packages/salt/cli/batch.py", line 179, in run
active.remove(minion)
ValueError: list.remove(x): x not in list
Traceback (most recent call last):
File "/usr/bin/salt", line 10, in
salt_main()
File "/usr/lib/python2.6/site-packages/salt/scripts.py", line 349, in salt_main
client.run()
File "/usr/lib/python2.6/site-packages/salt/cli/salt.py", line 98, in run
for res in batch.run():
File "/usr/lib/python2.6/site-packages/salt/cli/batch.py", line 179, in run
active.remove(minion)
ValueError: list.remove(x): x not in list
Salt: 2015.5.2
Python: 2.6.6 (r266:84292, Jan 22 2014, 09:42:36)
Jinja2: 2.7.2
M2Crypto: 0.20.2
msgpack-python: 0.4.6
msgpack-pure: Not Installed
pycrypto: 2.0.1
libnacl: Not Installed
PyYAML: 3.10
ioflo: Not Installed
PyZMQ: 14.3.1
RAET: Not Installed
ZMQ: 4.0.4
Mako: Not Installed
</pre>
Answers:
username_0: Full example:
<pre>
salt -b 10 'node*.domain' test.ping
Executing run on ['node01.domain', 'node08.domain', 'node14.domain', 'node22.domain', 'node30.domain', 'node33.domain', 'node23.domain', 'node27.domain', 'node01.itg.domain', 'node35.domain']
retcode:
0
node30.domain:
True
retcode:
0
node23.domain:
True
Executing run on ['node34.domain', 'node07.domain']
retcode:
0
node14.domain:
True
Executing run on ['node20.domain']
node01.itg.domain:
----------
node01.domain:
----------
node33.domain:
----------
[ERROR ] An un-handled exception was caught by salt's global exception handler:
ValueError: list.remove(x): x not in list
Traceback (most recent call last):
File "/usr/bin/salt", line 10, in <module>
salt_main()
File "/usr/lib/python2.6/site-packages/salt/scripts.py", line 349, in salt_main
client.run()
File "/usr/lib/python2.6/site-packages/salt/cli/salt.py", line 98, in run
for res in batch.run():
File "/usr/lib/python2.6/site-packages/salt/cli/batch.py", line 179, in run
active.remove(minion)
ValueError: list.remove(x): x not in list
Traceback (most recent call last):
File "/usr/bin/salt", line 10, in <module>
salt_main()
File "/usr/lib/python2.6/site-packages/salt/scripts.py", line 349, in salt_main
client.run()
File "/usr/lib/python2.6/site-packages/salt/cli/salt.py", line 98, in run
for res in batch.run():
File "/usr/lib/python2.6/site-packages/salt/cli/batch.py", line 179, in run
active.remove(minion)
ValueError: list.remove(x): x not in list
</pre>
username_1: @username_0, thanks for the report. This is a duplicate of #24875, which has been fixed in 2015.5.3, which will come out in the next few days. I don't have any experience with batch being slow to start. If you want we can change the title and update the content of this issue to focus on that and I'll remove the duplicate label, etc.
username_0: Correct me if I am wrong. For a short time the batch implementation was changed to use the list of minion keys as the source of all minions instead of pinging all minions. This solution worked great for me and the batch job started instantly. Now I think the implementation has been reverted to pinging all minions to fetch the minion list, and in my opinion it's slow now. It requires many seconds to start instead of the instant start of the previous implementation.
username_1: @username_0, it sounds like you know more about this than I do. A new command line flag could be added for batch commands to toggle this behavior.
username_0: @username_1 it would be great :)
username_0: After upgrading to 2015.5.3, batch mode works better but weirdly (or maybe I don't understand something). The chunk size is anything from 1 to the batch size and it's random; for example, execution is done in this order (salt -b 10 '*' test.ping):
- 3 hosts
- 7 hosts
- 2 hosts
- 10 hosts
- 3 hosts
Batch mode still needs 10-30 seconds to start and return a lot o minions with no response. It's still far away from working as good as with CKminions implementation.
username_0: Any updates on this topic ?
username_1: @username_0, I remember some batch work being done recently, but I think most or all of it went into 2015.8, so you could try that. Otherwise, I am unaware of any updates to batch. SaltStack resources are limited at this point, meaning that most bugs are not getting attention, but we're working on new ways to create more community involvement.
Status: Issue closed
|
TechnologyMasters/jobs | 290861594 | Title: Software Developer
Question:
username_0: <!--
==============================================================
PLEASE REVIEW RULES BEFORE POSTING:
https://github.com/TechnologyMasters/jobs/blob/master/README.md#employers
Issue title format: [Honeypot.io] - [Software Developer] - [Berlin, Hamburg, Munich, Amsterdam]
==============================================================
-->
## What You'll get
### Salary Expectation
Contract / Full Time
40k - 120k
### Benefits
- Get multiple job offers
- Only one profile, no applications
- Choose which tech-stack you want to work with
### Location
- Berlin
- Hamburg
- Amsterdam
- Munich
- Stuttgart
## What You'll Do _(Job Description)_
Looking for Software developers with at least 2 years of experience who are willing to work in Germany or the Netherlands.
## What You Need to Be Successful _(Skills)_
### Must Have
- 2+ years experience as a Software Developer
## About Our company
We help with VISA sponsorship
## How to apply
Follow this link: https://goo.gl/m8QFLT
---
## Meta
- [ ] Full Time
- [ ] Salary
Answers:
username_1: Please include more details:
- salary figure or range
- check the 'meta' checkboxes to apply labels
- change issue title to the proper format: Company - Title - Location
username_0: Hi @username_1 thank you for your help. I changed it and added the salary range, changed the title, and also checked the right meta checkboxes.
Cheers!
username_2: missing apply link
username_0: @username_2 done! Thank you |
swagger-api/swagger-ui | 357613545 | Title: SwaggerUI 3.18.2 contains in dist folder exactly the same files as it is in version 3.18.1
Question:
username_0: ### Q&A (please complete the following information)
- OS: Windows
- Browser: Chrome
- Version: 3.18.2
- Method of installation: Download from GitHub
- Swagger-UI version: [e.g. 3.10.0]
- Swagger/OpenAPI version: OpenAPI 3.0
### Content & configuration
### Describe the bug you're encountering
I downloaded newest release by using "Source code" option on the GitHub. Since I'm interested only "dist" I deployed it on my http server. Checked version and it's pointing to 3.18.1 instead 3.18.2.
### To reproduce...
I checked version in Chrome console using `JSON.stringify(versions)`
### Additional context or thoughts
Answers:
username_0: @username_1 I have cleared my cache and I'm still seeing 3.18.1. I have even used another browser that I don't use as my default, Edge, and it's the same.
`file:///Users/kyle/Code/ui/dist/index.html` <- The one you are showing me: is it the dist downloaded from that [link](https://github.com/swagger-api/swagger-ui/archive/v3.18.2.zip)?
username_1: Aha - looks like our `v3.18.2` tag was looking at an older commit.
I've just updated it, can you try downloading from that link again?
Status: Issue closed
username_0: Perfect! Now it's working :) Thank you! |
keithbrink/segment-spark | 517648610 | Title: Cashier::usesCurrencySymbol()
Question:
username_0: I have this error with the new Stripe version.
`Method KeithBrink\AffiliatesSpark\Formatters\Currency::__toString() must not throw an exception, caught Error: Call to undefined method Laravel\Cashier\Cashier::usesCurrencySymbol() {"userId":211,"exception":"[object] (Symfony\\Component\\Debug\\Exception\\FatalErrorException(code: 1): Method KeithBrink\\AffiliatesSpark\\Formatters\\Currency::__toString() must not throw an exception, caught Error: Call to undefined method Laravel\\Cashier\\Cashier::usesCurrencySymbol() at /var/www/vhosts/trafficshield.tools/httpdocs/vendor/laravel/framework/src/Illuminate/Support/helpers.php:251)`
namespace KeithBrink\AffiliatesSpark\Formatters;
I fixed that by changing the __toString function.
//$money = new Money();
//return Cashier::usesCurrencySymbol().number_format($this->amount, 2);
return strval(number_format($this->amount).'€'); |
saltstack/salt | 125885778 | Title: Add ability to define custom beacons
Question:
username_0: It would be nice if you were able to create your own custom beacons, just as you can with modules and runners.
I've searched the docs, and can't see any way to do this yet.
Answers:
username_1: @username_0, thanks for the feature request.
username_2: Looks like it is documented here?
https://docs.saltstack.com/en/latest/topics/beacons/#writing-beacon-plugins
username_1: After looking over the [loader code](https://github.com/saltstack/salt/blob/v2015.8.3/salt/loader.py#L429-L443) it seems that custom beacons should be able to be served master to minion like other custom modules. If this does not work, it should.
username_0: Didn't know where in the code to look, as I haven't yet gotten a firm grasp of how Salt's internals are supposed to work. But since you pointed me in the right direction, I'll give it some more testing and try to fix what's wrong, be it documentation or code.
username_2: @username_0 I had good luck using this a reference: https://github.com/saltstack/salt/blob/develop/salt/beacons/load.py
username_1: @username_0, no problem. I wouldn't expect anyone unfamiliar with salt to have to look into the core code just to write a custom extension module. :-) If you have time to figure out and document the process, that would be awesome.
Status: Issue closed
|
Azgaar/Fantasy-Map-Generator | 314488889 | Title: Heightmap editing erases work on borders and labels
Question:
username_0: First off I am really enjoying this map generator.
But my one issue so far is that when I go to edit heightmaps the borders of the countries as well as country names get reset when I complete the heightmap work.
### Generator version
V 0.55b
### Browser version
65.0.3325.181
### Steps to reproduce
Go into heightmap rollback, then complete.
#### .map file
### Expected behaviour
Being able to edit geography without the country borders and labels being reset.
### Actual behaviour
Country borders (even custom) and labels are reset.
Answers:
username_1: Thanks for the feedback. Actually, it is expected that upon rolling back the map, all data except the heightmap vanishes. There is a general customization rule: "finalize a Heightmap as a first step".
But... I understand it's not obvious for end users. I'm going to try to allow heightmap editing without wiping the data. As you can imagine it's a big risk, as for example a user can turn land that has a country assigned into an ocean cell. Existing roads, states and burgs can be totally messed up. I need to handle all these situations and let the user fix the problems that may occur. What do you think?
username_1: Working on it
username_1: Change is deployed in a test version. Path: Options -> Customize -> Heightmap -> Edit -> Keep -> Complete (to exit the edit mode). Please comment in case of issues
Status: Issue closed
|
tarampampam/random-user-agent | 202090724 | Title: JS
Question:
username_0: ## Steps to reproduce:
1. Spoof the UA
2. Open https://whoer.net/ru
3. The UA and the JS-reported data do not match.
## Expected behaviour:
They should match
## Actual behaviour:
They do not match
---------
### System information:
**Browser version**:
Google Chrome 55.0.2883.87 m
**Extension version**:
2.1.0.1
Answers:
username_1: Since the site's handler that reads the UA runs while the page is loading rather than after it finishes (this is exactly the limitation where the client reads the UA faster than the mock that replaces it gets started), this extension cannot spoof it there. We could run the inject before the client scripts start, but then we couldn't fetch and substitute exactly the UA that needs to be substituted. We could also pre-replace it with, for example, `N/A`, but then other sites that rely on it for their operation could break. In short, the situation is quite tricky, and so far I haven't found a solution for it.
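(For readers, a minimal sketch of the document_start injection idea described above; the spoofed string is a placeholder and the manifest wiring is assumed, this is not the extension's actual code:)
```js
// Content script registered with "run_at": "document_start" in manifest.json.
// Content scripts run in an isolated world, so inject a <script> element to
// override navigator.userAgent in the page's own context before any page
// script can read it.
const spoofedUa = 'Mozilla/5.0 (placeholder UA)';
const script = document.createElement('script');
script.textContent = `
  Object.defineProperty(Navigator.prototype, 'userAgent', {
    get: () => ${JSON.stringify(spoofedUa)},
  });
`;
document.documentElement.appendChild(script);
script.remove();
```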
username_0: User-Agent Switcher for Chrome does this successfully (I checked), but it lacks
the UA generation functionality.
username_1: Could you share a link to it, please?
username_0: Check out the "User-Agent Switcher for Chrome" extension:
https://chrome.google.com/webstore/detail/user-agent-switcher-for-c/djflhoibgkdhkhhcedjiklpkjnoahfmg?utm_source=gmail
username_1: Hmm, the plugin you cited as an example makes a single request to the backend versus five in mine, although the general mock logic is similar. Moreover, mine doesn't always misbehave (i.e. the problem is intermittent), which leads me to conclude that this is exactly where the problem lies. I'll think about how to resolve this issue (or rather, I already know how to do it, but I won't get to it in the next few days).
In any case, thanks a lot for your report. Really great stuff.
username_0: You're welcome
username_1: I've published a patched version to the store. Please check it out.
Status: Issue closed
|
YosysHQ/yosys | 309673410 | Title: sat -dump_vcd, $end fix for $timescale.
Question:
username_0: Hi, I tried to convert a -dump_vcd-generated VCD file with ModelSim's vcd2wlf.
To make it work, the `$timescale` directive needs an `$end`. I propose the following fix to sat.cc (the marked line is the addition):
```
fprintf(f, "$timescale 1ns\n"); // arbitrary time scale since actual clock period is unknown/unimportant
—> fprintf(f, "$end\n");
fprintf(f, "$scope module %s $end\n", module->name.c_str());
```
Answers:
username_1: Removed the timescale altogether (in commit 665eec3). Why set it to an arbitrary value if we can also just not set it at all.
Status: Issue closed
|
neo4j/neo4j-go-driver | 687292741 | Title: Retryable errors surfacing from transaction functions
Question:
username_0: Retryable errors that happen in a cluster are not retried; the transaction isn't committed and the error is returned to the user application.
These kinds of errors are returned to the application during leader election:
Connection error: dial tcp: i/o timeout
Connection error: dial tcp 192.168.3.11:7687: connect: connection refused
Neo.ClientError.Cluster.NotALeader
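For context, a sketch of the transaction-function pattern these retries are supposed to apply to (1.x driver API; names and signatures may differ across driver versions):
```go
package main

import (
    "github.com/neo4j/neo4j-go-driver/neo4j"
)

// writeItem runs a unit of work through WriteTransaction, the call that is
// expected to retry the transient errors listed above.
func writeItem(driver neo4j.Driver) error {
    session, err := driver.Session(neo4j.AccessModeWrite)
    if err != nil {
        return err
    }
    defer session.Close()

    _, err = session.WriteTransaction(func(tx neo4j.Transaction) (interface{}, error) {
        result, err := tx.Run("MERGE (n:Item {id: $id})", map[string]interface{}{"id": 1})
        if err != nil {
            return nil, err
        }
        return nil, result.Err()
    })
    return err
}
```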
Status: Issue closed |
Hedwika/python-012021 | 839752187 | Title: Homework from the fifth lesson
Question:
username_0: <NAME>,
I'm sending you the homework from the fifth lesson, and thank you for checking it.
Have a nice day
Hedvika
Answers:
username_1: <NAME>,
### Exercise 21
Great :-)
### Exercise 22
Also great; you just could have simplified the last task using a colon :-)
```python
gavin_gillam_books = character_deaths.loc["<NAME>": "Gillam", "GoT":"DwD"]
```
### Exercise 23
Also great, I like the variable names :-)
### Exercise 24
Also great :-)
### Exercise 25
Also great :-)
### Summary
You have everything worked out perfectly, including the extensions and extras, so I'm awarding 5 points :-)
Status: Issue closed
|
ColorlibHQ/AdminLTE | 522428956 | Title: [BUG] in "plugins/filterizr" in visual studio 2019
Question:
username_0: AdminLTE 3.0 gives 4 errors in Visual Studio 2019 when I debug.
1. "plugins/filterizr/ActiveFilter" has no exported member 'Filter'. Did you mean to use 'import Filter from "plugins/filterizr/ActiveFilter"' instead? "plugins\filterizr\FilterItems.d.ts"
2. "plugins/filterizr/ActiveFilter" has no exported member 'Filter'. Did you mean to use 'import Filter from "plugins/filterizr/ActiveFilter"' instead? "plugins\filterizr\Filterizr.d.ts"
3. "plugins/filterizr/FilterizrOptions/defaultOptions"' has no exported member 'RawOptions'. Did you mean to use 'import RawOptions from "plugins/filterizr/FilterizrOptions/defaultOptions" instead? "plugins\filterizr\Filterizr.d.ts"
4. "plugins/filterizr/FilterizrOptions/defaultOptions"' has no exported member 'RawOptionsCallbacks'. Did you mean to use 'import RawOptionsCallbacks from "plugins/filterizr/FilterizrOptions/defaultOptions"' instead? "plugins\filterizr\FilterContainer.d.ts"
I'm using:
- Windows 10
- AdminLTE 3.0 (latest)
- Visual Studio 2019
Status: Issue closed
Answers:
username_1: According to this error you are trying to compile filterizr (.ts -> .js). AdminLTE doesn't compile plugins; we use only pre-compiled plugin files.
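A common follow-up workaround on the Visual Studio side (my assumption about the project layout, not an official AdminLTE recommendation; adjust the path to wherever the plugins are copied) is to exclude the plugin sources from TypeScript checking in `tsconfig.json`:
```json
{
  "exclude": [
    "wwwroot/plugins",
    "node_modules"
  ]
}
```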
artsy/force | 181223319 | Title: 'Read More' content fold appears on top of contact gallery ribbon
Question:
username_0: https://www.artsy.net/artwork/andy-warhol-flowers-ii-dot-72-6
When scrolling down, the purple ribbon floats under the 'read more' line.

Answers:
username_1: I think this may have been fixed? (It may have even been me as I thought I'd looked at this before). This is what I see:

username_2: Totally fixed.
Status: Issue closed
|
docker/docker-py | 51521376 | Title: Add helper API on top of core API
Question:
username_0: It seems that the current architecture of `docker-py` is to exactly mirror the API of docker, and not add any "helper functions" on top. I totally agree that the docker-py core should mirror the API of docker as closely as possible. However, I keep finding myself re-writing common helpers over and over when integrating docker-py with other code.
Would the maintainers be opposed to adding a `helper` API on top of the existing core API that adds common operations, such as checking if a container or image exists, converting raw strings into more pythonic types, etc? I'm happy to provide some initial PRs to build a base for this new API if someone gives me a green light
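(As a concrete illustration of the kind of helper meant here, a sketch against the client API of the time; treating a 404 from `inspect_container` as "does not exist" is my assumption about how the API signals a missing container:)
```python
from docker import errors


def container_exists(client, container):
    """Return True if `container` (id or name) exists on the daemon."""
    try:
        client.inspect_container(container)
        return True
    except errors.APIError as exc:
        if exc.response.status_code == 404:
            return False
        raise
```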
Answers:
username_1: We had to figure out how to handle this very issue with boto a few years ago. Initially, I felt that a separate package with the more sane API would make the most sense. However, that would mean juggling two projects with two potentially different release cadences, with a really confusing backwards compatibility story (boto evolves/evolved fast, like Docker).
We eventually settled on a layered approach. The official Amazon Java SDK was imitated closely in ``layer1``, and a more humane API took form in ``layer2``. You can see this in practice in the [dynamodb2 module](https://github.com/boto/boto/tree/develop/boto/dynamodb). Amazon's DynamoDB API was particularly difficult to deal with at first, so this second layer brought some much-needed sanity and usability to the table.
However, some developers really do need access to the "native" level in order to specially optimize or handle some more complex cases. Most end up using layer2, because it's easier, but this setup allows everyone to be happy.
I don't know what this would look like in docker-py, but figured I'd share experiences from another Python module that also had to track a rapidly changing upstream API. It sounds like you are leaning that way, anyway.
FWIW, I'd gravitate towards a module name that suggests its primary purpose (helpers, conveniences, friendliness). Maybe something like ``docker.humane``.
username_2: FWIW I am +1 on this being a separate package, and I've already started assembling some helpers that I've used -- to add to your list, `temporary_image` and `temporary_container`, which create images and containers and delete them in a contextmanager. Possibly by the time I'm done I'll have shoved them into a separate library myself anyhow.
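(A sketch of what such a helper might look like against the client API of the time; the create/start/stop/remove cleanup policy here is my assumption, not username_2's actual code:)
```python
from contextlib import contextmanager


@contextmanager
def temporary_container(client, image, **create_kwargs):
    """Create and start a container, then stop and remove it on exit."""
    container = client.create_container(image=image, **create_kwargs)
    cid = container['Id']
    try:
        client.start(cid)
        yield cid
    finally:
        client.stop(cid)
        client.remove_container(cid)
```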
username_0: @username_2 great ideas! FWIW I've got some basic code but it's a bit stalled, if anyone wants to take point on a PR for this I'm happy to deposit my current code into a gist in case it's useful
username_2: Nice! I've got some as well, although it's a bit stuck on not having an
easy way to write tests for it unfortunately, but maybe starting to put
some stuff in the same place would get this kick started, yeah.
username_3: I'll try to kickstart something today or tomorrow if it helps.
username_3: Thrown a branch together for starters, added a few commonly requested functions. See https://github.com/docker/docker-py/tree/efficiency ; Feel free to send PRs against this branch, even if it's WIP.
username_4: I've started working on a separate project with the goal of providing a higher-level object-oriented API on top of the REST API implemented here: https://github.com/quantopian/dockorm. It's currently built with deployment of IPython/Jupyter notebook servers in mind, but the library itself is pretty generic besides using IPython's traitlet system to provide configuration: https://github.com/quantopian/dockorm.
username_0: Here's a simple idea that I've used a few times, maybe it will help out here.
```python
import threading
import sys

def log_for_docker(generator, line_prefix='', print_empty=False):
    '''
    To get generator, use `client.logs(container=cid, stream=True)`.
    It's perfectly reasonable to use this method once for STDOUT and
    once for STDERR.
    '''
    def log_docker(generator, print_empty, line_prefix):
        # Stream each (non-empty, unless print_empty is set) log line to
        # stdout, prefixed so output from different containers is easy to
        # tell apart.
        for line in generator:
            if print_empty or line.strip():
                sys.stdout.write(line_prefix)
                sys.stdout.write(line)

    # Consume the log stream in a daemon thread so it never blocks the
    # main thread or program exit.
    r_log = threading.Thread(target=log_docker, args=(generator, print_empty, line_prefix))
    r_log.daemon = True
    r_log.start()
```
Naturally this might cause output to be mixed when using multiple threads, but I've yet to have such serious issues that I've cared. If someone wants to modify this to take an output mutex, they could easily make it such that only one background thread is printing at a single time without hurting the performance of the main thread of execution. That's no guarantee that the main thread and the currently-printing background thread won't write at the same time, it would just help if you've got many container logs being tracked in one application
username_0: Usage for the above code:
```
log_generator = cli.logs(container=cid, stream=True)
log_for_docker(log_generator, 'RabbitMQ: ', True)
```
Output:
```
RabbitMQ:
RabbitMQ: RabbitMQ 3.4.4. Copyright (C) 2007-2014 GoPivotal, Inc.
RabbitMQ: ## ## Licensed under the MPL. See http://www.rabbitmq.com/
RabbitMQ: ## ##
RabbitMQ: ########## Logs: tty
RabbitMQ: ###### ## /var/log/rabbitmq/[email protected]
RabbitMQ: ##########
RabbitMQ: Starting broker...
RabbitMQ: =INFO REPORT==== 28-Feb-2015::23:55:38 ===
```
username_0: Another one that may be useful here while I'm at it. Ensures a container dies with the program.
Naturally this could be modified to support its own kwargs (stuff like `should_rm` or `timeout`) that are stripped from the dictionary passed to start.
```python
import atexit

def safe_start(client, *args, **kwargs):
    def container_cleanup(client, cid):
        client.stop(cid)

    # Start the container, then register a handler so it is stopped
    # automatically when the program exits.
    client.start(*args, **kwargs)
    if len(args) == 1:
        atexit.register(container_cleanup, client, args[0])
    else:
        atexit.register(container_cleanup, client, kwargs['container'])
```
Called as:
```
# Both methods work fine
safe_start(cli, cid, publish_all_ports=True)
safe_start(cli, container=cid, publish_all_ports=True)
```
username_5: I've already started working on a tool where one of the functionalities is to provide such helper functions:
https://github.com/DBuildService/dock/blob/master/dock/core.py#L151
username_6: I'm trying to wrap my head around whether this issue matches with the problem I'm having. Currently `docker-py` returns fairly painful-to-use responses from the API for a Python application. Even worse if you're writing against multiple versions of the Docker/Swarm API where things may have dramatically changed.
I would prefer being able to parse any version of the API that's supported by `docker-py` to a common format for consumers of `docker-py`. This would make it easier to integrate with a variety of versions of Docker that you may be talking to. Especially useful for a provider of a Docker service.
My current solution is a parser module where you essentially do:
```python
import docker
import docker_parse
client = docker.Client(..., version='auto')
resp = client.info()
friendly_info = docker_parse.parse(client.version(), "info", resp)
```
Thoughts?
username_7: https://github.com/docker/docker-py/issues/1186. :tada:
Status: Issue closed
|
rossfuhrman/_why_the_lucky_markov | 472279787 | Title: Like, say, let's focus on shoes today. There, already we shared a sense of humor.
Question:
username_0: Toot: Like, say, let's focus on shoes today. There, already we shared a sense of humor.
One comment = 1 upvote. Sometime after this gets 2 upvotes, it will be posted to the main account at https://mastodon.xyz/@_why_toots |
FormidableLabs/radium | 243960902 | Title: I have a problem about radium.keyframes
Question:
username_0: I try to change the animation when I click a button and do something when the animation is complete.
Here is a simple example:
```js
import React from 'react';
import radium, {StyleRoot} from 'radium';
const isClickedStyle = {
opacity: '0'
};
const normalStyle = {
opacity: '1'
};
const normalAnimation = radium.keyframes({
'0%': isClickedStyle,
'100%': normalStyle
});
const isClickAnimation = radium.keyframes({
'0%': normalStyle,
'100%': isClickedStyle
});
const style = isClicked => ({
width: '100px',
height: '100px',
background: 'blue',
animation: 'x 0.5s ease-in-out',
animationName: isClicked ? isClickAnimation : normalAnimation,
...(isClicked ? isClickedStyle : normalStyle)
});
@radium
class Example extends React.Component {
constructor(props) {
super(props);
this.state = {
isClicked: false
};
this.animationEnd = true;
this.onClick = this.onClick.bind(this);
}
render() {
return (
<div>
<StyleRoot style={style(this.state.isClicked)}
onClick={this.onClick}
onAnimationEnd={() => (this.animationEnd = true)}
/>
</div>
);
}
onClick() {
if(this.animationEnd) {
this.animationEnd = false;
[Truncated]
Here is my solution:
```
render() {
return (
<div>
<StyleRoot style={{animationName: isClickAnimation}} />
<StyleRoot style={{animationName: normalAnimation}} />
<StyleRoot style={style(this.state.isClicked)}
onClick={this.onClick}
onAnimationEnd={() => (this.animationEnd = true)}
/>
</div>
);
}
```
This adds the `keyframes` to the `style` tag at the beginning, so the `animation` works because the `keyframes` already exist.
However, I don't think this is a good solution. Is there another way to solve it?
Answers:
username_1: I'm facing the same problem. Is there a possible fix for this, or is inline keyframes animation just not possible at all?
username_2: Here's a minimal repro. Works in Chrome 60 but not Safari 10.1.1: https://codepen.io/patricknausha/pen/GvQxKR
username_2: It seems my codepen *sometimes* works the first time in Safari.
username_1: I ended up using GASP tweenmax to achieve this, worked like a charm!
username_3: My suspicion is that this is a race condition, in which the `@keyframes` are added to the stylesheet on the page _after_ the `animationName` style property is added to the element. Safari doesn't find the animation, and doesn't refresh those elements when the animation is added to the stylesheet.
This can also be verified by doing the following:
1) Find the element which is not animating in the web inspector.
2) Remove the `animation` style rule and then re-add it
3) Safari will now find the animation in the existing stylesheet and the element will animate.
Looking at the code, the update of the global stylesheet (`src/components/style-sheet.js:43`) is deferred by 0 milliseconds, while the addition of the style rule to the element is synchronous. This would cause the race condition described above.
Does anyone know why the `_onChange` method of `StyleSheet` in `src/components/style-sheet.js` is deferred?
username_4: You could also try this [AnimationAwareStyle component](https://github.com/FormidableLabs/radium/issues/754#issuecomment-579796290) |
bgamari/hoogle-index | 112085198 | Title: Generate hoogle docs for stack installed packages
Question:
username_0: This would be helpful since stack is getting quite popular. I'm guessing you'd need to look at the global stack.yaml and download from the default snapshot.
Answers:
username_1: This is indeed a reasonable request but I don't anticipate working on it personally any time in the near future. Patches accepted.
username_0: Totally understandable. Do you have any tips or direction if I were to try
and implement this myself?
username_2: @username_0 see https://github.com/commercialhaskell/stack/issues/55, which is the Stack issue tracking this feature, and all the notes on Hoogle 5. |
legacysurvey/legacypipe | 320430256 | Title: update unwise module files
Question:
username_0: @mlandriau @username_2 @username_1
The unwise (static and time-resolved) module files in `bin/modulefiles/cori/` and `bin/modulefiles/edison/` don't appear to be pointing to the correct locations on-disk. Can one of you please update these to point to the right location when you get a chance? I haven't kept up with the recent data movements...
Answers:
username_1: @username_0
I talked to Martin about this today and apparently the 'desiproc' pseudo-user that runs all of the data release processings has its own version of the module files (somewhere) that's more current. It seems like Martin would be the best person to help with this since he must know where to find those current modulefiles, whereas I wouldn't be able to tell you without spending a lot of time investigating.
username_2: fixed via #190
Status: Issue closed
|
harryosmar/plugin-validation | 297981360 | Title: rule `is_true` multiple call
Question:
username_0: Sorry, after some consideration: you can just combine multiple conditions with `and` for this case
```
public function test_is_true(){
$field = (new Field('field', 5 < 4 && 1+1 === 3))->isTrue('comparison error');
$this->assertFalse($field->isValid($this->language));
$this->assertEquals(['comparison error'], $field->getErrors());
}
```
Status: Issue closed
Answers:
username_0: ```
public function test_is_true(){
$field = (new Field('field', 5 < 4))->isTrue('comparison error');
$this->assertFalse($field->isValid($this->language));
$this->assertEquals(['comparison error'], $field->getErrors());
}
```
Can a field call the `is_true` rule multiple times?
So it becomes like this:
```
public function test_is_true(){
$field = (new Field('field'))->isTrue(5 < 4, 'comparison error')->isTrue(1+1 === 3, 'wrong');
$this->assertFalse($field->isValid($this->language));
$this->assertEquals(['comparison error', 'wrong'], $field->getErrors());
}
```
username_1: https://github.com/username_0/plugin-validation/pull/9
Status: Issue closed
|
starikcetin/Extenject | 842834461 | Title: Support UniRx integration for Extenject Signals
Question:
username_0: **Is your feature request related to a problem? Please describe.**
[UniRx integration](https://github.com/svermeulen/Extenject#unirx-integration) for Extenject signals requires manual installation of UniRx as well as editing of `Zenject.asmdef` file. This seems to undo itself easily inside the Package folder.
**Describe the solution you'd like**
It would be nice to support a `-unirx` branch that installs UniRx and provides a correctly setup `asmdef` file.
**Describe alternatives you've considered**
Currently, I am just not using openupm for Extenject and do the above manually. |
ethz-asl/kalibr | 87003553 | Title: Mistake in IccSensors.py
Question:
username_0: I am not 100% sure, but can someone check whether there should be a .transpose() in line 160 of the file: kalibr/aslam_offline_calibration/kalibr/python/kalibr_imu_camera_calibration/IccSensors.py
Since it only affects the prior, the calibration results should still be correct, but it would take longer for the optimization to converge.
ex-aws/ex_aws | 444990257 | Title: AWS Rekognition Service
Question:
username_0: Hello,
We are currently working on the AWS Rekognition service and so far we have implemented and launched a package https://github.com/coletiv/ex_aws_rekognition 🎊
We would like to integrate it within the current ex_aws packages to offer yet another service! To do this we are requesting a dedicated repository, as described in the [CONTRIBUTING](https://github.com/ex-aws/ex_aws/blob/master/CONTRIBUTING.md) guide.
Thanks 😄
Answers:
username_1: Hi @username_0 - I realise it's been a while, but if you're still maintaining the ex_aws_rekognition repo I'm happy to set you up an official one here - let me know.
username_0: Hey @username_1 - yes, we would like to centralise the Rekognition service with the rest 🙌
Status: Issue closed
username_1: Done - you should be all set. Thanks! |
playcanvas/developer.playcanvas.com | 827801467 | Title: Light halo tutorial no longer works with current material assets
Question:
username_0: https://developer.playcanvas.com/en/tutorials/light-halos/
Following the tutorial now produces the following:


Copying and pasting the original asset still works though
Answers:
username_0: I've had to set the opacity map to the same resource as the circle blur to get the same effect:
```
glowMaterial.opacityMap = asset.resource;
``` |
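For anyone following along, a slightly fuller sketch of that workaround (the asset name is hypothetical, and `glowMaterial` is assumed to be the tutorial's `pc.StandardMaterial`):
```javascript
var asset = this.app.assets.find('circle-blur.png'); // hypothetical asset name
glowMaterial.emissiveMap = asset.resource;
glowMaterial.opacityMap = asset.resource; // same resource, per the fix above
glowMaterial.blendType = pc.BLEND_ADDITIVE;
glowMaterial.update(); // material changes only take effect after update()
```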
BrowserSync/browser-sync | 326469632 | Title: Always return true but nothing else
Question:
username_0: ### Issue details
It always returns the word 'true' when I run it.

### Steps to reproduce/test case
I run it in the Hyper terminal.
### Please specify which version of Browsersync, node and npm you're running
- Browsersync [ 2.13.0 ]
- Node [ 9.3.0 ]
- Npm [ 6.0.1 ]
### Affected platforms
- [ ] linux
- [ ] windows
- [Y] OS X
- [ ] freebsd
- [ ] solaris
- [ ] other _(please specify which)_
### Browsersync use-case
- [ ] API
- [ ] Gulp
- [ ] Grunt
- [Y] CLI
### If CLI, please paste the entire command below
browser-sync start --server
### for all other use-cases, (gulp, grunt etc), please show us exactly how you're using Browsersync
Status: Issue closed
|
backstage/backstage | 955218445 | Title: catalog import YAMLSyntaxError is logged but not reported on entity import.
Question:
username_0: ## Expected Behavior
A YAML syntax error should be an error, not a warn-level status and should cause catalog import to fail.
## Current Behavior
Import appears to succeed, reporting a location of the import (for example of a Template), but the imported entity
is partially missing. It's impossible to reimport after fixing the YAMLSyntaxError because the location table entry has been
created and needs to be manually deleted.
```text
2021-07-28T19:55:04.787Z catalog warn YAML error at url:https://gitlab.cc.columbia.edu/cuit-ent-arch/backstage-cu/-/blob/master/templates/dja-template/template.yaml, YAMLSyntaxError: All collection items must start at the same column type=plugin entity=location:default/generated-36959e2b859190bbda790c937be120bc95df2a6a
```
## Possible Solution
Fail the catalog import on YAMLSyntaxError.
## Steps to Reproduce
1. Forget to install yaml-mode in emacs (!)
[template.yaml.txt](https://github.com/backstage/backstage/files/6896345/template.yaml.txt)
and edit `template.yaml`, inadvertently inserting a `\t` instead of spaces.
2. Go to /catalog-import and paste the repo url of `template.yaml`
3. Watch catalog import successfully analyze and then import, reporting a location.
## Context
## Your Environment
- NodeJS Version (v12):
- Operating System and Version (e.g. Ubuntu 14.04):
- Browser Information:
Answers:
username_0: [template.yaml.txt](https://github.com/backstage/backstage/files/6896349/template.yaml.txt)
username_0: PS: after fixing the tab in the sample file I also had to change a tag from `REST` to `rest` due to the policy check requiring lowercase... However the import succeeded once that was done.
username_1: Ah this should not have been closed I think
ClickHouse/ClickHouse | 798115345 | Title: [Unexpected behaviour] Bad size of marks file - Data Skipping Indexes
Question:
username_0: Hi @username_1,
for the data I have imsi as 16 digits numeric value, nparty as string conversion of 11 digits numeric value, startdate and enddate I have in the format "2020-02-03 00:16:15", count is mostly two-digit numeric value, type consists of a single character
for ranges I have around 15762 unique imsi, 38747 unique nparty, 41169 unique startdate with max value is "2021-01-14 00:01:09" and min value "2020-02-03 00:16:15", 41169 unique enddate with max value is "2021-01-21 00:01:09" and min value is "2020-02-10 00:16:15",394 unique count value, 2 unique type value
for the index "20201119_108_814_1" for which I am getting the problem I have total rows 23970,
min startdate "2020-11-19 00:00:12" and max startdate "2020-11-19 14:55:37",min enddate "2020-11-26 00:00:12" and max enddate "2020-11-26 14:55:37"
I am getting the problem when I ingest the above-defined data and fire the given query as soon as data ingestion is done.
Status: Issue closed
Answers:
username_1: it's a bug https://github.com/ClickHouse/ClickHouse/issues/16925
username_1: Do you have a datasample / steps to reproduce it?
username_0: Hi @username_2,
what is the clickhouse version with this fix?
username_1: Is it fixed? When?
```
Code: 246, e.displayText() = DB::Exception: Bad size of marks file '/data/clickhouse/data/default/.../20210314_6786837_6786931_19/skp_idx_timeidx.mrk3': 72, must be: 96 (version 20.12.8.5 (official build))
``` |
gitim/react-native-sortable-list | 219315694 | Title: Performance
Question:
username_0: I'm trying to use this module with a simple text listview, but the performance is still not very good. On Android and iOS the movement is choppy. (Even with jsdev turned off for android)
Answers:
username_1: To improve performance you can add `shouldComponentUpdate` to the component that is rendered by your `renderRow` function.
username_1: Also there is another way to improve performance, we can switch back to LayoutAnimation while animating rows swapping, I switch from it to js-animation, because it was buggy on android.
username_0: Wouldn't modifying shouldComponentUpdate for the row cause it not to actually move around onscreen? (Haven't actually tried it yet so don't quote me)
username_1: It seems you didn't understand me.
You should implement `shouldComponentUpdate` inside the MyRow component:
```js
<SortableList
style={styles.list}
contentContainerStyle={styles.contentContainer}
data={data}
renderRow={({data, active, disabled}) => <MyRow data={data} active={active} disabled={disabled} /> } />
```
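(To make that concrete, a sketch of such a `MyRow`; the shallow prop comparison assumes row data objects are replaced rather than mutated:)
```js
class MyRow extends React.Component {
  shouldComponentUpdate(nextProps) {
    // Re-render only when this row's own data or active state changes.
    return nextProps.data !== this.props.data || nextProps.active !== this.props.active;
  }

  render() {
    const {data, active} = this.props;
    return <Text style={active ? styles.activeRow : styles.row}>{data.text}</Text>;
  }
}
```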
username_0:
```jsx
<TouchableOpacity>
  <Icon
    name="ellipsis-v"
    size={20}
  />
  <Text style={{ margin: 7 }}>
    {data.name.trim()}
  </Text>
</TouchableOpacity>
```
username_2: @username_1 I've got the same problems too. My `renderRow` returns a class with just a `Text`. Could you be more specific about how I fix the perf issues?
username_1: Strange, how many entries in your list component?
username_2: Hey!
This is my state:
```
this.state.data = {
a: {
index: 'a'
},
b: {
index: 'b'
}
}
```
Now when I want to update my `object`, I do this:
```
this.setState({
data: {
... this.state.data,
[key]: {
index: key
}
}
});
```
This causes the whole list to re-render. All the rows flash. Looks bad. I even tried a shallow update; it did not work. Can you tell me how I fix this? How do I add new items without re-rendering?
username_1: I answered this question here https://github.com/username_1/react-native-sortable-list/issues/47. It looks possible to implement this heuristic, but I have not implemented it yet. Will try to take a look at it this week.
username_2: @username_1 Looking forward to it. I tried it with `this.forceRender()` by setting my state like this:
```
this.state[key] = {
index: key
}
```
But it did not work.
username_3: Any progress on this so far? This "flashing-the-whole-list" issue has been killing me all day. My newbie skills couldn't help me dig around to try to fix it. I am using this for my to-do app and changed a bit of Sortable List to accept a child button and check box. It worked nicely until I saw the flashing nightmare. T~T
username_2: @username_3 No! It still flashes. The whole state object gets updated because we are reassigning it.
username_3: @username_2 I am currently trying this one https://github.com/deanmcpherson/react-native-sortable-listview. It seems it does not have the flashing problem, but I still haven't finished implementing it.
username_2: @username_3 I need horizontal scrolling list. Does it support that?
username_2: @username_3 Also let me know how it fares.
username_4: @username_2 I had a similar problem.
My structure looks like the following:
```javascript
this.state.data = [
{
index: 'a'
},
{
index: 'b'
}
];
```
Now I also tried to update my code:
```javascript
var data = [...this.state.data];
var pos = data.findIndex((item) => item.index === 'a');
if (pos !== -1) {
data[pos] = {
...data[pos],
changedSometing: true
};
this.setState({ data });
}
```
Changing this line `var data = [...this.state.data];` to `var data = this.state.data;` fixed the problem for me.
radar/distance_of_time_in_words | 775465475 | Title: Is not grouping the omitted time quantities
Question:
username_0: If I call for example:
distance_of_time_in_words(date1, date2, false, only: %i[years months days])
It's showing me something like this:
1 year, 11 months, and 5 days
But the thing is that the amount of days is wrong. The amount of real time is:
1 year, 11 months, 3 weeks and 5 days
So your method should say instead:
"1 year, 11 months, and 26 days"
The fact that I only need the years, months and days doesn't mean that I ALSO want a wrong amount of days. I don't want that. I want to see the difference of time reflected in years, months and days. That's all. But not omitting anything.
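(Restating the expectation as a sketch; `DOTIW::Methods` is what the gem's README describes as its non-Rails entry point, and the dates are invented to produce a gap like the one above:)
```ruby
require 'dotiw'

from = Time.utc(2019, 1, 5)
to   = Time.utc(2020, 12, 31)

puts DOTIW::Methods.distance_of_time_in_words(from, to, false, only: %i[years months days])
# reported: the 3 weeks are silently dropped ("... and 5 days")
# expected: weeks folded into days ("... and 26 days")
```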
Answers:
username_1: I think this is a dup of https://github.com/radar/distance_of_time_in_words/issues/77 |
sebastienros/jint | 506725941 | Title: Error during compilation of CoffeeScript code
Question:
username_0: Hello!
After updating Jint to version 3.0.0 Beta 1612, an error began to occur while the [BundleTransformer.CoffeeScript](https://github.com/username_0/BundleTransformer/wiki/CoffeeScript) module was running:
```
TypeError: scope is undefined
at 12731:6
```
To reproduce this error, I created a [demo project](https://www.dropbox.com/s/a8hq1mg8xp1175a/TestCoffeeScriptCompilationInJint3.zip).
Answers:
username_1: Did this work with 2.x? A quick glimpse told me that it's actually the compiler that is throwing the exception, which can be quite tricky to track down.
username_0: Yes this code worked in version 2.0.
username_0: In the [`JavaScriptException`](https://github.com/username_2/jint/blob/dev/Jint/Runtime/JavaScriptException.cs) class there is a `CallStack` property that is empty in most cases. If this property always contained a call stack, it would be easier for us to find the causes of such errors.
username_0: This error is not thrown by the CoffeeScript compiler, but occurs in its source code. It's just that at some point in time the `scope` variable turns out to equal `undefined`.
username_0: @username_1 I determined the location in the `coffeescript-combined.js` file where the error occurs:
```
6229 o = merge(o, {
6230 level: LEVEL_TOP
6231 });
```
To be more precise, the error occurs in the `extend` method when copying the `scope` property:
```
153 // Extend a source object with the properties of another object (shallow copy).
154 extend = exports.extend = function (object, properties) {
155 var key, val;
156 for (key in properties) {
157 val = properties[key];
158 object[key] = val;
159 }
160 return object;
161 };
```
username_0: The above is the reason for the disappearance of the `scope` property. The "TypeError: scope is undefined" error itself occurs on line 6247 when accessing a property of the `scope` variable:
```
6243 post = this.compileNode(o);
6244 var _o2 = o;
6245 scope = _o2.scope;
6246
6247 if (scope.expressions === this) {
6248 declars = o.scope.hasDeclarations();
```
username_1: @username_0 thank you for the details, it helped me narrow the thing down. This was caused by a faulty enumerator in StringDictionarySlim that acted up after a delete that hit the fix bucket index - so a nice corner(ish) case.
PR https://github.com/username_2/jint/pull/677 should fix this issue.
Status: Issue closed
username_0: @username_1 Thank you very much!
username_1: @username_0 available on NuGet feed now 🚀
username_2: I think @username_0 already included this fix. I had published it on Nuget before today's fix.
username_0: I have already released [version 3.2.2](https://github.com/username_0/JavaScriptEngineSwitcher/releases/tag/v3.2.2) which supports the Jint version 3.0.0 Beta 1629.
username_1: Great, sorry for the confusion, hard to keep up with all the changes 👍🏻 |
naser44/1 | 159452439 | Title: Share a Meal.. an app to "feed" Syrian refugee children
Question:
username_0: <a href="http://ift.tt/1UiG5tg">Share a Meal.. an app to "feed" Syrian refugee children</a>
microsoft/jacdac | 931038683 | Title: data-science: Handle arithmetic and logical operations between strings and numbers in table columns
Question:
username_0: Right now you can compare strings to numbers. This has some nice features for middle school students (`5 == "5"`) but also some confusing features (`"a" > 5`). Maybe we need two separate blocks.
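(For context, the underlying JavaScript comparison semantics at play here:)
```js
5 == "5"   // true  - the string is coerced to a number before comparing
"a" > 5    // false - "a" coerces to NaN, and every comparison with NaN is false
"a" < 5    // false - same reason, which is what makes mixed comparisons confusing
```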
Answers:
username_1: We should always separate strings and numbers in blocks and avoid relying on (very funky) JS number -> string conversions. Also, type information allows for better field editors.
jdavisclark/JsFormat | 400682294 | Title: Package Control installs JsFormat, but it is not found.
Question:
username_0: Downloading https://codeload.github.com/jdc0589/JsFormat/zip/master prompts a 404 (Not Found).
Status: Issue closed
Answers:
username_2: I still get this error when installing:
```
Package Control: Error downloading package. HTTP error 404 downloading https://codeload.github.com/jdc0589/JsFormat/zip/master.
```
username_1: @username_2 See https://github.com/jdavisclark/JsFormat#install |
wenzhixin/bootstrap-table | 152058695 | Title: Select column filter doesn't work due to class conflict (resolved).
Question:
username_0: When using the column filter extension's select filter, I am getting the following error
Uncaught TypeError: Cannot read property 'length' of undefined in line 41 of bootstrap-table-filter-control.js and the table stops rendering.
I did some digging and found out that name of the filtered column is applied to the select element as a class. When the filter is created, bootstrap-table-filter-control.js looks for elements with the class name matching the column name (Line 204), if you have another element using a class name similar to the column name you are filtering, all those elements will be pulled in and you'll end up with the error I specified above.
If you use console.log(selectControl) on line 204, you'll see the elements pulled in.

To address this issue, you can narrow down the elements you select by changing
`selectControl = $('.' + column.field);` to `selectControl = $('select.' + column.field);` on line 204
Here is an example of the issue, http://jsfiddle.net/9c7g64cr/2/
I tested it an it works well. :+1:

Answers:
username_1: @username_2 Same error when using develop version: http://jsfiddle.net/wenyi/9c7g64cr/3/.
username_2: Using the fixed version of filter-control and the develop version of bootstrap-table
http://jsfiddle.net/9c7g64cr/4/
@username_1
Status: Issue closed
|
cami-project/cami-project | 180639981 | Title: Sentry error integration
Question:
username_0: ## Why
We need to integrate a 3rd party provider that will help us aggregate the logs from cami. For this we chose [Sentry](https://sentry.io/for/django/).
## What
Integrate Sentry and configure logging in the following projects:
- [ ] frontend
- [ ] medical_compliance
## Notes
Status: Issue closed
Answers:
username_1: Issue fixed in PR #90.
username_2: @username_1 just use hub's issue to PR conversion from now on. Thanks! |
tylerrasor/party-party-partyinator | 560755731 | Title: url entry
Question:
username_0: Going along with https://github.com/username_0/party-party-partyinator/issues/2, it would be nice to be able to just provide a URL for the partying to get started...
The idea has been floated to allow adding a query param of a URL that then just straight loads the partyified image.
publiclab/community-toolbox | 561839004 | Title: Move to Angular Framework (MVC) (Suggestion for GSOC)
Question:
username_0:
Answers:
username_0: @jywarren @SidharthBansal @ebarry Please review this
I want to apply to GSoC again this year as well; last time I didn't get the chance.
username_1: Could you please elaborate more on this topic? Why would you prefer Angular over React for this app? What are the advantages that Angular has and that will benefit this project? Thanks. |
pi-kappa-devel/diseq | 804618164 | Title: Calculations of derivative expressions
Question:
username_0: - Provide documentation with the calculations.
- Provide sources for the generation of the code of the derivative expressions in C++ and in R.
Status: Issue closed
Answers:
username_0: The calculations of the gradients and the Hessians are part of the mathematical supplement of the paper, so documentation can be found there. The code that generates the R sources has a wider scope, as it also produces parts of the paper. Therefore, it will not be included here.
2017-fall-DL-training-program/VAE-GAN-and-VAE-GAN | 276827561 | Title: What should we use, CelebA_aligned.h5 or CelebA_aligned_reduced.h5?
Question:
username_0: Dear TA:
As titled, I found that IT provides two different datasets with the prefix "CelebA_aligned".
I tried to access the Google link outside MTK, and got CelebA_aligned.h5.
Don't we care about CelebA_aligned_reduced.h5?
Answers:
username_1: Hi @username_0 ,
Please use "CelebA_aligned_reduced.h5".
The google link in the pdf file is the "whole" dataset. To reduce the training time, I provide a reduced version and I have removed the link in the pdf file to avoid confusion.
By the way, no matter which one you use, you will get similar results. The only difference is the training time.
Thanks
username_0: Thanks !
Status: Issue closed
username_2: Dear TA,
I only see CelebA_aligned.h5 but not CelebA_aligned_reduced.h5 in the following path: /homework/dataset.
Where can I get CelebA_aligned_reduced.h5?
BTW, when I train a DCGAN with CelebA_aligned.h5, I find that the discriminator becomes too strong. That is, D(x) ~ 1 and D(G(x0)) ~ 0. Shouldn't I use CelebA_aligned.h5?
Thanks. |
sequelize/sequelize | 256902084 | Title: docs did not talk about importing models
Question:
username_0: - [email protected]
- [email protected]
- [email protected]
- dialect: sqlite
```js
import fs from 'fs';
import path from 'path';
import sequelize from '../main_process/Database';
import Sequelize from 'sequelize';
let basename = path.basename(__filename);
let db = {};
fs.readdirSync(__dirname)
.filter(function(file) {
return (file.indexOf('.') !== 0) && (file !== basename) && (file.slice(-3) === '.js');
})
.forEach(function(file) {
let model = sequelize['import'](path.join('../../', __dirname, file));
db[model.name] = model;
});
Object.keys(db).forEach(function(modelName) {
if (db[modelName].associate) {
db[modelName].associate(db);
}
});
db.sequelize = sequelize;
db.Sequelize = Sequelize;
module.exports = db;
```
The code above will import all my models into the `models/index.js` so when I need my models I simply need to `import models from '../models';` which will then allow me to make transactions via `models.my_model`.
It works in development, but when I package my app with `electron-builder` the `models` directory will no longer exist, and if I run the app it will throw an error saying that it cannot find the `models` directory.
I bundle all my renderer files into one file called `app.js` and also bundle my main process files into a separate file called `main.js`. I actually don't want to include the `models` directory in the package; since I bundle all my files it doesn't make sense to do that, but `fs.readdirSync` would require the `models` directory.
I have tried importing all my models in my `index.js` one by one, but it didn't work.
```
import model_name from './model_name';
db['model_name'] = model_name;
```
I was hoping that you could tell me about or point me to a solution that imports all the models into the `models/index.js` file, so that I don't have to include the `models` directory in the package and can still use my models by doing
```js
import models from '../models'; // will import index.js from the models directory
ipcMain.on(...., (event, args) => {
models.my_model.all(....)
.then(....)
.catch(....);
});
```
I read the docs and can't find anything related to this.
Status: Issue closed
Answers:
username_0: I already found a way that works.
username_1: @username_0 Hi, could you kindly share what your solution to this was? I replied on your similar issue in https://github.com/sequelize/express-example/issues/71
Thanks in advance |
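For readers landing here, one bundler-friendly approach (a sketch, not username_0's confirmed solution; the model files are assumed to export the usual `(sequelize, DataTypes) => Model` factories, and the file names are hypothetical) is to import every model factory statically so the bundler can see them:
```js
import Sequelize from 'sequelize';
import sequelize from '../main_process/Database';

import userModel from './user';   // hypothetical model files
import orderModel from './order';

const db = {};

[userModel, orderModel].forEach((factory) => {
  const model = factory(sequelize, Sequelize.DataTypes);
  db[model.name] = model;
});

Object.keys(db).forEach((modelName) => {
  if (db[modelName].associate) {
    db[modelName].associate(db);
  }
});

db.sequelize = sequelize;
db.Sequelize = Sequelize;

export default db;
```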
AntonyCorbett/OnlyT | 258602409 | Title: Create a meeting time report
Answers:
username_1: Similar to the PDF report found in SoundBox - a detailed record of the meeting talk times and a summary showing average times by week.
username_2: I wanted to know how this is accomplished. I looked everywhere and have seen no setting for creating a time report. I only see the Logs folder under Documents/OnlyT. Am I missing something?
Thanks
username_1: @username_2 Hi. The "Issues" section contains a list of bugs, feature requests ("enhancements"), etc. This item describes a feature that has been requested; it's not yet implemented.
username_2: Oic sorry for the confusion. Thanks for the quick reply!
Yb
Andre
username_3: This feature is something we're eagerly anticipating here. This is really the only thing that's keeping us from transitioning from the SoundBox timer.
username_4: Just a quick note that this can (after a fashion) be done using the OnlyT remote app:
https://onlyt.app/
username_5: Yes, this feature is very much missed since migrating from SoundBox. Its implementation is highly anticipated.
username_1: @username_5 Sorry, there's no timeline ATM. Shortly I will analyse what requests there are and try to prioritise.
username_1: @username_5 @username_3 @username_2 This feature should be ready to test shortly
username_5: @username_1 I look forward to testing this feature. Thank you for the work you have put into it.
Status: Issue closed
username_1: @username_5 Please see latest pre-release
username_6: Unfortunately the link doesn't work for downloading OnlyT 1.1.0.50
username_7: Works, I just checked.
I also checked the report generator. A folder is created but there are no records.
username_6: Yep, now it works, thanks.
I'll check the report generator and provide feedback.
username_1: @username_7 The report is only generated when OnlyT detects a full meeting agenda. Various heuristics are used to ensure that any experimental or accidental use of the timer does not generate gratuitous reports. This means that if you want to test it you will need to operate it in real time for ~105 minutes.
username_8: That explains why I didn't get a report either. Thanks.
username_6: The message that the report is being generated confused me as well. So we need to test in real time then.
Anthony, is it possible to get the message only when the report is actually generated?
username_6: And also to have the possibility to change the timing folder?
username_1: @username_6 The message was meant to alert the users that there is some activity during which time they shouldn't close the app. In normal circumstances the report will be generated. I may need to rethink.
username_1: @username_6 Please create separate issues as necessary to raise requests, etc. Thanks for valuable feedback!
username_6: Sure Anthony, I will.
And a big thanks for all your efforts you put in helping others.
username_6: I've just tested this in real time. Actually the meeting exceeded 13 minutes and the report was not created due to duration being out of range.
2018-11-13 20:23:11.608 +02:00 [Information] ==== Launched ====
2018-11-13 20:23:11.639 +02:00 [Information] Version 1.1.0.50
....................
2018-11-13 22:21:13.457 +02:00 [Warning] Meeting duration is out of range (26.3 overtime)
2018-11-13 22:21:13.458 +02:00 [Warning] Meeting times invalid so not stored
Shouldn't the range be more flexible? Let's say between 30 min and 2 hrs... just a suggestion.
username_1: @username_6 If the recorded meeting duration is more than 20 minutes longer than it should be (105 mins), OnlyT assumes that the data is not a valid cong meeting and ignores it. I have copied the code from the SoundBox implementation, so it's been this way for a couple of years.
username_7: @username_1 My report was not created. In the settings I chose the automatic mode. I'll check it again
username_1: @username_7 the log file should indicate why not.
username_6: Anthony, does it matter if I run the countdown before starting the meeting (with the meeting start time specified), or does this have nothing to do with the report?
I'll test once again, let's see how it goes.
username_1: @username_6 the version you have ignores the countdown and assumes that a meeting starts on the nearest 15-minute interval to the start of the first timer. Actually, it takes 5 mins off the start of the first timer and uses the nearest quarter of an hour as the meeting start.
username_1: @username_7 From the wiki: OnlyT maintains a log (a simple text file) of activity which can be useful if you run into problems. The log is stored in the Documents\OnlyT\Logs folder.
username_7: oops! I tested it too briefly, thanks for your help :) (99,45 min)
I will test it again correctly anyway
username_6: I've tested again and the report is created.
But it gives just the overall status of whether the meeting finished in time or not.
[2018-11-14.pdf](https://github.com/username_1/OnlyT/files/2578431/2018-11-14.pdf)
username_1: @username_6 thanks. That's odd. Did you go through each of the meeting timers?
username_6: @username_1 yes, I did.
But I started the meeting at 11pm and finished it at 12am the next day. Also, the timer was counting up and at some point I changed it to count down. Maybe those influenced it somehow.
I'll try once again during the day and I won't do any changes.
username_6: @username_1 I've tested it once again and now the report is working.
[2018-11-14.pdf](https://github.com/username_1/OnlyT/files/2580747/2018-11-14.pdf)
Is it possible to have a summary report as well for the previous meetings, like in SoundBox?
Or will this be created automatically after more reports are generated?
Thanks.
username_6: @username_1 one more question on this topic.
If I use /id=Name for the shortcut, will the report be stored in a separate folder with that Name?
username_1: @username_6 yes, it will be printed on a second page as soon as you have 5 or 6 weeks' worth of timing data.
username_1: @username_6 Yes |
ncsm-vertnet/ncsm-fishes | 752349239 | Title: Monthly VertNet data use report for 2020-2, resource ncsm_fishes
Question:
username_0: Your monthly VertNet data use report is ready!
You can see the HTML rendered version of this report at:
http://tools-usagestats.vertnet-portal.appspot.com/reports/2a79f202-3f3a-4d54-88fa-09aa8de1ac73/202002/
Raw text and JSON-formatted versions of the report are also available for
download from this link.
A copy of the text version has also been uploaded to your GitHub
repository under the "reports" folder at:
https://github.com/ncsm-vertnet/ncsm-fishes/tree/master/reports
A full list of all available reports can be accessed from:
http://tools-usagestats.vertnet-portal.appspot.com/reports/2a79f202-3f3a-4d54-88fa-09aa8de1ac73/
You can find more information on the reporting system, along with an
explanation of each metric, at:
http://www.vertnet.org/resources/usagereportingguide.html
Please post any comments or questions to:
http://www.vertnet.org/feedback/contact.html
Thank you for being a part of VertNet. |
cerner/terra-framework | 404529004 | Title: DatePicker fails accessibility testing.
Question:
username_0: # Issue Description
Date picker fails accessibility testing
## Issue Type
- [ ] New Feature
- [ ] Enhancement
- [x] Bug
- [ ] Other
## Expected Behavior
Passes accessibility tests
## Current Behavior
* The control button fails because it has no text. Related to #1058
* The contrast ratio of the selected day is not high enough. I think this is because the text is so small.
* The date picker doesn't generate a label for its input. *Note: this may just be an example update, but looking at the doc I don't see anything on how to create a datepicker with a label*
## Steps to Reproduce
1. Install [Axe Chrome Extension](https://chrome.google.com/webstore/detail/axe/lhdoppojpmngadmnindnejefpokejbdd?hl=en-US)
2. Navigate to a [date picker example](http://engineering.cerner.com/terra-core/#/tests/date-picker-tests/default) and open the datepicker by activating the control.
3. Run axe by going to developer tools and clicking axe tab.
## Environment
* Component Version:
* Browser Name and Version:
* Operating System and version (desktop or mobile):
Status: Issue closed
Answers:
username_1: Closing this in favor of https://github.com/cerner/terra-framework/issues/464 which has more information on the exact issues. |
igorkasyanchuk/rails_pdf | 469995286 | Title: How to install on Heroku?
Question:
username_0: Hey, this looks like a very neat solution. Sorry for the beginner-level question but I am unsure (and I bet others out there as well) how to deploy this to a Rails app that runs on Heroku?
Answers:
username_1: @username_0 not sure, never tried, I hope you can check this:
https://elements.heroku.com/buildpacks/heroku/heroku-buildpack-google-chrome (or maybe create new buildpack)
if it works - please make a PR with instructions on how to do it.
username_0: Thanks for pointing out the buildpack. What I am unsure about is the installation of RelaxedJS, as I have zero experience with npm so far. Is the npm command something that needs to be repeated on the server, or is it enough to have the RelaxedJS files added to my Rails app's git repository before pushing it to Heroku?
username_1: @username_0 could you please create your own?
Here is some example which I found
https://sendgrid.com/blog/create-first-heroku-buildpack/
https://www.petekeen.net/introduction-to-heroku-buildpacks
I never did it either, but maybe you can
ChrisNZL/Tallowmere2 | 581479602 | Title: Quick Restart: Elemental weapon variation restarts with default weapon
Question:
username_0: Reported in 0.1.0.
Reproduce:
1. Choose Poison Bow or Poison Ball from the Weapon Rack.
2. Enter dungeon.
3. Die.
4. Choose Quick Restart.
5. End up respawning with a regular Bow, or a Fireball instead of the chosen weapon with elemental variation off the Weapon Rack.
Answers:
username_0: Fixed in 0.1.1.
Status: Issue closed
|
rust-lang/rust | 304550163 | Title: aarch64 musl binaries panic since 2018-02-05 nightly
Question:
username_0: aarch64-unknown-linux-musl binaries crash immediately when built using Rust nightly since 2018-02-05, including the current beta, 1.25.0-beta.9.
This happens in debug and release builds.
It works fine with 2018-02-04, or with stable Rust 1.24.1. This is building on an x86_64-unknown-linux-gnu host, which shows no errors or warnings, and running on an embedded Linux 4.9 device.
I tried this code:
A fresh `cargo new --bin testme`. (Same thing with a completely empty "main {}" function.)
I expected to see this happen:
The binary should run without panicking. With nightly 2018-02-04, it looks like this, through strace:
```
execve("/tmp/testme", ["/tmp/testme"], 0x7fe28422a0 /* 9 vars */) = 0
mmap(NULL, 448, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f87569000
set_tid_address(0x7f87569038) = 19524
rt_sigaction(SIGPIPE, {sa_handler=SIG_IGN, sa_mask=[], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x425660}, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=0}, 8) = 0
rt_sigprocmask(SIG_UNBLOCK, [RT_1 RT_2], NULL, 8) = 0
rt_sigaction(SIGSEGV, {sa_handler=0x408844, sa_mask=[], sa_flags=SA_RESTORER|SA_SIGINFO|SA_ONSTACK, sa_restorer=0x425660}, NULL, 8) = 0
rt_sigaction(SIGBUS, {sa_handler=0x408844, sa_mask=[], sa_flags=SA_RESTORER|SA_SIGINFO|SA_ONSTACK, sa_restorer=0x425660}, NULL, 8) = 0
sigaltstack(NULL, {ss_sp=NULL, ss_flags=SS_DISABLE, ss_size=0}) = 0
mmap(NULL, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f87566000
Hello, world!
sigaltstack({ss_sp=0x7f87566000, ss_flags=0, ss_size=12288}, NULL) = 0
brk(NULL) = 0x451000
brk(0x452000) = 0x452000
write(1, "Hello, world!\n", 14) = 14
sigaltstack({ss_sp=NULL, ss_flags=SS_DISABLE, ss_size=12288}, NULL) = 0
munmap(0x7f87566000, 12288) = 0
exit_group(0) = ?
+++ exited with 0 +++
```
Instead, with 2018-02-05+, this happened:
```
thread panicked while processing panic. aborting.
Trace/breakpoint trap (core dumped)
```
With strace:
```
execve("/tmp/testme", ["/tmp/testme"], 0x7fe4522920 /* 9 vars */) = 0
mmap(NULL, 520, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f82fb0000
set_tid_address(0x7f82fb0040) = 6644
rt_sigaction(SIGPIPE, {sa_handler=SIG_IGN, sa_mask=[], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x424780}, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=0}, 8) = 0
rt_sigprocmask(SIG_UNBLOCK, [RT_1 RT_2], NULL, 8) = 0
rt_sigaction(SIGSEGV, {sa_handler=0x40cad8, sa_mask=[], sa_flags=SA_RESTORER|SA_SIGINFO|SA_ONSTACK, sa_restorer=0x424780}, NULL, 8) = 0
rt_sigaction(SIGBUS, {sa_handler=0x40cad8, sa_mask=[], sa_flags=SA_RESTORER|SA_SIGINFO|SA_ONSTACK, sa_restorer=0x424780}, NULL, 8) = 0
sigaltstack(NULL, {ss_sp=NULL, ss_flags=SS_DISABLE, ss_size=0}) = 0
mmap(NULL, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f82fad000
sigaltstack({ss_sp=0x7f82fad000, ss_flags=0, ss_size=12288}, NULL) = 0
brk(NULL) = 0x44f000
brk(0x450000) = 0x450000
--- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_MAPERR, si_addr=NULL} ---
write(2, "thread panicked while processing panic. aborting.\n", 50thread panicked while processing panic. aborting.
) = 50
[Truncated]
commit-date: 2018-02-04
host: x86_64-unknown-linux-gnu
release: 1.25.0-nightly
LLVM version: 4.0
```
My ~/.cargo/config:
```
[target.aarch64-unknown-linux-musl]
linker = "aarch64-unknown-linux-musl-gcc"
rustflags = [
"-C", "link-arg=-lgcc",
"-C", "target-feature=+crt-static"
]
```
`aarch64-unknown-linux-musl-gcc` is from GCC 7.2.0 via Buildroot 2017.08.
Note: I've tried adding `-C llvm-args=-fast-isel` per #48673 but it made no difference.
Answers:
username_0: I see the libc repo had a similar-sounding issue, but it was back in November 2017, and Rust builds have been working for me until February 5 2018. It may have prevented more visibility, though. See: rust-lang/libc#856 and https://github.com/rust-lang/libc/commit/bea4879eec9a1
username_0: A colleague found that dynamically linking libgcc (and libc) using "target-feature=-crt-static" works around this issue.
username_1: I wonder if it's related to https://github.com/rust-lang/rust/issues/46566
username_2:
```
info: syncing channel updates for 'nightly-2018-02-05-x86_64-unknown-linux-gnu'
nightly-2018-02-05-x86_64-unknown-linux-gnu unchanged - rustc 1.25.0-nightly (0c6091fbd 2018-02-04)
```
username_2: @username_0 can you validate the commits from the two nightlies? That commit range looks a bit suspicious. Just include the `-vV` output from each rustc that you tested with.
username_2: triage: P-high
We should figure out what is happening here.
username_0: @username_2 Sure! Here are the outputs.
Working 2018-02-04 nightly:
```
$ rustup run nightly-2018-02-04-x86_64-unknown-linux-gnu rustc -vV
rustc 1.25.0-nightly (3d292b793 2018-02-03)
binary: rustc
commit-hash: 3d292b793ade0c1c9098fb32586033d79f6e9969
commit-date: 2018-02-03
host: x86_64-unknown-linux-gnu
release: 1.25.0-nightly
LLVM version: 4.0
```
Non-working 2018-02-05 nightly:
```
$ rustup run nightly-2018-02-05-x86_64-unknown-linux-gnu rustc -vV
rustc 1.25.0-nightly (0c6091fbd 2018-02-04)
binary: rustc
commit-hash: 0c6091fbd0eee290c651f73be899f221eeab3c05
commit-date: 2018-02-04
host: x86_64-unknown-linux-gnu
release: 1.25.0-nightly
LLVM version: 4.0
```
I also bisected with the beta releases using rustup and found that 2018-02-13 (1.24.0-beta.12) works, but the next beta 2018-02-20 (1.25.0-beta.2) crashes.
Last working beta:
```
$ rustup run beta-2018-02-13 rustc -vV
rustc 1.24.0-beta.12 (ed2c0f084 2018-02-12)
binary: rustc
commit-hash: ed2c0f08442915c628fc855e6a784c5979a4dc83
commit-date: 2018-02-12
host: x86_64-unknown-linux-gnu
release: 1.24.0-beta.12
LLVM version: 4.0
```
First crashing beta:
```
$ rustup run beta-2018-02-20 rustc -vV
rustc 1.25.0-beta.2 (1e8fbb143 2018-02-19)
binary: rustc
commit-hash: 1e8fbb1432cc124ba6687c95dc64ed5d21156d6e
commit-date: 2018-02-19
host: x86_64-unknown-linux-gnu
release: 1.25.0-beta.2
LLVM version: 6.0
```
I also confirmed that the just-released 1.25.0-beta.10 and nightly-2018-03-14 still crash.
My process is basically to `rustup toolchain install` the version, `rustup default` the version, `rustup target add aarch64-unknown-linux-musl`, and `cargo build --target aarch64-unknown-linux-musl --release`. (I used `rustup default` because I didn't know `target add` had a `--toolchain` argument until just now.)
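In shell form (with the toolchain date varying per test), that is:
```
rustup toolchain install nightly-2018-02-05
rustup default nightly-2018-02-05
rustup target add aarch64-unknown-linux-musl
cargo build --target aarch64-unknown-linux-musl --release
```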
username_0: Now I see what Niko meant about a potentially suspicious commit range... aside from an RLS/rustfmt change from #47991, there were only two merges: #47915 (which frankly I don't understand :)) and #47834 to disable ThinLTO.
username_3: @username_0 Could you check which one of the three PRs are the cause?
1. Install the nightly toolchain
2. Install [rustup-toolchain-install-master](https://crates.io/crates/rustup-toolchain-install-master)
```sh
cargo install rustup-toolchain-install-master
```
3. Install the build artifacts from these three commits in 3d292b793ade0c1c9098fb32586033d79f6e9969...0c6091fbd0eee290c651f73be899f221eeab3c05
```sh
rustup-toolchain-install-master \
9af374abf9d41c533afa46e62e1047097c190445 \
3986539df6eb3601cbd4e9c6c195583fca6dc10b \
0c6091fbd0eee290c651f73be899f221eeab3c05 \
-t aarch64-unknown-linux-musl
```
4. Check for each PR with
```sh
cargo +9af374abf9d41c533afa46e62e1047097c190445 build \
--target aarch64-unknown-linux-musl --release
```
username_0: @username_3 Sure, thanks for making the tool!
9af374abf9d41c533afa46e62e1047097c190445: works.
3986539df6eb3601cbd4e9c6c195583fca6dc10b: works.
0c6091fbd0eee290c651f73be899f221eeab3c05: crashes.
That's the ThinLTO change from #47834 fixing #45444.
Please let me know any other information I can provide!
username_3: Thanks! Looks like another trusting-trust issue then 🤷.
username_0: I tried with "-Z", "thinlto=no" and with "-Z", "thinlto=yes" in my ~/.cargo/config's rustflags, but it crashed either way.
username_2: Fascinating. That was our guess from the compiler team meeting, though it seemed unlikely as that PR ought to **improve** reliability in general.
username_2: I could use any suggestions for how to reproduce this problem =) Some have suggested qemu?
That said, I'm not sure where to start debugging this. Seems...likely, possible?...to be an LLVM problem? I'm sort of hoping that one of the LLVM upgrades will make it go away. =)
In any case, reproducing it would be a start.
username_0: Yeah, I think qemu-aarch64 is the way to go, and perhaps the "bleeding edge" aarch64/musl toolchain from https://toolchains.bootlin.com/ -- I'll see if I can build a repro using those tools.
username_0: Here's how I reproduced from scratch:
* I used a Fedora 27 EC2 instance, though anything recent with qemu should be fine - https://alt.fedoraproject.org/cloud/
* `sudo yum -y install qemu-user`
* I downloaded the aarch64/musl "bleeding edge" toolchain from https://toolchains.bootlin.com/ and extracted it in the home directory, to get the linker:
* `curl -O https://toolchains.bootlin.com/downloads/releases/toolchains/aarch64/tarballs/aarch64--musl--bleeding-edge-2018.02-1.tar.bz2 && tar xf aarch64--musl--bleeding-edge-2018.02-1.tar.bz2`
* I set ~/.cargo/config to:
```
[target.aarch64-unknown-linux-musl]
linker = "/home/fedora/aarch64--musl--bleeding-edge-2018.02-1/bin/aarch64-buildroot-linux-musl-gcc"
rustflags = [
"-C", "link-arg=-lgcc",
"-C", "target-feature=+crt-static",
]
```
* Install rustup using `curl https://sh.rustup.rs -sSf | sh && source $HOME/.cargo/env`
* `rustup toolchain install nightly-2018-02-04 nightly-2018-02-05`
* `rustup target add aarch64-unknown-linux-musl --toolchain nightly-2018-02-04`
* `rustup target add aarch64-unknown-linux-musl --toolchain nightly-2018-02-05`
* `cargo new --bin testme && cd testme`
* Build working version: `cargo +nightly-2018-02-04 build --target aarch64-unknown-linux-musl --release`
* `qemu-aarch64 target/aarch64-unknown-linux-musl/release/testme` and observe "Hello, world!"
* Build crashing version: `cargo +nightly-2018-02-05 build --target aarch64-unknown-linux-musl --release`
* `qemu-aarch64 target/aarch64-unknown-linux-musl/release/testme` and observe crash
username_1: This appears to be fixed again on nightly-2018-03-16, I guess because of https://github.com/rust-lang/rust/pull/48892?
username_0: I can confirm it started working for me as well in the latest nightly! This is with nightly-2018-03-17; nightly-2018-03-16 still crashed. To be specific:
Crashes:
```
$ rustup run nightly-2018-03-16 rustc -vV
rustc 1.26.0-nightly (392645394 2018-03-15)
binary: rustc
commit-hash: 39264539448e7ec5e98067859db71685393a4464
commit-date: 2018-03-15
host: x86_64-unknown-linux-gnu
release: 1.26.0-nightly
LLVM version: 6.0
```
Works!
```
$ rustup run nightly-2018-03-17 rustc -vV
rustc 1.26.0-nightly (55c984ee5 2018-03-16)
binary: rustc
commit-hash: 55c984ee5db73db2379024951457d1139db57f24
commit-date: 2018-03-16
host: x86_64-unknown-linux-gnu
release: 1.26.0-nightly
LLVM version: 6.0
```
I'll work on bisecting to pinpoint the merge that fixed it. Don't want this to reoccur!
username_0: Unfortunately rustup-toolchain-install-master wasn't able to fetch the artifacts for all the intervening commits; I'm guessing the artifacts either aren't uploaded for every build, they're just not uploaded yet, or some commits were built together. Anyway, it seems to work with all of these commits, but I'm not confident without being able to test all of them and confirm the negative case.
55c984ee5db73db2379024951457d1139db57f24
3b6412b94324b10f698a18ea5766ef6ff8921ae8
cc34ca1c9787fde84116637a0cee92fc5e375e3d
5f3996c3ec4824b92b2af251ac09406f9573e1ff
a7170b0412d1baa4e30cb31d1ea326617021f086
36b66873187e37a9d79adad89563088a9cb86028
username_0: To try to narrow it down a bit more, and confirm my findings, I tested the 2018-03-17 nightly with a few different settings.
I checked with release and debug builds, and with "-Z", "thinlto=yes" and "-Z", "thinlto=no" in rustflags in ~/.cargo/config, and all four combinations worked.
I tested with 'opt-level = "s"' and that was fine. (Output size is quite important to me.)
However, adding "lto = true" to Cargo.toml causes the crash again! I dug into the combinations a bit:
* debug build, no incremental compilation, lto = true => no crash! :)
* debug build, no incremental compilation, lto = false => no crash
* debug build, incremental compilation, lto = false => no crash
* release build, no incremental compilation, lto = true => crash! :(
* release build, no incremental compilation, lto = false => no crash
* release build, incremental compilation, lto = false => no crash
So I tried to figure out why the release build crashed when the debug build didn't. Adding "codegen-units = 1" still crashed. Adding "opt-level = 0" stopped the crash! "opt-level = 1" was still OK. "opt-level = 2" crashed! (And 'opt-level = "s"' and 'opt-level = 3' crash too, as you might expect.)
I hope this helps -- it's something about going from opt-level 1 to 2 with LTO that's still crashing, even with the new nightly. Non-LTO builds, and LTO builds with opt-level 0/1, are now fine.
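For reference, the LTO and optimization knobs above are Cargo profile settings; the combination that still crashes corresponds to a release profile like this:
```toml
# Cargo.toml: combination that still crashes on the 2018-03-17 nightly
[profile.release]
lto = true
opt-level = 2   # 0 and 1 are fine; 2, 3 and "s" crash
```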
username_2: @username_0 btw, @Mark-Simulacrum just authored this awesome project:
https://github.com/rust-lang-nursery/cargo-bisect-rustc
which might be worth trying out here (even if you've already got it narrowed down).
username_2: I'm just catching up -- glad that it's (partly at least) fixed. That also gives us a lot of data about what could be causing the problem.
username_0: Two other potentially interesting pieces of data:
I didn't try opt-level = "z" in my last round of testing, but I can confirm that one *works* with 2018-03-17. I wouldn't have expected that, when "s" crashes.
I had earlier reported "A colleague found that dynamically linking libgcc (and libc) using "target-feature=-crt-static" works around this issue." He says that this no longer works with the 2018-03-17 nightly that fixes some of the statically linked cases.
username_4: Looking at the backtrace, it seems that this issue is related to thread-local storage. It seems that this function is causing a segfault:
```
#35 std::sys_common::thread_info::current_thread::h1ca059562bf90a53 () at libstd/sys_common/thread_info.rs:38
```
I will have a look at the disassembly & relocations generated for that TLS access, I think something strange might be happening at LTO/link time.
username_4: OK, so I think that I've found the source of the bug:
```
402af0: d2a00000 movz x0, #0x0, lsl #16
402af4: f2800400 movk x0, #0x20 <-- THIS SHOULD BE 0x10
402af8: d503201f nop
402afc: d503201f nop
402b00: d53bd048 mrs x8, tpidr_el0
402b04: 8b000108 add x8, x8, x0
402b08: f9400d08 ldr x8, [x8, #24]
```
For some reason, the addresses of all TLS variables are offset by an additional 0x10. This behavior happens in nightly-2018-02-05 (broken) but not in nightly-2018-02-04 (good).
I think this may have gone unnoticed in the past since all TLS was shifted by 0x10, and the TLS was zero-initialized. In this specific case, one of the bytes of the TLS data has an initial value of 0x3, but due to the 0x10 shift it is accessed at the wrong offset by the program.
username_4: Now I'm completely stumped as to what actually caused this bug. It seems like a bug in LLVM rather than rustc, possibly related to LTO since the linker is getting confused about TLS offsets.
username_5: visited for triage. It seems we haven't made progress since the last report. I am wondering whether we can enlist someone to act as a local "LLVM LTO bug identification" expert...
username_2: @username_4 can we confirm that it is an LTO problem?
username_4: To me it looks like a linker bug: the TLS relocations are being resolved to the wrong value. Since it is very unlikely that the linker has been broken this whole time, I would blame it on LTO somehow interfering with the linker.
username_2: @username_4 -- question: do you think you can narrow this down to just LLVM IR inputs that reflect the error, so we can open a bug on the LLVM side?
username_2: triage: P-medium
Next steps are to diagnose the LLVM problem. Filing under https://github.com/rust-lang/rust/issues/50422.
username_4: I have a minimal reproduction:
```rust
#![feature(libc, thread_local, asm)]
#![no_main]
extern crate libc;
#[thread_local]
static mut ASDF: u8 = 74;
#[inline(never)]
fn get_tls_val() -> i32 {
// The asm here is just to prevent the TLS access from being optimized away
unsafe {
let out: &u8;
asm!("" : "=r" (out) : "0" (&ASDF));
*out as i32
}
}
#[no_mangle]
pub unsafe extern fn main() -> i32 {
let val = get_tls_val();
libc::printf(b"%d\n\0".as_ptr(), val);
// UNCOMMENT THIS LINE TO TRIGGER THE BUG
//std::thread::sleep_ms(1);
0
}
```
The bug only seems to trigger when libstd is linked into the final binary. The expected output is `74`, which is the initial value of the TLS variable. However when libstd is linked in, the output is `0` because the TLS offsets are incorrect.
Bad version:
```
0000000000400268 <hello::get_tls_val>:
400268: d10043ff sub sp, sp, #0x10
40026c: d53bd048 mrs x8, tpidr_el0
400270: 91400108 add x8, x8, #0x0, lsl #12
400274: 91008108 add x8, x8, #0x20 <-----
400278: f90007e8 str x8, [sp, #8]
40027c: f94007e8 ldr x8, [sp, #8]
400280: 39400100 ldrb w0, [x8]
400284: 910043ff add sp, sp, #0x10
400288: d65f03c0 ret
```
Good version:
```
0000000000400268 <hello::get_tls_val>:
400268: d10043ff sub sp, sp, #0x10
40026c: d53bd048 mrs x8, tpidr_el0
400270: 91400108 add x8, x8, #0x0, lsl #12
400274: 91004108 add x8, x8, #0x10 <-----
400278: f90007e8 str x8, [sp, #8]
40027c: f94007e8 ldr x8, [sp, #8]
400280: 39400100 ldrb w0, [x8]
400284: 910043ff add sp, sp, #0x10
400288: d65f03c0 ret
```
username_4: Switching the linker between bfd, gold and lld doesn't seem to make any difference.
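For reference, one way to switch the linker here is through the linker driver in `~/.cargo/config`, assuming the gcc wrapper supports `-fuse-ld`:
```
[target.aarch64-unknown-linux-musl]
rustflags = ["-C", "link-arg=-fuse-ld=gold"]
```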
username_0: There's a TLS-related fix in musl that applies to aarch64 and some other architectures; it will probably be in the 1.1.20 release. I wonder if it helps with this!
https://git.musl-libc.org/cgit/musl/commit/?id=610c5a8524c3d6cd3ac5a5f1231422e7648a3791
username_4: That might very well be the solution! I noticed that the `.tdata` section in libstd was aligned to 32 bytes, which may very well explain why it's not being handled correctly.
So basically, my earlier hypothesis is incorrect: the compiler/linker are calculating the TLS offsets correctly, it's just that musl isn't handling over-aligned TLS sections correctly.
username_6: @username_4 - I've confirmed that your minimal reproduction prints `74` when musl has `610c5a8` applied, and `0` otherwise.
username_7: I have built my project with Rust compiled from sources with fresh musl, and can confirm it works.
username_8: @username_4 @username_7
Are you cross compiling? I'm trying cross compile to test the tls fix but I get the error from https://github.com/rust-lang/rust/issues/46651 and https://github.com/rust-lang-nursery/compiler-builtins/issues/201
username_4: @username_8
Use this command as a workaround:
```
cargo rustc --target aarch64-unknown-linux-musl -- -C link-arg=-lgcc
```
Status: Issue closed
|
beregond/jsonmodels | 48903232 | Title: Add test to check if fixtures are proper json schema.
Question:
username_0: For now it is possible to generate a wrong JSON schema and not notice that it is wrong; that is the first step to creating our own standard, and a test has to prevent this.
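A minimal sketch of such a test, assuming the `jsonschema` package and that the fixture files live under `tests/fixtures/` (paths are illustrative):
```python
import glob
import json

from jsonschema import Draft4Validator


def test_fixtures_are_valid_json_schema():
    for path in glob.glob('tests/fixtures/*.json'):
        with open(path) as handle:
            schema = json.load(handle)
        # raises jsonschema.SchemaError if the fixture is not a valid schema
        Draft4Validator.check_schema(schema)
```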
Answers:
username_0: And generated schema too. See #69 for example. |
oppia/oppia | 566472379 | Title: Cannot read property 'getExplorationId' of undefined
Question:
username_0: <!--
- Before filing a new issue, please do a quick search to check that it hasn't
- already been filed on the [issue tracker](https://github.com/oppia/oppia/issues)._
-->
This error occurred recently in production
```
Cannot read property 'getExplorationId' of undefined
at getExplorationId (webpack:///core/templates/dev/head/pages/exploration-player-page/services/player-position.service.ts:50:59)
at getCurrentStateName (webpack:///core/templates/dev/head/pages/exploration-player-page/suggestion-modal-for-learner-local-view/suggestion-modal-for-exploration-player.service.ts:40:62)
at apply (angular.js:4718:15)
at invoke (angular.js:10354:23)
at https://www.oppia.org/third_party/generated/js/third_party.min.js:1:1367865
at https://www.oppia.org/third_party/static/angularjs-1.5.8/angular.min.js:131:20
at m.$eval (angular.js:17682:28)
at $eval (angular.js:17495:14)
at $digest (angular.js:17790:12)
at $apply (angular.js:11831:35)
```
**General instructions**
There are no specific repro steps available for this bug report. The general procedure to fix server errors should be the following:
* Analyze the code in the file where the error occurred and come up with a hypothesis for the reason.
* Get the logic of the proposed fix validated by an Oppia team member (have this discussion on the issue thread).
* Make a PR that fixes the issue, then close the issue on merging the PR. (If the error reoccurs in production, the issue will be reopened for further investigation.)
Status: Issue closed
Answers:
username_0: Closing this issue for now since it has not occured on the server for a while. Will reopen if it surfaces again. Thanks! |
AlphamaxMedia/netv2-fpga | 605612692 | Title: Misalignment of Overlay and Stream
Question:
username_0: I'm trying to overlay images onto my stream however, the overlay seems to be offset a few pixels wrong. It seems to be different each time I plug it in. I can always dial this in manually in my software if needs be but wondered where this could be solved?
Answers:
username_0: To be clear, the left side of the overlay wraps round to the right.
username_1: The overlay alignment left/right drifts by a DRAM fetch line or two. From the best I've been able to trace out, it depends a bit upon how the FIFO timing and the stream line up. Basically, the DRAM fetches a group of pixels all at once, and if the sync happens to fall before or after the fetch, this will affect the alignment of the overlay to the image.
iirc the general alignment can be trimmed with a call to hdmi_core_out0_dma_line_align_write() (as seen at https://github.com/AlphamaxMedia/netv2-fpga/blob/936d239bc2d2c93c0fc979edec929ea728a391d4/firmware/ci.c#L720).
However, this option isn't exposed in the current firmware build. You could add a ci.c REPL command to expose it and tweak with it. But the problem is, it will shift from boot to boot depending upon the random luck of where the initial frame position is relative to the DRAM's timing at that moment.
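A hypothetical sketch of what such a command could look like; the actual ci.c command dispatch, parsing helpers, and header paths will differ, and only the CSR accessor name comes from the linked firmware:
```c
#include <stdio.h>
#include <stdlib.h>
#include <generated/csr.h>  /* assumed home of hdmi_core_out0_dma_line_align_write() */

/* hypothetical handler for a "line_align <offset>" console command */
static void line_align_cmd(char *arg)
{
    unsigned int value = strtoul(arg, NULL, 0);  /* parse the requested offset */
    hdmi_core_out0_dma_line_align_write(value);  /* trim the overlay alignment */
    printf("line_align set to %u\n", value);
}
```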
username_0: Thanks! and thank you for replacing my board :)
username_1: glad it's working again. and thank you for being prompt in the return. Helps keep the exchange policy sustainable for everyone! |
FormidableLabs/radium | 76445121 | Title: Style component should take an object, not an array
Question:
username_0: Right now, the Style component takes an array of rules, which is technically correct but doesn't line up with the way we handle any of our other styles objects, which are all objects.
Should update `Style` so that it doesn't require such a verbose syntax.
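For illustration, a sketch of the two shapes, with the prop name and exact rule format assumed rather than taken from the docs:
```js
// current: array of rule objects
<Style rules={[
  { body: { margin: 0 } },
  { 'html, body': { height: '100%' } },
]} />

// proposed: a single object keyed by selector, like our other styles objects
<Style rules={{
  body: { margin: 0 },
  'html, body': { height: '100%' },
}} />
```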
Answers:
username_1: +1 for that. I'm currently getting around this by using `assign`/`merge`; otherwise dynamic styles don't get rendered server-side (when using the array syntax).
username_2: @username_1 is this a separate bug you are reporting?
Status: Issue closed
|
Xonshiz/comic-dl | 236636068 | Title: Broken Install?
Question:
username_0: Tried installing on both Debian and Mac, without success (with Python 2.7 and 3.6, using pyenv).
The setup seems broken; I could never get things to work, i.e. the CLI was never responsive...
Answers:
username_1: don't use the setup.py file. Just follow the instructions mentioned in the ReadMe and it should work.
username_0: No, I tried the exact instructions (even the phantomjs stuff which isn't needed anymore, is it?), and no go. I run the command ./comic_dl.py and CLI returns without an error. Same for -h, --version, etc
[minor edit to remove personal info below]
$ pip install -r requirements.txt
Requirement already satisfied: bs4 in .pyenv/versions/2.7.12/lib/python2.7/site-packages (from -r requirements.txt (line 1))
Requirement already satisfied: requests in /usr/local/lib/python2.7/site-packages (from -r requirements.txt (line 2))
Requirement already satisfied: cfscrape in .pyenv/versions/2.7.12/lib/python2.7/site-packages (from -r requirements.txt (line 3))
Requirement already satisfied: clint in .pyenv/versions/2.7.12/lib/python2.7/site-packages (from -r requirements.txt (line 4))
Requirement already satisfied: beautifulsoup4 in .pyenv/versions/2.7.12/lib/python2.7/site-packages (from bs4->-r requirements.txt (line 1))
Requirement already satisfied: PyExecJS>=1.4.0 in .pyenv/versions/2.7.12/lib/python2.7/site-packages (from cfscrape->-r requirements.txt (line 3))
Requirement already satisfied: args in .pyenv/versions/2.7.12/lib/python2.7/site-packages (from clint->-r requirements.txt (line 4))
Requirement already satisfied: six==1.10.0 in /usr/local/lib/python2.7/site-packages (from PyExecJS>=1.4.0->cfscrape->-r requirements.txt (line 3))
I know enough Python to know something's not right with the code... I git cloned, and I tried downloading your master zip. Neither works correctly (and as I said, on multiple different systems, OSX and Debian/Ubuntu). Something's afoot... Please review the instructions. And fix the setup for those of us who want to do an install and use the CLI correctly.
username_1: Try running `__main__.py --version`
username_0: Yes, that works correctly. As does giving it a -i url
so comic-dl.py isn't working but __main__.py is working for me.
username_1: That is how to use this script. When I wrote the older script, it had no classes, the overall code sucked, and overall performance was slow. I forgot to update the readme. Thanks for bringing this up. :)
Status: Issue closed
username_1: `__main__.py` passes the command line arguments to `comic_dl.py`. Read [this SO answer](https://stackoverflow.com/a/4043007) if you want to know a little bit more about `__main__.py` files. |
scikit-hep/particle | 741632948 | Title: Idea: move particle search functionality to a new Particles class
Question:
username_0: The idea has been around in our heads for a while. It came out again in discussions in https://github.com/scikit-hep/particle/issues/263. Let's have this as a possible API for version 1.0.
For reference: indeed the present `Particle` class is both the class for particle properties and for searches. For example `Particle.from_pdgid()` returns a `Particle` instance whereas `Particle.findall()` returns a list of `Particle` instances. While very powerful, this may be confusing at first. One could split functionality between a slightly different `Particle` class and a `Particles`/`ParticleTable` class.
Answers:
username_1: Sounds good. Why not make find() and findall() functions on the module-level? Why attach them to a class? I think the fundamental question is whether you want to expose the underlying database to the user or keep it an implementation detail. If you want to expose the database and make it easy for users to change the data source of the database, then it would be better to expose a ParticleTable class or ParticleDB.
I am not sure whether this is a good idea. I like the simplicity of particle and I trust it to use the most recent world average data when I import the most recent version. I don't want to use outdated masses and life-times and I cannot imagine people will. So I think it is a good choice to not expose the DB to the user as it is now.
username_2: I like those names better than `Particles`, on reflection, easier to distinguish. Really, this would be the "correct" API:
```python
from particle import ParticleDB
pdb = ParticleDB() # default particle table
p = pdb.find(...)
```
However, this requires a user to learn about Particle databases, to keep their own state (`pdb` above), and to work with more classes, all of which are counter-productive to the goal of Particle to provide fast, simple, and interactive access to the pdg data. Most users do not need to load special data, so this extra manipulation just gets in their way. Also, this would allow the creation of multiple states, and the original design did not allow multiple particles to share the same PDGID; so this would give you apparent freedom to do things not allowed by design. This might be changed now, or might be changeable in the future.
The current design is a balance between the two uses - you have a simple way to load particles, and an "advanced" way to modify, replace, or extend the particle table.
---
For a proposal to move forward, I would recommend the following: Add a ParticleDB class as seen above, and refactor the current `_table` attribute to hold a default "latest" ParticleDB table. Then `.find*` class method shortcuts would forward to the cached instance. We could remove the load table functionality on Particle; anyone wishing to use an explicit version of the particle table could use the API shown above; the shortcuts would only be for people who only want the most recent world averages.
This of course would need to happen after thought is put into what it means to compare two particles that were loaded from different ParticleDB's.
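A rough sketch of that forwarding idea (all names are illustrative, not the actual implementation):
```python
class ParticleDB:
    """Holds one loaded particle table plus the search methods."""

    def __init__(self, table=None):
        # _load_default_table is a placeholder for loading the latest data
        self._table = table if table is not None else _load_default_table()

    def findall(self, filter_fn=None, **search_terms):
        ...  # search self._table, return a list of Particle instances


class Particle:
    _default_db = None  # lazily created "latest" ParticleDB

    @classmethod
    def _db(cls):
        if cls._default_db is None:
            cls._default_db = ParticleDB()
        return cls._default_db

    @classmethod
    def findall(cls, *args, **kwargs):
        # shortcut forwarding to the cached default instance
        return cls._db().findall(*args, **kwargs)
```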
username_0: I'm on the fence here and will need to think carefully, not Friday at the end of the busy week ;-). In any case my really general comment is that we should not forget that so far we hardly ever had a complaint on the API. I would conclude from there that it's pretty good. In other terms, whatever we do should not totally break the way we have people working with the package, as that's likely not so welcome.
On general grounds I always liked very much the fact that the user does not have to care with the DB/files, which are behind the scenes. We provide the latest info (and the user can go back in history if they so want) with a powerful API, and that's paramount. Now, if we want to have a, separate, way to "play" with the DB ...
OK, more when I get to think about the future of Particle properly :-). Have a good weekend. |
ArxOne/BestTest | 296090426 | Title: Add verbosity levels
Question:
username_0: Currently there's only one verbosity level.
Considering levels:
* 0: nothing is shown (implies /nologo)
* 1: show only assessment (total/succeeded/failed, etc.)
* 2: show one line per test (whatever the result)
* 3: show stack trace when test fails
* 4: show full test trace (capture from console output)<issue_closed>
Status: Issue closed |
aws/chalice | 858165009 | Title: Cannot find reference 'SNSEvent' in 'app.pyi
Question:
username_0: After an upgrade to 1.22.4, all the used SNSEvent classes give errors.
Answers:
username_1: Thanks for reporting, there's a few extra types missing in the app.pyi file even after https://github.com/aws/chalice/pull/1685. I'll go through another pass and add the missing types.
Status: Issue closed
username_2: Thanks for fixing this @username_1. Looking forward to this being released. |
gitterHQ/docs | 143697082 | Title: Update room request tags parameter format.
Question:
username_0: Hi guys,
I think it would be cool to change the **tags** parameter format in the [update room](https://github.com/gitterHQ/docs/blob/master/03.Rooms-resource.md#update-room) request from a single string with comma-separated values to a simple array of string tags.
Current:
```json
{
"tags": "tag1, tag2, tag3"
}
```
Suggested:
```json
{
"tags": [
"tag1",
"tag2",
"tag3"
]
}
```
What do you think about this?
Answers:
username_1: @username_0 Yes, it will be more consistent. I didn't know why the update wasn't working until I found your issue.
pcmueller/mod_0_skills | 787322371 | Title: Assessment Results
Question:
username_0: @pcmueller - Excellent work on this assessment, Peter. You are officially `technical ready`. You're showing a clear understanding of basic OOP principles, variable declaration with proper syntax, and data types. A few things to keep in mind as you continue to practice:
- Be aware of the proper syntax for each data type (for ex. strings should have quotes around the text) ie. `material = "stainless steel"`
- The `datetime` data type is a little tricky, but for all intents and purposes you should treat it as a string so it would follow string syntax with quotes ie. `dateLastOrdered = "12/03/20"`
- For the `weigh` method, does the weight of a pan commonly change or fluctuate? That seems like an attribute that wouldn't be modified much, if at all.
Keep up the great work and let me know if you have any questions. |
jitsi/jitsi-meet | 44371956 | Title: 'Old' browser version number warning
Question:
username_0: I have been told in the Jitsi IRC channel that Chromium/Chrome < 36 has buggy webrtc support.
If so, Jitsi webrtc should print a warning for users of sufficiently 'old' browsers to tell them they may experience issues if they don't upgrade to a suitable version.
I don't know nor did I ask about the minimum Opera version required but this should be detected too if it has similar issues.
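A minimal sketch of the kind of check meant here (the user-agent parsing is deliberately naive, and 36 is the Chrome/Chromium threshold mentioned above):
```js
var match = navigator.userAgent.match(/Chrom(?:e|ium)\/(\d+)/);
if (match && parseInt(match[1], 10) < 36) {
    console.warn('This browser version has known WebRTC issues; ' +
                 'please upgrade to a newer version.');
}
```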
Answers:
username_1: Currently, there is an unsupported browser webpage.
Status: Issue closed
|
cgnieder/acro | 612447221 | Title: How to support both version 2 and 3 in a single document?
Question:
username_0: I am the maintainer of an internal template at our organization which uses and provides defaults for acro. We have users on different versions of texlive and are now running into the problem of failing to compile documents on texlive-2020 due to acro.
`\usepacke[version=2]{acro}` does not solve our problem since this fails on texlive-2019 which has acro-2.11c.
Is there a better way to support the acro versions from texlive-2020 and texlive-2019 in a single document, then copying a single acro version into our template?
Answers:
username_1: You can check the value of `\c_acro_version_major_number_tl`. This should be a reliable test.
```latex
\documentclass{article}
\usepackage{acro}
\begin{document}
\ExplSyntaxOn
\int_compare:nTF { \c_acro_version_major_number_tl = 3}
{version~ 3}
{not~ version~ 3}
\ExplSyntaxOff
\ifnum\numexpr\csname c_acro_version_major_number_tl\endcsname=3\relax
version 3%
\else
not version 3%
\fi
\end{document}
```
username_0: But then I have to have two versions of my document and select which one to use depending on the acro version. Is there a way, where I don't have to do everything twice for two different acro versions?
username_1: Not everything, of course, but you probably cannot avoid some duplication. Can you show me an example code that is causing you trouble? Maybe I can make some suggestions…
----
PS: You said *document*? Aren't you maintaining some sort of package which hides the necessary code from the user?
username_0: I can show you two examples which do not work anymore in acro-3:
```
\DeclareAcronym{LP}{long=Logical Process,short-indefinite=an,long-plural=es}
```
and
```
\ProvideAcroEnding{possessive}{'s}{'s}
\ProvideAcroEnding{possessiveplural}{s'}{s'}
\ExplSyntaxOn
\NewAcroCommand \acg
{
\acro_possessive:
\acro_use:n {#1}
}
\NewAcroCommand \acsg
{
\acro_possessive:
\acro_short:n {#1}
}
\NewAcroCommand \aclg
{
\acro_possessive:
\acro_long:n {#1}
}
\NewAcroCommand \acgp
{
\acro_possessiveplural:
\acro_use:n {#1}
}
\NewAcroCommand \acsgp
{
\acro_possessiveplural:
\acro_short:n {#1}
}
\NewAcroCommand \aclgp
{
\acro_possessiveplural:
\acro_long:n {#1}
}
\ExplSyntaxOff
```
So it seems to me, that everything acro related now has to be implemented twice within such a switch statement.
Do you still maintain the [version=2] variant of acro, or does this give the same result as me copying the last acro-2 version into my package? I could build something which detects whether the [version=2] argument is necessary, then remembers this in the aux file and restarts compilation. But this only brings an advantage if you still maintain the [version=2] variant.
username_1: `version=2` just loads a copy of the release before version 3. That is, it *does* give the same result as you copying the last version. I will leave it in `acro` for at least a year.
Not giving the `short` form always produced a warning (and in early versions an error) and was never encouraged. Version 3 now explicitly allows it if you say
```latex
\acsetup{use-id-as-short=true}
```
The file below compiles with version 2 and version 3:
```latex
\documentclass{article}
\usepackage{acro}
\newcommand\ifacrothree{%
\ifnum\numexpr\csname c_acro_version_major_number_tl\endcsname=3\relax
}
\ifacrothree
\acsetup{use-id-as-short}
\def\ProvideAcroEnding{\DeclareAcroEnding}
\fi
\ProvideAcroEnding{possessive}{'s}{'s}
\ProvideAcroEnding{possessiveplural}{s'}{s'}
\ifacrothree
\NewAcroCommand\acg{m}{%
\acropossessive
\UseAcroTemplate{first}{#1}%
}
\else
\ExplSyntaxOn
\NewAcroCommand \acg {
\acro_possessive:
\acro_use:n {#1}
}
\ExplSyntaxOff
\fi
\DeclareAcronym{LP}{
long=Logical Process,
short-indefinite=an,
long-plural=es
}
\begin{document}
1: \ac{LP} \par
2: \iac{LP} \par
3: \acg{LP} \par
4: \aclp{LP}
\end{document}
```
username_0: Thanks a lot. This helped me to port our acro setup to work with acro-3 and acro-2. When Ubuntu 20.04 is no longer supported in 2025, I can finally remove the acro-2 setup code.
Status: Issue closed
username_1: I can keep the version 2 file in acro until then if you like.
<NAME>
https://github.com/username_1 |
docker/compose | 110140438 | Title: Warning: There is a boolean value, True in the 'environment' key.
Question:
username_0: Trying to run compose on a fresh Windows 10 and Docker Toolbox install. Haven't customized or modified the environment, running in the msys based terminal that comes with toolbox.
I get the following error:
```
Warning: There is a boolean value, True in the 'environment' key.
Environment variables can only be strings.
Please add quotes to any boolean values to make them string (eg, 'True').
This warning will become an error in a future release.
```
My YAML files don't have booleans in them, so I'm not sure where it's finding this boolean value or where I should be changing it if it's something on my end.
Answers:
username_1: Please include a copy of your `docker-compose.yml`, or at least a sanitized version of the `environment` section to help us debug this issue.
It would be nice to have more detail in the warning, but I'm not sure if that's possible with the formatters we're using the generate it.
username_0: ```
data:
container_name: project_dev_data
image: busybox
volumes:
- /var/lib/mysql
- ./storage:/var/www/storage
mysql:
container_name: project_dev_mysql
image: mariadb
ports:
- "3306:3306"
volumes_from:
- data
environment:
MYSQL_ROOT_PASSWORD:
MYSQL_DATABASE: project
MYSQL_ALLOW_EMPTY_PASSWORD: yes
web:
container_name: project_dev_web
build: .
volumes_from:
- data
links:
- mysql:mysql
ports:
- "8000:8000"
- "8080:8080"
volumes:
- .:/var/www
- ~/.ssh:/root/.ssh
volumes_from:
- data
```
Would it be that `yes` that's causing this issue?
username_2: Correct - lots of things resolve to booleans in YAML: http://yaml.org/type/bool.html
Put it in quotes (`"yes"`) to make sure it goes through unchanged.
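So the fixed `environment` section from the file above would read:
```yaml
environment:
  MYSQL_ROOT_PASSWORD:
  MYSQL_DATABASE: project
  MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
```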
Status: Issue closed
username_0: Alright, will try once I'm back at that machine, thanks! :) |
Scarwolf/pr0p0ll-viewer | 451176085 | Title: Adjust the total when hiding answers
Question:
username_0: When an answer option is hidden, the number of answers in the heading must be adjusted.
Example:

Here, when hiding Pink, the total at the top would need to be adjusted to 302.<issue_closed>
Status: Issue closed |
CopperMantis/CopperMantis | 112488024 | Title: Run submited solution code
Question:
username_0: - [ ] Take code from BLOB or local file
- [ ] Set in job queue
- [ ] Let "docker service" to run a docker container of with this code
- [ ] Add special entrypoint which compares the output at the end. |
rhboot/shim-review | 386856824 | Title: Shim-x64 for ROSA FRESH
Question:
username_0: Make sure you have provided the following information:
- [ ] link to your code branch cloned from rhboot/shim-review in the form user/repo@tag
username_0/shim-review@ntcitrosa-shim-x64-20181129
- [ ] completed README.md file with the necessary information
https://github.com/username_0/shim-review/blob/ntcitrosa-shim-x64-20181129/README.md
- [ ] shim.efi to be signed
https://github.com/username_0/shim-review/blob/ntcitrosa-shim-x64-20181129/shimx64.efi
- [ ] public portion of your certificate embedded in shim (the file passed to VENDOR_CERT_FILE)
https://github.com/username_0/shim-review/blob/ntcitrosa-shim-x64-20181129/rosa.cer
- [ ] any extra patches to shim via your own git tree or as files
https://abf.io/signer/shim-unsigned
- [ ] any extra patches to grub via your own git tree or as files
https://abf.io/import/grub2
- [ ] build logs
https://abf.io/build_lists/2953577
###### What organization or people are asking to have this signed:
`LLC "NTC IT ROSA"`
###### What product or service is this for:
`"ROSA Fresh" - Linux Desktop`
###### What is the origin and full version number of your shim?
`https://github.com/rhboot/shim/tree/13`
###### What's the justification that this really does need to be signed for the whole world to be able to boot it:
`ROSA Fresh is a non-profit Linux distribution developed by the community, with a long history. It is already deployed on a large number of nodes that use it with SecureBoot enabled.`
###### How do you manage and protect the keys used in your SHIM?
`Shim has the public key of the EV Code Signing key pair (issued by DigiCert) built-in. The key is used to validate the GRUB boot loader. No private keys are embedded. The shim binary itself is signed, so the built-in public key cannot be modified or removed without making the signature invalid. This guarantees that if shim has been tampered with and is then used in SecureBoot environments, this will be detected immediately.`
###### Do you use EV certificates as embedded certificates in the SHIM?
`Shim has the public key of the EV Code Signing key pair (issued by DigiCert) built-in.`
###### What is the origin and full version number of your bootloader (GRUB or other)?
`The source code of GRUB is available here: ftp://ftp.gnu.org/gnu/grub/grub-2.02.tar.xz <ftp://ftp.gnu.org/gnu/grub/grub-2.02.tar.xz>. The patches to GRUB 2.02 specific to ROSA Linux, as well as build scripts, are available here: https://abf.io/import/grub2 `
###### If your SHIM launches any other components, please provide further details on what is launched
`Apart from the boot loader (GRUB), shim can launch MokManager tool (developed alongside shim, https://github.com/rhboot/shim )`
###### How do the launched components prevent execution of unauthenticated code?
`Same as GRUB, MokManager executable must be signed with the appropriate key (EV Code Signing) for shim to validate and launch it. MokManager itself executes no unauthenticated code.`
###### Does your SHIM load any loaders that support loading unsigned kernels (e.g. GRUB)?
`Shim launches GRUB`
###### What kernel are you using? Which patches does it includes to enforce Secure Boot?
`Our current kernel is based on the kernel 4.15.0-40.43-generic from Ubuntu 18.04 LTS (http://kernel.ubuntu.com/git/kernel-ppa/mirror/ubuntu-bionic.git/), which already contains the patches to enforce SecureBoot as needed. We have no additional SecureBoot-related patches on top of that.`
`Our patches, configs and build instructions for the kernel (RPM spec file) are available here: https://abf.io/import/kernel-desktop-4.15 `
###### What changes were made since your SHIM was last signed?
```
This is an update from v0.9 to v13. From the changelog:
* MokManager: Stop using EFI_VARIABLE_APPEND_WRITE
* Better PCR usage for TPM
* Use authenticode signature length from WIN_CERTIFICATE structure
* More configurable build via make variables
* Workaround for signtool.exe bugs
* Bug fix for wrong options passed to second stage
* generate_hash(): fix the regression
* Ignore BDS when it tells us we got our own path on the command line
* Handle various different load option implementation differences
* TPM 1 and TPM 2 support
* Use OpenSSL 1.0.2k
* Lots of minor bug fixes
```
###### What is the hash of your final SHIM binary?
`sha256: b407cdeae8fee3c51300b6974599dff39cb5863223dc2617662fcdb07c68c55b shimx64.efi`
Answers:
username_1: Any updates on this one?
If there are problems with setting up the build environment, etc., please let us know.
username_1: The build system that ROSA uses (abf.io) was set to remove the data for the builds older than a month or so. So, the build info referred to above is no longer available at https://abf.io/build_lists/2953577. The administrators of abf.io could do nothing about that setting, unfortunately.
However, the full RPM build log from that build is still available here, in case you need to inspect it: https://github.com/username_0/shim-review/blob/ntcitrosa64/shim-x64-build-2953577.log
username_0: To speed up the process, you can use a tested image with updates (Pre R11)
https://abf.io/platforms/rosa2016.1/products/190/product_build_lists/23672
username_2: The rebuild looks fine to me. However, do note that the shimx64.efi file you provided is already signed using your EV cert -- that signature will be replaced when Microsoft signs the binary. I noticed it due to extra differences between the file I built and the one you provided, the differences were mostly just EV cert code.
I find this shim acceptable for signing.
`mtrudel@rosafresh ~/shim-x64 $ sha256sum shimx64.efi`
`b407cdeae8fee3c51300b6974599dff39cb5863223dc2617662fcdb07c68c55b shimx64.efi`
username_1: Yes, the shim itself was signed with ROSA cert.
This way we could test it a bit more before submission (add the public key as trusted on a test machine, etc.).
Moreover, when we submitted some of the previous versions of shim to Microsoft for signing, a few years ago, they required it to be signed that way, IIRC. Not sure if that requirement is still there, but it won't hurt to sign it.
username_1: @username_2
Just FYI
Looks like ROSA signature in the shim binaries does cause problems for us: the system Microsoft uses to sign the binaries reports unspecified errors when trying to process the shim. It seems, they did not know what the actual problem was and recommended us to re-submit the files - that did not work either, same errors.
We eliminated several other things that could cause that - no effect, same errors. So, our only remaining guess is that the binary itself **must not** be signed by our key, only the submitted .cab archives should be signed.
I'll rebuild shim binaries, just in case, and my colleagues will open new validation requests here. Then, if everything goes well, we'll re-submit the validated binaries to Microsoft.
Status: Issue closed
|
8bitPit/Niagara-Issues | 1185854263 | Title: SMS, Call, and Email Shortcuts don't work in search
Question:
username_0: To recreate the issue:
1. Install Sesame
2. Search on a contact name
3. Swipe right on the contact name
4. Try to tap SMS, Email, or Call
Expected behavior: The default SMS, phone, or email app is opened and prepopulated with the contact's info.
Observed behavior: Tapping the shortcuts does nothing. |
esi-neuroscience/syncopy | 1128276766 | Title: Quickstart for Syncopy
Question:
username_0: Write a quickstart guide, presenting major Syncopy features and analysis methods in an informal manner.
Answers:
username_0: Basic quickstart is now in dev with #248, covering freqanalysis and connectivity. It will probably get more content when more features are available, but the groundwork is done so I am closing this.
Status: Issue closed
|
fbaligand/vscode-logstash-editor | 862634270 | Title: Filebeat / Elasticsearch template language mode
Question:
username_0: For Logstash config files, when the file name doesn't match the expected naming scheme, the workaround is to trigger the "Change Language Mode" action and then select "Logstash". This is great.
For Filebeat/Elasticsearch, there is no "Filebeat YAML" or "Elasticsearch JSON Template" language mode. So if the file name doesn't match the expected naming scheme you're stuck.
Answers:
username_1: Hi @username_0,
Filebeat configuration has `YAML` language and Elasticsearch index template configuration has `JSON` language.
So as you mention, there is no specific language.
That said, in your vscode settings, you can associate your Elasticsearch JSON or Filebeat YAML file to the right JSON schema provided by Logstash Editor.
**For elasticsearch index template:**
- go to vscode "Settings", and search for "json schemas"
- click on "Edit in settings.json"
- define this:
``` json
"json.schemas": [
{
"fileMatch": [
"my-elasticsearch-index-template.json"
],
"url": "https://raw.githubusercontent.com/username_1/vscode-logstash-editor/master/jsonschemas/elasticsearch-template-es7x.schema.json"
}
]
```
Bonus: since few days, you can associate your elasticsearch index template to a specific Elasticsearch minor version, look here:
https://github.com/username_1/vscode-logstash-editor#advanced-tip-choose-elasticsearch-index-template-minor-version
**For Filebeat configuration:**
- go to vscode "Settings", and search for "yaml schemas"
- click on "Edit in settings.json"
- define this:
``` json
"yaml.schemas": {
"https://raw.githubusercontent.com/username_1/vscode-logstash-editor/master/yamlschemas/filebeat.yml.schema.json": [
"my-filebeat.yml"
]
}
```
Tell me if that answers your need ;)
username_0: It works, thanks for your help.
username_1: Nice, thanks for your feedback ;)
Status: Issue closed
username_1: @username_0
By the way, a "persistent" way to associate a Logstash conf file pattern to "Logstash" language is to do this:
- Go to vscode "Settings"
- Search for "Files: Associations"
- Click "Add Item"
- Fill "your-logstash-conf-file-pattern" as Key
- Fill "logstash" as Value
- Click "OK" and you're done ;) |