repo_name | issue_id | text
---|---|---|
yugalme/yugal1 | 296673953 | Title: GitHubtitle_1518515231
Question:
username_0: GitHub desc
Event:
Booking exception: City: Bora Bora, French Polynesia Hotel: Le Meridien
Traceback (most recent call last):
  File "/root/jatin/harshil/views.py", line 16, in index
    raise KeyError
KeyError
Permalink:
https://fegen11.gen3.qa.loggly.net/search#terms=tag:Django&from=2018-02-13T09:40:16.980Z&until=2018-02-13T09:50:16.980Z&source_group= |
JimmyLv/reading | 291477996 | Title: Project Guidelines
Question:
username_0: ## Project Guidelines
A set of best-practice guidelines for JavaScript engineering projects. Developing a new project is as joyful (and unconstrained) as rolling around in a green field, but for everyone else, maintaining such a project can be a potential nightmare. The guidelines listed below are what we at hive…

January 25, 2018 at 03:44PM
via Instapaper https://github.com/wearehive/project-guidelines/blob/master/README-zh.md#structure-and-naming |
cwtickle/danoniplus | 685879366 | Title: [Request] Simplify the result data
Question:
username_0: ## Details of Improvement
- The result data currently outputs to Twitter exactly what is shown on the results screen,
but some of it is redundant, so we would like to revise it if possible.
## Screenshot
## Expected Behavior
For example, the following are possible:
- If there are no freeze arrows, omit the freeze-arrow judgment counts and the combo count.
```
Before: .../350-30-7-2-9/0-0/140-0/...
After : .../350-30-7-2-9//140/...
```
- When assist play is used, simplify the result notation used for copying.
```
Before: Normal -Onigiriless
After : Normal -O // first letter of the AutoPlay option
```
## Other Considerations
- Since assist play can be customized individually, it may be better not to abbreviate it.<issue_closed>
Status: Issue closed |
Garderoben/MudBlazor | 939977084 | Title: Could not find domService in window
Question:
username_0: **Describe the bug**
This bug throws the following error in a .NET Core 3.1 application when trying to use tabs.
Additional detail:
.NET Core 3.1 MVC using Blazor components in the views
Other components work fine
Directions were followed
**Suggested:**
**Can I get a CDN link or a copy of mudblazor.js and mudblazor.css? I want to see whether this is fixed if I reference them locally**
**Expected behavior**
Tabs component to be created
Razor file:
```razor
<MudTabs Elevation="2" Rounded="true" ApplyEffectsToContainer="true" PanelClass="pa-6">
    <MudTabPanel Text="Tab One">
        <MudText>Content One</MudText>
    </MudTabPanel>
    <MudTabPanel Text="Tab Two">
        <MudText>Content Two</MudText>
    </MudTabPanel>
    <MudTabPanel Text="Tab Three">
        <MudText>Content Three</MudText>
    </MudTabPanel>
    <MudTabPanel Text="Tab Disabled" Disabled="true">
        <MudText>Content Disabled</MudText>
    </MudTabPanel>
</MudTabs>
```
Error Code:
```
warn: Microsoft.AspNetCore.Components.Server.Circuits.RemoteRenderer[100]
Unhandled exception rendering component: Could not find 'domService' in 'window'.
Error: Could not find 'domService' in 'window'.
at https://localhost:5001/_framework/blazor.server.js:8:35211
at Array.forEach (<anonymous>)
at p (https://localhost:5001/_framework/blazor.server.js:8:35171)
at https://localhost:5001/_framework/blazor.server.js:8:35881
at new Promise (<anonymous>)
at e.beginInvokeJSFromDotNet (https://localhost:5001/_framework/blazor.server.js:8:35854)
at https://localhost:5001/_framework/blazor.server.js:1:20247
at Array.forEach (<anonymous>)
at e.invokeClientMethod (https://localhost:5001/_framework/blazor.server.js:1:20217)
at e.processIncomingData (https://localhost:5001/_framework/blazor.server.js:1:18028)
Microsoft.JSInterop.JSException: Could not find 'domService' in 'window'.
Error: Could not find 'domService' in 'window'.
at https://localhost:5001/_framework/blazor.server.js:8:35211
at Array.forEach (<anonymous>)
at p (https://localhost:5001/_framework/blazor.server.js:8:35171)
at https://localhost:5001/_framework/blazor.server.js:8:35881
at new Promise (<anonymous>)
at e.beginInvokeJSFromDotNet (https://localhost:5001/_framework/blazor.server.js:8:35854)
at https://localhost:5001/_framework/blazor.server.js:1:20247
at Array.forEach (<anonymous>)
at e.invokeClientMethod (https://localhost:5001/_framework/blazor.server.js:1:20217)
at e.processIncomingData (https://localhost:5001/_framework/blazor.server.js:1:18028)
at Microsoft.JSInterop.JSRuntime.InvokeWithDefaultCancellation[T](String identifier, Object[] args)
[Truncated]
fail: Microsoft.AspNetCore.Components.Server.Circuits.CircuitHost[111]
Unhandled exception in circuit 'hQ9_Y-e_S-bYFgyIp3fi-RiGIYyP079tD_BkHaYnhHE'.
Microsoft.JSInterop.JSException: Could not find 'domService' in 'window'.
Error: Could not find 'domService' in 'window'.
at https://localhost:5001/_framework/blazor.server.js:8:35211
at Array.forEach (<anonymous>)
at p (https://localhost:5001/_framework/blazor.server.js:8:35171)
at https://localhost:5001/_framework/blazor.server.js:8:35881
at new Promise (<anonymous>)
at e.beginInvokeJSFromDotNet (https://localhost:5001/_framework/blazor.server.js:8:35854)
at https://localhost:5001/_framework/blazor.server.js:1:20247
at Array.forEach (<anonymous>)
at e.invokeClientMethod (https://localhost:5001/_framework/blazor.server.js:1:20217)
at e.processIncomingData (https://localhost:5001/_framework/blazor.server.js:1:18028)
at Microsoft.JSInterop.JSRuntime.InvokeWithDefaultCancellation[T](String identifier, Object[] args)
at MudBlazor.MudTabs.UpdateSlider(Boolean scrollToActivePanel)
at MudBlazor.MudTabs.CalculateTabsSize()
at MudBlazor.MudTabs.OnAfterRenderAsync(Boolean firstRender)
at Microsoft.AspNetCore.Components.RenderTree.Renderer.GetErrorHandledTask(Task taskToHandle)
```
Answers:
username_0: **DISREGARD: I fixed the issue by pulling the JS and CSS from the NuGet package on my PC. :)**
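For anyone hitting the same error: referencing MudBlazor's bundled static assets locally typically looks like the snippet below. This is a sketch, assuming the standard `_content/MudBlazor` static-web-asset path served from the NuGet package; exact file names may differ by version.
```html
<!-- in _Host.cshtml (Blazor Server) or the MVC layout: load MudBlazor's CSS and JS from the package -->
<link href="_content/MudBlazor/MudBlazor.min.css" rel="stylesheet" />
<script src="_content/MudBlazor/MudBlazor.min.js"></script>
```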
Status: Issue closed
|
usnistgov/REFPROP-wrappers | 342458492 | Title: REFPROP with Matlab function block
Question:
username_0: Matlab Version: R2015b
REFPROP version: 9.1
OS: Windows 7
Problem: Using a MATLAB Function block to call REFPROP and calculate enthalpy.
function Hin = fcn(Tfin,p)
Hin = refpropm('H','T',Tfin,'P',p*1e2,'R1233zd');
end

When I run the model I get errors stating that codegen doesn't support a lot of the functions that refprop uses.
I placed the EX_C2 file in my refprop folder. I am not a software person so please be patient with me!
I can make maps of enthalpy and put it into simulink but this approach seems so much easier so I'd like to know if it can be done.
Answers:
username_1: What errors exactly do you get with "errors stating that codegen doesn't support a lot of the functions that refprop uses"? In the future, please fill out the complete issue template.
username_0: Here is the full list.

# | Function | Line | Message
---|---|---|---
1 | refpropm | 190 | This operation does not support cell arrays. Use curly braces instead.
2 | refpropm | 195 | Function 'libisloaded' is not supported for code generation. Consider adding coder.extrinsic('libisloaded') at the top of the function to bypass code generation.
3 | refpropm | 204 | Function 'exist' is not supported for code generation. Consider adding coder.extrinsic('exist') at the top of the function to bypass code generation.
4 | refpropm | 208 | Expected a scalar. Non-scalars are not supported in IF or WHILE statements, or with logical operators. Instead, use ALL to convert matrix logicals to their scalar equivalents.
5 | refpropm | 211 | Anonymous functions are not supported for code generation.
6 | refpropm | 219 | The function 'strcat' is not supported for standalone code generation. See the documentation for coder.extrinsic to learn how you can use this function in simulation.
7 | refpropm | 223 | The function 'strcat' is not supported for standalone code generation. See the documentation for coder.extrinsic to learn how you can use this function in simulation.
8 | refpropm | 224 | The function 'strcat' is not supported for standalone code generation. See the documentation for coder.extrinsic to learn how you can use this function in simulation.
9 | refpropm | 227 | The function 'strcat' is not supported for standalone code generation. See the documentation for coder.extrinsic to learn how you can use this function in simulation.
10 | refpropm | 227 | Undefined function or variable 'notfound'. The first assignment to a local variable determines its class.
11 | refpropm | 227 | Undefined function or variable 'warnings'. The first assignment to a local variable determines its class.
12 | refpropm | 231 | Attempt to extract field 'FluidType' from 'mxArray'.
13 | refpropm | 233 | Attempt to extract field 'FluidType' from 'mxArray'.
14 | refpropm | 234 | Attempt to extract field 'mixFlag' from 'mxArray'.
15 | refpropm | [Truncated] |
19 | refpropm | 241 | Attempt to extract field 'BasePath' from 'mxArray'.
20 | refpropm | 243 | Undefined function or variable 'mixFile'. The first assignment to a local variable determines its class.
21 | refpropm | 244 | Function 'unicode2native' is not supported for code generation. Consider adding coder.extrinsic('unicode2native') at the top of the function to bypass code generation.
22 | refpropm | 245 | Undefined function or variable 'hmxnme'. The first assignment to a local variable determines its class.
23 | refpropm | 245 | Undefined function or variable 'hmxnme'. The first assignment to a local variable determines its class.
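Most of these messages point at the same workaround: declaring the unsupported call extrinsic so Simulink evaluates it in MATLAB rather than in generated code. A minimal sketch of that pattern for the function above (hypothetical and untested here; as the rest of the thread notes, refpropm's internal use of loadlibrary may still make this a dead end under codegen):
```matlab
function Hin = fcn(Tfin, p)
%#codegen
coder.extrinsic('refpropm');  % evaluate refpropm in MATLAB, not in generated code
Hin = 0;                      % pre-assign the output so codegen knows its size/type
Hin = refpropm('H', 'T', Tfin, 'P', p*1e2, 'R1233zd');
end
```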
username_1:
```
ans =
373.1243
```
I'm not sure if loadlibrary can be used in a Simulink block. If not, this is a dead end and there is no way it can work.
username_0: Hi,
Yes. Refprop works well with matlab commands. Looks like codegen with simulink doesn't work with loadlibrary.
Thanks for looking into this.
Status: Issue closed
username_2: I've run into the same problem. Refprop works when using Matlab commands, but when I try to call Refprop in Simulink using a Matlab function block, I get a lot of errors and it doesn't work. Is there any known way to use Refprop in Simulink?
username_1: Does simulink support calling Python? If so, then you can use the new interface. Otherwise, you'll have to contact Mathworks.
username_3: Hi everyone, did you solve these problems? I have the same ones. I want to simulate an ORC Rankine cycle in Simulink using refpropm. The program works in MATLAB, but it produces the same errors as username_0's in Simulink. Do you have any information on how to solve this?
username_4: Hello, is this issue solved? I am trying to link Simulink with REFPROP and it is not working.
It does work between MATLAB and REFPROP.
username_1: Did you contact Mathworks?
username_4: Not really. I've been trying to do it myself.
I will try to contact them. |
eggjs/egg | 284442258 | Title: egg socket.io: cannot tell which socket closed when a connection closes
Question:
username_0: When the disconnect or disconnecting event fires,
this.ctx.socket = { remoteAddress: '127.0.0.1', remotePort: 7001 }
so it is impossible to tell which specific connection was closed.
Answers:
username_1: See https://github.com/eggjs/egg-socket.io/blob/master/lib/io.js#L75. The socket at disconnect time is the same object as the one used during the normal connection. So check whether your forwarding service is misconfigured, causing a wrong remoteAddress.
Status: Issue closed
username_2: I've run into this problem too... exactly the same error.
Since I'm developing locally, there is no forwarding service involved.
username_2: I figured it out: egg-core writes [utils.callFn](https://github.com/eggjs/egg-core/blob/4979b984e12cd39516ed1c6df5f1284c8faede2f/lib/utils/index.js#L40-L45) as async,
so it is ultimately resolved as a promise, which is a microtask; by the time it runs, the synchronously executed onclose has already [removed](https://github.com/socketio/socket.io/blob/master/lib/socket.js#L442-L453) the useful information from the socket.
This feels like a real problem. I suggest adding a synchronous version of utils.callFn, and choosing whether to use the async utils.callFn based on whether the handler is async.
username_3: When the disconnect or disconnecting event fires,
this.ctx.socket = { remoteAddress: '127.0.0.1', remotePort: 7001 }
so it is impossible to tell which specific connection was closed.
username_1: A quick fix: write a connection middleware, place it first, and copy the socket info onto ctx; then after the connection closes you can still get the info, even though the socket object has been cleared. A sketch follows below.
As for how to improve this at the plugin level, I'll look into it.
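A minimal sketch of that workaround as an egg-socket.io connection middleware; the file name and the fields copied are hypothetical, the point is only to stash what you need before disconnect clears the socket:
```js
// app/io/middleware/stash.js (hypothetical name)
module.exports = app => {
  return async (ctx, next) => {
    // on connect: copy identifying info onto ctx while the socket is still populated
    ctx.clientInfo = {
      id: ctx.socket.id,
      address: ctx.socket.handshake && ctx.socket.handshake.address,
    };
    await next();
    // on disconnect: ctx.clientInfo is still available even though the socket was cleared
    console.log('disconnected:', ctx.clientInfo);
  };
};
```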
username_2: Thanks.
I just tried it and found a new problem, and this time I really can't find the cause...
I modified the source of lib/io.js inside egg-socket.io directly...
[screenshot]
Then the two printed outputs are inconsistent, which is fatal; I'm completely at a loss...
[screenshot]
username_1: @username_2 https://github.com/eggjs/egg-socket.io/pull/34
username_1: @username_2 try [email protected]
username_2: @username_1 That fixed it, awesome!
littleflute/littleflute | 605801578 | Title: Sima Nan's [absurd tragicomedies]: getting his head stuck in America (Giraffe Teahouse, 2020-04-24)
Question:
username_0: https://mp.weixin.qq.com/s/SgJ_O4BGMZolBhPiTc2Ccg
Answers:
username_0: dt
https://mp.weixin.qq.com/s?__biz=MzA5MzMwNTc0Ng==&mid=2247490907&idx=3&sn=490da184cb1678a90ad43e893db40467&chksm=905eb3a6a7293ab0d3174be649bcec49b8ac18860c623c130ee1611b726514a72bcdd4fd06ee&token=<PASSWORD>&lang=zh_CN#rd
Status: Issue closed
|
Igalia/wpewebkit.org | 654856628 | Title: FAQ brainstorming
Question:
username_0: - Is Wayland required to run WPE?
- What is this wayland-protocols build dependency about in Cog?
- Are open dialogs/popup menus supported?
Answers:
username_1: * What is (and isn't) cog?
* Something isn't working for me... help? (flatpak, rpi, etc?)
* What Web features does WPE support?
username_2: **What's up with EME? How can I support this feature in my WPE-based product?**
There is code in WebKit to support EME, but in any case you will need a license to access it, since this part is not open source. There are three ways you can get this working:
A. Get a license and use the Thunder OCDM plugin.
B. Write a Thunder-compatible API complement that glues to your DRM system.
C. Write a new CDM backend for WebKit using your DRM system.
username_3: Maybe this is not exactly FAQ material, but we often get people stuck with some gotchas while trying to get WPE to work. Maybe they can be formatted as questions and answers somehow. For example:
Q: Why does the browser/launcher (e.g. Cog) crash at startup?
A: If you are building an embedded system image yourself, make sure there is at least one font installed that can be used as fallback and that Fontconfig knows about it.
Status: Issue closed
|
mhng-feedback/mhng-mammals | 752350928 | Title: Monthly VertNet data use report for 2020-2, resource mhng_mammals
Question:
username_0: Your monthly VertNet data use report is ready!
You can see the HTML rendered version of this report at:
http://tools-usagestats.vertnet-portal.appspot.com/reports/5a659248-1f70-11e3-b2c5-00145eb45e9a/202002/
Raw text and JSON-formatted versions of the report are also available for
download from this link.
A copy of the text version has also been uploaded to your GitHub
repository under the "reports" folder at:
https://github.com/mhng-feedback/mhng-mammals/tree/master/reports
A full list of all available reports can be accessed from:
http://tools-usagestats.vertnet-portal.appspot.com/reports/5a659248-1f70-11e3-b2c5-00145eb45e9a/
You can find more information on the reporting system, along with an
explanation of each metric, at:
http://www.vertnet.org/resources/usagereportingguide.html
Please post any comments or questions to:
http://www.vertnet.org/feedback/contact.html
Thank you for being a part of VertNet. |
nachocruel/projeto_de_software_2018-2 | 364554012 | Title: Prepare the Vision Document
Question:
username_0: - [ ] Prepare the document
- [ ] Commit it
[https://docs.google.com/document/d/1XIYJsPtFNKtSs0gV4LKlgX9tsP2cetUxSrOYGCbqEZU/edit?usp=sharing](url)
Answers:
username_0: @nachocruel @GustavoMarquesR and Marcus, please edit your personal data into the doc afterwards.
Status: Issue closed
|
opengeospatial/sensorthings | 597240759 | Title: Ad-hoc European Air-Quality Data
Question:
username_0: Hi SensorThings enthusiasts,
Since there was mention that the global response to the current coronavirus pandemic has an influence on air quality, we figured it was a good idea to make air-quality data available using the SensorThings API.
In the context of the API4INSPIRE project, we've harvested the air-quality data for 5 countries (Austria, France, Germany, Italy & Switzerland) and imported it into two endpoints. The data covers ~100 million Measurements, 8185 Datastreams, and 2117 Things.
More information can be found at:
http://datacove.eu/ad-hoc-air-quality/ |
PublicarNuevosNegocios/ProveedoresOnLine | 84175649 | Title: Financial information
Question:
username_0: In the balance sheet, when querying the year 2012, (2012 and 2011) should appear first, in order to perform the vertical and horizontal analysis.
[screenshot]
Answers:
username_1: This is the same as 186
Status: Issue closed
|
zeroengineteam/ZeroCore | 367348870 | Title: Move AudioInputOutput to Platform (and all implementations)
Question:
username_0: # Description
Description was not present
# User Data
- **UserName**: Username was not present
# Zero Engine Data
- **Revision**: Revision was not present
- **ChangeSet**: changeSet was not present
- **Platform**: Platform was not present
- **Build Version Name**: BuildVersion was not present<issue_closed>
Status: Issue closed |
iluuu1994/ITProgressBar-iOS | 310265050 | Title: vertical progress bar?
Question:
username_0: Hi, thank you for the great lib. Can you suggest a way to make it a vertical progress bar?
Answers:
username_1: Hi!
You can actually simply rotate the UIView:
https://stackoverflow.com/a/31781115/1320374
Let me know if this worked for you 🙂
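The linked answer boils down to rotating the view; a one-line sketch of that idea in Swift (assuming `progressBar` is your ITProgressBar instance):
```swift
// rotate the horizontal bar 90 degrees counter-clockwise to make it vertical
progressBar.transform = CGAffineTransform(rotationAngle: -.pi / 2)
```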
Status: Issue closed
username_0: it works. Thank you
matrix-org/matrix-federation-tester | 436557189 | Title: Test both ipv6 and ipv4
Question:
username_0: I've had several problems due to a misconfigured nginx server that didn't respond to ipv6 but did for ipv4. For example the .well-known wasn't accessible for that reason on ipv6, but I spent a long time searching for the issue, because wget apparently preferred ipv4 (or automatically tried both).
It would be great if the federation tester would test both ipv4 and ipv6 and report issues for them separately.
Answers:
username_0: Thanks for your reply. You are right that I stated the issue too broadly: it seems you do report results for both ipv4 and ipv6, just not for the .well-known info.
username_1: closing as a dup of #103.
Status: Issue closed
|
Flexberry/FlexberryEmberTestStand.ODataBackend | 397229035 | Title: Generate an OData backend for the ember-flexberry dummy application
Question:
username_0: We need to generate an ODataBackend and database-creation scripts for SQL Server, Postgres, and Oracle for the ember-flexberry test stand.
The model for generation is located in the private repository: \Тестовые стенды\Flexberry Ember Dummy\EmberFlexberry\.
Generation should be done with the latest version of Flexberry Designer, taken from https://designer.flexberry.net, without replacing the generation plugin.
The .gitignore file from the ORM repository should be placed in this repository.<issue_closed>
Status: Issue closed |
PauloHMattos/TeoriaDosGrafos | 487482759 | Title: Distances and diameter
Question:
username_0: Your library must be able to determine the distance between two
vertices of the graph (using BFS as a primitive), as well as compute the graph's diameter.
Recall that the diameter is the greatest distance between any pair of vertices of the graph
(that is, the length of the longest shortest path in the graph). A sketch of the idea follows below.
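A short sketch (in Python, for illustration; the repository's own language may differ) of the BFS-based distance and diameter described above, assuming an unweighted connected graph given as an adjacency mapping:
```python
from collections import deque

def bfs_distances(adj, source):
    """Distance (in edges) from source to every reachable vertex."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def diameter(adj):
    """Greatest shortest-path length over all pairs: one BFS per vertex."""
    return max(max(bfs_distances(adj, s).values()) for s in adj)

# example: the path graph 0-1-2 has diameter 2
print(diameter({0: [1], 1: [0, 2], 2: [1]}))
```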
Answers:
username_0: Distance implemented in commit 36fcc4b
Status: Issue closed
|
koral--/android-gif-drawable | 799130712 | Title: crash on pl.droidsonroids.gif.GifInfoHandle.a+60
Question:
username_0: DEBUG : #12 pc 001c3dc8 /data/data/com.packagename/oat/arm/source.vdex (pl.droidsonroids.gif.GifInfoHandle.a+60)
pc 001c3efc /data/data/com.packagename/oat/arm/source.vdex (pl.droidsonroids.gif.GifInfoHandle.<init>+22)
Answers:
username_1: Could you share a gif which causes this issue?
username_1: Closing due to inactivity
Status: Issue closed
|
atom/settings-view | 54977773 | Title: Use native checkboxes in core
Question:
username_0: It's also easier to add styling on top of core, instead of trying to reset some fancy styles coming from core. The settings-view will still adapt to themes, but only things like background color or text color that come from a theme's `syntax-variables.less`. Custom checkboxes would go too far.
#### What does this mean for themes?
Changing the checkboxes back to native will make them look a bit strange in dark themes. So themes will have to custom style them on their own (if they want). I'll move the current custom checkboxes into the [One](https://github.com/atom/one-dark-ui) themes, if somebody wants to copy them.
Status: Issue closed
Answers:
username_0: Closing.. see discussion on #350 |
vmware-tanzu/velero | 708364766 | Title: Backup from one cluster and restore in another using rook ceph storage.
Question:
username_0: **What steps did you take and what happened:**
Installed velero in 2 clusters which use rook-ceph storage. I can take backups and restore them to the same cluster after deleting all resources in a namespace.
But when restoring to another cluster, the PVC cannot be provisioned by rook-ceph. Both rook-ceph and velero were deployed the same way in both clusters.
I tried it both ways, backup in cluster 1 and restore in cluster 2 and vice versa; in both cases the error was the same, a PVC provisioning failure:
`rook-ceph.rbd.csi.ceph.com_csi-rbdplugin-provisioner-dbc67ffdc-vj2r8_0550728b-5527-41d3-ad5a-c0c6307056b7 failed to provision volume with StorageClass "rook-ceph-block-storage": rpc error: code = Internal desc = key not found: no snap source in omap for "csi.snap.84f0f012-fe89-11ea-bbf7-2ee9609329c4"`
**What did you expect to happen:**
Be able to take a backup of rook-ceph volumes from one cluster and restore it on another (is this known to be possible?).
**The output of the following commands will help us better understand what's going on**:
(Pasting long output into a [GitHub gist](https://gist.github.com) or other pastebin is fine.)
- `kubectl logs deployment/velero -n velero`
https://gist.github.com/username_0/91148b5ba1126fcca771cc447de7c957
- `velero backup describe <backupname>` or `kubectl get backup/<backupname> -n velero -o yaml`
https://gist.github.com/username_0/50b9e74abf6ca944cb36d2e970fa2d12
- `velero backup logs <backupname>`
https://gist.github.com/username_0/a3244002f96dd40bcbc683c3cfa87bf1
- `velero restore describe <restorename>` or `kubectl get restore/<restorename> -n velero -o yaml`
https://gist.github.com/username_0/33a8771de9d64b4ddda075e3f061d6cc
- `velero restore logs <restorename>`
https://gist.github.com/username_0/67a4ff961ded1884bf6a7a1e8289cb70
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
Using Rook 1.4.3
**Environment:**
- Velero version (use `velero version`):
`Client:
Version: v1.4.2
Git commit: <PASSWORD>
Server:
Version: v1.4.2`
- Velero features (use `velero client config get features`):
`features: EnableCSI`
- Kubernetes version (use `kubectl version`):
`Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6+k3s1", GitCommit:"<PASSWORD>", GitTreeState:"clean", BuildDate:"2020-07-16T20:46:15Z", GoVersion:"go1.13.11", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6+k3s1", GitCommit:"<PASSWORD>a48b8b6fdefa8eb7ead2015a4b3a", GitTreeState:"clean", BuildDate:"2020-07-16T20:46:15Z", GoVersion:"go1.13.11", Compiler:"gc", Platform:"linux/amd64"}`
- Kubernetes installer & version:
k3s version v1.18.6+k3s1 (6f56fa1d)
- Cloud provider or hardware configuration:
- OS (e.g. from `/etc/os-release`):
K3OS
**Vote on this issue!**
This is an invitation to the Velero community to vote on issues, you can see the project's [top voted issues listed here](https://github.com/vmware-tanzu/velero/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc).
Use the "reaction smiley face" up to the right of this comment to vote.
- :+1: for "I would like to see this bug fixed as soon as possible"
- :-1: for "There are more important bugs to focus on right now"
Answers:
username_1: @username_0 did you manage to make it work?
username_0: @username_1 no, I got it working with restic though. The problem above was using the csi-plugin.
username_2: Is the Ceph cluster shared or are these different storage systems? The snapshot is going to be a Ceph snapshot and is only accessible within the Ceph cluster.
username_0: @username_2 yes, they are different storages and ceph clusters. I was simulating the situation where if one cluster blew up, velero backups would be able to restore it on a completely new cluster.
Status: Issue closed
username_2: So if you're using the CSI plugin, currently that will take a snapshot using the Ceph snapshotting facility. Ceph snapshots are stored within the cluster. When the cluster is lost/removed all snapshot data will be lost as well. Unfortunately CSI snapshots do not specify whether the snapshot is "durable" (survives loss of primary storage) or not. You should use Restic backup of your data, the snapshots on the Ceph cluster are not really backups.
username_0: Understood. To clarify, is the CSI plugin not viable for disaster-recovery backups? Is there any intention to make it viable?
username_3: Same questions as @username_0. There is not much documentation on this limitation, if CSI snapshots are not meant for DR scenarios.
username_2: I added a warning note in the README - https://github.com/vmware-tanzu/velero-plugin-for-csi/blob/main/README.md
We will be addressing this in a future release but for the moment I recommend you use a Restic backup.
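For reference, under Velero 1.4 a restic-based backup typically means annotating the pod's volumes and then creating a normal backup. A hedged sketch, with hypothetical namespace, pod, and volume names:
```bash
# opt the pod's volume into restic backup (names are hypothetical)
kubectl -n myapp annotate pod/my-pod backup.velero.io/backup-volumes=data-volume
# then back up the namespace as usual
velero backup create myapp-backup --include-namespaces myapp
```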
username_4: @username_0 Could you share your instructions, scripts, and anything else to demonstrate how to do a full backup of a Kubernetes cluster which uses rook-ceph?
I am also looking to utilize Velero for my backup-and-restore solution to a new cluster, in case something catastrophic happens. It's also good for test regions, simulating issues in a test region prior to taking changes into production...
Are you able to share your steps/scripts/instructions for how you achieved it? Maybe a blog writeup somewhere?
username_5: I saw the same problem with OCP 4.9 and ODF 4.9. The problem went away when the 'Delete Policy' was changed from Delete to Retain for the VolumeSnapshotClass objects (there are two of them). The restore then completed without any errors.
ConsoleTVs/Charts | 229884799 | Title: mechanism for adding script and style tags from ->render() to blade's @push() ?
Question:
username_0: My Laravel app is powered by vue.js, and vue.js does not like it when there are <script> and <style> tags inside of its container.
The very easy approach to this is to create @stack('scripts') and @stack('styles') inside your header/footer blades, which are outside the scope of Vue.js; then in your blade views you do
@push('scripts')<script>something()</script>@endpush. This way everyone is happy: Vue doesn't throw a horrible wall of red text in the console, and your render function still gets to use its inline dependencies.
Note that this is different from the stuff that assets() returns, which I am already handling like so
```
@push('header-mixed')
{!! Charts::assets() !!}
@endpush
```
Why render() adds more inline style/script tags I do not know, but they are the cause of my issues.
Could you possibly add this feature? Because right now displaying your charts in my app throws this:
```
app.c61e6fe….js:81071 [Vue warn]: Error compiling template:
... gigantic wall of red text comprising of the entire page's source code ...
- Templates should only be responsible for mapping the state to the UI. Avoid placing tags with side-effects in your templates, such as <script>, as they will not be parsed.
- Templates should only be responsible for mapping the state to the UI. Avoid placing tags with side-effects in your templates, such as <script>, as they will not be parsed.
- Templates should only be responsible for mapping the state to the UI. Avoid placing tags with side-effects in your templates, such as <script>, as they will not be parsed.
- Templates should only be responsible for mapping the state to the UI. Avoid placing tags with side-effects in your templates, such as <script>, as they will not be parsed.
(found in <Root>)
```
An easy way to address this would be to have the ->render() return the html separate from the js/css so I could do something like that
```
{!! $chart->renderHtml() !!}
@push('assets'){!! $chart->renderAssets() !!}@endpush
```
or
```
<?php $stuffToRender = $chart->render() ?>
@push('assets'){!! $stuffToRender['assets'] !!}@endpush
{!! $stuffToRender['html'] !!}
```
instead of
```
{!! $chart->render() !!}
```
Answers:
username_0: to be very clear... this is the current output of ->render() for a pie chart
```
<script type="text/javascript">
$(function () {
var RbnsoUpxIl = new Highcharts.Chart({
colors: [
"#2196F3",
"#F44336",
],
chart: {
renderTo: "RbnsoUpxIl",
height: 500,
width: 1000,
plotBackgroundColor: null,
plotBorderWidth: null,
plotShadow: false,
type: 'pie'
},
title: {
text: "Genders",
},
credits: {
enabled: false
},
tooltip: {
pointFormat: '{point.y} <b>({point.percentage:.1f}%)</strong>'
},
plotOptions: {
pie: {
allowPointSelect: true,
cursor: 'pointer',
dataLabels: {
enabled: true,
format: '<b>{point.name}</strong>: {point.y} ({point.percentage:.1f}%)',
style: {
color: (Highcharts.theme && Highcharts.theme.contrastTextColor) || 'black'
}
}
}
},
legend: {
},
series: [{
colorByPoint: true,
data: [
{
name: "male",
y: 49
},
{
name: "female",
y: 57
},
]
}]
})
});
</script>
[Truncated]
@endpush
<div class="charts" style="background: inherit;">
<div data-duration="500" class="charts-loader enabled" style="display: none; position: relative; top: 220px; height: 0;">
<center>
<div class="loading-spinner" style="border: 3px solid #000000; border-right-color: transparent;"></div>
</center>
</div>
<div class="charts-chart">
<div id="RbnsoUpxIl" style="height: 500px;
width: 1000px;
"></div>
</div>
</div>
```
Or if you could somehow simply have the <script> part be part of the {!! Charts::assets() !!} then that would work just as well..
thanks!!
username_0: As a quickly thrown-together workaround I am now using this, which correctly displays all my charts (4 now: 2 line charts, 1 pie chart and 1 area chart).
But I imagine this could easily break if you start injecting inline script tags differently than you do now.
```
@inject('charts', 'App\Services\ChartsService')
@push('header-mixed')
{!! Charts::assets() !!}
@endpush
@foreach($charts->getCharts() as $chart)
<?php
$render = $chart->render();
$scriptEnds = strrpos($render, '</script>') + strlen('</script>');
$script = substr($render, 0, $scriptEnds);
$html = substr($render, $scriptEnds);
?>
@push('header-mixed')
{!! $script !!}
@endpush
{!! $html !!}
@endforeach
```
username_1: Hey! Sorry for the inconvenience! I see where you're going, and I find myself really out of time to work on this. However, you can just vendor:publish the assets and modify the templates you're using with the code you showed me in the 2nd comment. This will generate the correct templates when they are rendered!
username_2: @username_0
From your comments, I understand you've got this working on Laravel 5.4. I'm using Vue.js but I don't know how to get this to work. I've followed the documentation, and I think the output is what I wanted, but I can't get the visuals to work. This is the outcome below:
https://gyazo.com/48ee390b1cb9c64df2383d5b6c6d6eee
username_0: @username_2 hrm is it possible you are using {{ $html }} instead of {!! $html !!} ?
The code I posted works for me. This is the contents of my ChartsService.php
```
class ChartsService
{
public static function getCharts()
{
$charts = [];
$users = User::all();
$charts['users'] = Charts::database($users, 'line', 'highcharts')
->title("Sign-ups per day")
->elementLabel("Members")
->dimensions(1000, 500)
->responsive(true)
->groupByDay();
return $charts;
}
}
```
I was able to get the charts working inside my hybrid Vue.js/Laravel application. Prior to the above fix the charts were also working, but Vue.js was throwing a fit in the console because of the script tags it had to parse. Other than the red in the console, though, it wasn't breaking the layout or the app.
I suggest you get the charts working normally, outside the scope of your vue wrapper, first.
username_2: @username_0
My controller looks like this :
```php
public function index()
{
    $unusedLeads = Leads::whereNull('date_called')->count();
    $usedLeads = Leads::whereNotNull('date_called')->count();
    $leadChart = Charts::create('pie', 'highcharts')
        ->title('Leads')
        ->labels(['used', 'unused'])
        ->values([$unusedLeads, $usedLeads])
        ->dimensions(0, 300)
        ->responsive(true);
    return view('back.statistics.index', [
        'amountOfLeadsCalled' => $amountOfLeadsCalled,
        'amountOfProspects' => $amountOfProspects,
        'totalProspects' => $totalProspects,
        'leadChart' => $leadChart
    ]);
}
```
How would I need to do it in my blade to fix this issue?
username_1: use {!! $leadChart->render() !!} instead of {{ $leadChart->render() }} to output the chart
username_0: @username_2 what he said ^
username_1: Well you said it as well, but idk, he prob did not even try
username_2: @username_1 @username_0 {[$leadChart->render()}} doesn't output anything for me, whereas {{!! $leadChart !!} outputs the full HTML and script for me. I'll make an example asap.
username_3: @username_0 your "thrown together workaround" works for me thanks

Status: Issue closed
username_4: @username_0 I ran into the very same issue. I took your solution of splitting the JS and HTML parts and moved it into an adapter util class. That keeps the view file a bit cleaner and removes the red Vue warning as well.
https://gist.github.com/username_4/d94b3570bf2f6efb1712f49b99d0de93
username_1: Since I'm currently developing an app with Vue, I'm afraid I'll face this issue in a few days; we'll see how I handle it!
username_0: @username_4 brilliant! I love it! |
void-linux/void-packages | 665619429 | Title: Blender VSE not working with libswscale 4.3.x
Question:
username_0: ### System
musl 64-bit; it shouldn't matter though, I had the same issue over at Arch. That's why I came here.
Also, Void's Blender doesn't write a useful /tmp/blender.crash.txt. On Arch it says it's because of the ...scale.so.something; that's how I knew. On Void it gives you nothing.
Blender's VSE crashes when you have a video in the timeline or when you see a thumbnail of one in the file manager.
Here is what I did to fix it:
sudo xdowngrade libavdevice-4.2.3_5.x86_64-musl.xbps libavfilter-4.2.3_5.x86_64-musl.xbps libswscale-4.2.3_5.x86_64-musl.xbps libavformat-4.2.3_5.x86_64-musl.xbps libavcodec-4.2.3_5.x86_64-musl.xbps libavutil-4.2.3_5.x86_64-musl.xbps libavresample-4.2.3_5.x86_64-musl.xbps libpostproc-4.2.3_5.x86_64-musl.xbps libswresample-4.2.3_5.x86_64-musl.xbps
Here the Arch Issue: https://blenderartists.org/t/blender-crashes-when-opening-video-folder-in-vse-2-83-2/1241877
Answers:
username_1: @Gottox and @username_2 ?
username_2: https://trac.ffmpeg.org/ticket/8747
Status: Issue closed
|
cadets/cadets-ui | 299713636 | Title: Separate "Details" and "Neighbours" from "Inspector" window.
Question:
username_0: Now that we have a UI framework for doing MDI properly, let's make the "Details" and "Neighbours" views available independent of how the "Inspector" window is scrolled. Some workflows / stages of analysis may benefit from showing all three sets of details together; if they are not required they can always be minimized or closed.<issue_closed>
Status: Issue closed |
benedictchen/google-chrome-html5video-controls | 82043427 | Title: doesn't work with Vimeo embedded videos
Question:
username_0: viewing videos on vimeo.com works
but when a site embeds a vimeo video, things aren't happy. The controller shows up at first but doesn't work; after a while it disappears. Meanwhile, every click is intercepted by the vimeo player.
Here is an example:
https://www.kickstarter.com/projects/1598272670/chip-the-worlds-first-9-computer/description
(not the top video, which is flash - lower down, "pocket chip demonstration").
thanks for this extension. it is awesome.
Status: Issue closed
Answers:
username_1: It was an annoying issue where Vimeo overlays an invisible div on top of everything and intercepts all clicks. It should be solved, please wait as the Chrome Dev store propagates the changes. |
getsentry/sentry-elixir | 437635385 | Title: exq worker exception not sent
Question:
username_0: ### Environment
Tested with two configurations
* Elixir version (elixir -v): 1.7.3 / 1.8.1
* Erlang/OTP version (erl): 21.3.3
* Sentry version (mix deps): 6.4.2 / 7.0.6
### Description
Exceptions raised in [exq](https://github.com/akira/exq) workers are not sent to Sentry when the app is running as a Distillery release.
When running the app with mix, exceptions are sent correctly.
Answers:
username_1: Hi @username_0, thanks for opening an issue!
For the exceptions being raised, are they exceptions that would crash the process, or are you capturing them in the job and sending them manually?
exq offers a middleware hook when a job fails in `after_failed_work`. Is that something you're using?
I'm unfortunately not super familiar with distillery deployments, and am unsure what would cause such an issue. Do other Sentry exceptions work, and it's just exq workers with the problem?
username_0: Hi @username_1
Actually I now catch exceptions manually in the `after_failed_work` hook. But I didn't need anything extra when running the app with `mix phx.server`.
It looks like the release assembled by Distillery doesn't embed something needed to catch the exception automatically (it does not depend on the env; tested with both dev and prod releases).
Yes, it's only on exq workers.
username_1: It seems this issue is outside of Exq, Distillery, and Sentry. I've opened up a PR in Elixir here that should fix the issue: https://github.com/elixir-lang/elixir/pull/9020
Should it be merged, a new Elixir version will need to be released, etc. before the fix is available
In the meantime, it's probably best to rely on the `after_failed_work` hook and send events manually if you are running under a Distillery release:
```elixir
defmodule ErrorReports.ExqSentryMiddleware do
@behaviour Exq.Middleware.Behaviour
def before_work(pipeline) do
pipeline
end
def after_processed_work(pipeline) do
pipeline
end
def after_failed_work(pipeline) do
{error, stacktrace} = pipeline.assigns.error
Sentry.capture_exception(error, [stacktrace: stacktrace])
pipeline
end
end
```
username_0: Thanks for digging into this and for the info.
We will keep the manual `capture_exception` in `after_failed_work` until the fix lands on the Elixir side.
We will keep the manual `capture_exception` into `after_failed_work` until the fix on the elixir side.
username_1: Sounds good, thank you again for reporting this issue 🙂
I'm going to close this issue, but feel free to comment back if something comes up and it needs to be reopened.
Status: Issue closed
username_0: Thank you for your help @username_1 !
username_1: A bit behind on sharing it, but the fix was released in [Elixir 1.8.2](https://github.com/elixir-lang/elixir/blob/v1.8/CHANGELOG.md#v182-2019-05-11) |
wheaton5/souporcell | 607877422 | Title: running souporcell in terminal
Question:
username_0: hello username_1 -
I am trying to use your program to cluster a single cell RNA seq sample that should have cells from two genotypes. I am running it on my institute's supercomputing machine (remotely) using terminal on my mac. I'm getting this error - do you know what could be going on? do I need to run this within python in my terminal?
I used this form after installing singularity -
singularity exec /path/to/souporcell.sif souporcell_pipeline.py -i /path/to/possorted_genome_bam.bam -b /path/to/barcodes.tsv -f /path/to/reference.fasta -t num_threads_to_use -o output_dir_name -k num_clusters
I used num_threads = 8 (how do you choose this?)
num_clusters = 2 (should be two "genotypes")
checking modules
imports done
checking bam for expected tags
Traceback (most recent call last):
File "/opt/souporcell/souporcell_pipeline.py", line 59, in <module>
for (index, line) in enumerate(barcodes):
File "/usr/local/envs/py36/lib/python3.6/encodings/ascii.py", line 26, in decode
return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0x8b in position 1: ordinal not in range(128)
thanks I would appreciate any help! I'm an immunologist trying to run my own code.
could also email me - <EMAIL>
Answers:
username_1: I probably know the issue, but not 100%.
Singularity only mounts the directories downstream of the working directory from which you run it. So if if you are running from /this/path and you are pointing at files in /this/otherpath/... or /otherpath/... it will not see that file. It will only see files in /this/path/whatever
Also I don't know your cluster situation, but most scientific computing clusters will have head nodes which you are login to and then worker nodes on which large jobs are to be run. In order to run a large job such as souporcell you need to submit it to a cluster job scheduler such as SGE or LSF or SLURM. You may want to ask your system administrators/help desk about your particular system. Depending on your cluster settings you might be able to get away with running it on a head node but to reduce memory usage (usually limited on head nodes) and threads (also commonly limited on head nodes) you could run with --skip_remap True --common_variants <1kgenomes common variants linked in my readme> and --num_threads 4
As to your question of how I choose num threads, I normally choose 8, more will be faster but as not every stage is able to use all threads available, there will be some amount of diminishing returns with higher threads. Also in a job scheduler system you are gonna get an 8 thread job pretty fast but might have to wait a long time to even start a 32 thread job.
I hope this is helpful and I'm happy to answer any more question.
Best,
Haynes
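Putting the advice above together, a hedged sketch of an invocation run from a directory that already contains all the inputs (all file names here are placeholders):
```bash
cd /path/containing/all/inputs   # singularity binds the current working directory by default
singularity exec souporcell.sif souporcell_pipeline.py \
  -i possorted_genome_bam.bam \
  -b barcodes.tsv \
  -f reference.fa \
  -t 4 \
  -o souporcell_out \
  -k 2 \
  --skip_remap True \
  --common_variants common_variants_grch38.vcf
```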
username_0: thanks so much for the quick reply!!! I really appreciate it.
I will try the skip_remap and I know how to set up for my supercomputing institute's cluster job scheduler.
so for my other problem with the directories - I will try to simply just set up all the files for souporcell and single cell bam files in the same directory.
username_0: hello! I think I've almost gotten this to work, but I keep seeing the same error when I run the code with skip_remap and common variants. I've been trying to figure it out and would love any help! I'm sorry if this is a dumb question.
running vartrix
Traceback (most recent call last):
File "/opt/souporcell/souporcell_pipeline.py", line 556, in <module>
vartrix(args, final_vcf, bam)
File "/opt/souporcell/souporcell_pipeline.py", line 481, in vartrix
subprocess.check_call(cmd, stdout = out, stderr = err)
File "/usr/local/envs/py36/lib/python3.6/subprocess.py", line 311, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['vartrix', '--mapq', '30', '-b', '/panfs/roc/groups/1/tolarj/jriedl/.singularity/souporcell_minimap_tagged_sorted.bam', '-c', '/panfs/roc/groups/1/tolarj/jriedl/.singularity/barcodes.tsv', '--scoring-method', 'coverage', '--threads', '8', '--ref-matrix', '/panfs/roc/groups/1/tolarj/jriedl/.singularity/ref.mtx', '--out-matrix', '/panfs/roc/groups/1/tolarj/jriedl/.singularity/alt.mtx', '-v', '/panfs/roc/groups/1/tolarj/jriedl/.singularity/common_variants_covered.vcf', '--fasta', '/panfs/roc/groups/1/tolarj/jriedl/.singularity/GRCh38_latest_genomic.fna', '--umi']' returned non-zero exit status 1.
I'm using the file GRCh38_latest_genomic.fna as the reference fasta, from https://www.ncbi.nlm.nih.gov/genome/guide/human/.
The error file (vartrix.err) says:
[W::vcf_parse] Contig '2' is not defined in the header. (Quick workaround: index the file with tabix.)
[W::vcf_parse] Contig '3' is not defined in the header. (Quick workaround: index the file with tabix.)
[W::vcf_parse] Contig '4' is not defined in the header. (Quick workaround: index the file with tabix.)
[W::vcf_parse] Contig '5' is not defined in the header. (Quick workaround: index the file with tabix.)
[W::vcf_parse] Contig '6' is not defined in the header. (Quick workaround: index the file with tabix.)
[W::vcf_parse] Contig '7' is not defined in the header. (Quick workaround: index the file with tabix.)
[W::vcf_parse] Contig '8' is not defined in the header. (Quick workaround: index the file with tabix.)
[W::vcf_parse] Contig '9' is not defined in the header. (Quick workaround: index the file with tabix.)
[W::vcf_parse] Contig '10' is not defined in the header. (Quick workaround: index the file with tabix.)
[W::vcf_parse] Contig '11' is not defined in the header. (Quick workaround: index the file with tabix.)
[W::vcf_parse] Contig '12' is not defined in the header. (Quick workaround: index the file with tabix.)
[W::vcf_parse] Contig '13' is not defined in the header. (Quick workaround: index the file with tabix.)
[W::vcf_parse] Contig '14' is not defined in the header. (Quick workaround: index the file with tabix.)
[W::vcf_parse] Contig '15' is not defined in the header. (Quick workaround: index the file with tabix.)
[W::vcf_parse] Contig '16' is not defined in the header. (Quick workaround: index the file with tabix.)
[W::vcf_parse] Contig '17' is not defined in the header. (Quick workaround: index the file with tabix.)
[W::vcf_parse] Contig '18' is not defined in the header. (Quick workaround: index the file with tabix.)
[W::vcf_parse] Contig '19' is not defined in the header. (Quick workaround: index the file with tabix.)
[W::vcf_parse] Contig '20' is not defined in the header. (Quick workaround: index the file with tabix.)
[W::vcf_parse] Contig '21' is not defined in the header. (Quick workaround: index the file with tabix.)
[W::vcf_parse] Contig '22' is not defined in the header. (Quick workaround: index the file with tabix.)
[W::vcf_parse] Contig 'X' is not defined in the header. (Quick workaround: index the file with tabix.)
[... the same block of contig warnings (1-22, X) is printed a second time ...]
17:42:23 [ERROR] Sequence 1 not seen in FASTA
do you think this is an issue with the fasta I chose or with my data files?
thanks! I really appreciate the error files as well, they're so helpful! I was already able to work around a few problems with them.
username_1: Check your bam vs your fasta (and also vs the common_variants vcf). You are either on a different reference (hg19 vs GRCh38) or a different contig naming scheme (chr1 vs 1). A quick way to compare them is sketched below.
You can also remove --skip_remap and --common_variants, and it will remap to the reference you chose and call variants from that bam, so everything will be on the same fasta at that point. I've accidentally had this problem many times because I mismatched the bam reference with the fasta or the common-variants vcf.
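A hedged sketch of comparing contig names across the three inputs (file names are placeholders):
```bash
# contig names the BAM was aligned against
samtools view -H possorted_genome_bam.bam | grep '^@SQ' | head
# contig names in the reference fasta
grep '^>' reference.fa | head
# contig names used by the variants vcf
grep -v '^#' common_variants.vcf | cut -f1 | sort -u | head
```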
username_0: yes!! Thank you for that, you were totally right. I changed to a .fa file and it worked!! So exciting.
This is so helpful for my work. I really really appreciate your willingness to answer questions!
Thanks again and take care. |
panyanbin/blog-comments | 930682785 | Title: Enabling remote connections and username/password authentication for MongoDB | Lao Pan's Blog
Question:
username_0: https://www.username_0.com/article/c602b9e2.html
After installing MongoDB, by default it only accepts local connections, and without a password. In most cases, however, we connect from a local machine to a remote MongoDB for data maintenance and other operations. Since it is a remote connection, it certainly should not be reachable without a password; otherwise anyone who knows the server's IP and port could access it. This article explains how to enable remote connections and username/password authentication for a MongoDB database.
Answers:
username_1: In the step that creates the regular admin user, a closing ")" is missing at the very end:
db.createUser({user: 'myadmin', pwd: '<PASSWORD>', roles:["dbAdminAnyDatabase", "readWriteAnyDatabase"]} |
origin-energy/java-snapshot-testing | 1088503318 | Title: Triple newlines can be generated by the toString serializer and cause false negatives
Question:
username_0: The ToStringSnapshotSerializer can generate a string that contains triple newlines, the reserved sequence, if adjacent objects contain one newline at the border each. [This code](https://github.com/origin-energy/java-snapshot-testing/blob/master/java-snapshot-testing-core/src/main/java/au/com/origin/snapshots/serializers/ToStringSnapshotSerializer.java#L28) in the serializer can produce triple non-escaped newlines if one object has the string "...\n" and one has the string "\n...". The resulting string contains triple newlines and breaks snapshot tests where running the same test will error on the second run because of the unescaped sequence.
Answers:
username_1: I'm thinking this only happens when using varargs param right.
Yeah this is a problem.
```
expect.toMatchSnapshot("A "," B")
```
I was thinking of removing the varargs param in a future version of this library because of another issue I was having with it. This scenario confirms my thought that maybe it's a bad idea.
Let me think about a solution; in the meantime you might need to right-trim your objects, or skip the varargs param and join your strings yourself
```
expect.toMatchSnapshot("A " + " B")
```
username_0: Yeah, it only happens when multiple strings are passed. My workaround has been to join them into a single string manually with newlines. Then the built-in triple-newline detection can come into action when necessary.
Status: Issue closed
|
awslabs/amazon-kinesis-video-streams-webrtc-sdk-c | 955523379 | Title: [QUESTION] Cross-compile cmake fails
Question:
username_0: I'm using the latest committed version
Please tell me the workaround
Cmake Command:
cmake .. -DBUILD_OPENSSL=TRUE -DBUILD_OPENSSL_PLATFORM=linux-generic32 -DBUILD_LIBSRTP_HOST_PLATFORM=x86_64-unknown-linux-gnu -DBUILD_LIBSRTP_DESTINATION_PLATFORM=arm-unknown-linux-uclibcgnueabi
Cmake log:
CMake Error at CMakeLists.txt:245 (set_target_properties):
set_target_properties called with incorrect number of arguments.
Answers:
username_1: Please follow the steps here:
https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-c/blob/master/.travis.yml#L170-L187
We CI that runs an ARM cross compile with each commit, I've verified that it is passing. Also tried locally on a Ubuntu box with an arm toolchain and it was working fine.
username_0: I followed the procedure but it fails
failing with libsrtp
-- Check size of unsigned long
-- Check size of unsigned long - done
-- Check size of unsigned long long
-- Check size of unsigned long long - done
-- Performing Test HAVE_INLINE
-- Performing Test HAVE_INLINE - Success
-- Found OpenSSL: /home/admin/sdk/20210730/amazon-kinesis-video-streams-webrtc-sdk-c/open-source/lib/libssl.so;/home/admin/sdk/20210730/amazon-kinesis-video-streams-ebrtc-sdk-c/open-source/lib/libcrypto.so (found version "1.1.1g")
CMake Error at CMakeLists.txt:245 (set_target_properties):
set_target_properties called with incorrect number of arguments.
-- Configuring incomplete, errors occurred!
See also "/home/admin/sdk/20210730/amazon-kinesis-video-streams-webrtc-sdk-c/open-source/libsrtp/build/src/project_libsrtp-build/CMakeFiles/CMakeOutput.log".
See also "/home/admin/sdk/20210730/amazon-kinesis-video-streams-webrtc-sdk-c/open-source/libsrtp/build/src/project_libsrtp-build/CMakeFiles/CMakeError.log".
gmake[2]: *** [build/src/project_libsrtp-stamp/project_libsrtp-configure] Error 1
gmake[1]: *** [CMakeFiles/project_libsrtp.dir/all] Error 2
gmake: *** [all] Error 2
CMake Error at CMake/Utilities.cmake:72 (message):
CMake step for libsrtp failed: 2
Call Stack (most recent call first):
CMakeLists.txt:131 (build_dependency)
username_0: sorry.
I upgraded the CMake version and it succeeded.
v3.6.3 -> v3.20
Status: Issue closed
|
department-of-veterans-affairs/va.gov-team | 944988713 | Title: Remove "vets.gov" tagging from GTM
Question:
username_0: We still have several objects related to vets.gov. As that domain is long decommissioned, we can remove them...
- `Environment Lookup IDs` contains two rows that can be removed
- `Vets.gov Umbrella Tracking ID` can be removed
- `Testing Tracker ID` can be removed
- `Navigation Click - Vets.gov to Preview Button` and its trigger `Sitewide Alert - preview.va.gov Button Click` can be removed
- Remove vets.gov from `Cross-Domains` variable
Answers:
username_1: I deleted 3 rows in the `Environment Lookup IDs` variable:
1. `(dev|staging|preview)\.vets\.gov` that had the output of `{{Testing Tracker ID}}` (also deleted)
2. `vets\.gov` that had the output of `{{Vets.gov Umbrella Tracking ID}}` (also deleted)
3. `^(www\.|preview\.)?va\.gov` that had the output of `{{WBC VA.gov Production Tracking ID}}` (also deleted, since the default value of this lookup table is the prod tracking ID: UA-50123418-16)
username_0: LGTM though I did restore the `{{WBC VA.gov Production Tracking ID}}` so we don't have the value hardcoded anywhere.
Status: Issue closed
username_0: Published to production. Closing. |
bradwestfall/webdevphoenix | 67428826 | Title: When doing PR, emails come from github
Question:
username_0: Just a note: when a user is contributing a company, they may do this within the GitHub interface. I personally forked the project and created a company page which will later be submitted as a pull request. Forking the project then triggers the following emails.
```
The page build completed successfully, but returned the following warning:
CNAME already taken: webdevphoenix.com
For information on troubleshooting Jekyll see:
https://help.github.com/articles/using-jekyll-with-pages#troubleshooting
If you have any questions you can contact us by replying to this email.
```
Answers:
username_1: Hmm, yea that's interesting. So I'm not sure how familiar you are with github pages, but the idea is that if you have a repo with a "gh-pages" branch, then that branch automatically gets turned into a public facing website. Having a file called CNAME in that branch tells GitHub that you intend to point a domain name to GitHub. So it's somewhat of a race, the first repo to claim a domain name by virtue of having a CNAME file will create this error message for other repos on github that try to claim the same name.
So that's how it works. I don't really know what we can do about it. Unless it is possible to only fork the master branch and work there? Then do a pull request to the webdevphoenix master branch. If that were possible, github wouldn't complain about the CNAME file in a master branch, only a gh-pages branch
username_0: Yep, the good news is that the emails don't keep filling my inbox; I received only two in the first hour and none since. A suggested fix might be a git submodule for the 'companies' directory, allowing a separate queue for PRs of bugs (this repo) and PRs of company updates (the submodule repo).
OR update contrib.md with the preferred method to add a PR and/or just list this as a known issue. Either way with this issue reported, others won't likely report the same issue.
username_0: #20 |
facebookresearch/habitat-sim | 492293093 | Title: Error during install: recompile with -fPIC
Question:
username_0: ## Steps to reproduce
On Ubuntu 16.04.4 (headless) with miniconda installed:
```bash
sudo apt update && sudo apt install libjpeg-dev libpng-dev libglfw3-dev libglm-dev libx11-dev libomp-dev libegl1-mesa-dev
conda create -n habitat python=3.6 cmake=3.14
conda activate habitat
git clone https://github.com/facebookresearch/habitat-sim.git
cd habitat-sim
pip install -r requirements.txt
python setup.py install --headless
```
## Observed Results
It fails with the following error:
```
Linking CXX shared module ../../lib.linux-x86_64-3.6/habitat_sim/_ext/habitat_sim_bindings.cpython-36m-x86_64-linux-gnu.so
/usr/bin/ld: /home/rpg_students/mouritzen/.local/lib/libgflags.a(gflags.cc.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC
/home/rpg_students/mouritzen/.local/lib/libgflags.a: error adding symbols: Bad value
collect2: error: ld returned 1 exit status
```
<details><summary>Full output:</summary><p>
```
$ python setup.py install --headless
running install
running bdist_egg
running egg_info
creating habitat_sim.egg-info
writing habitat_sim.egg-info/PKG-INFO
writing dependency_links to habitat_sim.egg-info/dependency_links.txt
writing requirements to habitat_sim.egg-info/requires.txt
writing top-level names to habitat_sim.egg-info/top_level.txt
writing manifest file 'habitat_sim.egg-info/SOURCES.txt'
reading manifest file 'habitat_sim.egg-info/SOURCES.txt'
writing manifest file 'habitat_sim.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
creating build
creating build/lib.linux-x86_64-3.6
creating build/lib.linux-x86_64-3.6/habitat_sim
copying habitat_sim/__init__.py -> build/lib.linux-x86_64-3.6/habitat_sim
copying habitat_sim/geo.py -> build/lib.linux-x86_64-3.6/habitat_sim
copying habitat_sim/logging.py -> build/lib.linux-x86_64-3.6/habitat_sim
copying habitat_sim/utils.py -> build/lib.linux-x86_64-3.6/habitat_sim
copying habitat_sim/sensor.py -> build/lib.linux-x86_64-3.6/habitat_sim
copying habitat_sim/gfx.py -> build/lib.linux-x86_64-3.6/habitat_sim
copying habitat_sim/errors.py -> build/lib.linux-x86_64-3.6/habitat_sim
copying habitat_sim/simulator.py -> build/lib.linux-x86_64-3.6/habitat_sim
copying habitat_sim/scene.py -> build/lib.linux-x86_64-3.6/habitat_sim
creating build/lib.linux-x86_64-3.6/habitat_sim/nav
copying habitat_sim/nav/greedy_geodesic_follower.py -> build/lib.linux-x86_64-3.6/habitat_sim/nav
copying habitat_sim/nav/__init__.py -> build/lib.linux-x86_64-3.6/habitat_sim/nav
creating build/lib.linux-x86_64-3.6/habitat_sim/sensors
copying habitat_sim/sensors/__init__.py -> build/lib.linux-x86_64-3.6/habitat_sim/sensors
copying habitat_sim/sensors/sensor_suite.py -> build/lib.linux-x86_64-3.6/habitat_sim/sensors
creating build/lib.linux-x86_64-3.6/habitat_sim/agent
copying habitat_sim/agent/agent.py -> build/lib.linux-x86_64-3.6/habitat_sim/agent
copying habitat_sim/agent/__init__.py -> build/lib.linux-x86_64-3.6/habitat_sim/agent
creating build/lib.linux-x86_64-3.6/habitat_sim/bindings
[Truncated]
self.build()
File "/home/rpg_students/mouritzen/.conda/envs/habitat/lib/python3.6/distutils/command/install_lib.py", line 107, in build
self.run_command('build_ext')
File "/home/rpg_students/mouritzen/.conda/envs/habitat/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/home/rpg_students/mouritzen/.conda/envs/habitat/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "setup.py", line 202, in run
self.build_extension(ext)
File "setup.py", line 274, in build_extension
shlex.split("cmake --build {}".format(self.build_temp)) + build_args
File "/home/rpg_students/mouritzen/.conda/envs/habitat/lib/python3.6/subprocess.py", line 311, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', 'build', '--config', 'RelWithDebInfo', '--', '-j']' returned non-zero exit status 2.
```
</p></details>
Recompiling with `CMAKE_CXX_FLAGS="-fpic" python setup.py install --headless` gives the same result.
Any idea what is going wrong?
Answers:
username_1: This seems to be a similar case to #179 (and another TODO for #121); it's apparently picking up gflags libraries that are in `/home/rpg_students/mouritzen/.local/lib/` (it doesn't *need* them to work). Can you try passing `CMAKE_DISABLE_FIND_PACKAGE_gflags=TRUE` to CMake?
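For reference, a rough sketch of one way to forward that option, assuming you re-run the CMake configure step by hand in the build directory (habitat-sim's `setup.py` may expose a different hook):
```python
# Hypothetical re-configure step with gflags discovery disabled.
# CMAKE_DISABLE_FIND_PACKAGE_<PackageName> is a standard CMake variable
# that makes find_package(gflags) report the package as not found.
import subprocess

subprocess.check_call(
    ["cmake", "-DCMAKE_DISABLE_FIND_PACKAGE_gflags=TRUE", ".."],
    cwd="build",  # assumed CMake build directory
)
```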
Status: Issue closed
username_0: I removed `~/.local/lib` (don't need it anyway) and now it worked. Thanks for the hint!
username_1: Opened #247 to fix properly on our side. |
sing1ee/elasticsearch-jieba-plugin | 227271042 | Title: Why does startup fail after unzipping into the jieba directory under plugins?
Question:
username_0: org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: Could not load plugin descriptor for existing plugin [elasticsearch-jieba-plugin-5.3.0]. Was the plugin built before 2.0?
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:127) ~[elasticsearch-5.3.0.jar:5.3.0]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:114) ~[elasticsearch-5.3.0.jar:5.3.0]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:58) ~[elasticsearch-5.3.0.jar:5.3.0]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122) ~[elasticsearch-5.3.0.jar:5.3.0]
at org.elasticsearch.cli.Command.main(Command.java:88) ~[elasticsearch-5.3.0.jar:5.3.0]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91) ~[elasticsearch-5.3.0.jar:5.3.0]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84) ~[elasticsearch-5.3.0.jar:5.3.0]
Caused by: java.lang.IllegalStateException: Could not load plugin descriptor for existing plugin [elasticsearch-jieba-plugin-5.3.0]. Was the plugin built before 2.0?
at org.elasticsearch.plugins.PluginsService.getPluginBundles(PluginsService.java:295) ~[elasticsearch-5.3.0.jar:5.3.0]
at org.elasticsearch.plugins.PluginsService.<init>(PluginsService.java:131) ~[elasticsearch-5.3.0.jar:5.3.0]
at org.elasticsearch.node.Node.<init>(Node.java:302) ~[elasticsearch-5.3.0.jar:5.3.0]
at org.elasticsearch.node.Node.<init>(Node.java:238) ~[elasticsearch-5.3.0.jar:5.3.0]
at org.elasticsearch.bootstrap.Bootstrap$6.<init>(Bootstrap.java:242) ~[elasticsearch-5.3.0.jar:5.3.0]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:242) ~[elasticsearch-5.3.0.jar:5.3.0]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:360) ~[elasticsearch-5.3.0.jar:5.3.0]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:123) ~[elasticsearch-5.3.0.jar:5.3.0]
... 6 more
Caused by: java.nio.file.NoSuchFileException: E:\elasticsearch-5.3.0\elasticsearch-5.3.0\plugins\elasticsearch-jieba-plugin-5.3.0\plugin-descriptor.properties
at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:79) ~[?:?]
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) ~[?:?]
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102) ~[?:?]
at sun.nio.fs.WindowsFileSystemProvider.newByteChannel(WindowsFileSystemProvider.java:230) ~[?:?]
at java.nio.file.Files.newByteChannel(Files.java:361) ~[?:1.8.0_102]
at java.nio.file.Files.newByteChannel(Files.java:407) ~[?:1.8.0_102]
at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384) ~[?:1.8.0_102]
at java.nio.file.Files.newInputStream(Files.java:152) ~[?:1.8.0_102]
at org.elasticsearch.plugins.PluginInfo.readFromProperties(PluginInfo.java:86) ~[elasticsearch-5.3.0.jar:5.3.0]
at org.elasticsearch.plugins.PluginsService.getPluginBundles(PluginsService.java:292) ~[elasticsearch-5.3.0.jar:5.3.0]
at org.elasticsearch.plugins.PluginsService.<init>(PluginsService.java:131) ~[elasticsearch-5.3.0.jar:5.3.0]
at org.elasticsearch.node.Node.<init>(Node.java:302) ~[elasticsearch-5.3.0.jar:5.3.0]
at org.elasticsearch.node.Node.<init>(Node.java:238) ~[elasticsearch-5.3.0.jar:5.3.0]
at org.elasticsearch.bootstrap.Bootstrap$6.<init>(Bootstrap.java:242) ~[elasticsearch-5.3.0.jar:5.3.0]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:242) ~[elasticsearch-5.3.0.jar:5.3.0]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:360) ~[elasticsearch-5.3.0.jar:5.3.0]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:123) ~[elasticsearch-5.3.0.jar:5.3.0]
... 6 more
Answers:
username_1: The directory layout is wrong.
username_1: - elasticsearch-5.3.0
  - plugins
    - jieba
      - dic
      - elasticsearch-jieba-plugin-5.3.0.jar
      - jieba-analysis-1.0.2.jar
      - plugin-descriptor.properties
      - plugin.xml
username_0: Where can I download the jieba-analysis-1.0.2.jar package?
username_0: Got it, thanks! Turns out the zip package needs to be copied and unzipped into the directory; I had unzipped the jar, which is why it didn't work.
Status: Issue closed
|
micahstubbs/cuda-compat-table | 265604403 | Title: port google sheet to proper web page
Question:
username_0: google sheet
https://docs.google.com/spreadsheets/d/e/2PACX-1vRzXswjp7VepR8tpYIMHlf33_zOca0mX5hvbCa23ABoD5L0Ctp7mkoDa1vOL72kLRQqJm1l-tCNhxE5/pubhtml
an example of a proper web page
https://kangax.github.io/compat-table/es6/ |
rollup/rollup-plugin-json | 370032093 | Title: Should pass Array objects to dataToEsm
Question:
username_0: Currently, according to the source, if a parsed JSON value is determined to be *not* an Object, as in `Object.prototype.toString.call(data) !== '[object Object]'`, it will not be passed on to `dataToEsm`. However, this behavior excludes the case when the JSON is an Array. Since by [definition](https://www.json.org/) JSON can be an object or an array, I believe it's appropriate to modify the conditional to allow `'[object Array]'`, or perhaps remove the block altogether. Thanks for the consideration.
Answers:
username_1: I'm also trying to import a json file which contains an array. It does not get output in the bundle. I also tried the fix in #48, but that did not seem to fix the issue. 🤔
username_2: I think I have a similar issue too. I have a few json files, and they contain arrays of objects.
when I run `rollup -c` I get `(json plugin) SyntaxError: Unexpected token ; in JSON at ...`
username_2: @username_3 sorry for bothering you. Maybe you can advise what we need to do?
username_3: Fix at #53
Status: Issue closed
|
rust-lang/rustfmt | 558615033 | Title: rustfmt doesn't recognize efiapi calling convention
Question:
username_0: There was a similar report in #1375; maybe the solution is similar, i.e. just upgrading the syntax dependencies, which is underway in #4022. Would this be fixed by that?
Answers:
username_0: The `efiapi` was added in https://github.com/rust-lang/rust/pull/65809
username_1: Believe this is a duplicate of #3903, but yes, this will be resolved once #4022 is merged (and then released)
username_0: Yeah it's probably a duplicate of that.
Status: Issue closed
|
godotengine/godot | 397396961 | Title: Exported iOS projects cannot be deployed with 3.1 beta
Question:
username_0: **Godot version:**
3.1 beta
**OS/device including version:**
macOS Mojave, Xcode
**Issue description:**
I've got a project that I'd like to export to iOS. The project can be found here: --- to be added --
The export itself succeeds and I end up with an Xcode project file.
Deploying the project to a device fails with an error saying:
"App installation failed - No code signature found."
The reason is that no signing certificate is selected but *cannot* be selected. My assumption is that there is some Xcode project file compatibility issues. See video for details.
**Steps to reproduce:**
See videos for details. It will show exactly what I'm doing:
https://www.youtube.com/watch?v=lk3ulZAql3g
* Open the project referenced above
* Export to iOS
* Try to deploy in Xcode
**Minimal reproduction project:**
see above
Answers:
username_1: watched your video and downloaded your project. Managed to get it running on my iPhone by changing some of your export settings in Godot and letting Xcode automatically sign the project
had to type in the text as shown in red (text was grey)

also had to change the Deployment target to iOS 10 to stop a flood of errors (had to do this on my project as well)
don't forget to add StoreKit.framework to your linked libraries as well or you will get a "linker command failed error" and the project will not build

username_2: I'm on it
username_3: CC @bruvzg - I think some of these items have been implemented since then, is the current state good enough to close this issue or is there more validation that could be done? |
sghall/react-compound-slider | 657387978 | Title: Tutorial step generates TypeError: children is not a function
Question:
username_0: The tutorial step where you use an array of tick values for the first time fails.
https://sghall.github.io/react-compound-slider/#/getting-started/tutorial
On the line: `<Ticks values={[0, 25, 50, 75, 100]}> // pass in an array of values`
It results in : `TypeError: children is not a function`
Removing the `// pass in an array of values` fixes the problem.
Thanks for the great tutorial! Very handy to have all of the code for each step, instead of just the changes, for easy copying and pasting. (It might be nice to highlight the diffs from each step, now that I'm thinking about it.) |
chef/supermarket | 234597242 | Title: Error Uploading Cookbook to Supermarket.
Question:
username_0: When i try to upload cookbook i get an exception
ERROR: Tarball has contents that are not what they are reported to be
knife supermarket share cookbook_name -VV -o . --supermarket-site https://XXXXXXXX.YYYYY.com
Knife Supermarket gem version in 0.3.0.
Thanks ,
<NAME>aran
Answers:
username_1: This is an error coming from the file handler within Supermarket. When it validated the uploaded gzipped tar file, it found that the file type of the actual contents did not appear to be a gzipped tarball.
Some things to try:
+ Use the latest [ChefDK](https://downloads.chef.io/chefdk).
+ Do not use the `knife-supermarket` gem separately; the gem was deprecated when the `supermarket` commands were added directly to `knife` in core Chef v12.11.19
+ Try publishing the cookbook with a different client, like [`stove`](https://github.com/sethvargo/stove)
<details><summary>stove example</summary><p>
```
stove --no-git --endpoint https://XXXXXXXX.YYYYY.com/api/v1 --path </path/to/cookbook_name>
```
</p></details>
username_1: @username_0 Did you try some of the options offered above? I'll be closing this issue in a couple days unless there is some more information to troubleshoot with.
Status: Issue closed
|
appium/appium | 196153267 | Title: Fix WDA starting (1.6.3+)
Question:
username_0: In 1.6.3, starting on a real device is problematic at times, and in general it is slow.
Move to a `ping`-based startup so we don't have to do anything to the logs to see if the server has started on the device.
Status: Issue closed |
rocknsm/rock | 281710326 | Title: We need to add DAG enabled Suricata to Rock
Question:
username_0: We need to add support for the DAG to Suricata in Rock. Current working code to do this is:
[026_sensor_suricata-install.sh.txt](https://github.com/rocknsm/rock/files/1555258/026_sensor_suricata-install.sh.txt)
Answers:
username_1: As long as we're doing this, might as well implement #219 as well.
username_1: Same discussion applies as to the Bro DAG support (see #217). Suricata doesn't use a plugin approach to capture methods, so the only way to do this is to create a "suricata-dag" package and ensure it declares conflict and providing equivalency to the "suricata" package.
Also, since Endace libraries are closed source, we cannot build them on the [RockNSM COPR](https://copr.fedorainfracloud.org/coprs/g/rocknsm/rocknsm-2.1/) like we build the rest of the packages.
Status: Issue closed
username_1: This is no longer needed. |
Stan125/GREA | 166265475 | Title: Install package failed - unexpected symbol in "Microsoft DiskPart"
Status: Issue closed
Question:
username_0: Hi Stani,
I installed R on my secondary drive which is always attached inside my computer.
Regards,
<NAME>
Answers:
username_1: Hi there,
could be an issue with R 3.3.1, I'm only using R 3.3.0.
However googling this issue seems to bring up some problems regarding installing something on external devices. Are you running R on an external drive?
Best,
Stani
username_0: Hi Stani,
I installed R on my secondary drive which is always attached inside my computer.
Regards,
<NAME>
username_1: Hmm... there seems to be something wrong with installing stuff on this drive.
username_0: same result |
mbdavid/LiteDB | 261980576 | Title: Query DbRef without use Include
Question:
username_0: Hi,
I use a base class with generic methods to get data from collections, and the child classes have different properties using the **DbRef** attribute.
Is there any possibility to query **DbRef** fields without needing to **Include** each of them?
Thanks.
Answers:
username_1: Hi @username_0, do you want to query data from a collection using an external document (from another collection, via `DbRef`)? It's not possible (in LiteDB), not even using `Include`. You can only do that using LinqToObject.
That's because filtering is possible only if the data exists inside the document you are testing. There is no `JOIN` like in a relational database. If you need to filter a document, all the data must be inside that document (you can use a sub-document for that).
username_0: I understand. Thanks @username_1
Meanwhile, I'm removing the DbRef and storing the data on the same document.
Just to show my generic implementation, in case somebody wants to know how I did it.
Creating a custom class Attribute to name collections directly on classes.
```csharp
[AttributeUsage(AttributeTargets.Class)]
public class CollectionNameAttribute : Attribute
{
public string Name { get; set; }
public CollectionNameAttribute(string name)
{
Name = name;
}
}
```
Two simple classes:
```csharp
[CollectionName("users")]
public class User : BaseModel<User>
{
[BsonField("name")]
public string Name { get; set; }
[BsonField("age")]
public int Age { get; set; }
}
public class UserMatch : BaseModel<UserMatch>
{
[BsonField("user")]
public User User { get; set; }
[BsonField("score")]
public int Score { get; set; }
}
```
The BaseModel with:
- Id property to be shared with all children classes.
- CollectionName method to return the name configured directly on class OR the name of class in plural.
- Generic methods to Load/Save data.
```csharp
public class BaseModel<T> where T : new()
{
[BsonIndex(true)]
public int Id { get; set; }
private const string DATABASE_PATH = @"C:\Temp\MyData.db";
public static string CollectionName()
{
var tableAttr = (CollectionNameAttribute)typeof(T).GetCustomAttributes(typeof(CollectionNameAttribute), true).FirstOrDefault();
if (tableAttr != null)
{
return tableAttr.Name;
}
[Truncated]
{
using (var db = new LiteDatabase(DATABASE_PATH))
{
LiteCollection<T> collection = db.GetCollection<T>(CollectionName());
collection.Upsert((T)(object)this);
}
}
public void SaveList(List<T> list)
{
using (var db = new LiteDatabase(DATABASE_PATH))
{
LiteCollection<T> collection = db.GetCollection<T>(CollectionName());
collection.Upsert(list);
}
}
}
```
username_1: Hi @username_0, I recommend you use `Find` with an `Expression` to make use of LiteDB's index queries. In your example, you always run a "full search" and filter documents after reading everything from the database.
username_0: Thanks for the attention @username_1.
Yes, I agree with you. But the first time I used the Expression directly in Find, I got an error executing complex expressions. So I ended up changing my generic methods to execute a FindAll and later run the complex expressions using Where. If you have any tips about this, they will be very welcome :)
PS.: I'm using LiteDB in Unity (dll from unity3d branch).
Here is the old method:
```csharp
public static List<T> GetAll(System.Linq.Expressions.Expression<Func<T, bool>> predicate)
{
using (var db = new LiteDatabase(DATABASE_PATH))
{
LiteCollection<T> collection = db.GetCollection<T>(CollectionName());
List<T> result = collection.Find(predicate).ToList();
return result;
}
}
```
This simple example does not work on LiteDB default Expression:
```csharp
List<Word> result = Word.GetAll(x => x.Name.ToLower() == "avestruz");
```
Error returned:
```
LiteException: Property 'Name.ToLower(' was not mapped into BsonDocument.
LiteDB.QueryVisitor`1[AlfaEBetoSolucoes.AprenderALer.Models.Contents.Word].GetField (System.Linq.Expressions.Expression expr, System.String prefix)
LiteDB.QueryVisitor`1[AlfaEBetoSolucoes.AprenderALer.Models.Contents.Word].VisitExpression (System.Linq.Expressions.Expression expr, System.String prefix)
LiteDB.QueryVisitor`1[AlfaEBetoSolucoes.AprenderALer.Models.Contents.Word].Visit (System.Linq.Expressions.Expression`1 predicate)
LiteDB.LiteCollection`1[AlfaEBetoSolucoes.AprenderALer.Models.Contents.Word].Find (System.Linq.Expressions.Expression`1 predicate, Int32 skip, Int32 limit)
Teste.Teste1 () (at Assets/zz_LocalTests/Teste.cs:36)
```
username_1: Hi, if you are using it in Unity, OK, because there are some "non-working" things in v4 that don't work in Unity. But I still recommend you create 2 methods: one that uses the index and another for the full scan.
username_2: Hi! With the objective of organizing our issues, we are closing old unsolved issues. Please check the latest version of LiteDB and open a new issue if your problem/question/suggestion still applies. Thanks!
Status: Issue closed
|
MSLNZ/msl-loadlib | 975459178 | Title: ImportError: bad magic number in 'app': b'U\r\r\n'
Question:
username_0: Background: I was using msl-loadlib 0.7.0 two years ago with Python 3.6 and in a frozen environment using https://github.com/marcelotduarte/cx_Freeze. Now with msl-loadlib 0.7.0 with a newer Python version 3.8 I stumbled across https://github.com/MSLNZ/msl-loadlib/issues/21. I subsequently updated msl-loadlib (tested with 0.8.0. and 0.9.0) and now the following code
```
Client64.__init__(
self, module32="app.can_wrapper", append_sys_path=vendor_path,
)
```
errors with
```
msl.loadlib.exceptions.ConnectionTimeoutError: Timeout after 10.0 seconds. Could not connect to 127.0.0.1:63307
ImportError: bad magic number in 'app': b'U\r\r\n'
The missing module must be in sys.path (see the --append-sys-path option)
```
The code in question works well when executed within a regular Python script (unfrozen env) with all mentioned msl-loadlib versions.
Are there any best practices for using msl-loadlib in frozen environments? Do you have any idea whether it is the import machinery of msl-loadlib or the input machinery of cx_Freeze that is causing this?
Answers:
username_1: Would you be able to share a minimal example that raises this error?
username_0:
```
Traceback (most recent call last):
File "C:\Users\LeimgruberF\AppData\Local\Continuum\Miniconda3\envs\msl-loadlib-issue25\lib\site-packages\cx_Freeze\initscripts\__startup__.py", line 104, in run
module_init.run(name + "__main__")
File "C:\Users\LeimgruberF\AppData\Local\Continuum\Miniconda3\envs\msl-loadlib-issue25\lib\site-packages\cx_Freeze\initscripts\Console.py", line 15, in run
exec(code, module_main.__dict__)
File "app/issue25.py", line 31, in <module>
File "app/issue25.py", line 19, in __init__
File "C:\Users\LeimgruberF\AppData\Local\Continuum\Miniconda3\envs\msl-loadlib-issue25\lib\site-packages\msl\loadlib\client64.py", line 199, in __init__
utils.wait_for_server(host, port, timeout)
File "C:\Users\LeimgruberF\AppData\Local\Continuum\Miniconda3\envs\msl-loadlib-issue25\lib\site-packages\msl\loadlib\utils.py", line 282, in wait_for_server
raise ConnectionTimeoutError(
msl.loadlib.exceptions.ConnectionTimeoutError: Timeout after 10.0 seconds. Could not connect to 127.0.0.1:61160
ImportError: bad magic number in 'app': b'U\r\r\n'
The missing module must be in sys.path (see the --append-sys-path option)
The paths in sys.path are:
C:\Users\LEIMGR~1\AppData\Local\Temp\_MEI208242
C:\Users\LeimgruberF\dev\msl-loadlib-issue25\build\exe.win-amd64-3.8
C:\Users\LeimgruberF\dev\msl-loadlib-issue25\build\exe.win-amd64-3.8
C:\Users\LeimgruberF\dev\msl-loadlib-issue25\build\exe.win-amd64-3.8\lib\library.zip
C:\Users\LeimgruberF\dev\msl-loadlib-issue25\build\exe.win-amd64-3.8\lib
C:\Users\LeimgruberF\dev\msl-loadlib-issue25\build\exe.win-amd64-3.8\vendor
Cannot start the 32-bit server.
```
username_0: Following the lead in https://github.com/marcelotduarte/cx_Freeze/issues/525#issuecomment-902759437, when I replace `build\exe.win-amd64-3.8\lib\app\__init__.pyc` and `build\exe.win-amd64-3.8\lib\app\lib_wrapper.pyc` with their `*.py` source versions, I get this error:
```
❯ .\issue25.exe
Traceback (most recent call last):
File "C:\Users\LeimgruberF\AppData\Local\Continuum\Miniconda3\envs\msl-loadlib-issue25\lib\site-packages\cx_Freeze\initscripts\__startup__.py", line 104, in run
module_init.run(name + "__main__")
File "C:\Users\LeimgruberF\AppData\Local\Continuum\Miniconda3\envs\msl-loadlib-issue25\lib\site-packages\cx_Freeze\initscripts\Console.py", line 15, in run
exec(code, module_main.__dict__)
File "app/issue25.py", line 31, in <module>
File "app/issue25.py", line 19, in __init__
File "C:\Users\LeimgruberF\AppData\Local\Continuum\Miniconda3\envs\msl-loadlib-issue25\lib\site-packages\msl\loadlib\client64.py", line 199, in __init__
utils.wait_for_server(host, port, timeout)
File "C:\Users\LeimgruberF\AppData\Local\Continuum\Miniconda3\envs\msl-loadlib-issue25\lib\site-packages\msl\loadlib\utils.py", line 282, in wait_for_server
raise ConnectionTimeoutError(
msl.loadlib.exceptions.ConnectionTimeoutError: Timeout after 10.0 seconds. Could not connect to 127.0.0.1:50254
Instantiating the 32-bit server raised the following exception:
FileNotFoundException: Unable to find assembly 'cpp_lib32'.
at Python.Runtime.CLRModule.AddReference(String name)
Cannot start the 32-bit server.
```
which is to be expected and should work with the actual 3rd-party DLLs in place. This workaround is not really feasible, though, as the final version shipped to customers uses Cython for compilation to avoid shipping plain .pyc files.
username_1: Thanks for the MRE. I could reproduce the bad-magic-number error.
Re-freezing the 32-bit server using a 32-bit version of Python that is the same Python version that cxfreeze used for the build does fix this issue. I don't see any other solution.
To make the frozen 32-bit server yourself you would install the appropriate Windows x86 version from [python.org](https://www.python.org/downloads/) and then follow the instructions [here](https://msl-loadlib.readthedocs.io/en/stable/refreeze.html). I recommend downloading from python.org and not from a 32-bit (mini)conda installer because I can confirm that the python.org installer will solve this issue.
Please let me know if you have any problems creating your own 32-bit server.
username_0: Thanks for trying it out and your suggested fix. I managed to re-freeze the server using poetry + pyenv (finally an excuse to look into both...) following your instructions ("Using the CLI"). Can confirm it fixes this issue.
Thanks again for your support and your efforts on msl-loadlib!
Status: Issue closed
|
filesender/filesender | 286622508 | Title: uploading: wifi brownout to silent unrecoverable failure
Question:
username_0: On a Mac running Firefox 56, I started an upload and turned off wifi to see what sort of error message would be presented. After more than 500 seconds without network I saw the timers keep increasing per chunk, but no errors were reported. Turning wifi on again, the laptop gets the same IP address but the chunks do not start moving again. I am not sure what the default XHR timeout on Mac is, but it seems to be very long, or terasender is perhaps not picking up the fault on OS X. The requests time out on the server side, so it seems there is nothing for the browser to continue with, but it is not detecting this stale state.
This is using terasender and upload_display_per_file_stats=true to see how long each chunk is taking to upload.
Answers:
username_0: This is still reproducible. On chrome the timeout is 0. Firefox on osx (and maybe others) still does not recover from wifi brownout.
username_0: Timeout default is 0 on firefox on osx
Status: Issue closed
username_0: This is fixed with the above; sometimes it takes a few automatic retry attempts to get going, but that all happens without user interaction, assuming the wifi dropped out and came back again.
keystonejs/keystone | 24433355 | Title: Create button 'Save and Published' in admin
Question:
username_0: It's not comfortable to first set the state to Published and then have to press Save.
Answers:
username_1: We're closing all questions and support requests to keep the issue tracker unpolluted. Please ask this question on [Stackoverflow](https://stackoverflow.com/questions/tagged/keystonejs) or [Gitter](https://gitter.im/keystonejs/keystone)!
Status: Issue closed
|
hrg921/hrg921.github.io | 266401537 | Title: implement work experience
Question:
username_0: # project
this feature should be implemented with general page frame.
## information
none
Answers:
username_0: this feature should be implemented after #8 is closed.
username_0: this feature should be implemented after #18 is closed. |
vuejs/vue-loader | 910130287 | Title: Error TS2694 when using [email protected]
Question:
username_0: **package.json:**

**tsconfig.server.json**

**server.ts**

**When I run the command `tsc -p tsconfig.server.json`, I get an error**

**Please help me!!!**
chaimPaneth/react-native-jw-media-player | 1139346417 | Title: Error Rendering Player In Outside Of ScrollView - should have parent view controller:<RNSScreen> but actual parent is:<UIViewController>
Question:
username_0: We have been stuck with this annoying problem for days and we're totally out of ideas, although I'm pretty sure the solution is something simple.
Essentially we have the component working, but only under the condition that it's nested within a `ScrollView`; the second we remove the `ScrollView` and just wrap the component in a standard `View`, we get this error.
```Exception thrown while executing UI block: child view controller:<JWPlayerKit.JWPlayerViewController: 0x7f91c807ac00> should have parent view controller:<RNSScreen: 0x7f9168512950> but actual parent is:<UIViewController: 0x7f91aef1d920>```
As an example, the following will throw this error and crash the app.
```
<View>
<JWPlayer .../>
</View>
```
This however will work fine, and we have no idea why.
```
<ScrollView>
<JWPlayer .../>
</ScrollView>
```
We can see that `RNSScreen` is something to do with React Native Screens, but that all appears to be fine, and enabled.
Can anyone assist here? We're completely stuck and about to have to scrap the use of this component due to this one problem.
Answers:
username_0: Worth pointing out that I found this issue, although the suggested fix does not solve the problem.
https://github.com/chaimPaneth/react-native-jw-media-player/issues/159
username_1: same issue, but I solved this by replacing **createStackNavigator** with **createNativeStackNavigator**
like this
❌
import { createStackNavigator } from '@react-navigation/stack'
const Stack = createStackNavigator()
✅
import { createNativeStackNavigator } from '@react-navigation/native-stack'
const Stack = createNativeStackNavigator()
username_0:
```
<RootStack.Screen name="Root" component={BottomTabNavigator} />
<RootStack.Screen
name="Articles"
component={ArticleScreen}
options={{
title: "",
headerLeft: () => <BackButton />,
}}
/>
</RootStack.Navigator>
);
```
Navigating to the articles screen with the player throws this error and crashes the app.
username_0: For anyone hitting this error: I managed to solve the issue; it was related to navigation and re-rendering.
We were navigating to new pages which loaded new video players, while also changing the config passed into the player via `useState`, which re-rendered the player; that, combined with the page navigation, seemed to throw it off.
We solved the issue by essentially rendering the page once, passing in a ref from the parent screen which we associated with the player, then calling the `setPlaylistIndex` method instead of navigating to a new page and hardcoding the config to prevent re-renders. It's still a little flaky but we got past this error.
Status: Issue closed
|
kreait/laravel-firebase | 690033562 | Title: notifications are not received despite 200 response code
Question:
username_0: Hello All,
I am facing one critical issue. I am trying to send notification to multiple tokens but one at time in loop. I am getting code=200 as response but actual notifications are not received by mobile user.(This is happening with some of token from that list)
I checked sending notification from console to those tokens and that is getting delivered. But not when sent via code.
Any suggestion how I can go about this issue?
Having really bad time will this as all notifications are getting delivered
Answers:
username_1: I'm afraid these messages seem have been lost by firebase then - if the API returns a 200 code in return, we have to assume that the message has been successfully delivered, and there's not much we can do code-wise.
Is there a reason why you are sending the messages one by one instead of doing a multicast (https://firebase-php.readthedocs.io/en/5.8.0/cloud-messaging.html#send-messages-to-multiple-devices-multicast)?
username_2: I am also facing the same issue. Am not sure this package exactly working or not.
this is my test code
```
$title = 'My Notification Title';
$body = 'My Notification Body';
// $imageUrl = 'http://lorempixel.com/400/200/';
$notification = Notification::fromArray([
'title' => $title,
'body' => $body,
// 'image' => $imageUrl,
]);
$notification = Notification::create($title, $body);
$changedNotification = $notification
->withTitle('Changed title')
->withBody('Changed body');
// ->withImageUrl('http://lorempixel.com/200/400/');
//return $notification;
$topic = 'a-topic';
$message = CloudMessage::withTarget('topic', $topic)
->withNotification($notification) // optional
//->withData($data) // optional
;
$messaging->send($message);
```
username_1: Since you double-posted, let me double-answer 😅
https://github.com/kreait/laravel-firebase/issues/61#issuecomment-691654035
I'm closing this now because this is most likely nothing that can be fixed from the side of the Laravel package or SDK, but feel free to add further comments or share your findings. 🤞
Status: Issue closed
|
sigp/lighthouse | 517526728 | Title: Apply spec
Question:
username_0: ## Description
Please provide a brief description of the issue.
## Present Behaviour
Describe the present behaviour of the application, with regards to this
issue.
## Expected Behaviour
How _should_ the application behave?
## Steps to resolve
Please describe the steps required to resolve this issue, if known.
Answers:
username_1: Working on it!
Status: Issue closed
|
blairck/claudius | 398614636 | Title: Unable To Select Single Move With Coordinate
Question:
username_0: In this position, the player should be able to select the only legal move "6644" by just typing "44". This is the single legal move, so it should be returned.
```
1 2 3 4 5 6 7 8 9 0
0 b b b b b 0
9 b b b b b 9
8 b . b b b 8
7 b b . b b 7
6 . . b . . 6
5 . . a . . 5
4 a . . a a 4
3 a a a a a 3
2 a a a a a 2
1 a a a a a 1
1 2 3 4 5 6 7 8 9 0
Enter a move: 44
Unknown or invalid move, try again
```
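A hypothetical sketch of the disambiguation being requested (names are illustrative, not taken from Claudius itself): if the typed coordinate matches the tail of exactly one legal move, return that move.
```python
# Illustrative only: resolve a partial coordinate against the legal moves.
def resolve_move(user_input, legal_moves):
    matches = [m for m in legal_moves if m.endswith(user_input)]
    return matches[0] if len(matches) == 1 else None  # None -> ask again

print(resolve_move("44", ["6644"]))  # -> "6644"
```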
Status: Issue closed |
CreateJS/PreloadJS | 285289472 | Title: Priority asset loading.
Question:
username_0: Are there any plans to support **asset loading with priority**?
* Files or Manifests having high priority should be loaded/fetched before low-priority ones.
Answers:
username_1: We plan on revising the LoadQueue to ensure that manifests respect priority, `maxConnections` and `maintainOrder`. |
aizatto/issues | 611809654 | Title: Power button pressed during wake transition after
Question:
username_0: ```
Power button pressed during wake transition after 228992 ms.
Failure code:: 0xd47ca481 00000032
================================================================
Date/Time: 2020-05-04 19:39:48 +0800
OS Version: ??? ??? (Build ???)
Architecture: x86_64
Report Version: 29
Data Source: Stackshots
Shared Cache: 0x8241000 08E18162-3927-3CAC-BD2B-FE419A6EF0A9
Event: Sleep Wake Failure
Duration: 0.00s
Steps: 1
Boot args: chunklist-security-epoch=0 -chunklist-no-rev2-dev
Time Awake Since Boot: 2000s
Process: kernel_task [0]
UUID: AB0AA7EE-3D03-3C21-91AD-5719D79D7AF6
Architecture: x86_64
Version: Darwin Kernel Version 19.4.0: Wed Mar 4 22:28:40 PST 2020; root:xnu-6153.101.6~15/RELEASE_X86_64
Footprint: 53.38 MB
Start time: 2020-05-04 19:39:48 +0800
End time: 2020-05-04 19:39:48 +0800
Num samples: 1 (1)
Thread 0x6b 1 sample (1) priority 95 (base 95)
<IO tier 0>
*1 ??? (<AB0AA7EE-3D03-3C21-91AD-5719D79D7AF6> + 815422) [0xffffff80002c713e] 1
*1 ??? (<AB0AA7EE-3D03-3C21-91AD-5719D79D7AF6> + 1537116) [0xffffff800037745c] 1
*1 ??? (<AB0AA7EE-3D03-3C21-91AD-5719D79D7AF6> + 1317935) [0xffffff8000341c2f] 1
*1 ??? (<AB0AA7EE-3D03-3C21-91AD-5719D79D7AF6> + 1324017) [0xffffff80003433f1] 1
*1 ??? (<AB0AA7EE-3D03-3C21-91AD-5719D79D7AF6> + 2388456) [0xffffff80004471e8] 1
Thread 0x6c 1 sample (1) priority 95 (base 95)
<IO tier 0>
*1 ??? (<AB0AA7EE-3D03-3C21-91AD-5719D79D7AF6> + 815422) [0xffffff80002c713e] 1
*1 ??? (<AB0AA7EE-3D03-3C21-91AD-5719D79D7AF6> + 1537116) [0xffffff800037745c] 1
*1 ??? (<AB0AA7EE-3D03-3C21-91AD-5719D79D7AF6> + 1317935) [0xffffff8000341c2f] 1
*1 ??? (<AB0AA7EE-3D03-3C21-91AD-5719D79D7AF6> + 1324017) [0xffffff80003433f1] 1
*1 ??? (<AB0AA7EE-3D03-3C21-91AD-5719D79D7AF6> + 2388456) [0xffffff80004471e8] 1
Thread 0x6e 1 sample (1) priority 95 (base 95)
<IO tier 0>
*1 ??? (<AB0AA7EE-3D03-3C21-91AD-5719D79D7AF6> + 815422) [0xffffff80002c713e] 1
*1 ??? (<AB0AA7EE-3D03-3C21-91AD-5719D79D7AF6> + 2297044) [0xffffff8000430cd4] 1
*1 ??? (<AB0AA7EE-3D03-3C21-91AD-5719D79D7AF6> + 1317935) [0xffffff8000341c2f] 1
*1 ??? (<AB0AA7EE-3D03-3C21-91AD-5719D79D7AF6> + 1324017) [0xffffff80003433f1] 1
*1 ??? (<AB0AA7EE-3D03-3C21-91AD-5719D79D7AF6> + 2388456) [0xffffff80004471e8] 1
Thread 0x6f Thread name "IOServiceTerminateThread" 1 sample (1) priority 81 (base 81)
<IO tier 0>
*1 ??? (<AB0AA7EE-3D03-3C21-91AD-5719D79D7AF6> + 815422) [0xffffff80002c713e] 1
[Truncated]
Graphics: kHW_IntelUHDGraphics630Item, Intel UHD Graphics 630, spdisplays_builtin
Graphics: kHW_AMDRadeonPro5500MItem, AMD Radeon Pro 5500M, spdisplays_pcie_device, 4 GB
Memory Module: BANK 0/ChannelA-DIMM0, 8 GB, DDR4, 2667 MHz, SK Hynix, -
Memory Module: BANK 2/ChannelB-DIMM0, 8 GB, DDR4, 2667 MHz, SK Hynix, -
AirPort: spairport_wireless_card_type_airport_extreme (0x14E4, 0x7BF), wl0: Feb 28 2020 15:31:53 version 9.30.357.3172.16.58.3 FWID 01-29ff5c69
Bluetooth: Version 7.0.4f6, 3 services, 27 devices, 1 incoming serial ports
Network Service: Wi-Fi, AirPort, en0
USB Device: USB 3.1 Bus
USB Device: Apple T2 Bus
USB Device: Composite Device
USB Device: Touch Bar Backlight
USB Device: Touch Bar Display
USB Device: Apple Internal Keyboard / Trackpad
USB Device: Headset
USB Device: Ambient Light Sensor
USB Device: FaceTime HD Camera (Built-in)
USB Device: Apple T2 Controller
Thunderbolt Bus: MacBook Pro, Apple Inc., 55.3
Thunderbolt Bus: MacBook Pro, Apple Inc., 55.3
``` |
airbnb/epoxy | 267087525 | Title: carousel can't scroll to first position after scrolling once
Question:
username_0: [sample link](https://github.com/username_0/CarouselIssue)
**steps to reproduce:**
- run app
- scroll right
- scroll left
the carousel will bounce when it reaches the first position and auto-scroll by an offset **(happens in landscape)**
Answers:
username_1: I believe this is the behavior of the default LinearSnapHelper. Something similar can be observed in the epoxy sample app.
Given that this snap helper is the standard behavior provided by Google, I didn't look into it too much yet, but it would be a good thing to investigate.
In the meantime you can set your own default snap helper to change the snapping behavior, or set it to no snap helper.
```
Carousel.setDefaultGlobalSnapHelperFactory(null) // null to disable, or pass a custom factory if you want
```
username_1: I'm considering using a library such as https://github.com/rubensousa/RecyclerViewSnap as the default, but I want to be a little careful about including other libraries.
username_0: Thanks, it works, but why is SnapHelperFactory an abstract static class?
Status: Issue closed
username_2: https://stackoverflow.com/questions/44766186/linearsnaphelper-doesnt-snap-on-edge-items-of-recyclerview
This sort of explains this problem with LinearSnapHelper
username_3: [https://github.com/rubensousa/RecyclerViewSnap](https://github.com/rubensousa/RecyclerViewSnap )
I used the above mentioned library in the onBind callback, and set the global default to `null`. (else it will start to dance between the 2 snapping strategies)
```
Carousel.setDefaultGlobalSnapHelperFactory(null)
carousel {
id("some_id")
numViewsToShowOnScreen(2.2f)
models(listOfModels)
onBind{ _, recyclerView, _ ->
GravitySnapHelper(Gravity.START)
.attachToRecyclerView(recyclerView)
}
}
```
username_4: Mine is still dancing... I'm confused
username_5: I forgot to add this `Carousel.setDefaultGlobalSnapHelperFactory(null)` and mine were dancing left and right.
The above solution works. |
Huachao/vscode-restclient | 190882356 | Title: allow refresh from results view
Question:
username_0: It would be nice to allow the `cmd-r` action to trigger the request again when on the results view. Like a replay action.
Answers:
username_1: @username_0 nice suggestion, I will implement the replay function ASAP
Status: Issue closed
username_1: @username_0 you could try the latest version 0.11.0 to verify this fix 😄
username_0: It's working, but it would be nice to have a visual indication that the request is taking place; otherwise, if the response is identical, it's harder to realize it's been completed.
thanks!
username_1: @username_0 nice suggestion, I will consider this carefully |
junegunn/fzf.vim | 158079466 | Title: Command Tags changes working directory in all tabs
Question:
username_0: I'm using a minimal neovim configuration. Steps to reproduce the problem:
- Start neovim
- Check working directory
```
:pwd
/home/sviatoslav/bug
```
- Create new tab:
```
:tabnew
```
- Change local working directory:
```
:lcd some/path
```
- Show tags:
```
:Tags
```
- Switch to previous tab:
```
gT
```
- Check working directory again:
```
:pwd
/home/sviatoslav/bug/some/path
```
This happens both when there is a tags file in the some/path directory, and when it is generated by fzf.vim.
<hr>
- Category
- [ ] Question
- [x] Bug
- [ ] Suggestion
- OS
- [x] Linux
- [ ] Mac OS X
- [ ] Windows
- [ ] Etc.
- Vim
- [ ] Vim
- [x] Neovim
Status: Issue closed
Answers:
username_1: Fixed in https://github.com/username_1/fzf/commit/412c2116556204d64d1f40afeb995cae1c5bfdcb.
Thanks for the report.
username_0: Thank you once again! |
DemocracyClub/electionleaflets | 62025163 | Title: Group leaflets on constituency page by month
Question:
username_0: e.g. https://electionleaflets.org/constituencies/65927/cambridge
Although the leaflets are displayed most recent first, it’s still quite difficult to discern which leaflets are from the current campaign, and which are old ones.
Grouping by campaign (e.g. GE2015, European election 2014 etc) would be ideal, but I guess tricky. Grouping by month spotted (or year even) would be useful.
Answers:
username_1: Is this a stopgap until #33 is done, or will this still be needed then?
username_0: Aha! If “Implement elections” means what I think it means, then yep, this is just a stopgap / subsumed by that ticket. |
DataDog/datadogpy | 452593999 | Title: API Calls Failing Due to TypeError (0.29.0)
Question:
username_0: Hello,
Since the release of version `0.29.0` two hours ago, I've been encountering the following error for multiple API calls:
```
Traceback (most recent call last):
File "test.py", line 14, in <module>
print (api.Monitor.get_all())
File "/usr/local/lib/python3.7/site-packages/datadog/api/monitors.py", line 57, in get_all
return super(Monitor, cls).get_all(**params)
File "/usr/local/lib/python3.7/site-packages/datadog/api/resources.py", line 181, in get_all
return APIClient.submit('GET', cls._resource_name, api_version, **params)
File "/usr/local/lib/python3.7/site-packages/datadog/api/api_client.py", line 161, in submit
response_obj['response_headers'] = response_headers
TypeError: list indices must be integers or slices, not str
```
Currently using Python version 3.7.2. So far, I have encountered this when using `api.Monitor.get_all()` and `api.Downtime.get_all()`. When I revert to `0.28.0`, I no longer see these errors for these API calls.
Answers:
username_1: We have experienced the same `TypeError: list indices must be integers, not str` error when calling `api.Monitor.get_all()` on Python 2.7.5.
username_2: I'm also having this issue using Python 3.5. The issue appears to come from PR #378 - the response object appears to be decoded as a list and the response headers are added as though it's a dict.
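A minimal sketch of the failure mode username_2 describes, with illustrative values rather than the actual datadogpy internals: when the decoded JSON body is a list, the dict-style assignment seen at `api_client.py` line 161 raises exactly this error.
```python
# The API returned a JSON array, so the decoded body is a Python list...
response_obj = [{"id": 1}, {"id": 2}]
response_headers = {"Content-Type": "application/json"}

# ...but the headers are attached as if the body were a dict:
response_obj["response_headers"] = response_headers
# TypeError: list indices must be integers or slices, not str
```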
Status: Issue closed
username_3: Hey @username_0, @username_1 and @username_2 I've just released version 0.29.1 of the datadogpy library that should resolve this issue. Thanks for the detailed issue report and the investigation. Let me know if you continue to experience any issues here. |
randombit/botan | 195327506 | Title: botan 1.10 - amalgamation build for arm generates c++11 code
Question:
username_0: Hi,
I use botan 1.10 on Mac, Windows and Linux. On Linux the GCC compiler used is v4.7.2. When I create an amalgamation build for 32- and 64-bit CPU architectures it works fine and generates code with no C++11 functions:
`configure.py --cc=gcc --cpu=x86_32 --no-autoload --gen-amalgamation --enable-modules=rsa,eme_pkcs,emsa3,emsa4,sha1,auto_rng,cbc,kdf2,asm_engine,pthreads,unix_procs`
but when I use it to generate the amalgamation build for arm
`configure.py --cc=gcc --cpu=armv7-a --no-autoload --gen-amalgamation --enable-modules=rsa,eme_pkcs,emsa3,emsa4,sha1,auto_rng,cbc,kdf2,asm_engine,pthreads,unix_procs`
It generates a file with C++11 functions like std::to_string() and compilation fails. I also get a plethora of other errors like:
```
botan_all_gcc_arm.cpp:20662:13: error: redefinition of ‘std::__cxx11::string Botan::Charset::transcode(const string&, Botan::Character_Set, Botan::Character_Set)’
std::string transcode(const std::string& str,...
```
Answers:
username_1: This should have already been fixed in https://github.com/username_1/botan/commit/632e3478262ac85137266d05dc96b367418f36d4, but this fix has not been part of a new release yet (probably coming shortly). Can you verify the referenced change fixes the problems you are seeing?
username_0: No it doesn't, I get other errors like
```
botan_all_gcc_arm.cpp:20662:13: error: redefinition of ‘std::__cxx11::string Botan::Charset::transcode(const string&, Botan::Character_Set, Botan::Character_Set)’
std::string transcode(const std::string& str,...
```
and the list is so long that the console fails to display them all
username_0: My host OS is Debian 7, it has gcc compiler v4.7.2. I will look into it further and revert back. I hope my configure command is fine.
username_0: I get the redefinition error for every function:
```
error: redefinition of ‘void Botan::HMAC_RNG::add_entropy_source(Botan::EntropySource*)’
void HMAC_RNG::add_entropy_source(EntropySource* src)
```
username_0: Do you know why it is happening?
username_1: No clue, sorry. Maybe try a clean rebuild, or use `nm` to find in which object(s) the symbols are being defined - why are there two copies and where are they coming from?
FWIW while GCC 4.7 probably works it is not tested by CI or developers, if possible upgrade to GCC 4.8 or later.
username_1: Closing as inactive. @username_0 if you are still having issues with latest version feel free to open a new issue.
Status: Issue closed
|
levz0r/gmail-tester | 480443044 | Title: Uses a vulnerable version of googleapis
Question:
username_0: From https://www.npmjs.com/advisories/791
### Overview
Versions of googleapis prior to 38.0.0 are vulnerable to Improper Authorization. Setting credentials to one client may apply to all clients which may cause requests to be sent with the incorrect credentials.
### Remediation
Upgrade to version 38.0.0.
### Resources
[GitHub Issue](https://github.com/googleapis/google-api-nodejs-client/issues/1594)
Status: Issue closed
Answers:
username_0: Wow, that was a super quick response- MIND. BLOWN! Thank you!!
username_1: Thanks for reporting @username_0! |
JohnSnowLabs/spark-nlp | 490273674 | Title: com.amazonaws.services.s3.model.AmazonS3Exception: Unauthorized
Question:
username_0: ## Description
Unable to download a pretrained pipeline. Getting the following error.
```python
Py4JJavaError: An error occurred while calling z:com.johnsnowlabs.nlp.pretrained.PythonResourceDownloader.getDownloadSize.
: com.amazonaws.services.s3.model.AmazonS3Exception: Unauthorized (Service: Amazon S3; Status Code: 401; Error Code: 401 Unauthorized; Request ID: null; S3 Extended Request ID: null), S3 Extended Request ID: null
```
## Steps to Reproduce
```python
import pyspark
from sparknlp.pretrained import PretrainedPipeline
username =
password =
ss = pyspark.sql.SparkSession.builder\
.master('local[4]')\
.config('spark.app.name', 'spark-nlp-demo') \
.config('spark.driver.memory','8G') \
.config('spark.sql.crossJoin.enabled','true') \
.config('spark.ui.port', '8889') \
.config(
'spark.driver.extraJavaOptions',
f'-Dhttp.proxyHost=proxy.mlp.com -Dhttp.proxyPort=3128 -Dhttp.proxyUser={username} -Dhttp.proxyPassword={<PASSWORD>}' + \
f' -Dhttps.proxyHost=proxy.mlp.com -Dhttps.proxyPort=3128 -Dhttps.proxyUser={username} -Dhttps.proxyPassword={password}'
) \
.config('spark.jars.packages', 'JohnSnowLabs:spark-nlp:2.2.1') \
.getOrCreate()
pipeline = PretrainedPipeline('explain_document_dl', 'en')
```
## Your Environment
* `pyspark==2.4.3` installed with `pip`
* `spark-nlp==2.2.1` installed with `pip`
* behind a proxy
Status: Issue closed |
edoburu/django-private-storage | 199393563 | Title: TypeError: allow_staff() takes exactly 1 argument (2 given)
Question:
username_0: ## What's Wrong
It appears that allow_staff() is being called from within PrivateStorageView as a member function, so self is being passed along to the function in addition to the private_file parameter.
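For illustration, here is the general Python pitfall being described; this is a generic sketch, not django-private-storage's actual code. A plain one-argument function assigned as a class attribute becomes a bound method, so calls receive `self` as an extra argument.
```python
def allow_staff(private_file):       # expects exactly one argument
    return True

class View:
    can_access = allow_staff         # becomes a bound method on instances
    # can_access = staticmethod(allow_staff) would avoid the extra `self`

View().can_access("agenda.pdf")
# TypeError: allow_staff() takes 1 positional argument but 2 were given
```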
## Steps to duplicate
To replicate the problem, check out https://github.com/username_0/privstor
```
$ git clone https://github.com/username_0/privstor
$ mkvirtualenv privstor
$ cd privstor ; pip install -r requirements.txt
$ python manage.py migrate
$ python manage.py createsuperuser
$ python manage.py runserver
```
- Navigate to http://localhost:8000/admin/meetings/meeting/
- Add a meeting, and upload a PDF file for the 'Agenda' field
- Save and continue editing, then try & access the pdf through the link

Answers:
username_1: Thanks for reporting this! I've released version 1.0.2, where this issue is fixed (among a few others I found in Python 3)
Status: Issue closed
|
aws/aws-cdk | 1055091241 | Title: (aws-lambda-nodejs): cannot read directory "asset-input/index.ts": operation not permitted
Question:
username_0:
```
2 errors
/Users/francoislevasseur/Documents/botpress-root/infra/commons/node_modules/@aws-cdk/core/lib/asset-staging.ts:398
throw new Error(`Failed to bundle asset ${this.node.path}, bundle output is located at ${bundleErrorDir}: ${err}`);
^
Error: Failed to bundle asset Cloud-Service-NLU/com.amazonaws.cdk.custom-resources.s3file-provider/s3file-on-event/Code/Stage, bundle output is located at /Users/francoislevasseur/Documents/botpress-root/infra/cloud/nlu/cdk.out/bundling-temp-77552b2d484d3e17df650ce05f2f47f950bd62aec24810c9c288be76885fb977-error: Error: docker exited with status 1
at AssetStaging.bundle (/Users/francoislevasseur/Documents/botpress-root/infra/commons/node_modules/@aws-cdk/core/lib/asset-staging.ts:398:13)
at AssetStaging.stageByBundling (/Users/francoislevasseur/Documents/botpress-root/infra/commons/node_modules/@aws-cdk/core/lib/asset-staging.ts:246:10)
at stageThisAsset (/Users/francoislevasseur/Documents/botpress-root/infra/commons/node_modules/@aws-cdk/core/lib/asset-staging.ts:137:35)
at Cache.obtain (/Users/francoislevasseur/Documents/botpress-root/infra/commons/node_modules/@aws-cdk/core/lib/private/cache.ts:24:13)
at new AssetStaging (/Users/francoislevasseur/Documents/botpress-root/infra/commons/node_modules/@aws-cdk/core/lib/asset-staging.ts:162:44)
at new Asset (/Users/francoislevasseur/Documents/botpress-root/infra/commons/node_modules/@aws-cdk/aws-s3-assets/lib/asset.ts:68:21)
at AssetCode.bind (/Users/francoislevasseur/Documents/botpress-root/infra/commons/node_modules/@aws-cdk/aws-lambda/lib/code.ts:183:20)
at new Function (/Users/francoislevasseur/Documents/botpress-root/infra/commons/node_modules/@aws-cdk/aws-lambda/lib/function.ts:338:29)
at new NodejsFunction (/Users/francoislevasseur/Documents/botpress-root/infra/commons/node_modules/@aws-cdk/aws-lambda-nodejs/lib/function.ts:53:5)
at new S3FileProvider (/Users/francoislevasseur/Documents/botpress-root/infra/commons/src/s3/index.ts:92:23)
Subprocess exited with error 1
```
This error only occurs on my PC; my colleagues have no problem.
### Reproduction Steps
As stated above, this error only occurs on my PC; my colleagues have no problem.
### What did you expect to happen?
The error not being thrown.
### What actually happened?
There's an error.
### CDK CLI Version
1.130.0
### Framework Version
_No response_
### Node.js Version
v16.9.1
### OS
macos Big Sur 11.5.2
### Language
Typescript
### Language Version
3.9.10
### Other information
Many thanks in advance!
Answers:
username_0: Found 2 solutions:
1 - Move my project into the home folder. It is weird, but it works.
2 - Install esbuild so you don't need to fall back on Docker.
username_0: I'll close the issue since I found workarounds
thank you anyway
Status: Issue closed
|
opencv/opencv | 330456301 | Title: cv::Mat::convertTo(…) undesired new behavior
Question:
username_0:
```
(lldb)
```
I understand that this can be worked around in code; however, I don't think that it's desired behaviour.
If this looks like an issue to you, please also inspect the other changes that have been included in the same changeset.
Answers:
username_1: Accessing an empty `cv::Mat` should be avoided, including its type, cols, rows, etc.
It is basically done to avoid typos and the use of "just declared" variables like these:
```
Mat desc1, desc2;
akaze->detectAndCompute(img, noArray(), keypoints1, desc2);
... try using desc1 as input ...
```
There is no sense in processing "empty inputs"; usually it is an error. See the comments in this issue: #8300
IMHO, the mentioned patch should raise an "early problem" **exception** instead of passing it forward (especially with an incorrect output state).
In your example, just add this check after `detectAndCompute()`:
```
if (keypoints1.empty())
{
    // warn user / write log about a black/blurred/unsuitable image
    // go to the next frame/image
}
```
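For anyone hitting the same pattern from Python, a rough `cv2` analogue of the guard above; the image path is hypothetical and this is not part of the patch under discussion.
```python
import cv2

img = cv2.imread("frame.png")        # hypothetical input frame
akaze = cv2.AKAZE_create()
keypoints1, desc1 = akaze.detectAndCompute(img, None)
if not keypoints1:                   # empty -> desc1 is None as well
    print("black/blurred/unsuitable image, skipping frame")
```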
username_0: OK, but wouldn't it make the spirit of the following code now _illegal_, as `cols` and `type` are used in an empty matrix?
```cpp
cv::Mat m = cv::Mat::zeros(0, 15, CV_32F); // an "empty matrix" representing a table with 15 columns of CV_32F type
// somewhere further in a loop
{
cv::Mat row = cv::Mat::ones(1, 15, CV_32F); // some code that generates a row
m.push_back(row);
}
std::cout << m << std::endl;
```
(though this code still compiles and works).
username_2: Would it be a problem if I use cuda::GpuMat::convertTo()? I ran into a problem where I can't convert a CV_32F GpuMat to CV_8U in the way @username_0 described. I always get an error when I do this:
`buffer.buff.convertTo(buffer.buff, CV_8U, 255, stream);`
The error is
`OpenCV Error: Gpu API call (unspecified launch failure) in cv::cudev::grid_transform_detail::TransformDispatcher<true, Policy>::call` |
aquasecurity/trivy | 518311165 | Title: Go library: downloaded zip file too large
Question:
username_0: Hi, I am trying to use trivy as a library to build a Go application to use in a github action.
However:
```
user@laptop:~$ go mod vendor
go: finding github.com/aquasecurity/trivy v0.1.7
go: finding github.com/aquasecurity/trivy v0.1.7
go: downloading github.com/aquasecurity/trivy v0.1.7
github.com/ironpeakservices/action-containerscan imports
github.com/aquasecurity/trivy: downloaded zip file too large
```
Any ideas?
Answers:
username_1: @username_0 I'm trying to fix it. Thanks.
Status: Issue closed
username_1: For now, you need to specify the `master` branch.
```
go get github.com/aquasecurity/trivy@master
```
I will release a new version in the near future. After that, you can specify the version. |
Lienol/openwrt-package | 655094495 | Title: Very glad to see passwall back, but the current version cannot add nodes via link or subscription; adding them manually works fine.
Question:
username_0: 
Answers:
username_1: I don't have this problem.
username_0: 
username_2: Looks like your uci library has poor compatibility. Try the commands below and see whether you can reproduce the `string expected, got boolean` error:
```shell
ubus call uci add '{"config": "passwall", "type": "nodes", "name": "cfg123456"}'
ubus call uci set '{"config": "passwall", "section": "cfg123456", "values": {"test": true}}'
ubus call uci set '{"config": "passwall", "section": "cfg123456", "values": {"test": "0"}}'
ubus call uci changes
username_0: I just tried the commands; the problem persists.
username_2: Show me the output.
username_0: (screenshot)
username_2: I mean, what output do the commands above actually print?
username_0: (screenshot of the command output)
username_2: I don't get it then; let's just try to patch it. (#488)
username_2: Try this simple temporary fix:
```shell
sed -i 's/\(result.* = allowInsecure_default\)$/\1 and "1" or "0"/' /usr/share/passwall/subscribe.lua
```
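In effect, the patched line coerces the Lua boolean default into a uci-safe string before it is written (reconstructed from the sed pattern above, not the verbatim subscribe.lua source):
```lua
-- "1"/"0" strings are accepted by uci even where booleans are not
result.allowInsecure = allowInsecure_default and "1" or "0"
```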
username_0: After applying the temporary fix, it parses successfully!
username_2: That still leaves the Trojan-Go subscription with the same problem.
username_0: Looking forward to the fix!
username_2: This bug could be reported upstream to the openwrt/uci source.
Applying the full patch I submitted fixes it.
Status: Issue closed
|
ioos/compliance-checker | 596881953 | Title: CF flag_values check
Question:
username_0: It seems that there's a bit of a discrepancy testing for the type of a variable's `flag_values` when using `xarray` vs `netCDF4-python`. I can encode as a `numpy` array with type `|S1`:
```
In [26]: ds2["depthflag"].attrs["flag_values"] = np.array([b'S', b'S'], dtype='|S1')
In [27]: ds2["depthflag"]
Out[27]:
<xarray.DataArray 'depthflag' (station: 2)>
array([b'S', b'S'], dtype='|S1')
Coordinates:
* station (station) int32 39 41
time (station) datetime64[ns] ...
longitude (station) float32 ...
latitude (station) float32 ...
Attributes:
long_name: Bottom Depth Flag
flag_values: [b'S' b'S']
flag_meanings: Measured_at_Station Estimated_from_GTOPO30_Bathymetric_Da...
```
but when tested with the Compliance Checker, I get an error:
```
§3.5 Flags
* depthflag's flag_values must be an array of values not <class 'list'>
```
Looking at how `netCDF4-python` reads in this data type (since that API is used to load the NetCDF file being tested), it looks like it's being converted to a plain list:
```
In [4]: ds.variables["depthflag"]
Out[4]:
<class 'netCDF4._netCDF4.Variable'>
|S1 depthflag(station, string1)
long_name: Bottom Depth Flag
flag_values: ['S', 'G']
flag_meanings: Measured_at_Station Estimated_from_GTOPO30_Bathymetric_Database
coordinates: time latitude longitude
unlimited dimensions:
current shape = (2, 1)
filling on, default _FillValue of used
In [5]: getattr(ds.variables["depthflag"], "flag_values")
Out[5]: ['S', 'G']
In [6]: type(getattr(ds.variables["depthflag"], "flag_values"))
Out[6]: list
```
Additional investigation is needed into what the `|S1` type can be represented as, and maybe a workaround will be implemented.
Pinging @username_1, you might find this interesting.
Answers:
username_1: Maybe not a compliance checker issue since we don't have any control over how data is generated, but interesting nonetheless.
username_0: ### EDIT
After a bit more investigation, I found this in the `netCDF4-python` code:
https://github.com/Unidata/netcdf4-python/blob/06e58422204cc77946fa21effd31ffb9421bd139/netCDF4/_netCDF4.pyx#L1560-L1577
Lines:
```
if value_arr.dtype.char in ['S','U']:
# force array of strings if array has multiple elements (issue #770)
N = value_arr.size
if N > 1: force_ncstring=True
if not is_netcdf3 and force_ncstring and N > 1:
string_ptrs = <char**>PyMem_Malloc(N * sizeof(char*))
if not string_ptrs:
raise MemoryError()
try:
strings = [_strencode(s) for s in value_arr.flat]
for j in range(N):
if len(strings[j]) == 0:
strings[j] = _strencode('\x00')
string_ptrs[j] = strings[j]
issue485_workaround(grp._grpid, varid, attname)
ierr = nc_put_att_string(grp._grpid, varid, attname, N, string_ptrs)
finally:
PyMem_Free(string_ptrs)
```
You'll notice the list comprehension which creates a list of the attributes, and then the for-loop that assigns them to the recently-allocated `string_ptrs` `char` array. Next, `nc_put_att_string()` assigns the attribute to the variable. That's defined here:
https://github.com/Unidata/netcdf-c/blob/e4003be502b196fe0e2a5a40140f0187cbffc2c6/libdispatch/dattput.c#L75-L83
Lines:
```
nc_put_att_string(int ncid, int varid, const char *name,
size_t len, const char** value)
{
NC* ncp;
int stat = NC_check_id(ncid, &ncp);
if(stat != NC_NOERR) return stat;
return ncp->dispatch->put_att(ncid, varid, name, NC_STRING,
len, (void*)value, NC_STRING);
}
```
The attribute will thus be encoded as a Python `list` type, not an array with `|S1` data type in `numpy`, even though `numpy` arrays are used to represent most other data types. We'll have to develop a workaround in regards to type checks when they come about. |
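On the checker side, one possible workaround is to coerce the attribute to an ndarray before the type check; a minimal sketch (a hypothetical helper, not current Compliance Checker code):
```python
import numpy as np

def is_valid_flag_values(flag_values):
    """Accept numpy arrays as well as the plain Python lists that
    netCDF4-python returns for NC_STRING attributes."""
    arr = np.asarray(flag_values)
    # expect a one-dimensional array of strings/bytes or numbers
    return arr.ndim == 1 and arr.dtype.kind in ("S", "U", "i", "u", "f")
```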
Azure/azure-powershell | 309513726 | Title: [Compute] New-AzureRmVm cmdlet needs improvement.
Question:
username_0:
### Description
1. If not loged in, the cmdlet reports a confusing error:
```Powershell
PS C:\Users\vlashch> New-AzureRmVm -Name $VmName -ImageName $ImageId -Credential $VmCredential -Location $Location
New-AzureRmVm : Object reference not set to an instance of an object.
At line:1 char:1
+ New-AzureRmVm -Name $VmName -ImageName $ImageId -Credential $VmCreden ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : CloseError: (:) [New-AzureRmVM], NullReferenceException
+ FullyQualifiedErrorId : Microsoft.Azure.Commands.Compute.NewAzureVMCommand
```
2. The cmdlet parameter name **ImageName** is misleading and should be renamed to **ImageId**.
3. To successfully find an image the correct **-Location** parameter should be provided. So, the Location param should be mandatory in case of creating a VM from an image.
### Script/Steps for Reproduction
```powershell
```
### Module Version
```powershell
Get-Module -Name AzureRM -ListAvailable
Directory: C:\Program Files\WindowsPowerShell\Modules
ModuleType Version Name ExportedCommands
---------- ------- ---- ----------------
Script 5.6.0 AzureRM
Script 4.4.1 AzureRM
```
### Environment Data
[Truncated]
Name Value
---- -----
PSVersion 5.1.16299.251
PSEdition Desktop
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0...}
BuildVersion 10.0.16299.251
CLRVersion 4.0.30319.42000
WSManStackVersion 3.0
PSRemotingProtocolVersion 2.3
SerializationVersion 1.1.0.1
```
### Debug Output
```
```
Answers:
username_1: 1) Update error when not signed in to conform to the rest of our cmdlets "run Connect-AzureRmAccount" @username_3
2) @username_2: spec out the ImageName parameter.
3) @username_2: spec it out and send it to @username_3.
Put it //build for both spec and coding.. @username_3 don't let twitchy keep you down
username_2: Ok, here it goes.
2. Rename `-ImageName` to `-Image`, and alias to `-ImageName`. Next, allow `-Image` to properly take a resource id for an image. Fail gracefully, otherwise.
3. Due to the design of (2), this is unneeded. If an image cannot be found, or if it is not specified with the fully qualified resource id, then we should fail. Please find me if this needs more discussion.
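For illustration, how (2) might look in use; the resource id below is a placeholder, and `-ImageName` would keep working as an alias:
```powershell
# Hypothetical usage after the proposed rename.
New-AzureRmVm -Name $VmName `
    -Image "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Compute/images/<image-name>" `
    -Credential $VmCredential `
    -Location $Location
```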
username_3: created issue https://github.com/Azure/azure-powershell/issues/6081 to track the error message fix required on the Compute service side.
username_4: Hello,
I had the same error; it was solved with AzureRM PowerShell version 5.7.0.
regards |
certbot/certbot | 251466632 | Title: how to upgrade certbot from a package instead of source
Question:
username_0: ## My operating system is (include version):
Ubuntu 16.04,
nginx version: nginx/1.10.3 (Ubuntu)
## I installed Certbot with (certbot-auto, OS package manager, pip, etc):
I installed Certbot from source last year (v0.9.3), i.e.
`sudo git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt`
`cd /opt/letsencrypt`
`./certbot-auto`
## I ran this command and it produced this output:
I want to upgrade Certbot with
`./certbot-auto`
but output the following:
```
Upgrading certbot-auto 0.9.3 to 0.17.0...
Replacing certbot-auto...
[sudo] password for xxxx:
Creating virtual environment...
rm: cannot remove '/home/viking/.local/share/letsencrypt/bin/activate': Permission denied
.....
```
Then I ran
`sudo ./certbot-auto`
## Certbot's behavior differed from what I expected because:
No response in a long time.
Now I don't want this source build. I'd like to install/upgrade it from a package.
How to do?
## Here is a Certbot log showing the issue (if available):
###### Logs are stored in `/var/log/letsencrypt` by default. Feel free to redact domains, e-mail and IP addresses as you see fit.
```
2017-08-20 04:49:04,446:DEBUG:certbot.main:certbot version: 0.17.0
2017-08-20 04:49:04,446:DEBUG:certbot.main:Arguments: []
2017-08-20 04:49:04,446:DEBUG:certbot.main:Discovered plugins: PluginsRegistry(PluginEntryPoint#apache,PluginEntryPoint#manual,PluginEntryPoint#nginx,PluginEntryPoint#null,PluginEntryPoint#standalone,PluginEntryPoint#webroot)
2017-08-20 04:49:04,461:DEBUG:certbot.log:Root logging level set at 20
2017-08-20 04:49:04,462:INFO:certbot.log:Saving debug log to /var/log/letsencrypt/letsencrypt.log
2017-08-20 04:49:04,462:DEBUG:certbot.plugins.selection:Requested authenticator None and installer None
2017-08-20 04:49:04,804:WARNING:certbot.plugins.util:Failed to find executable apache2ctl in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
2017-08-20 04:49:04,805:DEBUG:certbot.plugins.disco:No installation (PluginEntryPoint#apache): Cannot find Apache control command apache2ctl
Traceback (most recent call last):
File "/home/viking/.local/share/letsencrypt/local/lib/python2.7/site-packages/certbot/plugins/disco.py", line 130, in prepare
self._initialized.prepare()
File "/home/viking/.local/share/letsencrypt/local/lib/python2.7/site-packages/certbot_apache/configurator.py", line 173, in prepare
'Cannot find Apache control command {0}'.format(restart_cmd))
NoInstallationError: Cannot find Apache control command apache2ctl
2017-08-20 04:49:04,833:DEBUG:certbot.plugins.selection:No candidate plugin
2017-08-20 04:49:04,833:DEBUG:certbot.plugins.selection:Selected authenticator None and installer None
```
## Here is the relevant nginx server block or Apache virtualhost for the domain I am configuring:
Answers:
username_1: Try installing certbot from Jessie Backports. You may enable the repository by following the instructions here: https://backports.debian.org/Instructions/
Afterward, install certbot with the following command: `sudo apt-get -t jessie-backports install certbot python-certbot-apache`
username_2: Actually on Ubuntu you'll need to enable the certbot PPA, instructions can be found at https://certbot.eff.org
Additionally you can probably just remove that letsencrypt directory in .local if you're not planning to use to source built version. Is it possible that pip put it there? Could pip remove it?
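For reference, a sketch of the PPA route on Ubuntu 16.04 (commands assumed from the certbot.eff.org instructions of that era; check the site for the current steps):
```
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install certbot python-certbot-nginx
```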
username_2: Also @username_1 I've noticed that you've been helping answer questions on github! Awesome!
Want to join our IRC chatroom or give me an email at <EMAIL> to coordinate? That way we can maybe split responsibility or I can help you find issues to help people with?
username_1: @username_2 that would be great!
(Now back to the real problem before this becomes a full blown off-topic conversation :D)
@username_0 try @username_2's suggestion. Hope it works!
Status: Issue closed
|
petabridge/akka-bootcamp | 736903562 | Title: Lesson 2.2: what is more common: ReceiveActor, or Untyped Actor with pattern matching?
Question:
username_0: I'm just starting with Akka and have a question regarding whether to use `ReceiveActor`s or `UntypedActor`s, as I've seen them both used.
My understanding is that, as of C# 7 introducing pattern matching, `UntypedActor`s are favoured. The [docs mention](https://getakka.net/articles/actors/untyped-actor-api.html) "UntypedActor API is recommended for C# 7 users.", and I saw a talk by <NAME> at NDC Sydney 2016 [also mentioning (at 27m05s)](https://www.youtube.com/watch?v=ozelpjr9SXE#t=27m05s) pattern matching would be the way to go once it arrived.
This lesson does not seem to echo that sentiment though; are there any reasons to prefer `ReceiveActor` over `UntypedActor`?
I've searched around to see if there is any consensus, but haven't found anything. Issue #236 also asked the question, but didn't receive any response. |
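For context, a minimal sketch contrasting the two styles in C# 7+ (illustrative only, not code from the bootcamp lessons):
```csharp
using Akka.Actor;

// ReceiveActor style: register typed handlers in the constructor.
public class EchoReceiveActor : ReceiveActor
{
    public EchoReceiveActor()
    {
        Receive<string>(msg => Sender.Tell($"echo: {msg}"));
    }
}

// UntypedActor style: pattern-match in OnReceive (C# 7 switch patterns).
public class EchoUntypedActor : UntypedActor
{
    protected override void OnReceive(object message)
    {
        switch (message)
        {
            case string msg:
                Sender.Tell($"echo: {msg}");
                break;
            default:
                Unhandled(message);
                break;
        }
    }
}
```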
kubernetes/kubernetes | 315713766 | Title: Unable to release resource in resource-quota when scale down deployment
Question:
username_0: **What happened**:
Resource is not released in resource quota when pod is deleted.
Step 1: Check resource limit is full

step 2: Scale down deployment to 0, and pod is confirmed to be killed, but resource is not released

so that I can not create the new resources due the quota exceed.
**What you expected to happen**:
When the pod was deleted, it means the resources was released, so the namespace should have available resource for new request.
**How to reproduce it (as minimally and precisely as possible)**:
**Anything else we need to know?**:
**Environment**:
- Kubernetes version (use `kubectl version`): k8s 1.9.1
- Cloud provider or hardware configuration:
- OS (e.g. from /etc/os-release):
- Kernel (e.g. `uname -a`): ubuntu 16.04
- Install tools:
- Others:
Answers:
username_1: /sig scalability
/sig scheduling
username_0: It seems the issue cannot be reproduced in my environment; I'll close it for now.
Status: Issue closed
|
Exawind/nalu-wind | 643917224 | Title: Regression test for simple fixed wing actuator pathway
Question:
username_0: The NGP team (@alanw0, @overfelt, @jrood-nrel, and myself) have an upcoming ECP Q4 milestone to demonstrate a CPU-GPU simulation of actuator line simulations running on Summit. To achieve this, we will need to figure out that all the field syncs are working correctly on GPU builds.
To help the NGP team (and possibly members of STK team) to help with debugging we would like to have two regression tests in the Nalu-Wind suite: 1. simple actuator fixed wing (without OpenFAST dependency), and 2. convert NREL-5MW actuator line case to NGP. The second test will be the basis for the demonstration simulation that we will perform for the Q4 milestone.
To help the NGP team, @username_2 and @username_1 can you please create a new regression test for the non-OpenFAST version of actuator line pathway? From what I can tell #577 only introduced a unit test and not a regression test.
Answers:
username_1: @username_0 as written the current fixed wing implementation is host only. Is maximizing GPU execution for that section of code part of this issue, or just a nice bonus if we have time to pull it off?
username_0: @username_1 As written maximizing GPU execution is just a nice bonus if we have time to pull it off.
username_2: Hi @username_0 -- Yes, I can set up a fixed wing regression case based on some test cases I already have.
Lawrence
Status: Issue closed
|
CMPUT301W17T11/FeelTrip | 215302563 | Title: how to store Calender format date in ES
Question:
username_0: need to figure out how to store Calendar type in ES
Status: Issue closed
Answers:
username_1: ES now stores a Date type as a long, which is a subset of the Calendar type. The full conversion algorithms that we can apply for compatibility are now trivial and as follows:
```java
public static Calendar toCalendar(Date date) {
    Calendar cal = Calendar.getInstance();
    cal.setTime(date);
    return cal;
}

public static Date toDate(Calendar cal) {
    Date date = cal.getTime();
    return date;
}
```
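A quick round-trip illustration of the converters:
```java
Calendar now = Calendar.getInstance();
Date asDate = toDate(now);             // the form that gets stored (as a long) in ES
Calendar restored = toCalendar(asDate);
```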
dotnet/installer | 967709145 | Title: None
Question:
username_0: @dotnet/dnceng
Answers:
username_1: Various files need to be added to your SignCheckExclusionFile.txt: apphost.exe, comhost.dll, singlefilehost.exe. This will need to be done in each of the release + main branches so that these errors go away.
username_1: See runtimes for how to get these files excluded: https://github.com/dotnet/runtime/blob/main/eng/SignCheckExclusionsFile.txt
Status: Issue closed
username_1: This has been fixed |
farhaven/wireless | 242103381 | Title: ioctl: Resource temporarily unavailable
Question:
username_0: Hi,
Thanks for writing `wireless`!
If wifi is already connected and wireless is re-ran, what's expected to happen?
```
$ doas wireless .config/wireless.conf
doas (<EMAIL>) password:
Configured networks:
"network" "username_1"
"username_1" "network"
wireless: ioctl: Resource temporarily unavailable
```
Answers:
username_1: That should work, yeah. It does work on my box with an `iwm`. Does the same error appear if you do an `ifconfig $device scan` while the wifi is connected?
username_0: The scan works, but takes quite awhile - about 30 seconds.
It seems like `doas wireless .config/wireless.conf` also takes about the same amount of time.
username_1: I just pushed a change that might fix this by doing an `ifconfig $dev down && ifconfig $dev up` before scanning. Seems to be necessary with -current on my laptop. Does it work for you?
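i.e., roughly this sequence (interface name is illustrative):
```
ifconfig iwm0 down
ifconfig iwm0 up
ifconfig iwm0 scan
```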
username_2: Hello,
same problem here with -current and version in ports (v3), iwn device.
I do confirm that, after applying the modifications as per commit "Do an ifconfig down/up dance before scanning", it works flawlessly even when the wifi is already connected.
Do you mind to bump a new version?
username_1: Yup, sounds good. I'll close this bug later tonight when I released a new version. I'll also push an updated port to ports@.
Status: Issue closed
username_1: I just created a new release and sent a port update to `ports@`.
username_0: I confirm your fix works! So sorry for not follow up sooner! |
nodejs/help | 545344350 | Title: Allocation failure scavenge might not succeed
Question:
username_0: * **Node.js Version**: v10.16.0
* **OS**: macOS 10.15.2 (19C57)
* **Scope (install, code, runtime, meta, other?)**: Runtime/Code
* **Module (and version) (if relevant)**: Unsure
I'm getting a crash on Node.js. It took a while to get to this crash, and the error is not very descriptive about what is going on here.
What I also find weird is that, toward the end of the logs below, it was only printing odd indexes; `20100688` was the last even number printed among the indexes. So I'm not sure if one of those child processes failed or what happened. It didn't actually run the `.on("close"` handler until the very end though, which is weird.
Any ideas of what is going on here?
**Command Ran**
```
node index.js 2
```
**index.js**
```js
const StellarHDWallet = require("stellar-hd-wallet");
const { spawn } = require("child_process");
const mnemonic = StellarHDWallet.generateMnemonic();
const wallet = StellarHDWallet.fromMnemonic(mnemonic);
const args = [...process.argv].slice(2);
const runNumber = parseInt(args[0]);
let items = [];
console.log(mnemonic);
for (let i = 0; i < runNumber; i++) {
const item = spawn("node", ["process.js", mnemonic.split(" ").join("-"), runNumber, i]);
item.stdout.on("data", (data) => {
console.log(data.toString().trim());
});
item.on("close", (code) => {
console.log(`${i} closed`);
items.forEach((a) => a.kill());
});
items.push(item);
}
```
**process.js**
```js
const StellarHDWallet = require("stellar-hd-wallet");
const args = [...process.argv].slice(2);
const mnemonic = args[0].split("-").join(" ");
const wallet = StellarHDWallet.fromMnemonic(mnemonic);
const runNumber = parseInt(args[2]);
[Truncated]
20113057
20113059
<--- Last few GCs --->
[25406:0x102802000] 2772590 ms: Mark-sweep 1394.2 (1429.2) -> 1390.3 (1429.2) MB, 1812.1 / 0.0 ms (average mu = 0.115, current mu = 0.048) allocation failure scavenge might not succeed
[25406:0x102802000] 2774476 ms: Mark-sweep 1394.2 (1429.2) -> 1390.3 (1429.2) MB, 1771.6 / 0.0 ms (average mu = 0.088, current mu = 0.061) allocation failure scavenge might not succeed
<--- JS stacktrace --->
==== JS stack trace =========================================
0: ExitFrame [pc: 0x224ba25be3d]
1: StubFrame [pc: 0x224ba260c95]
Security context: 0x201dbac1e6e9 <JSObject>
2: toLowerCase [0x201dbac108c1](this=0x201ddfd12c29 <String[6]: sha512>)
3: createHmac(aka createHmac) [0x201ddfd19d91] [/Users/charliefish/tmp/node_modules/create-hmac/browser.js:54] [bytecode=0x201d6237d871 offset=7](this=0x201d138826f1 <undefined>,alg=0x201ddfd12c29 <String[6]: sha512>,key=0<KEY> <String...
0 closed
1 closed
```
Answers:
username_1: As explained, `console.log` (and also `process.stdout.write`, which is used internally by `console.log`) is asynchronous. You should use this instead:
```js
require('fs').writeSync(process.stdout.fd, val + ' ' + index + '\n');
```
username_2: I would disagree here – one, this doesn’t properly work on Windows, and two, it’s better to write code so that it handles backpressure correctly.
Additionally, this problem should be far less impactful from Node.js v13.3.0 upwards.
username_1: Sure, in a production code I would suggest using either an async function, or callbacks on `process.stdout.write`
username_0: Would changing it to the following work (since making it async won't act in a blocking way and allow things to be deallocated)?
```js
function loop() {
const val = wallet.getPublicKey(index);
const bLetter = val[1];
if (val.startsWith(`G${bLetter}ABCDE`)) { // Changed ABCDE from something else
done = true;
console.log(val, index);
}
console.log(index);
if (index >= 2147483647) {
console.log("Ran out.");
return;
}
index += runTimes;
if (!done) {
setImmediate(loop)
}
}
```
username_1: Yes, that should work fine. If you still experience crash with the new code, you probably have some other memory leak in your application or in dependencies.
Note that after PR https://github.com/nodejs/node/pull/30710 (released in v13.3.0), even with the sync usage of `console.log` it is much harder for memory leak to appear (still possible, but less probably).
username_2: `setImmediate()` would not be enough to guarantee that writes to stdout have finished, but it would solve any problems coming from the `console.log()` implementation itself.
username_2: One approach that would be “correct” in a sense would be to use `process.stdout.write(...)` instead of `console.log()`, see if it returns `false` during the loop, and if it does, then delay the next call of `loop()` with `process.stdout.on('drain', loop)` rather than `setImmediate()`.
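A rough sketch of that drain-based variant, adapting the loop above (`wallet`, `index`, `runTimes`, and `done` are assumed to exist as in the earlier snippets; `once` is used so listeners don't accumulate):
```js
function loop() {
  const val = wallet.getPublicKey(index);
  // write() returns false when the internal buffer is full
  const ok = process.stdout.write(val + " " + index + "\n");
  index += runTimes;
  if (done || index >= 2147483647) return;
  if (ok) {
    setImmediate(loop);
  } else {
    process.stdout.once("drain", loop); // resume once stdout has flushed
  }
}
```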
username_3: closing as answered, pls reopen if there is anything outstanding.
Status: Issue closed
|
dphily21/jamming | 307101572 | Title: Technical Design
Question:
username_0: This portion of your feature project could have been longer as it should have encompassed the details and logistics of getting this feature up and running.
You did specify that no front end changes would occur but other visual aids such as code snippets could have been useful.
Other important information like what key code we'd need to listen for would have been a great add. |
xforever1313/Chaskis | 476489192 | Title: Add action command handlers
Question:
username_0: Actions can be taken when a user types "/me does something" on an IRC client. If I were to type "/me says hello" the IRC client will usually say to the channel "username_0 says hello" in a different color to represent an action.
Actions are PRIVMSG commands, but with a small twist: the message starts with 0x01, followed by `ACTION`, a space, and the message text, and it closes with 0x01 (the 0x01 bytes are non-printable, so they don't show up in the example below):
```
PRIVMSG #TestSeth :ACTION hello world
```
When sending an action, we need to do this. When handling an action, we also need to do this, but also not mix it up with a standard message.
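A tiny sketch of building that framing (a hypothetical helper, not Chaskis's actual API):
```csharp
// Wraps a message in CTCP ACTION framing: \x01ACTION <message>\x01
public static string MakeActionPayload(string channel, string message)
{
    const char ctcp = '\x01';
    return $"PRIVMSG {channel} :{ctcp}ACTION {message}{ctcp}";
}
```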
Answers:
username_0: This task also includes updating httpserver.
Status: Issue closed
username_0: I believe we can call this issue complete. We have the handlers added and tests added to ensure it all works correctly. Closing. |
api-platform/core | 344909316 | Title: [Feature?] Add decoded body to request attributes
Question:
username_0: In some of my actions (and some of my services), I may have a need to access the request's content, especially to make a `json_decode` call (or a serializer::decode).
I think it would be better if the deserializer listener added a step that stores the decoded request body in the request attributes, rather than keeping just the deserialized data (in case of additional fields, and so on).
WDYT?
Answers:
username_1: I wonder if this is related to #2168
username_0: Not really, IMO. It could be used for that, though. The idea is to have an array containing the decoded request in the request's attributes:
```
{
"foo": "bar"
}
```
```
dump($request->attributes->get('_body'));
```
would output
```
[
"foo" => "bar"
]
```
username_1: @username_0 wouldn't this be easy to add your own event listener to json decode the body? It seems like this could exist outside of API Platform fairly easily.
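For illustration, a minimal sketch of such a listener (the class and attribute names are illustrative, registration as a kernel.request listener is assumed, and this is not an API Platform built-in):
```php
use Symfony\Component\HttpKernel\Event\RequestEvent;

final class DecodedBodyListener
{
    public function onKernelRequest(RequestEvent $event): void
    {
        $request = $event->getRequest();
        $content = (string) $request->getContent();

        // Only attempt to decode JSON-ish payloads.
        if ('' !== $content && false !== strpos((string) $request->headers->get('Content-Type'), 'json')) {
            $request->attributes->set('_body', json_decode($content, true));
        }
    }
}
```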
username_2: my 2 cents: It would, but if it can be out of the box, it is much better for DX.
username_3: Doing this will increase the memory usage. So why not, but it should be optin.
Honestly, this use case looks a bit advanced to me, a doc entry would probably be enough.
username_2: I doubt it will be much more than a couple of kB.
In my project, we had situations where we manually stored decoded body in one listener (`POST_VALIDATE`) to use it later in another listener (`POST_WRITE`) just to not duplicate decoding logic.
I'm not saying we did it right, lol, but this is an example that it can be useful.
Personally, I would vote for adding this to the core since memory usage is not a problem IMO, but I would love to see at least a doc entry for this 👍
username_3: It directly depends on the payload's size. A large payload will lead to a large memory increase. But why not, as long as it's an opt-in feature.
username_0: The memory argument is not a real one, IMO. In the decoding listener (or before?), we could store the decoded body in the attributes, and then have the serializer work from that instead of the request's body.
Unless you're thinking about storing a decoded payload that could indeed be huge?
Status: Issue closed
username_4: I agree with @username_3 it's useful for advanced uses only. Also this is easy enough to be implemented by the developers if they need it right? |
logstash-plugins/logstash-input-beats | 192935176 | Title: Beats plugin does not get restarted after unrecoverable error
Question:
username_0: Logstash doesn't properly restart the beats plugin if it fails. Restarting logstash resolves the problem, but that's not an ideal solution.
```{:timestamp=>"2016-12-01T18:33:51.729000+0000", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n Plugin: <LogStash::Inputs::Beats port=>5044, codec=><LogStash::Codecs::Plain charset=>\"UTF-8\">, host=>\"0.0.0.0\", ssl=>false, ssl_verify_mode=>\"none\", include_codec_tag=>true, ssl_handshake_timeout=>10000, congestion_threshold=>5, target_field_for_codec=>\"message\", tls_min_version=>1, tls_max_version=>1.2, cipher_suites=>[\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384\", \"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256\"], client_inactivity_timeout=>60>\n Error: event executor terminated", :level=>:error}```
Answers:
username_1: Hmm, that indeed looks like a bug.
@username_2 thoughts?
username_2: @username_0 This indeed look like a bug, can you add the logstash version and the beats input version you are currently using? Also is this possible to include your configuration?
Is there any other error before that one? The event executor terminated, make me things that `plugin#stop` was called.
You can get the beats version by running this command
`bin/logstash-plugin list --verbose beats`
username_0: Sure!
Logstash version: `2.4.1`
Beats version: `logstash-input-beats (3.1.8)`
Config, other notes, etc:
```
input {
beats {
port => 5044
}
}
filter {
(redacted)
}
output {
amazon_es {
hosts => ["(var name redacted)"]
region => "(var name redacted)"
index => "(var name redacted)"
aws_access_key_id => "(var name redacted)"
aws_secret_access_key => "(var name redacted)"
}
stdout {
codec => rubydebug
}
}
```
```Linux ~ 3.10.0-327.10.1.el7.x86_64 #1 SMP Tue Feb 16 17:03:50 UTC 2016 x86_64 GNU/Linux```
```Docker version 1.12.3, build 6b644ec```
- Logstash is running as a Marathon app in a Mesos-managed container.
- There weren't any errors preceding this one 😞
For what it's worth, it hasn't fallen over again, but if it does, I'll see if I can get you guys some more verbose output.
username_2: @username_0 Thanks for the info, we had another report of a similar error in #172 I will do a bit more testing on my side if I can come up with a reproducible use case or at least provide more debug info.
username_3: I have the same sort of issue, when trying to launch multiple beats inputs on different ports. Perhaps that's the common thread?
username_4: We're too. If we can provide further information please let me know.
username_5: Same here. From one day to the next, this started happening. Restarting Logstash, the filebeat shipper, etc, doesn't do anything. When I restart Logstash this starts happening after a few seconds.
```
[2017-02-12T10:52:35,829][ERROR][logstash.pipeline ] A plugin had an unrecoverable error. Will restart this plugin.
Plugin: <LogStash::Inputs::Beats port=>5044, id=>"63edd083fe1245e2b83c698477e59284bc990786-1", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_77a98a69-b962-4e8f-987d-145cdced8135",
enable_metric=>true, charset=>"UTF-8">, host=>"0.0.0.0", ssl=>false, ssl_verify_mode=>"none", include_codec_tag=>true, ssl_handshake_timeout=>10000, congestion_threshold=>5, target_field_for_codec=>"m
essage", tls_min_version=>1, tls_max_version=>1.2, ciusername_2er_suites=>["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_E
CDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60>
Error: event executor terminated
```
My relevant config (working as of yesterday evening):
```
input {
  beats {
    port => 5044
  }
}
filter {
  [SNIP...]
}
output {
  # Uncomment the following for debugging into a local file
  file {
    path => "/tmp/logstash.out"
  }
  elasticsearch {
    hosts => "localhost:9200"
    index => "myindex-%{+YYYY.MM.dd}"
    template => "/etc/logstash/myindex.json"
    template_name => "myindex"
    template_overwrite => true
    document_type => "%{[@metadata][type]}"
  }
}
```
Any help much appreciated, will happily provide more details.
username_6: I'm having this issue as well. Using logstash 5.2.0
username_7: I'm having this issue as well. Using logstash 5.2.2
anyone help?
username_8: Also having this issue, using logstash 5.3.0
username_1: This is on my radar to look into soon. I don't' have any updates yet. The errors reported are helpful in pointing out where the problem might be, thank you all :)
username_5: Please, @username_8, @username_6, and everyone following this thread: make sure you don't already have :5044 bound while trying to start Logstash again. I think I got this when, for some reason, the port was already taken.
username_8: @username_5 so it looks like the issue was in the confs I was building to parse different types of logs. In the one for Apache that's mentioned in [this guide](https://www.elastic.co/guide/en/logstash/current/advanced-pipeline.html#indexing-parsed-data-into-elasticsearch), it specifies having the following in the file:
```
input {
  beats {
    port => "5043"
  }
}
```
What I was doing was copying the Apache conf and just changing the filter to specify the grok pattern I needed. That was when I started getting this error.
To eliminate the error, I removed the input section entirely from all confs except for the Apache one and it works just fine now.
username_9: @username_8, you are completely right. I can't manage multiple config files. With one unique config file it works smoothly. What should we do to manage more than one?
username_10: I found the solution here: https://github.com/elastic/logstash/issues/6279
username_11: Hello all,
below you can see my setup.
**../bin/logstash-plugin list --verbose beats**
logstash-input-beats (3.1.12)
**../bin/logstash --version**
logstash 5.2.2
I am having the same error even though I use only one config file.
Also, @username_10, the link you've provided is not working anymore. What was the solution? Could you please share it? Thank you.
username_10: @username_11 the link works. (Copy/paste the URL into a new tab.)
username_12: i.e. https://github.com/elastic/logstash/issues/6279
username_13: This might be fixed in #289. |
FNNDSC/ChRIS_store_ui | 855318048 | Title: Sorting of plugin search results
Question:
username_0: https://github.com/FNNDSC/ChRIS_store_ui/blob/ca45de40784ddb979445252c3d71c05bc3699b50/src/components/Plugins/Plugins.js#L218
Some sorts could be (see the sketch after this list):
- alphabetical, by author
- alphabetical, by name
- chronological, by date created
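A hypothetical comparator-based sketch (field names are assumed from the plugin objects returned by the store API and may differ):
```js
const comparators = {
  author: (a, b) => a.authors.localeCompare(b.authors),
  name: (a, b) => a.name.localeCompare(b.name),
  created: (a, b) => new Date(b.creation_date) - new Date(a.creation_date),
};

function sortPlugins(plugins, key) {
  // copy first so the original search results stay untouched
  return [...plugins].sort(comparators[key] || comparators.name);
}
```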
Answers:
username_1: Can I work on this one @username_0?
username_1: (screenshot of the two failing tests)
I'm trying to resolve conflicts, but my code isn't passing these two tests, and I didn't change any code related to them. I've been checking what's wrong and updated my branch to be in line with master before fixing the plugin sort.
I've also checked whether the "mark as favorite" functionality on the plugin store website is working, but it's not. Was it working before?
username_0: Try this:
```bash
git remote add upstream <EMAIL>:FNNDSC/ChRIS_store_ui.git
git fetch upstream
git diff upstream/master
```
I found this:
```patch
diff --git a/src/components/Plugins/Plugins.js b/src/components/Plugins/Plugins.js
index a157f49..e5e2b45 100755
--- a/src/components/Plugins/Plugins.js
+++ b/src/components/Plugins/Plugins.js
@@ -113,20 +114,23 @@ export class Plugins extends Component {
}
}
- fetchPlugins = () => {
- const params = new URLSearchParams(window.location.search)
- const name = params.get('q') //get value searched from the URL
+ fetchPlugins() {
+ const params = new URLSearchParams(window.location.search);
+ const name = params.get("q"); //get value searched from the URL
+
const searchParams = {
limit: 20,
offset: 0,
name_title_category:name,
- };
+ }
+
```
Notice the declaration of `fetchPlugins` was changed. This is a syntax subtlety: if a class method is defined as `fetchPlugins() {` instead of as an arrow-function class property, `this` will be undefined when the method runs as a callback, unless you manually "bind" it.
username_1: Thanks for that. What about the `starsByPlugin` test?
username_0: @username_1 all tests are passing on https://github.com/username_1/ChRIS_store_ui/commit/b154b309e04425a781d0a8f59aa033f665388d45
username_1: Yeah, I spotted the problem with the tests. Everything is ok now.
username_1: I've closed the other PR because I deleted that forked repo 😁. |
googleapis/google-cloud-python | 512514093 | Title: bigquery client.py methods insert_rows and insert_rows_json don't provide option to omit insertId
Question:
username_0: Streaming inserts to BigQuery can achieve [1GB per second](https://cloud.google.com/bigquery/quotas#streaming_inserts) but only when insertId is omitted. Currently both methods from google-cloud-bigquery that implement the BigQuery insertAll REST API method, **insert_rows** and **insert_rows_json**, automatically add insertId preventing anyone from making use of the higher throughput limits.
Proposing that an argument be added to one or both of these methods like "insertId=True" (defaulting to true). When false, uuid is not used to add insert IDs. Furthermore, documentation is updated to clearly articulate that omitting insertId can result in duplicate records upon retry of failed API calls according to [this guide](https://cloud.google.com/bigquery/streaming-data-into-bigquery#dataconsistency).
Answers:
username_1: @username_2 Can you please comment?
username_2: This is already supported for `insert_rows_json`. You can explicitly pass in an iterable of `None`s for the `row_ids` argument. Probably worth keeping this issue open to make a sample demonstrating that, though.
```
client.insert_rows_json(table, json_rows, row_ids=[None]*len(json_rows))
```
username_2: Oh, it also should work for [`insert_rows`](https://googleapis.dev/python/bigquery/latest/generated/google.cloud.bigquery.client.Client.html#google.cloud.bigquery.client.Client.insert_rows), as it passes any additional keyword arguments to [`insert_rows_json`](https://googleapis.dev/python/bigquery/latest/generated/google.cloud.bigquery.client.Client.html#google.cloud.bigquery.client.Client.insert_rows_json).
username_1: @username_2 That seems like an odd DX to have to document / defend. Instead, maybe we could special-case `row_ids=None` to expand to that? E.g.:
```python
client.insert_rows_json(table, json_rows, row_ids=None)
```
username_2: It's only a handful of people who'll want to trade more duplicate rows to get to 1 GB of writes per second. I think the current default of populating row IDs with UUID when `row_id=None` is the correct choice.
We could potentially make a special marker value, but I don't see all that much benefit. At a minimum, we should do the following, though.
- [ ] Expand our unit tests to ensure that a sequence of `None` values continues to work for the `insert_rows*()` methods.
- [ ] Add a code sample demonstrating use of `insert_rows_json` with `row_ids=[None]*len(rows)` for use on the page where we document how to achieve the higher 1GB / s write limit.
Status: Issue closed
|
KhronosGroup/OpenCL-Docs | 658635849 | Title: Implementing restriction on relaxed memory scope atomic use
Question:
username_0: There are still `mem_fence`, `read_mem_fence`, and `write_mem_fence`, which are defined with a non-relaxed scope by default.
Answers:
username_1: I believe the way we got here was by looking at OpenCL 1.2 device capabilities:
* For the OpenCL 1.2 atomic functions (e.g. `atomic_add`), there is no expectation that they enforce memory consistency, so they are "relaxed" atomics.
* To enforce memory consistency, OpenCL 1.2 has fence functions (e.g. `mem_fence`). While the definition of these functions is somewhat ambiguous, they appear to describe "acquire-release" fences.
So, in order to describe OpenCL 1.2 device capabilities using the C11 / OpenCL C 2.0 atomic functions, we need to support no more than relaxed memory orders for most atomic functions, but we need to support the acquire and release memory orders for `atomic_work_item_fence`. This is how we ended up with the wording referenced above and the special case for `atomic_work_item_fence`.
If we want to eliminate the special case and simplify tooling (and maybe the spec also), here are a few options:
1. Eliminate the special case and require that if the acquire-release memory orders are supported for any atomic function then they are supported for all atomic functions, both atomic operations and atomic fences. This means that an OpenCL 1.2 device moving to OpenCL 3.0 either needs to add support for more than just "relaxed" atomic operations, or will be unable to express OpenCL 1.2 memory fences using the newer `atomic_work_item_fence` function.
2. Only control the availability of the acquire-release memory orders based on the OpenCL C feature macro, but do not try to detect and control how the acquire-release memory orders are used. If a device supports acquire-release memory orders only for fences and an acquire-release memory order is passed to an atomic operation, then the program is malformed and behavior is undefined.
I thought that option (2) was the intended option but perhaps that isn't the case - ping @username_2 to check that the spec is documenting what we intended to document.
For Khronos folks, this was discussed in internal issue 210, and was added by internal merge request 136.
username_2: But this is exactly what results in the exceptions that this ticket is about.
As for option (1), it would certainly simplify the specification. It would mean that some devices only supported relaxed order fences, i.e. they would not support any useful fences at all. I'm not sure how much of a limitation that would be for programmers.
username_1: That doesn't seem too bad for tooling. @username_0 , is there a devil in the detail here that I'm missing?
username_0: There are still `mem_fence`, `read_mem_fence`, and `write_mem_fence`, which are defined with a non-relaxed scope by default.
username_0: This sounds good! It feels sensible that if `__opencl_c_atomic_order_acq_rel` is not supported, then acquire and release ordering are not available at all, and if either of those is still needed for the fences, one of the following functions can be used: `mem_fence`, `read_mem_fence`, `write_mem_fence`. In addition, we could define a default overload of `atomic_work_item_fence` that would use `memory_order_acq_rel` and `memory_scope_work_group`:
`void atomic_work_item_fence(cl_mem_fence_flags flags) // order = memory_order_acq_rel, scope = memory_scope_work_group`
Generally, I think it is logical that if an implementation doesn't fully support the C11/OpenCL 2.0 memory model, it won't be able to offer the full functionality of OpenCL 2.0 atomics, and therefore some limitations will apply.
username_1: Summarizing where we are right now regarding this issue. I think this is consistent with the discussion in the July 21st teleconference:
If we do nothing, the spec currently says:
* The acquire, release, and acq_rel memory orders are always available and are not protected by any feature macros.
* It is always valid to use the acquire, release, and acq_rel memory orders with `atomic_work_item_fence`, to achieve equivalent functionality to the OpenCL 1.2 `read_mem_fence`, `write_mem_fence`, and `mem_fence` functions.
* It is only sometimes valid to use the acquire, release, and acq_rel memory orders with other atomic functions, such as `atomic_fetch_add`. The `__opencl_c_atomic_order_acq_rel` feature macro can be used to determine when this usage is valid. The compiler will NOT provide diagnostics if the acquire, release, or acq_rel memory orders are improperly used with other atomic functions, and this is undefined behavior..
* This is slightly different than the seq_cst memory order, since the seq_cst memory order will be protected by the `__opencl_c_atomic_order_seq_cst` feature macro. This makes it harder to improperly use the seq_cst memory order, but even if this happens somehow, the compiler will NOT provide diagnostics and this is undefined behavior.
I _think_ this is implementable by tooling and can express all OpenCL 1.2 functionality even if it's a little confusing.
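To make the status-quo rules concrete, a small OpenCL C sketch (illustrative; `counter` is assumed to be a `volatile __global atomic_int *`):
```c
// Fences may always use acquire/release, while other atomic functions may
// only do so when the feature macro is defined.
#ifdef __opencl_c_atomic_order_acq_rel
    atomic_fetch_add_explicit(counter, 1,
                              memory_order_acq_rel, memory_scope_work_group);
#else
    atomic_fetch_add_explicit(counter, 1,
                              memory_order_relaxed, memory_scope_work_group);
    atomic_work_item_fence(CLK_GLOBAL_MEM_FENCE,
                           memory_order_acq_rel, memory_scope_work_group);
#endif
```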
The alternative we are discussing is:
* We protect the acquire, release, and acq_rel memory orders by the `__opencl_c_atomic_order_acq_rel` feature macro, similar to how the seq_cst memory order is protected by the `__opencl_c_atomic_order_seq_cst` feature macro.
* If an implementation chooses to support the `__opencl_c_atomic_order_acq_rel` feature then the acquire, release, and acq_rel memory orders must be supported for all atomic operations.
* As mentioned in a prior comment, this all-or-nothing support either means that all OpenCL 1.2 functionality either cannot be expressed via C11 atomics (specifically for memory fences, if `__opencl_c_atomic_order_acq_rel` is not supported), or that additional functionality is required beyond OpenCL 1.2 (specifically acquire and release atomic operations, if `__opencl_c_atomic_order_acq_rel` is supported).
We need to decide if it's OK to trade off the inability to precisely express OpenCL 1.2 functionality for the increased simplicity.
username_3: Does the alternative above break backward compatibility with OpenCL C 1.2? Sounds like it does as memory fences can not be supported.
Also, should any of these approaches will require changes to OpenCL kernels written for OpenCL 1.2 to make them work with OpenCL 3.0?
username_1: Summary of discussion in the July 23rd teleconference:
- The alternative doesn't break backwards compatibility with OpenCL C 1.2 or require changes to OpenCL kernels written for OpenCL 1.2 because the "old" memory fence functions will still be supported: `read_mem_fence`, `write_mem_fence`, and `mem_fence`. The alternative could mean that the memory fence functions are unable to express via `atomic_work_item_fence` though.
- There wasn't a strong preference for the status quo vs. the alternative. We agreed to spend a bit more time to consider the tradeoffs more carefully, since this issue does not need to be resolved ASAP.
Status: Issue closed
username_1: Discussed in the August 4th teleconference and decided to keep the spec as-is, so closing this issue without additional action. |
PapaPeach/PeachHUD | 776663956 | Title: HTML MotD hidden by default - why?
Question:
username_0: It seems like in textwindow.res and textwindowcustomserver.res, you move the html elements offscreen. Why? Could you consider adding a customization to enable html motds? Thanks
Answers:
username_1: Will look into this for the next update, probably just a carry over from using SunsetHud as a base that went unnoticed.
Status: Issue closed
|
gatsbyjs/gatsby | 615269917 | Title: gatsby-plugin-typography doesn't handle pathToConfigModule in theme
Question:
username_0:
## Description
I am trying to use `gatsby-plugin-typography` with a theme, but `pathToConfigModule` seems to be searching in the site directory rather than the theme directory.
### Steps to reproduce
1. `git clone https://github.com/lamnohq/gatsby-theme-typographyjs-bug.git`
2. `yarn workspace example develop`
### Expected result
The plugin should read the typography.js file in the theme directory.
### Actual result
The plugin seems to be looking in the site directory.
Error:
```
ERROR #98124 WEBPACK
Generating SSR bundle failed
Can't resolve '/Users/username_0/Code/gatsby/gatsby-theme-typographyjs-bug/example/src/utils/typography' in '/Users/username_0/Code/gatsby/gatsby-theme-typographyjs-bug/example/.cache/caches/gatsby-plugin-typography'
Perhaps you need to install the package '/Users/username_0/Code/gatsby/gatsby-theme-typographyjs-bug/example/src/utils/typography'?
File: .cache/caches/gatsby-plugin-typography/typography.js
```
### Environment
```
System:
OS: macOS 10.15.4
CPU: (8) x64 Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz
Shell: 5.7.1 - /bin/zsh
Binaries:
Node: 13.8.0 - ~/.nvm/versions/node/v13.8.0/bin/node
Yarn: 1.22.4 - /usr/local/bin/yarn
npm: 6.13.6 - ~/.nvm/versions/node/v13.8.0/bin/npm
Languages:
Python: 2.7.16 - /usr/bin/python
Browsers:
Chrome: 81.0.4044.138
Firefox: 72.0.2
Safari: 13.1
npmGlobalPackages:
gatsby-cli: 2.11.22
```
Answers:
username_1: This issue seems to be that shadowing isn't working properly with the plugin's system of writing the module file out to .cache
username_2: Thanks for opening up an issue @username_0!
The error you're seeing is a result of resolution from your gatsby-config. With themes, the resolution happens from a different directory/context. In order to fix this you can explicitly create a full path from the current directory using `path.join`.
```js
const path = require("path");
module.exports = {
plugins: [
{
resolve: "gatsby-plugin-typography",
options: {
pathToConfigModule: path.join(__dirname, `src/utils/typography`),
},
},
],
};
```
Status: Issue closed
|
JedMeister/tracker | 334295307 | Title: v15.0 readmes and changelogs
Question:
username_0: MySQL changelogs:
```
* Replace MySQL with MariaDB (drop-in MySQL replacement)
* Updated version of mysqltuner script
```
LAMP changelogs:
```
* Install Adminer directly from stretch/main repo
* Provide "adminer" root-like user for Adminer MySQL access
* Replace MySQL with MariaDB (drop-in MySQL replacement)
* Updated version of mysqltuner script
* Includes PHP7.0 (installed from Debian repos)
* Updated PHP default settings
* Remove phpsh (no longer maintained)
```
LAMP readmes:
```
- Adminer: username **adminer**
```
LAMP apps:
- [ ] b2evolution
- [ ] cakephp
- [ ] codeigniter
- [ ] collabtive
- [ ] concrete5
- [ ] drupal7
- [ ] drupal8
- [ ] e107
- [ ] elgg
- [ ] espocrm
- [ ] gallery
- [ ] gnusocial
- [ ] joomla3
- [ ] lamp
- [ ] laravel
- [ ] limesurvey
- [ ] magento
- [ ] mambo
- [ ] mantis
- [ ] mediawiki
- [ ] mibew
- [ ] moodle
- [ ] mumble
- [ ] nextcloud
- [ ] observium
- [ ] omeka
- [ ] orangehrm
- [ ] oscommerce
- [ ] owncloud
- [ ] phpbb
- [ ] phplist
[Truncated]
- [ ] yiiframework
- [ ] zencart
- [ ] zurmo
MYSQL only apps:
- [ ] asp-net-apache
- [ ] bugzilla
- [ ] django
- [ ] etherpad
- [ ] ghost
- [ ] gitlab
- [ ] icescrum
- [ ] lighttpd-php-fastcgi
- [ ] mysql
- [ ] nginx-php-fastcgi
- [ ] otrs
- [ ] roundup
- [ ] tomcat-apache
- [ ] tomcat |
IgniteUI/help-topics | 243619496 | Title: Broken link to API
Question:
username_0: Go here
https://staging.igniteui.local/help/16.2/igdateeditor-jquery-api
Open this:
igDateEditor ASP.NET MVC Helper API
This is broken:
https://staging.igniteui.local/help/16.2/infragistics.web.mvc~infragistics.web.mvc.datetimeeditormodel
Answers:
username_1: Fix already shipped with latest version of the help and will also be updated for the 16.2 deployment shortly. Closing.
Status: Issue closed
|
MicrosoftDocs/powerbi-docs | 553717005 | Title: User can't access dataset if it has RLS
Question:
username_0: Even if a user has access to a portion of the data, if he wants to create a dashboard in his own workspace using that dataset (the one with RLS), he can't see or access the dataset.
Is that OK? Can't a dataset with RLS be accessed by any member with a role in their own workspace?
That dataset used to be accessible to that user before RLS.
Thanks
Answers:
username_1: Hi @username_0. You have reached the documentation team, and I don't see a reference to any specific documentation item. I think your question is perhaps more of a product question. I suggest you put your question to the [Power BI Community]( https://community.powerbi.com). There are a great many experienced people there and someone may be able to answer your question.
Thanks for reaching out to us!
Status: Issue closed
|
AlexsLemonade/training-modules | 606291862 | Title: RStudio Server: when entering a password, it looks blank
Question:
username_0: When you enter a password at the command line, it doesn't show up. If you are used to working with command line tools, that is familiar behavior. If you're not, it can be a point of friction. We should include this in our RStudio change your password instructions. https://github.com/AlexsLemonade/training-modules/blob/master/virtual-setup/rstudio-login.md#logging-in
Answers:
username_0: We should probably have a screenshot for 1) what you see when you correctly enter your current password and 2) what you see when you incorrectly enter your current password
username_0: Add logging out after the password change to the instructions, and also cover the cookie issue that has been popping up.
username_0: Was this closed via #202 @username_1 ?
Status: Issue closed
|
HCD-iTC/HCD-iTC-Template | 823191043 | Title: Question: Is hash sufficient for code integrity check at boot?
Question:
username_0: **What is the change request for the cPP? Please describe.**
Issue #150 requested that the SFRs for Authentication using X.509 certificates from ND cPP Section B.4.1 be added to the HCD cPP. While the HCD iTC Network Subgroup was reviewing these SFRs, the following question was raised that requires discussion with the full HCD iTC:
Is hash sufficient for code integrity check at boot?
This issue is to bring this question before the HCD iTC for discussion.
**Describe the solution you'd like**
HCD iTC discussion and resolution of the question "Is hash sufficient for code integrity check at boot?"
**Describe alternatives you've considered**
None.
**Additional context**
None |
ibm-openbmc/dev | 494873701 | Title: GUI : FED : Event Log - Date pickers not working in Safari
Question:
username_0: ## Overview
This implementation uses the native date input type. Chrome and other browsers support this and build the date picker natively. Safari will not have a date picker until they support the date type input element.
The problem is the date format. The date picker changes the format for supported browsers to mm/dd/yyyy. The format is converted to yyyy-mm-dd by the browser, but the user never sees that in supported browsers.
## As Is
The only label used to indicate to the expected format to the user used the placeholder attribute. This is not accessible and provides a challenging user experience since the placeholder text is removed as the user begins typing. Also, the only invalid format indicator is a red bottom border
## To Be
The current design does not account for a visually displayed label or helper text. At minimum we will need to:
- [ ] Add error messaging letting the user know the date format is not valid and the expected format.
The best solution would be to provide:
- [ ] Visual labels for `Start Date` and `End Date`
- [ ] Helper text to describe the expected format other than just using the placeholder
Answers:
username_0: Resolved by https://gerrit.openbmc-project.xyz/c/openbmc/phosphor-webui/+/25335
username_0: Testing:
Safari and IE will not have date pickers and will have a date format pattern set to yyyy/mm/dd. All other browsers should have a date picker (they are all slightly different) and the date format pattern is mm/dd/yyyy.
Status: Issue closed
username_1: https://gerrit.openbmc-project.xyz/c/openbmc/phosphor-webui/+/25335 merged |
jpatel531/neutrino | 180759171 | Title: Changes required for upcoming CoffeeScript upgrade
Question:
username_0: We think your package may be affected by this upgrade, in the following places:
* The `scopeForFenceName` variable [here](https://github.com/jpatel531/neutrino/tree/master/lib/md-renderer.coffee#L184)
These findings are based on linting packages with `coffeescope`. We could be wrong about some of them. When we release v1.12 beta, please test your package against it to make sure that it works. Let me know if you have any further questions; I will be happy to help! |
razrrr/psychic-winner | 125881078 | Title: Implement "played" zone
Question:
username_0: Need to implement a zone for played cards. Played action cards should no longer go straight from hand to discard, but rather from hand to the played zone, then from the played zone to discard when the turn is over.
Answers:
username_0: fixed in b1031d6
Status: Issue closed
|
marcglasberg/i18n_extension | 574239721 | Title: Default language not working
Question:
username_0: It seems the default language (fallback) is not working. If some unknown locale is set, for example de_DE, the Dutch (nl) translation is shown. Any English locale will display the English translation.
Main:
```dart
class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Timesheet',
      home: I18n(child: LoginScreen()),
      debugShowCheckedModeBanner: false,
      localizationsDelegates: [
        GlobalMaterialLocalizations.delegate,
        GlobalWidgetsLocalizations.delegate,
        GlobalCupertinoLocalizations.delegate,
      ],
      supportedLocales: const [
        Locale('nl', 'NL'), // nl
        Locale('en', 'US'), // English
        Locale('en', 'UK'), // English
      ]
    );
  }
}
```
Translation:
```dart
extension Localization on String {
  String get i18n => localize(this, t);
  String plural(int value) => localizePlural(value, this, t);

  static var t = Translations('en') +
      {
        'en': 'Sign in success',
        'nl': 'Inloggen gelukt',
      } +
      {
        'en': 'Sign in fail',
        'nl': 'Inloggen mislukt',
      } +
      ...
}
```
Adding the following seems to fix the issue:
```dart
localeResolutionCallback: (locale, locales) {
  print("FALLBACK TO ${locale.toLanguageTag()}");
  return locale;
},
```
Status: Issue closed
Answers:
username_1: By reading https://api.flutter.dev/flutter/widgets/WidgetsApp/localeResolutionCallback.html and https://api.flutter.dev/flutter/widgets/WidgetsApp/supportedLocales.html I'd say it's defaulting to `NL` because that's the first locale in the `supportedLocales` list you provided.
When you provide your `localeResolutionCallback` you are overriding the `supportedLocales` list, and then it really uses the `de_DE` locale, which doesn't exist, so it will then default to English.
You probably can obtain the same result by removing `localeResolutionCallback` and then putting `Locale('en', 'US')` as the first item in the `supportedLocales` list.
My `i18n_extension` can't really change the way the proper locale is decided by the `MaterialApp` widget.
username_0: Read that somewhere before, but it didn't seem to make a difference. I'll try again.
VictoriaMetrics/VictoriaMetrics | 466930875 | Title: Add the prometheus process collector to be able to monitor CPU metrics
Question:
username_0: https://godoc.org/github.com/prometheus/client_golang/prometheus#NewProcessCollector
Would you consider adding this?
Answers:
username_1: Sure, it will be added to [VictoriaMetrics/metrics](https://github.com/VictoriaMetrics/metrics) package in the near future.
username_0: A CPU usage metric is a must-have for most applications, no?
Thanos, Cortex, and Prometheus all expose it. Probably Grafana and M3 as well.
username_1: OK, I'll look into lightweight exporting basic CPU usage metrics via [github.com/VictoriaMetrics/metrics](https://github.com/VictoriaMetrics/metrics) package.
username_0: Thanks.
It was more of a question really. I am not in operations, but would imagine many alerting rules are built around that.
Maybe @superq can comment for that.
username_2: Yes, `process_cpu_seconds_total` is a must for direct instrumentation.
username_1: FYI, there is also [process_exporter](https://github.com/ncabatoff/process-exporter) that allows monitoring CPU usage, RAM usage, etc. for the selected processes via `/proc`.
username_2: The process exporter is generally discouraged when direct instrumentation is possible. This is why the official Prometheus client_golang includes all of the process_, go_, etc metrics by default.
Status: Issue closed
username_1: `process_*` metrics have been added in the commit 998525999c4ebf44c5e39d0ace6091b4ed12e58f . They will be available in the next VictoriaMetrics release.
username_0: Thanks
username_1: FYI, the release [v1.22.2](https://github.com/VictoriaMetrics/VictoriaMetrics/releases) exposes `process_*` metrics on the `/metrics` page.
username_0: Thanks!
It might be worth adding another version of the Grafana dashboard that includes these.
https://grafana.com/grafana/dashboards/10229
I think @bbrazil's dashboard uses a few of these, so you could borrow from there.
https://grafana.com/grafana/dashboards/9761
username_1: @username_3 , could you add cpu usage graph based on `process_cpu_seconds_total` to the official VictoriaMetrics dashboard?
username_0: From what I see, the Grafana dashboards should be versioned so that people with VM versions prior to v1.22.2 don't see empty graphs.
username_3: Hi @username_0! I've added CPU panel to our [Grafana dashboard](https://grafana.com/grafana/dashboards/10229) and put version into description so revisions contain required version now. Thanks for the hint! |
bahamas10/hueadm | 294066964 | Title: Allow groups to be specified by name
Question:
username_0: Rather than having to remember which group is which, or refresh my memory by running `hueadm groups` before changing the state of a group, it would be very nice to be able to specify a group by name. For example:
hueadm group lounge on
You could check whether the argument is numeric (`/^\d+$/` would do it) and treat it as an ID if so; otherwise, look up the ID of a group with the same name.
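For illustration, here is roughly what that resolution could look like in TypeScript — `resolveGroupId` and its inputs are hypothetical names for this sketch, not hueadm's actual API (hueadm itself is plain Node.js):

```
interface Group {
  name: string;
}

function resolveGroupId(arg: string, groups: Record<string, Group>): string {
  // Numeric arguments are treated as group IDs directly.
  if (/^\d+$/.test(arg)) {
    return arg;
  }
  // Otherwise look up groups whose name matches the argument.
  const ids = Object.keys(groups).filter((id) => groups[id].name === arg);
  if (ids.length === 0) {
    throw new Error(`No group "${arg}" found`);
  }
  if (ids.length > 1) {
    throw new Error(`Multiple groups "${arg}" found`);
  }
  return ids[0];
}
```

Since Hue group names are not guaranteed to be unique, the sketch treats an ambiguous name as an error rather than silently picking one.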
It might then make sense to offer the same feature for lights too, or other commands, though groups are more valuable to me personally.
Answers:
username_1: 👍
Status: Issue closed
username_1: ```
dave - meka linux ~/dev/hueadm (git:master) $ npm publish
+ [email protected]
```
username_1: Example runs below. Because group names are not guaranteed to be unique, I have made it so that an error is thrown if more than one group with the same name is found.
```
$ ./bin/hueadm group foo
error: Error: Multiple groups "foo" found
$ ./bin/hueadm group bar
error: Error: No group "bar" found
$ ./bin/hueadm group 'Garage Inside'
name: 'Garage Inside'
lights:
- '20'
- '21'
- '22'
- '23'
type: Room
state:
all_on: false
any_on: false
class: Carport
action:
on: false
bri: 254
alert: none
```
username_0: Excellent. It's working for me. Thanks! This is a major usability enhancement in my eyes. |
alibaba/flutter_boost | 685138793 | Title: OpenGL-related crashes
Question:
username_0: 问题现象:
线上包遇到的问题,自己尝试没有复现
Flutter environment:
[✓] Flutter (Channel v1.12.13-hotfixes, v1.12.13+hotfix.9, on Mac OS X 10.15.4 19E287, locale zh-Hans-CN)
[✓] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
[✓] Xcode - develop for iOS and macOS (Xcode 11.4.1)
[✓] Android Studio (version 4.0)
[✓] VS Code (version 1.48.1)
[✓] Connected device (1 available)
flutter_boost environment:
flutter_boost:
git:
url: 'https://github.com/alibaba/flutter_boost.git'
ref: 'task/task_v1.12.13_support_hotfixes'
Crash logs:
Crash 1:
0 OpenGLES +[EAGLContext setCurrentContext:] + 84
1 OpenGLES +[EAGLContext setCurrentContext:] + 64
2 Flutter 0x00000001048a8000 + 177124
3 Flutter 0x00000001048a8000 + 174080
4 Flutter 0x00000001048a8000 + 186424
5 Flutter 0x00000001048a8000 + 57240
6 Flutter 0x00000001048a8000 + 122096
7 flutter_boost -[FLBFlutterViewContainer init] + 304
8 -[PlatformRouterImp open:urlParams:exts:completion:] (PlatformRouterImp.m:42)
9 flutter_boost -[FLBFlutterApplication open:urlParams:exts:onPageFinished:completion:] + 700
10 flutter_boost +[FlutterBoostPlugin open:urlParams:exts:onPageFinished:completion:] + 208
Crash 2:
0 OpenGLES 0x00000001c4f5b000 + 39064
1 OpenGLES 0x00000001c4f5b000 + 1972501869997627532
2 Flutter 0x00000001094cc000 + 1670453931219792868
3 Flutter 0x00000001094cc000 + 174080
4 Flutter 0x00000001094cc000 + 186424
5 Flutter 0x00000001094cc000 + 57240
6 Flutter 0x00000001094cc000 + 122096
7 flutter_boost -[FLBFlutterViewContainer init] + 304
8 -[PlatformRouterImp open:urlParams:exts:completion:] (PlatformRouterImp.m:42)
9 flutter_boost -[FLBFlutterApplication open:urlParams:exts:onPageFinished:completion:] + 700
10 flutter_boost +[FlutterBoostPlugin open:urlParams:exts:onPageFinished:completion:] + 208
Answers:
username_0: I tried calling it manually in FLBFlutterViewContainer's dealloc, but the crash still occurs. The dealloc here is method-swizzled, replacing the original implementation:
- (void)dealloc {
    dispatch_async(dispatch_get_main_queue(), ^{
        [EAGLContext setCurrentContext:nil];
    });
}
The most recent error is as follows:
0 OpenGLES -[EAGLContext renderbufferStorage:fromDrawable:] + 88
1 OpenGLES 0x000000019c4eb000 + 6711829093781891196
2 Flutter 0x0000000106a94000 + 13786257331195130852
3 Flutter 0x0000000106a94000 + 174080
4 Flutter 0x0000000106a94000 + 186424
5 Flutter 0x0000000106a94000 + 57240
6 Flutter 0x0000000106a94000 + 122096
7 flutter_boost -[FLBFlutterViewContainer init] + 304
8 -[PlatformRouterImp open:urlParams:exts:completion:] (PlatformRouterImp.m:42)
9 flutter_boost -[FLBFlutterApplication open:urlParams:exts:onPageFinished:completion:] + 700
10 flutter_boost +[FlutterBoostPlugin open:urlParams:exts:onPageFinished:completion:] + 208
username_1: @username_0 Did you solve this? I'm running into it too.
username_1: The cause of the crash has been found: our app has native pages that use OpenGL, and we were not setting the context to nil when those native pages were released.
username_2: Based on that description, this issue can be closed.
Status: Issue closed
|
diffux/diffux | 32315978 | Title: Some missing parts from the language file
Question:
username_0: I noticed some parts were missing from en.yml. I took notes on the specific texts that are missing; if you're OK with it, I can try to add them to en.yml myself next week. Let me know if you need my notes on the missing parts.
Status: Issue closed
Answers:
username_0: Closing this so it does not show in my timeline |
PopojiCMS/Popoji | 859817309 | Title: Illuminate\Database\QueryException SQLSTATE[HY000] [1045] Access denied for user 'root'@'localhost' (using password: YES) (SQL: select * from `settings` where `options` = member_registration limit 1
Question:
username_0: I tried Popoji version 3.0.0 and installed it on localhost successfully. After installation, I went to my browser and visited the front page and the login page. I hit the same problem on both pages, with this error:
`Illuminate\Database\QueryException
SQLSTATE[HY000] [1045] Access denied for user 'root'@'localhost' (using password: YES) (SQL: select * from `settings` where `options` = member_registration limit 1)`

I checked the database password again and again; it was correct. So, what would you do to fix this unexpected error?
Answers:
username_1: Please check the po-includes/.env file and update it with your correct DB connection:
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=popoji
DB_USERNAME=root
DB_PASSWORD=
username_0: I had checked it. My database password was correct. I wrote it with letters, numbers, and some symbols, but it did not work in Popoji at all.
username_2: @username_0 If the password contains letters, numbers, and symbols, wrap it in double quotes, for example:
`DB_PASSWORD="<PASSWORD> %%%"`
Status: Issue closed
|