| repo_name (string, lengths 4–136) | issue_id (string, lengths 5–10) | text (string, lengths 37–4.84M) |
---|---|---|
cloud-ark/kubediscovery | 344940695 | Title: Remove Object tree from provenance list after an Object is deleted
Question:
username_0: Currently, Objects that have been deleted (for example with `kubectl delete`) still show up when queried using `kubectl get /..../compositions`
This is done on purpose as we want to build provenance of the cluster.
In the code we maintain a list of Provenance objects. See here:
https://github.com/cloud-ark/kubediscovery/blob/master/pkg/provenance/provenance.go#L275
But this can be confusing to users, as they will most likely expect the composition output for an Object to be empty after it is deleted.
The way to address this will be as follows:
We should maintain two separate lists - one for historical record of compositions and one for current compositions. The Compositions list will be dynamic. It will reflect the current state whereas the History list will be additive.
The Composition list can be maintained as follows:
- Every time we check for resource names (getResourceNames), we remove from the Composition list the Objects that are not present in the queried resource names.
The modification will need to be done around here:
https://github.com/cloud-ark/kubediscovery/blob/master/pkg/provenance/provenance.go#L66
For retrieving history, we should register another endpoint, say "/history".<issue_closed>
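The two-list idea above can be sketched as follows (a language-agnostic illustration in Python; kubediscovery itself is written in Go, and `prune_compositions` is a hypothetical name, not the project's API):

```python
def prune_compositions(compositions, live_names):
    """Drop deleted Objects from the current Composition list.

    compositions: dict mapping object name -> composition tree.
    live_names: the set of names returned by the latest getResourceNames query.
    The History list is kept separately and is never pruned, so it stays additive.
    """
    return {name: tree for name, tree in compositions.items()
            if name in live_names}
```

After each resource-name query, the Composition list is rebuilt from the survivors while the History list is left untouched.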
Status: Issue closed |
wangyonghong/gitalk | 709730263 | Title: [Linux command] lvreduce | Yonghong's Internet Notes
Question:
username_0: https://yonghong.tech/linux-command/lvreduce/
Shrink logical volume space. Supplementary notes: the lvreduce command reduces the space occupied by an LVM logical volume. Shrinking a logical volume with lvreduce may destroy data already on the volume, so the operation must be confirmed before proceeding. Syntax: lvreduce (options) (parameters). Options: -L: specify the logical volume size, in units of "kKmMgGtT" bytes; -l: specify the logical volume size (as a number of LEs). Parameter: logical volume: the device file corresponding to the logical volume to operate on. Example: use lvreduc |
yobnytech/queueone_issues_tracker | 581320486 | Title: Incorrect numbering for service counter on partner web
Question:
username_0: Created 6 counters for Q at Classic Salon. The counter numbers are shown as:
Counter 0 | Counter 0 | Counter 1 | Counter 2 | ... | Counter 4
Counter 0 is repeated; ideally the counter numbers should start from 1.
Answers:
username_1: Fixed
Status: Issue closed
username_0: Not verifying this, since once counters are created they need to be cleaned up from the backend. Closing as verified by the developer. |
umijs/qiankun | 990880068 | Title: Are dynamically inserted <style> tags intercepted by qiankun?
Question:
username_0: <!-- https://github.com/YOUR_REPOSITORY_URL -->
## How To Reproduce
**Steps to reproduce the behavior:** 1. 2.
**Expected behavior** 1. 2.
## Context
- **qiankun Version**:
- **Platform Version**:
- **Browser Version**:
Answers:
username_1: 
You can use this to escape the sandbox.
username_0: But my main application (ng) has so many pages, and they don't really count as static assets; how am I supposed to configure this assetUrl?
username_0: I figured it out: I switched routes before my qiankun sub-application had finished unmounting, which is what caused the interception.
Yet my plain React sub-application's configuration shows no interception.
Is it because umi enables the interception by default? Can it be turned off?
Status: Issue closed
|
fspoettel/pomodoro-discord-bot | 611214251 | Title: Move to Dokku
Question:
username_0: - [x] Setup a `dokku` host on DO
- [x] Configure `dokku` app and mongo-db plugin
- [x] Update `.env` variables to match diff. to Heroku
- [ ] Deploy `pom-dev` and monitor stability
- [ ] Migrate timers database to new host
- [ ] Remove Heroku app, deploy `pom` to `dokku`<issue_closed>
Status: Issue closed |
digidem/osm-p2p-server | 126722726 | Title: Unnecessary `ch.id.replace(/^[nw]/, '')`?
Question:
username_0: I am unclear at why we are doing [this](https://github.com/digidem/osm-p2p-server/blob/master/index.js#L96)? I know iD Editor internally prepends `n` or `w` to avoid duplicate ids but it [strips these](https://github.com/openstreetmap/iD/blob/master/js/id/core/entity.js#L26-L28) before creating a changeset to upload.<issue_closed>
Status: Issue closed |
alfg/opendrinks | 714402193 | Title: Dark mode
Question:
username_0: **Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.
Answers:
username_1: Can you describe which browser you used? I'm trying on Firefox 81.0.1 on Win10 and cannot reproduce this.
username_2: I'm also on Firefox 81.0.1/Win10, and I see the light theme flash while navigating between recipes.
username_1: Ah yes, now I see it when I navigate between recipe pages.
username_2: Fixed on #859
Status: Issue closed
|
HackGT/registration | 573321238 | Title: Ground Truth Login after Walk-up Registration crashes registration
Question:
username_0: If a user has already registered through walk-up registration (i.e. staff) and later tries to log in via a HackGT account in Ground Truth, the Ground Truth login flow attempts to create a new account for the user and crashes registration with a duplicate-key error on the Mongoose insertion. |
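A common fix for this failure mode is to look the user up before inserting. A minimal sketch of the pattern (Python, with a plain dict standing in for the Mongoose collection; `get_or_create` is a hypothetical helper, not this project's code):

```python
def get_or_create(users, email, defaults):
    """Return the existing user for `email`, creating one only if absent.

    Looking up first avoids the duplicate-key error that a blind insert
    raises when the unique email index already has an entry.
    """
    user = users.get(email)
    if user is None:
        user = {"email": email, **defaults}
        users[email] = user
    return user
```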
sonata-nfv/tng-sdk-sm | 385325100 | Title: Something is wrong with the path when executing an ssm
Question:
username_0: ```
ana@ana-quobis:~/workspace/pilot/tng-communications-pilot$ tng-sm execute --event task --payload project/test-ssm.yml ~/workspace/pilot/tng-communications-pilot/project/ssm/videoconference-service-ssm/ Error occured when executing command. Error: chdir /home/ana/workspace/pilot/tng-communications-pilot/project/ssm/videoconference-service-ssm/home/ana/workspace/pilot/tng: no such file or directory
```<issue_closed>
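The doubled path in the error ("…videoconference-service-ssm/home/ana/…") suggests an absolute path was concatenated onto the working directory. A hedged illustration of the distinction (Python; `safe_chdir_target` is a hypothetical helper, not tng-sm's actual code):

```python
from pathlib import PurePosixPath

def safe_chdir_target(workdir, target):
    """Join a chdir target onto a working directory.

    An absolute target must be used as-is; naive string concatenation
    instead produces paths like '<workdir>/home/ana/...', matching the
    error in the report above.
    """
    target = PurePosixPath(target)
    if target.is_absolute():
        return target
    return PurePosixPath(workdir) / target
```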
Status: Issue closed |
railwaycat/homebrew-emacsmacport | 122416708 | Title: Crash On 10.11.2
Question:
username_0: Exception Info:
```
Dyld Error Message:
Symbol not found: _gliCreateContextWithShared
Referenced from: /System/Library/Frameworks/OpenGL.framework/Resources/GLEngine.bundle/GLEngine
Expected in: flat namespace
```<issue_closed>
Status: Issue closed |
openmv/openmv | 1097367980 | Title: Add wifi support for ESP32
Question:
username_0: At present, the WIN1500 module is too expensive, while the ESP32 is very cheap. I am using an ESP32 to transmit images to OpenMV over the serial port, but the functionality is relatively limited: only image transmission works, and communication such as sockets cannot be done. Could you add driver support for the ESP32? |
hxmowang/leetcode_zijie | 199332843 | Title: 121 bp buy and sell
Question:
username_0:
```cpp
class Solution {
public:
    int maxProfit(vector<int>& prices) {
        int n = prices.size();
        if (n == 0) return 0;          // guard: d[n-1] is invalid for an empty input
        vector<int> d(n, 0);           // d[i]: best profit achievable in prices[0..i]
        int minnum = prices[0];        // lowest price seen so far
        for (int i = 1; i < n; i++) {
            minnum = min(prices[i - 1], minnum);
            d[i] = max(d[i - 1], prices[i] - minnum);
        }
        return d[n - 1];
    }
};
```
Note the array size and the time limit. |
jpuri/react-draft-wysiwyg | 509421804 | Title: This package is Dead
Question:
username_0: - The last version is 8 months old.
- It is not working correctly with the newest React version.
- A ton of issues are ignored.
Answers:
username_1: Not upgraded to the latest DraftJS version.
username_2: The author is mostly working on another similar repo built on a different framework than Draft.js, it seems:
https://github.com/nib-edit/Nib/
username_3: Hi @username_8, is this true? Will the repo no longer be maintained?
username_4: But Nib does not seem to be anywhere near it, @username_2. I have `react-draft-wysiwyg` all over my project and have not found any suitable alternative yet..
username_5: @username_8 Could you give us a quick hint on whether and when this repo will be maintained and updated? :)
username_6: Better to switch to this project!
http://www.nibedit.com/index.html
It works like a charm.
username_7: What if she quits the new project too? =)
username_8: Hi All,
My apologies for being away from the project for so long. Maintaining an open-source project like this is a lot of work, and at times things slip while juggling priorities.
My plan is to upgrade React Draft Wysiwyg soon to support the latest versions of React and Draft. I do not plan to add many new features or changes to the lib, as I see limitations in DraftJS; prosemirror is definitely a better framework for text editing, and I will focus on building new features in Nib.
I would appreciate users coming forward to sponsor projects like this and Nib; for Nib, an option to buy a licence is also available. It can at times be burning out for authors to manage paid work while maintaining bigger projects like these.
I hope to release an updated version of React Draft Wysiwyg within this week.
username_1: I browsed the Nib pages and was unable to find the cost of the license it mentions.
It would be good to make that clear before switching.
Another unanswered question is whether Nib works well inside React pages; this is the reason I chose Draft (and perhaps many others did too).
username_8: Here are the details for licensing: http://www.nibedit.com/index.html#/License. Even if you are using the free version of the lib, I would appreciate sponsorship.
Nib works perfectly with React; it is a misconception many have that only DraftJS works well for text editing with React. Many other libs and frameworks work well with React.
username_2: Hi @username_8, all your work is very much appreciated – this library is awesome and has been incredibly helpful for us. Totally understand that this is a side project for you, so no worries 😃
username_8: `react-draft-wysiwyg:1.14.0` is new version of lib.
username_9: nice!
but the problem https://github.com/username_8/react-draft-wysiwyg/issues/853 still exists
username_8: Oh, I will fix this also soon.
Thx
username_10: Thanks @username_8 for taking the time to fix these issues. I've just created a [pull request](https://github.com/username_8/react-draft-wysiwyg/pull/895) renaming a few unsafe methods to avoid some other warnings. Could you take a look at it? Thanks!
username_11: #877 critical pasting error :-s
username_14: It would be useful to mark this project as deprecated on the main page if it is no longer maintained. |
Fuyukai/asyncwebsockets | 896034220 | Title: Should there be a Lock around the TCP sending part?
Question:
username_0: When multiple tasks are sending messages, you can run into `BusyResourceError`, as there is an `await` in the `send()` method.
`trio-websocket` encloses the send block in a stream so perhaps this library should do the same<issue_closed>
Status: Issue closed |
CSIS-iLab/ocean | 380907949 | Title: Component: Post Block
Question:
username_0: Create a post block include to use on archive pages. The include should have a "class" parameter.
- [x] Header
- [x] Featured
- [x] Content Type
- [x] Title
- [x] Authors
- [x] Published
- [x] Image
- [x] Main
- [x] Excerpt<issue_closed>
Status: Issue closed |
btcpayserver/btcpayserver | 696446260 | Title: `dotnet publish` produces defective build
Question:
username_0: The build created via `dotnet publish` fails at runtime.
#### Reproduce
##### 1. Publish and run
```bash
# Add nuget and dotnet SDK to your environment.
# For nix, use nix-shell -p dotnetPackages.Nuget dotnetCorePackages.sdk_3_1
TMPDIR=/tmp/build-btcp
rm -rf $TMPDIR
mkdir -p $TMPDIR/{src,buildHome,runHome}
cd $TMPDIR/src
curl -SL https://github.com/btcpayserver/btcpayserver/archive/v1.0.5.5.tar.gz -SL | tar xz --strip-components=1
HOME=$TMPDIR/buildHome DOTNET_CLI_TELEMETRY_OPTOUT=1 \
dotnet publish --output publish -c Release BTCPayServer/BTCPayServer.csproj
HOME=$TMPDIR/runHome DOTNET_ROOT=$(dirname $(dirname $(type -P dotnet))) \
publish/BTCPayServer
```
##### 2. Trigger bug
```bash
curl -L 127.0.0.1:23000
```
BTCPayServer fails with output:
```
Microsoft.AspNetCore.Server.Kestrel: Connection id "0HM2KGDSR0A9S", Request id "0HM2KGDSR0A9S:00000002": An unhandled exception was thrown by the application.
System.NullReferenceException: Object reference not set to an instance of an object.
at BundlerMinifier.FileHelpers.NormalizePath(String path)
at BundlerMinifier.FileHelpers.TrimTrailingPathSeparatorChar(String path)
at BundlerMinifier.FileHelpers.DemandTrailingPathSeparatorChar(String path)
at BundlerMinifier.TagHelpers.BundleTagHelper.GetSrc(String path)
at BundlerMinifier.TagHelpers.BundleTagHelper.Process(TagHelperContext context, TagHelperOutput output)
at Microsoft.AspNetCore.Razor.TagHelpers.TagHelper.ProcessAsync(TagHelperContext context, TagHelperOutput output)
at Microsoft.AspNetCore.Razor.Runtime.TagHelpers.TagHelperRunner.RunAsync(TagHelperExecutionContext executionContext)
at AspNetCore.Views_Shared_Header.ExecuteAsync() in /tmp/build-btcp/src/BTCPayServer/Views/Shared/Header.cshtml:line 19
at Microsoft.AspNetCore.Mvc.Razor.RazorView.RenderPageCoreAsync(IRazorPage page, ViewContext context)
at Microsoft.AspNetCore.Mvc.Razor.RazorView.RenderPageAsync(IRazorPage page, ViewContext context, Boolean invokeViewStarts)
at Microsoft.AspNetCore.Mvc.Razor.RazorView.RenderAsync(ViewContext context)
at Microsoft.AspNetCore.Mvc.TagHelpers.PartialTagHelper.RenderPartialViewAsync(TextWriter writer, Object model, IView view)
at Microsoft.AspNetCore.Mvc.TagHelpers.PartialTagHelper.ProcessAsync(TagHelperContext context, TagHelperOutput output)
at Microsoft.AspNetCore.Razor.Runtime.TagHelpers.TagHelperRunner.<RunAsync>g__Awaited|0_0(Task task, TagHelperExecutionContext executionContext, Int32 i, Int32 count)
at AspNetCore.Views_Shared__LayoutSimple.<ExecuteAsync>b__10_0()
at Microsoft.AspNetCore.Razor.Runtime.TagHelpers.TagHelperExecutionContext.SetOutputContentAsync()
at AspNetCore.Views_Shared__LayoutSimple.ExecuteAsync() in /tmp/build-btcp/src/BTCPayServer/Views/Shared/_LayoutSimple.cshtml:line 2
at Microsoft.AspNetCore.Mvc.Razor.RazorView.RenderPageCoreAsync(IRazorPage page, ViewContext context)
at Microsoft.AspNetCore.Mvc.Razor.RazorView.RenderPageAsync(IRazorPage page, ViewContext context, Boolean invokeViewStarts)
at Microsoft.AspNetCore.Mvc.Razor.RazorView.RenderLayoutAsync(ViewContext context, ViewBufferTextWriter bodyWriter)
at Microsoft.AspNetCore.Mvc.Razor.RazorView.RenderAsync(ViewContext context)
at Microsoft.AspNetCore.Mvc.ViewFeatures.ViewExecutor.ExecuteAsync(ViewContext viewContext, String contentType, Nullable`1 statusCode)
at Microsoft.AspNetCore.Mvc.ViewFeatures.ViewExecutor.ExecuteAsync(ViewContext viewContext, String contentType, Nullable`1 statusCode)
at Microsoft.AspNetCore.Mvc.ViewFeatures.ViewExecutor.ExecuteAsync(ActionContext actionContext, IView view, ViewDataDictionary viewData, ITempDataDictionary tempData, String contentType, Nullable`1 statusCode)
at Microsoft.AspNetCore.Mvc.ViewFeatures.ViewResultExecutor.ExecuteAsync(ActionContext context, ViewResult result)
at Microsoft.AspNetCore.Mvc.ViewResult.ExecuteResultAsync(ActionContext context)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeNextResultFilterAsync>g__Awaited|29_0[TFilter,TFilterAsync](ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.Rethrow(ResultExecutedContextSealed context)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.ResultNext[TFilter,TFilterAsync](State& next, Scope& scope, Object& state, Boolean& isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.InvokeResultFilters()
--- End of stack trace from previous location where exception was thrown ---
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeNextResourceFilter>g__Awaited|24_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
[Truncated]
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeFilterPipelineAsync>g__Awaited|19_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope)
at Microsoft.AspNetCore.Routing.EndpointMiddleware.<Invoke>g__AwaitRequestTask|6_0(Endpoint endpoint, Task requestTask, ILogger logger)
at Microsoft.AspNetCore.Session.SessionMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.Session.SessionMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.Authorization.AuthorizationMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.Invoke(HttpContext context)
at BTCPayServer.Hosting.BTCPayMiddleware.Invoke(HttpContext httpContext) in /tmp/build-btcp/src/BTCPayServer/Hosting/BTCpayMiddleware.cs:line 83
at Microsoft.AspNetCore.Diagnostics.StatusCodePagesMiddleware.Invoke(HttpContext context)
at BTCPayServer.Hosting.HeadersOverrideMiddleware.Invoke(HttpContext httpContext) in /tmp/build-btcp/src/BTCPayServer/Hosting/HeadersOverrideMiddleware.cs:line 29
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.ProcessRequests[TContext](IHttpApplication`1 application)
```
The same bug appears when the binary from the build phase is run:
```
HOME=$TMPDIR/runHome DOTNET_ROOT=$(dirname $(dirname $(type -P dotnet))) \
BTCPayServer/bin/Release/netcoreapp3.1/BTCPayServer
```
I'm using the `dotnet publish` approach because I want to create BTCPayServer packages for Linux.
Answers:
username_1: @username_3 Any idea what's going on here?
username_2: What format?
username_0: Nixpkgs.
There's already a working release: we're using [`dotnet publish`](https://github.com/NixOS/nixpkgs/blob/54a0a400f2d1ab30764cf752dd107a86fef11ffa/pkgs/applications/blockchains/nbxplorer/default.nix#L38) for nbxplorer and [a workaround](https://github.com/NixOS/nixpkgs/blob/54a0a400f2d1ab30764cf752dd107a86fef11ffa/pkgs/applications/blockchains/btcpayserver/default.nix#L29) for btcpayserver.
When `dotnet publish` is fixed, I'll propose an update to the [AUR package](https://aur.archlinux.org/packages/btcpayserver/)
username_2: Just curious (sorry for the off-topic): there are AUR (and soon Nixpkgs) packages but no snap/deb/rpm/flatpak?
username_1: @username_0 were you able to resolve this or figure out the culprit?
username_0: No, nixpkgs is still using [this workaround](https://github.com/NixOS/nixpkgs/blob/4ecaf7ef3a65cffac09cf55235fb99b4c19befe0/pkgs/applications/blockchains/btcpayserver/default.nix#L29).
username_3: @username_0 I think the problem is with the data directory.
We use publish ourselves in https://github.com/btcpayserver/btcpayserver/blob/master/amd64.Dockerfile
We use `dotnet blah.dll` to execute btcpay
https://github.com/btcpayserver/btcpayserver/blob/master/docker-entrypoint.sh
username_3: Oh, got it: you want to create a self-contained package so that dotnet is embedded?
username_3: I can't reproduce. I tried it on Windows and it just works :/
How do you run the app? What is the current directory?
username_0: It indeed works when using the build directory as the working directory.
As a daemon, btcpayserver shouldn't depend on the working directory for running correctly. Could you fix this?
username_3: @username_0 I actually also tried with `-c Release` from inside the binary directory, and it worked as well.
Did you check what raspiblitz does, for example? Looking at our Dockerfile, we run BTCPay from the binary directory, so that should work.
username_3: Look, they seem to do the same: https://github.com/rootzoll/raspiblitz/blob/1da0b3b59ac6b3eededd84fad935a83ed0f4b425/home.admin/config.scripts/bonus.btcpayserver.sh
username_0: Yes, as I said, the correct runtime operation of btcpayserver should not depend on its working directory.
Please fix this in `btcpayserver`. This bug doesn't exist in `nbxplorer`.
username_3: NBXplorer does not have this issue because it does not have assets to look for in directories.
Will try to see if I can do something about it.
username_0: Great, thanks! This will help make deployments simpler and more robust. It is really unusual for a service binary to assume a particular working directory for accessing internal data.
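The general fix requested, resolving assets relative to the binary rather than the current directory, can be illustrated in Python (a sketch of the idea only; `asset_path` is a hypothetical helper and not BTCPayServer code):

```python
import sys
from pathlib import Path

def asset_path(relative):
    """Locate bundled assets next to the program itself, not the cwd.

    This keeps a daemon working no matter which directory it is started from.
    """
    base = Path(sys.argv[0]).resolve().parent
    return base / relative
```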
Status: Issue closed
username_0: Thank you! |
wellington/wellington | 320589909 | Title: Problem compiling using bundled libsass
Question:
username_0: System: Fedora 28
```
$ go get -u github.com/wellington/wellington/wt
# github.com/wellington/wellington/vendor/github.com/wellington/go-libsass/libs
In file included from unity.cpp:22:
../libsass-build/json.cpp:49: warning: "out_of_memory" redefined
#define out_of_memory() do { \
In file included from ../libsass-build/ast.hpp:52,
from ../libsass-build/ast.cpp:2,
from unity.cpp:4:
../libsass-build/util.hpp:15: note: this is the location of the previous definition
#define out_of_memory() do { \
``` |
youzan/vant | 320108246 | Title: AddressEdit: default delivery area
Question:
username_0: https://www.youzanyun.com/zanui/vant#/zh-CN/address-edit
In the docs:

province, city, county: the values passed are all codes, right?
Why does my setting have no effect?
```html
<div class="p-address-edit">
<van-nav-bar title="编辑地址" left-arrow @click-left="$router.go(-1)" />
<van-address-edit :addressInfo="address" @change-area="test" :area-list="areaList" show-delete
@save="onSave" @delete="onDelete"/>
</div>
```
```js
<script>
import area from '@/common/app.area.js';
export default {
data () {
return {
areaList: area,
searchResult: [],
address: {
province: '110000',
city: '110102',
county: '110100'
}
};
},
methods: {
onSave () {
// Toast('save');
console.log(this.address);
},
onDelete () {
// Toast('delete');
},
test (val) {
console.log(this.address);
console.log(val);
}
}
};
</script>
```
Answers:
username_1: province, city, and county are all names; use area_code to specify the province/city/district.
Status: Issue closed
|
exercism/cli | 401091750 | Title: download TODO comment: handle collision when writing solution files
Question:
username_0: In effort to resolve TODO comments, I'm in the process of promoting them to issues.
https://github.com/exercism/cli/blob/a86f82974270d2da2090a02437560897a216abc3/cmd/download.go#L198-L199
It seems like https://github.com/exercism/cli/issues/457 is related. Is this still pending further info?
Answers:
username_1: I've come to the conclusion that I don't want to make the CLI interactive if we can avoid it (at least not until we've worked out all the existing kinks in a non-interactive way).
I think the solution here would be to display an error message if there's a collision, including a pointer at how you could resolve it.
username_2: With the premise of an interactive CLI out of scope for the time being, and the `--force` flag having been added in #979, I believe this should count as resolved? |
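The non-interactive behaviour discussed above, refusing the write and pointing at a resolution, can be sketched as follows (Python for illustration; the real CLI is written in Go, and this only loosely mirrors the `--force` flag from #979):

```python
from pathlib import Path

def write_solution_file(dest, content, force=False):
    """Write a downloaded solution file, refusing to clobber an existing one.

    On collision we raise with a pointer at how to resolve it instead of
    prompting interactively.
    """
    dest = Path(dest)
    if dest.exists() and not force:
        raise FileExistsError(
            f"{dest} already exists; re-run with --force to overwrite it")
    dest.parent.mkdir(parents=True, exist_ok=True)
    dest.write_text(content)
```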
ddavison/sublime-tabs | 70114209 | Title: Atom.Object.defineProperty.get is deprecated.
Question:
username_0: atom.workspaceView is no longer available.
In most cases you will not need the view. See the Workspace docs for
alternatives: https://atom.io/docs/api/latest/Workspace.
If you do need the view, please use `atom.views.getView(atom.workspace)`,
which returns an HTMLElement.
```
Atom.Object.defineProperty.get (/opt/atom/resources/app.asar/src/atom.js:57:13)
Object.activate (/home/adham/.atom/packages/sublime-tabs/lib/sublime-tabs.coffee:39:9)
Package.activateNow (/opt/atom/resources/app.asar/src/package.js:222:19)
<unknown> (/opt/atom/resources/app.asar/src/package.js:203:30)
```
Answers:
username_1: thanks for opening this @username_0 , however this is known, and is being tracked in #53. (closing as duplicate)
Status: Issue closed
|
tgruenewald/smallworld | 223593275 | Title: Player walk around planets
Question:
username_0: The player should be able to walk around the planets.
Answers:
username_0: I've been working on this. It's kind of clunky, due to the actual physics that we're using. We're going to have to mess with the values in each of the "GravityObject"s of the planet prefabs.
Status: Issue closed
|
Hypfer/Valetudo | 675885713 | Title: "Unknown cloud message received" for Viomi v8.
Question:
username_0: Hi.
Checking the logs of my Viomi v8, I see a pair of methods that are not recognized (by DummyCloud, I suppose):
"_otc.ncinfo" and "_otc.ncstat"
I guess they could provide info about usage and consumables status....
```
Aug 10 07:24:37 aspirador.lan valetudo[2311]: [2020-08-10T05:24:37.970Z] [INFO] Unknown cloud message received: {"id":1427363063,"method":"_otc.ncinfo","params":{"miio_ver":"0.0.5","miio_times":[35073,-1,24609,10355],"ot_stat":[3,2,3],"otu_stat":[3,0,80,114,4,3,635,511,5],"ott_stat":[0,0,-1,-1,-1,-1,0,0,0]}}
Aug 10 07:24:37 aspirador.lan valetudo[2311]: [2020-08-10T05:24:37.974Z] [INFO] Unknown cloud message received: {"method":"_otc.ncstat","id":321282685,"params":{"timestamp":1597011015,"proto":0,"otip":"172.16.31.10","otport":8053,"sendcnt":95,"ackcnt":75,"ackcnt_timeout":0,"costtime":316,"rpc":38,"rpcack":44,"rpc_local":0,"rpcack_local":0,"dns":2,"keepalive_interval":20}}
```
How easy would it be to add these methods so that they are supported? I'd try, but I've read the code and need some guidance! :D :D
Best Regards.
username_0.
Answers:
username_1: I'd strongly advise against adding that feature right now, since I'm currently reworking the internals.
It should get much easier (and especially easier to understand) once that is done. No ETA though.
username_0: Hi.
Ok. I agree with your suggestion. ;)
Best Regards.
username_0.
username_2: I think of these more as network and protocol stats. Maybe I'm wrong, but I don't think those messages provide any value right now.
There are separate APIs for collecting consumable information, so for these two cloud messages we should probably just implement a no-op handler to avoid them spamming the logs and confusing users.
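A no-op handler for the two stats methods could look roughly like this (Python for illustration; Valetudo itself is JavaScript, and `dispatch_cloud_message` is a hypothetical name):

```python
def dispatch_cloud_message(msg, handlers, log=print):
    """Route a cloud message, silently swallowing known stats methods."""
    method = msg.get("method", "")
    if method in ("_otc.ncinfo", "_otc.ncstat"):
        return None  # network/protocol stats: known but uninteresting
    handler = handlers.get(method)
    if handler is None:
        log("Unknown cloud message received: %s" % method)
        return None
    return handler(msg)
```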
username_1: The new system has been merged to master. You can go ahead now :)
Status: Issue closed
username_2: I’m closing due to lack of activity. Feel free to reopen a PR if you end up implementing this (though I see little or no value at this point).
username_3: Still appearing in the logs:
```
[2021-04-11T19:37:55.069Z] [INFO] Unknown cloud message received: {"id":685065857,"method":"_otc.ncinfo","params":{"miio_ver":"0.0.5","miio_times":[70212,-1,0,70206],"ot_stat":[1,0,1],"otu_stat":[1,0,42,142,12,14,281,197,84],"ott_stat":[0,0,-1,-1,-1,-1,0,0,0]}}
[2021-04-11T19:37:55.090Z] [INFO] Unknown cloud message received: {"method":"_otc.ncstat","id":2074490832,"params":{"timestamp":1618168075,"proto":0,"otip":"127.0.0.1","otport":8053,"sendcnt":5,"ackcnt":5,"ackcnt_timeout":0,"costtime":47,"rpc":0,"rpcack":0,"rpc_local":0,"rpcack_local":0,"dns":2,"keepalive_interval":20}}
``` |
daniela-perez/learn | 183316862 | Title: Add a function or while/for loops to implement a menu
Question:
username_0: Add a menu to the program so that it can either look up contacts or exit.
1. When the program starts, a menu should appear asking what the user wants to do, for example:
```
Personal Agenda v0.1
-------------------------
[1] Search contact
[2] Add contact
[3] Quit
Option: >
...
Contact: >
...
Search option...
[1] Contact1
[2] Contact2
[3] Contact3
....
Name: <NAME>
Email: <EMAIL>
Phone: 5555555
[1] Back
[2] Delete contact
[3] Edit contact
...
``` |
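The requested menu can be built around a while loop. A runnable sketch (Python; the injectable `read`/`write` parameters are my addition, used only to make the loop testable):

```python
def menu(read=input, write=print, contacts=None):
    """Loop over the main menu until the user chooses to quit."""
    if contacts is None:
        contacts = {}
    while True:
        write("Personal Agenda v0.1")
        write("[1] Search contact  [2] Add contact  [3] Quit")
        choice = read("Option: > ").strip()
        if choice == "1":
            name = read("Contact: > ").strip()
            write(contacts.get(name, "Not found"))
        elif choice == "2":
            name = read("Name: > ").strip()
            contacts[name] = {
                "email": read("Email: > ").strip(),
                "phone": read("Phone: > ").strip(),
            }
        elif choice == "3":
            break
    return contacts
```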
rust-lang/rust | 57707227 | Title: Can’t declare lifetime of closure that returns a reference
Question:
username_0: When you declare closure argument types, there is no syntax to declare a lifetime parameter. And I guess lifetime elision does not apply to closures. Therefore, there seems to be no way to declare the type of a closure that returns a reference.
It compiles if you avoid declaring the type of the closure and depend on type inference. But then you would not be able to assign the closure to a local variable.
```rust
fn print_first(list: Vec<String>) {
    let x: &str = list
        .first()
        .map(|s: &String| -> &str &s[]) // ERROR
        //.map(|s: &String| &s[]) // ERROR
        //.map(|s| -> &str &s[]) // ERROR
        //.map(|s| &s[]) // OK
        .unwrap_or("");
    println!("First element is {}", x);
}
```
It gives a compiler error and a suggestion that does not make sense.
```
src/rusttest.rs:4:29: 4:32 error: cannot infer an appropriate lifetime for lifetime parameter 'a in function call due to conflicting requirements
src/rusttest.rs:4 .map(|s: &String| -> &str &s[])
                                          ^~~
src/rusttest.rs:1:1: 10:2 help: consider using an explicit lifetime parameter as shown: fn print_first<'a>(list: Vec<String>)
src/rusttest.rs:1 fn print_first(list: Vec<String>) {
src/rusttest.rs:2 let x: &str = list
src/rusttest.rs:3 .first()
src/rusttest.rs:4 .map(|s: &String| -> &str &s[]) // ERROR
src/rusttest.rs:5 //.map(|s: &String| &s[]) // ERROR
src/rusttest.rs:6 //.map(|s| -> &str &s[]) // ERROR
```
This bug is filed after I asked this [question on stack overflow](http://stackoverflow.com/questions/28512314/how-do-i-get-lifetime-of-reference-to-owned-object-cannot-infer-an-appropriate). It may be related to [Region inference fails for closure parameter #17004](https://github.com/rust-lang/rust/issues/17004).
Answers:
username_1: Triage: there is still no way to declare lifetime parameters for closures like this today.
username_2: Let's throw some ideas out there:
```
|s: &'a String|<'a> -> &'a str &s[]
<'a>|s: &'a String| -> &'a str &s[]
let closure<'a> = |s: &'a String| -> &'a str &s[];
```
The last one is a little limited, but may actually be feasible unlike the others.
username_3: I just ran into this issue today, trying to return a reference out of a closure with the same lifetime as one of its arguments. Closures with lifetime as well as type arguments would definitely be nice to have; if we had them, then I'm pretty sure closures would be just as powerful as functions. There's an RFC to implement them both: https://github.com/rust-lang/rfcs/pull/1650
username_4: I played with a similar example today and found a funny workaround.
Just skip type declaration and cast the type in the body of the closure:
```rust
|s| -> &str { &(s as &String)[..] }
```
or even with `#![feature(type_ascription)]`
```rust
|s| -> &str { &(s: &String)[..] }
```
In this way, type inference can do its job with the lifetime, and the type of the argument is constrained by the way it is used in the body.
username_5: There is another problem related to this one:
```
error[E0281]: type mismatch: `[closure@src/x.rs:329:50: 335:22 message_type:_]` implements the trait `std::ops::Fn<(_,)>`, but the trait `for<'r> std::ops::Fn<(&'r Message,)>` is required
--> src/x.rs:341:44
|
329 | let filter_by_message_type = |x| {
| -
330 | | if (x as &Message).get_type() == *message_type {
331 | | Some(x)
332 | | } else {
333 | | None
334 | | }
335 | | };
| |- implements `std::ops::Fn<(_,)>`
```
which essentially prevents using closures for `for<'r> Fn(..) -> ...`-bounded generics
username_6: An update: all of the examples given in the bug description do compile today, after being updated for Rust 1.0 syntax:
```rust
fn print_first(list: Vec<String>) {
let x: &str = list
.first()
// .map(|s: &String| -> &str { &s[..] }) // OK
// .map(|s: &String| { &s[..] }) // OK
// .map(|s| -> &str { &s[..] }) // OK
.map(|s| { &s[..] }) // OK
.unwrap_or("");
println!("First element is {}", x);
}
fn main() {
print_first(vec![format!("hello"), format!("world")]);
}
```
username_6: (However, what I do not yet know is whether the types we are actually inferring in all of the above cases are what the user expects. See related discussion on #56537...)
username_7: I've been having a somewhat similar issue being unable to handle lifetimes with closures. Were explicit lifetimes ever added to closures?
```
$ rustup --version
rustup 1.24.1 (a01bd6b0d 2021-04-27)
info: This is the version for the rustup toolchain manager, not the rustc compiler.
info: The currently active `rustc` version is `rustc 1.52.1 (9bc8c42bb 2021-05-09)`
$ cargo --version
cargo 1.52.0 (69767412a 2021-04-21)
$ rustc --version
rustc 1.52.1 (9bc8c42bb 2021-05-09)
```
```rust
#[cfg(test)]
mod tests {
#[test]
fn closure_lifetimes() {
let input: String = String::from("hello world");
let closure = |s: &str| -> &str {&s[..]};
let output: &str = closure(&input);
assert_eq!(input, output);
}
}
```
```
error: lifetime may not live long enough
--> src/payload.rs:173:42
|
| let closure = |s: &str| -> &str {&s[..]};
| - - ^^^^^^ returning this value requires that `'1` must outlive `'2`
| | |
| | let's call the lifetime of this reference `'2`
| let's call the lifetime of this reference `'1`
```
username_8: Not as useful as the above repro by @username_7, and maybe this is a bit too domain-specific, but I hit a lifetime issue with a closure using LALRPOP as shown here:
https://gitter.im/lalrpop/Lobby?at=61227b26aa48d1340c27ca4a
username_9: Isn't the compiler capable of detecting that the argument outlives the closure?
Example:
```Rust
fn main() {
let x = SimpleWrapper { value: 10 };
let foo = |wrapper: &SimpleWrapper| wrapper;
let y = foo(&x);
}
struct SimpleWrapper {
value: i32,
}
``` |
gnosis/dex-services | 516987823 | Title: Implement solution validation with objective value.
Question:
username_0: The idea would be to add a method like this:
```rust
pub fn validate(&self, orders: &[Order]) -> Result<U256, SolutionValidationError>;
```
Answers:
username_0: We are now using the smart contract to validate and compute objective value so this is no longer needed.
Status: Issue closed
|
mjordan/islandora_workbench | 782315845 | Title: Check whether input CSV is encoded either as ASCII or UTF-8
Question:
username_0: Workbench, and Drupal, have no problems with CSV input as long as the CSV file is either ASCII or UTF-8. Not sure about any other encoding but unless there's a proven use case to go beyond UTF-8 it should be sufficient. Therefore `--check` should check whether the input CSV is one of those encodings.
Answers:
username_0: Resolved with e2378e3c15bdc23db8ad3521237ab2b3666960bd.
Status: Issue closed
|
MiguelPerezMartinez/apollofy-music-project | 1008318970 | Title: Uploading songs
Question:
username_0: Track Schema
- owner: object id
- totalPlays: number
- totalLikes: number
- author: String
- album: String
- title: String
- urlImage: String
- urlSong: String
- genre: String
- "duration: data time" -> In case it can't be obtained from Cloudinary.
Answers:
username_0:  |
SeisSol/SeisSol | 438741189 | Title: error in submodules
Question:
username_0: submodules/glm/glm/detail/../detail/func_common.inl(631): error: the global scope has no "isnan"
return ::isnan(x) != 0;
^
submodules/glm/glm/detail/../detail/func_common.inl(665): error: the global scope has no "isinf"
return ::isinf(x);
^
I manually fixed it to be std::isnan and std::isinf and now it works.
Answers:
username_1: Could you please post the compile settings you used? This issue is not related to SeisSol but GLM and we might need to post a bug report to the GLM developers.
username_0: I am using these modules:
1) intel-lm/1.0 2) intel/2017 3) mpi.intel/2017 4) mkl/11.3
username_2: I had the exact same issue when installing locally at my workstation (Ubuntu 16.04 LTS, Intel Compiler) while following the installation procedure exactly. I used the same fix as @username_0.
I think the issue stems from a missing preprocessor definition, I will look into this in the near future.
username_3: I remember I had a similar issue while I was installing some sub-modules. In my case, I used intel 19.0 compiler
username_4: Hi Duo/Lukas,
Maybe you can create a pull request with your fix if you already have a fix, right?
Thomas.
username_0: I don't have this issue with intel-mpi/2018 in Durham/hamilton cluster. And this is with glm. Should we ask them to fix it?
username_2: This is an issue with GLM probably, the fix is editing their source code.
There might be some issue with our compiler settings but I've currently got no idea how to fix this.
username_3: I strongly believe that either the glm developers or the Intel ones messed up something with the C macros in their source code. I left an issue in glm's GitHub repo but I haven't gotten any answer.
username_2: @username_5 started porting all code that uses GLM to eigen some time ago. This would be the best way out of this.
username_5: Fixed by #216
Status: Issue closed
|
epicf/ef_python | 311379709 | Title: Visualization for Jupyter
Question:
username_0: Maybe the Jupyter visualization file should be included in the main program? At the moment I keep refining it, and different folders end up with different versions of it. Or should we make a separate folder?
Answers:
username_1: "Include it in the main program" - what do you mean by that?
Yes, it is time to make the module common to all the examples.
We should probably create a jupyter folder in the root directory, put the most up-to-date version of the module there, and then update the paths to it in the examples.
Let's take your version as the base.
If you don't mind fiddling with git a little, this can be done in a separate branch:
Switch to the master branch (if there are any changes in the current branch, they can be saved with git stash).
Then create a jupyter branch off master and switch to it.
Then create a jupyter folder in the root directory and put the latest version of your module into it.
Then update the paths to it in the examples.
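The branch workflow described in the steps above can be sketched as follows (run in a throwaway directory; all names and paths are illustrative):

```shell
# Throwaway demo of the steps above (names and paths are illustrative):
cd "$(mktemp -d)"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "stand-in for the existing master history"
git checkout -q -b jupyter            # branch for the shared module
mkdir jupyter                         # common folder in the repository root
echo "# shared visualization module" > jupyter/visualize.py
git add jupyter
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "Add common jupyter module folder"
# finally, in the real repository: git push -u origin jupyter
```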
Then push it to GitHub. |
nodemcu/nodemcu-firmware | 622084981 | Title: ledc.channel:reset() only working once
Question:
username_0: ### Expected behavior
channel:reset() should reset the timer and trigger a new pwm when the pulse count reaches 200
### Actual behavior
By doing measurements with the oscilloscope, channel:reset() only triggers the reset once (at the 200th pulse) but never again, although the pulse count is working properly.
### Test code
```Lua
pinOut=15
pinPulseInput = 12
thr0 = 200
thr1 = -4
llim = -6
hlim = 220
pwm=ledc.newChannel({
gpio=pinOut,
bits=ledc.TIMER_13_BIT,
mode=ledc.HIGH_SPEED,
timer=ledc.TIMER_0,
channel=ledc.CHANNEL_0,
frequency=100,
duty=4096
})
function onPulseCnt(unit, isThr0, isThr1, isLLim, isHLim, isZero)
print("Pulse count:" .. pcnt:getCnt())
if isThr0 then
pwm:reset()
print("thr0 "..thr0.." triggered event")
pcnt:clear()
end
end
pcnt = pulsecnt.create(0, onPulseCnt)
pcnt:chan0Config(
pinPulseInput,
pulsecnt.PCNT_PIN_NOT_USED,
pulsecnt.PCNT_COUNT_DIS,
pulsecnt.PCNT_COUNT_INC,
pulsecnt.PCNT_MODE_KEEP,
pulsecnt.PCNT_MODE_KEEP,
llim,
hlim
)
--filter spikes
pcnt:setFilter(1023)
-- Clear counting
pcnt:clear()
-- Set counter threshold for low and high
pcnt:setThres(thr0, thr1)
```
### NodeMCU startup banner
branch: dev-esp32
commit: <PASSWORD>3bb795443
### Hardware
ESP32-WROOM-32
Answers:
username_1: Love that you're using the pulse counter library I wrote. It looks like you're trying to generate PWM signals that turn off at exactly a pulse count. I'm guessing you won't like how it works as the timing of the callbacks and processing through Lua to then turn off the PWM is going to be pretty slow for any accuracy. So maybe that's what you mean in your post? Instead, you should be using the RMT hardware to generate your exact output pulses instead of the pulsecnt approach. Depends on whether you are ok with a mushy solution or not.
If instead you just mean the pulse counter stops counting, I never saw that in my use of it. Are you ever turning your PWM signal back on such that you would hit threshold 0 again? From looking at the code you never turn PWM back on, so you would never get the isThr0 again.
username_1: Hmm. I might have misunderstood your question. Perhaps you are just referring to the ledc library having an issue on reset(), not anything to do with the pulsecnt library. In that case, I'm not sure :reset() does anything other than just start the PWM again, which would mean nothing in your code.
username_0: Thank you. The issue I have is only with ledc. PulseCount is working fantastically; you did a great job on that library. I am trying to make an AC light dimmer: there is a pulse count every time I get a zero-crossing point of the 230 V 50 Hz AC. I need to keep ledc in sync with that slightly drifting 50 Hz. All I want is to restart the 100 Hz ledc PWM every 200 zero crossings (it will probably be 4 in the end) to keep it in sync with the 230 V AC. It seems to me ledc.channel:reset() is supposed to start a fresh PWM by resetting the timer.
username_1: Have you tried pause() and resume()?
username_0: yes I did, it does not fix my problem
username_2: -> #3023, in the making, bingo?
username_0: Great news, when will it be released?
Regardless, don’t you think LEDC.reset() should work?
username_2: I leave that for @username_1 to address.
username_1: My gut says that this is an ESP-IDF thing, beyond the scope of NodeMCU. If you take a look at the code for the ledc module, it is literally just a pass-through call to Espressif's code. I'd look deeper at these docs https://docs.espressif.com/projects/esp-idf/en/latest/esp32/api-reference/peripherals/ledc.html
Maybe try creating a new channel and deallocate the old one as a heavier way of recreating a timer? If that's too slow of an operation, maybe have a 2nd timer waiting in the wings already allocated but not started?
Here's the code from the module. It's not doing anything special.
```C
static int lledc_timer_rst( lua_State *L ) {
ledc_channel * channel = (ledc_channel*)luaL_checkudata(L, 1, "ledc.channel");
esp_err_t err = ledc_timer_rst(channel->mode, channel->timer);
if(err != ESP_OK)
return luaL_error (L, "reset failed, code %d", err);
return 1;
}
static int lledc_timer_pause( lua_State *L ) {
ledc_channel * channel = (ledc_channel*)luaL_checkudata(L, 1, "ledc.channel");
esp_err_t err = ledc_timer_pause(channel->mode, channel->timer);
if(err != ESP_OK)
return luaL_error (L, "pause failed, code %d", err);
return 1;
}
static int lledc_timer_resume( lua_State *L ) {
ledc_channel * channel = (ledc_channel*)luaL_checkudata(L, 1, "ledc.channel");
esp_err_t err = ledc_timer_resume(channel->mode, channel->timer);
if(err != ESP_OK)
return luaL_error (L, "resume failed, code %d", err);
return 1;
}
static int lledc_stop( lua_State *L ) {
ledc_channel * channel = (ledc_channel*)luaL_checkudata(L, 1, "ledc.channel");
int idleLevel = luaL_checkint (L, 2);
luaL_argcheck(L, idleLevel >= 0 && idleLevel <= 1, 1, "Invalid idle level");
esp_err_t err = ledc_stop(channel->mode, channel->channel, idleLevel);
if(err != ESP_OK)
return luaL_error (L, "stop failed, code %d", err);
return 1;
}
```
username_0: Thank you @username_1, it seems ledc is definitely only passing infos straight down to idf.
I tried two things and both with no luck:
1. Stop the first pwm and immediately start another one on a different timer and channel.
2. Same thing but each pwm on a different pin
Every new PWM seems to start from where the previous one left off - no reset.
username_2: @username_0 did you try bumping ESP-IDF to https://github.com/espressif/esp-idf/releases/tag/v3.3.2 (we're currently on 3.3.1)?
username_1: For what it's worth, you could try to take my RMT TX library, which I haven't pushed to this repo yet because the docs aren't done yet, and use that to generate your PWM with very fine-grained control. You could just set up your duration0, level0, and duration1, level1 to your 100Hz values and let it loop on its own. Then you can call stop and start in a cleaner way. Sure, you're using the amazing RMT hardware to do basic PWM, but if you weren't using it anyway, might as well.
https://github.com/username_1/robot-actuator-esp32-v8/blob/master/firmware/src/components/modules/rmttx.c
Here's sample Lua code showing how to send info to a ws2812 LED using the module: https://github.com/username_1/robot-actuator-esp32-v8/blob/master/lua/rmttx_ws2812_v2.lua
Here's sample Lua code showing how to send stepper motor signals with accelerate/decelerate using the module:
https://github.com/username_1/robot-actuator-esp32-v8/blob/master/lua/rmttx_stepper_v3.lua
username_0: @username_2 , I updated to v3.3.2 and the issue remains the same (ledc:reset() not triggering).
@jpeletier , I compiled the dimmer.c module and it works like a charm!
Thank you all for the inputs, the AC dimmer is now working.
- My initial post was regarding ledc:reset() not triggering. Like @username_1 mentioned, it definitely seems to be coming from ESP-IDF. I leave it up to you guys whether you wish to keep this issue open just to track it down.
username_2: @username_0 thanks for testing. Can I ask you to help fellow developers by raising this at https://github.com/espressif/esp-idf/issues?
Status: Issue closed
|
ARM-software/ComputeLibrary | 791719488 | Title: incorrect output from NEDeconvolutionLayer
Question:
username_0: HI,
I am seeing incorrect results for NEDeconvolutionLayer with asymmetric padding size. I have attached the setup with expected output below.
ACL Version used: v20.02.1
[transposed_standalone.zip](https://github.com/ARM-software/ComputeLibrary/files/5853826/transposed_standalone.zip)
Answers:
username_1: Hi @username_0
I think there might be a problem with your functions:
```
void fillBufferToTensor(float* input_data, arm_compute::ITensor& inputTensor) {
void fillTensorToBuffer(float* output_data, arm_compute::ITensor& outputTensor) {
```
Could you please take a look at this https://github.com/ARM-software/ComputeLibrary/issues/770#issuecomment-551030169 and make changes to those functions according to the information explained in that post?
Essentially when you copy your tensors you should use the strides:
`*reinterpret_cast<float *>(input_it.ptr()) = src_data[id.z() * (width * height) + id.y() * width + id.x()];`
username_2: Hi @username_1,
I have changed fillTensorToBuffer and fillBufferToTensor, but I'm still seeing a numerical mismatch. Also, in my script I'm using a count variable to traverse the unpadded area of the output ARM tensor and copy it into the output buffer. We already discussed this count-variable-based approach for copying ARM tensor data to a raw buffer in https://github.com/ARM-software/ComputeLibrary/issues/711.
Moreover, there is no padding on the output ARM tensor; I verified this by calling the has_padding() method on it.
It seems like there is an issue with cropping the input's padded dimensions.
Also, I tried passing the padding values as zeros to NEDeconvolutionLayer and used a NESlice operation to crop the output of NEDeconvolutionLayer by the actual padding dimensions. Then its output matches the reference output.
Could you take a look into this? Let me know if you face any issues while reproducing this problem.
Thanks,
Hari
username_1: Hi @username_2
I'd advice you always call the validate() method before running configure() and certainly before run().
In this particular case if you call validate() you will notice that this configuration is not supported, that's the reason why run() is generating mismatches.
If validate() tells the configuration is not supported, run() should not be executed as the results will be undefined.
In order to see this you should build with debug=1 and add these lines before the call to configure
```
271 const auto s = NEDeconvolutionLayer::validate(src.info(), m_deConvLayerWgtTensor.info(), m_deConvLayerBiasTensor.info(), out_deConv0.info(),psinfo);
272 std::cout << " validate " << (bool)s << std::endl;
```
Then if you run the test you will get
```
root@odroid:~/pablo# LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH ./transposedConv_standalone_exe
validate 0
terminate called after throwing an instance of 'std::runtime_error'
what(): in validate src/runtime/NEON/functions/NEGEMMConvolutionLayer.cpp:510: biases->dimension(0) != weights->dimension(idx_kernels)
Aborted
```
Hope this helps.
username_3: Hi @username_1 ,
Thanks for your suggestion.
As a workaround, I performed the NEDeconvolution layer with the default padding dimensions. On top of the NEDeconvolution output, I'm using a NESlice operation with the asymmetric input dimensions. This gives correct results.
Also, do you have any plans to support NEDeconvolution with asymmetric padding dimensions?
Thanks,
Hari
username_1: HI @username_3
Good to hear you got it working.
We have no plans to add support for asymmetric padding in NEDeconvolution. We support Android nnapi use cases.
Would you please let us know if this particular configuration is a standalone test you have written or if it comes from a well-known model?
username_3: Hi @username_1 ,
We had a custom classification network provided by the customer. Basically, it uses a transposedConv2D layer with asymmetric padding dimensions.
Status: Issue closed
|
MadsKirkFoged/SharpFluids | 700897527 | Title: Added 32bit and 64bit DLLs
Question:
username_0: CoolProp has both 32-bit and 64-bit DLLs that we can use.
Right now we are only using the 32-bit one.
I have tried, without luck, to add the 64-bit DLL.
The two tasks to do (as I see it):
1) When the project builds, it should copy only one of the DLLs to the output folder, depending on the chosen architecture
2) When creating the NuGet package, it should also take the architecture into account
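For task 1, one possible approach is a conditional `ItemGroup` in the project file. This is only a sketch with illustrative paths, and the property may be `$(PlatformTarget)` rather than `$(Platform)` depending on the project style:

```xml
<!-- Hypothetical csproj fragment: copy only the matching CoolProp DLL. -->
<ItemGroup Condition="'$(Platform)' == 'x86'">
  <None Include="libs\x86\CoolProp.dll">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </None>
</ItemGroup>
<ItemGroup Condition="'$(Platform)' == 'x64'">
  <None Include="libs\x64\CoolProp.dll">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </None>
</ItemGroup>
```

Task 2 would need the equivalent split in the NuGet packaging step (e.g. per-architecture runtime folders).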
Answers:
username_0: I have added support for 64-bit - let me know if anyone has problems with it
Status: Issue closed
username_0: I fixed it |
stbui/angular-material-app | 269143302 | Title: Need guide to make it Right to left support
Question:
username_0: Hello,
Is there any built-in support for right-to-left? If not, there is [an online tool](http://rtlcss.com/playground/) that can convert left-to-right CSS files to a right-to-left version. Which files should I modify?
Answers:
username_0: I found the solution. In the `index.html` file, add `dir="rtl"` to the `<body>` tag like this:
```
<body dir="rtl">
...
```
Status: Issue closed
|
angular/material | 225014978 | Title: Datepicker month view isn't visible on IE
Question:
username_0: **Actual Behavior**:
- `What is the issue? *`
If I use Internet Explorer 11 and open a datepicker, everything is perfect until I click the arrow to access the month-selection view. Actually, a customer sent me a picture of the issue with specs because I couldn't reproduce the error myself. The behavior is only observed for one datepicker; all other datepickers are OK. This datepicker is located in a container with the following styles:
position: absolute and z-index: 2.
- `What is the expected behavior?`
The datepicker should display the grid of months.
**CodePen** (or steps to reproduce the issue): *
- `CodePen Demo which shows your issue:`
Isn't possible because, I cannot reproduce the error myself
- `Details:`
**AngularJS Versions**: *
- `AngularJS Version:` 1.5.8
- `AngularJS Material Version:`1.1.1
**Additional Information**:
- `Browser Type: *` Internet Explorer
- `Browser Version: *` 11
- `OS: *` Win 7 Professional
- `Stack Traces:`
----
Thank you in advance
Status: Issue closed
Answers:
username_1: Thank you for taking the time to submit this issue. However, we haven't been able to reproduce it.
Please feel free to reopen the issue when you are able to provide a reproduction via [CodePen](https://codepen.io/team/AngularMaterial/pen/bEGJdd). |
vincentzhang96/dv-thanatos | 323870653 | Title: Text regression with newlines
Question:
username_0: Instances where the game expects an actual newline in the XML instead of an escaped \n result in a strikethrough "\n" being displayed, such as in image-banner interactions (the banners with the monster face next to the text)

Status: Issue closed
Answers:
username_0: Fixed with 0f0826c9284091c24af8ae4d833d720866d9cf8e and updates to ModKit |
rarebreed/quartermaster | 257460921 | Title: Need to make some generic components in cycle js
Question:
username_0: So, I must be doing something wrong with rxjs. When I converted basically the same logic of my existing code (using rxjs) to use xstream instead, it works.
I'd prefer not to have to learn two different reactive libraries though.
Answers:
username_0: I think I will go ahead and use xstream instead of rxjs. I managed to get the DOM component to show up on the webpage for a text input box, but it's not receiving any events. When I use xstream however, it works.
The only real difference is that in xstream I am using the combine operator to merge two streams, whereas in the rxjs version, I am using mergeMap. I don't think this is actually the problem though, because the component actually does render to the DOM. The problem is that when I enter text into the input form, the change isn't propagated.
There is one advantage to using xstream: I can use the Cycle.js Chrome dev tool to help with debugging.
username_0: Ok, I was able to get the rxjs version working. I made a couple mistakes in my code. So, the generic label component is now working |
swcarpentry/git-novice | 440778605 | Title: Broken link in lesson 07-github
Question:
username_0: The Remotes module (07-github) of this lesson has a broken link at [Line 54](https://github.com/swcarpentry/git-novice/blame/gh-pages/_episodes/07-github.md#L54), which tries to link back to the Tracking Changes module, but ends up at a nonexistent page (right now going to http://swcarpentry.github.io/git-novice/07-github/04-changes.html). This link should be fixed.
Answers:
username_0: Resolved via #640
Status: Issue closed
|
theFork/uMIDI | 140521757 | Title: Feature wishlist for the sequencer module
Question:
username_0: - [ ] Ability to load and store programs
- [ ] 120 programs containing
* Speed
* Waveform ("normal" waveform / pattern)
- [ ] Selectable by MIDI PC commands or via HMI
- [ ] Edited and stored via HMI
- [ ] Generalized and more powerful patterns
- [ ] Every step of a pattern should store
* MIDI channel
* MIDI message type
* MIDI value(s) - depending on message type
- [ ] A pattern can be "one-shot" (fired on PC) or continuous
- [ ] Nice user-interface that allows
- [ ] adjustment of all parameters
- [ ] storing a modified pattern at a specifiable location (pattern 1-20)
- [ ] copying and wiping patterns
Status: Issue closed
Answers:
username_0: Merged: 7ab2b95 |
BCDevOps/platform-services | 440143699 | Title: Automated build pipeline for iOS
Question:
username_0: iOS builds cannot be performed in OpenShift because they need to be done on a macOS device with the development suite. Currently the only build provider with macOS build offerings is CircleCI.
We need to set up an automated build process with CircleCI that includes signing the output `xcarchive` and then providing it as an artifact to a member of the team. This will probably be done by sending an email with a link to the signed IPA for download.
Answers:
username_0: Tested CircleCI at building the Secure Image app. While the service and build work fine, there is a problem producing the xcarchive: it needs some form of certificate to produce an archive. The current CircleCI 2.0 pipeline only supports `fastlane`, which requires that provisioning profiles and certificates are stored in encrypted form in a GitHub repository. This is likely a blocker for us, as it's unlikely that "security" would allow us to put our certificates in a repository governed by a third party (GitHub). We would also need to keep the decryption keys in an environment variable on CircleCI.
username_0: Fixed the issues mentioned above. The artifact appears to work with the mobile signing service. Next step is to determine where to go next with the initiative.
username_0: Got the Android version of the same project building with CircleCI yesterday. It works well.
username_0: Completed work on build and sign on Azure DevOps.
Status: Issue closed
|
ybertot/plugin_tutorials | 362941219 | Title: Please, create 8.8, 8.9 branches; it does not work with `master`.
Question:
username_0: Please, create proper branches for the tutorial.
Also, current code doesn't work with `master`.
This is prerequisite to get https://github.com/coq/coq/pull/8531 in Coq, which should enable us to keep this tutorial updated.
Answers:
username_0: As to unblock this, maybe we should move this repository to the Coq org ?
username_1: I am sorry, I was busy on other topics and will look into it this week. But I am not sure to solve the problems.
username_1: I think I did what was requested.
- there are now a coq-8.8 and a coq-8.9 branch
The tip of master compiles with today's coq master without error or warning.
username_1: I should have added that, as far as I know, the branches coq-8.8 and coq-8.9 compile with the announced version of Coq.
username_0: Great, thanks @username_1 , I will check that the CI works and close this issue.
A nit comment: maybe you could consider naming the branches `v8.8` and `v8.9`; we will propose CI developments that follow the same branch-naming convention as Coq, so we can use a "convention over configuration" style.
Also, in the past @username_2 mentioned that maybe this tutorial is important enough to be moved to the Coq organization; I'll let you folks think about that.
username_2: I think that if a convention must be chosen, having the branch named `coq-8.8` and `coq-8.9` is actually better (because clearer) than `v8.8` and `v8.9`.
username_2: I didn't say move to the Coq organization, I said move to the Coq repository itself. There are many plugins in the Coq repository that people look into when they want to create their own (but they do not follow best practices to say the least). Having special example plugins there would serve them better and would encourage developers to update these examples. Just transferring the repository to the coq organization would be pretty useless I believe (repository in github.com/coq which are not coq/coq tend to go unmaintained :disappointed:).
username_0: Interesting, so would you support renaming the coq branches to `coq-v8.8` ?
IMHO it is very important for convention to work to use the exact naming.
username_0: The plugins compiles now on my CI branch, thanks @username_1 !
Status: Issue closed
username_2: Certainly not. I'm talking about the branch name of external plugins, not of Coq itself.
username_0: Then I don't like that naming proposal. The whole point of same-branch naming is indeed to remove any transformation between branch names.
username_1: I support @username_2 's proposal of using a different name from the one in the coq repository. An external plugin might want to use its own version and branches and v8.8 is too anonymous to be reserved just to track the coq system. On the other hand, a tag of the form coq-8.8 means exactly what I want: whatever my own version is this tag is supposed to follow version 8.8 of coq.
In the case of plugin_tutorial, this question is rather useless, though.
username_0: Indeed the case is that for most Coq plugins the question is rather useless, so that's the point of proposing the convention. The versioning of plugin is 100% tied to the particular version of Coq, in the large majority of cases. Thus it makes sense to have versions numbers such as `v8.9+0.2.1` etc... |
huridocs/uwazi | 913065055 | Title: Table view: endless scroll + select all dont play well together
Question:
username_0: Since our table view is based on endless scroll, trying to perform a simple bulk operation on several entities results in needing to scroll to the limit several times before hitting the select all button in order to do the bulk operation.
Answers:
username_1: Isn't that the same for the cards but by clicking a button?
Also this makes me wonder (again :D ) if we need to rethink the select all button semantics. Should it select every result shown or every result in the search?
username_0: Yes, but at least you have the option to get 300 in one go, which in most cases is enough to get all the data. It's way less time. |
monix/monix | 409026006 | Title: .reduce on Observable with one item emits an empty stream
Question:
username_0: Using the .reduce operator on a single-item observable results in an empty stream.
```
import monix.reactive.Observable
implicit val ec = monix.execution.Scheduler.global
Observable.fromIterable(List(1,2)).reduce(_ + _).toListL.runToFuture
//res8: monix.execution.CancelableFuture[List[Int]] = Async(
// Future(Success(List(3))),
// monix.eval.internal.TaskConnection$Impl$$anon$1@7fac18dc
//)
Observable.fromIterable(List(1)).reduce(_ + _).toListL.runToFuture
//res9: monix.execution.CancelableFuture[List[Int]] = Async(
// Future(Success(List())),
// monix.eval.internal.TaskConnection$Impl$$anon$1@279c4e3b
//)
```
Answers:
username_1: I think it's on purpose. I don't know the reasoning behind this decision though; it might have been a mistake, considering `List(1).reduceLeft(_ + _)` returns 1. `Iterant` will have similar behaviour I think (`Some(1)`).
Can we fix this @username_2? I mean, the change is trivial, but there's a chance of silently breaking users' code.
username_2: I don't remember any specific reason for that. Yes, it's fine to have the same behavior as `List` and `Iterant`.
Status: Issue closed
|
langcog/web-cdi | 117219963 | Title: instructions for CDI
Question:
username_0: Above the demographics on the first screen of the survey, we want to have the instructions that are on the instrument itself
Answers:
username_0: e.g. from word document from @vmarchman
username_0: And also a header that says "MacArthur-Bates Communicative Development Inventory" and a link with "Click here for more info" to mb-cdi.stanford.edu
username_1: does the document with the instruction text exist somewhere?
Status: Issue closed
|
polkadot-js/apps | 771285051 | Title: What is the authorization of transfer pop-up authorization transaction? I don't know this process very well
Question:
username_0: 
Answers:
username_1: It plays a couple of roles -
- it allows you to verify the details of the transaction you are sending
- it shows the fees associated with the transaction
- it allows you to enter your password (unlocking your account and signing the transaction)
username_1: Closing
Status: Issue closed
|
MicrosoftDocs/azure-docs | 506224983 | Title: Switching to customer's directory in Portal - not possible
Question:
username_0: How do I properly switch to the delegated customer subscription/directory? It's not present in the Directories + subscriptions tab even though the account has been delegated the Contributor role in the destination subscription.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 85dc331a-0f34-f9a2-43d9-8ab7e0e635c6
* Version Independent ID: 96c07c81-f84c-f47b-659c-b3d055aefb03
* Content: [Onboard a customer to Azure delegated resource management - Azure Lighthouse](https://docs.microsoft.com/en-us/azure/lighthouse/how-to/onboard-customer)
* Content Source: [articles/lighthouse/how-to/onboard-customer.md](https://github.com/Microsoft/azure-docs/blob/master/articles/lighthouse/how-to/onboard-customer.md)
* Service: **lighthouse**
* GitHub Login: @JnHs
* Microsoft Alias: **jenhayes**
Answers:
username_1: @username_0 Thanks for the feedback and bringing this to our notice . At this time we are reviewing the feedback and will update the document as appropriate .
username_2: Are you sure the sub is delegated correctly? Usually it takes some time for the portal to clear its cache and function properly. If you list subscriptions in Cloud Shell, do you have it there?
I have had lots of strange things happen because of the portal's intense caching
username_3: @username_0 We will now close this issue for now. If there are further questions regarding this matter, please respond and we will gladly continue the discussion.
Status: Issue closed
|
zoehneto/chrome-rtf-viewer | 257131805 | Title: Doesn't work with local files
Question:
username_0: If you open an .rtf file on Google chrome locally on your computer, it does not change the style of it.
Answers:
username_1: To open local files with the extension you have to allow access to local files in the settings as described here: https://stackoverflow.com/a/8921492
username_0: Whoops, thanks man |
scikit-image/scikit-image | 242356060 | Title: Bright ridge detection not working as expected with Frangi
Question:
username_0: ## Description
Frangi filter doesn't work with `black_ridges=False`.
The only way to get a sensible result is to invert the image and keep `black_ridges=True`.
## Way to reproduce
Making only minor edits to the Frangi example:
*NB I also switched the image to `float` to get a better result from the Hessian...*
```python
"""
=============
Frangi filter
=============
The Frangi and hybrid Hessian filters can be used to detect continuous
edges, such as vessels, wrinkles, and rivers.
"""
from skimage.data import camera
from skimage.filters import frangi, hessian
import matplotlib.pyplot as plt
image = camera()
image = image.astype(float)
fig, ax = plt.subplots(nrows=2, ncols=2, subplot_kw={'adjustable': 'box-forced'})
ax[0,0].imshow(image, cmap=plt.cm.gray)
ax[0,0].set_title('Original image')
ax[0,1].imshow(frangi(image), cmap=plt.cm.gray)
ax[0,1].set_title('Frangi filter result')
ax[1,0].imshow(frangi(image, black_ridges=False), cmap=plt.cm.gray)
ax[1,0].set_title('Frangi filter result (black_ridges=False)')
ax[1,1].imshow(hessian(image), cmap=plt.cm.gray)
ax[1,1].set_title('Hybrid Hessian filter result')
for a in ax:
    for aa in a:
        aa.axis('off')
plt.tight_layout()
image = image.max()-image
image = image.astype(float)
fig, ax = plt.subplots(nrows=2, ncols=2, subplot_kw={'adjustable': 'box-forced'})
ax[0,0].imshow(image, cmap=plt.cm.gray)
ax[0,0].set_title('Original image')
ax[0,1].imshow(frangi(image, black_ridges=False), cmap=plt.cm.gray)
ax[0,1].set_title('Frangi filter result (inverted, black_ridges=False)')
[Truncated]
ax[1,0].imshow(frangi(image, black_ridges=True), cmap=plt.cm.gray)
ax[1,0].set_title('Frangi filter result (inverted, black_ridges=True)')
ax[1,1].imshow(hessian(image), cmap=plt.cm.gray)
ax[1,1].set_title('Hybrid Hessian filter result')
for a in ax:
    for aa in a:
        aa.axis('off')
plt.tight_layout()
plt.show()
```
Produces

and

Answers:
username_0: Would it be acceptable to simply invert the image in the `frangi` function if `black_ridges` is `False`?
If so I'd be happy to submit a quick fix - otherwise more digging into the issue with the underlying `_frangi_hessian_common_filter` will be needed.
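A minimal sketch of that inversion workaround in plain NumPy (not scikit-image's internal code; `invert_for_bright_ridges` is a hypothetical helper name used only for illustration):

```python
import numpy as np

def invert_for_bright_ridges(image):
    """Invert so bright ridges become dark ones; then
    frangi(..., black_ridges=True) can be applied to the result."""
    image = np.asarray(image, dtype=float)
    return image.max() - image

# The brightest pixel becomes the darkest one after inversion.
img = np.array([[0.0, 1.0], [0.5, 0.25]])
print(invert_for_bright_ridges(img))
```

This mirrors the `image = image.max() - image` step already shown in the reproduction script above.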
username_1: yes, I get the same observation here. People need to get @DavidBreuer's work into the trunk or fix this bug...
username_2: How good you are thanks for this work 🥇
Status: Issue closed
|
dukecon/dukecon_pwa | 306094456 | Title: Add a little Infobox or icon if favorited multiple talks in the same time slot
Question:
username_0: _From @noctarius on March 17, 2017 7:49_
When favoriting talks from the timetable, the slots are obvious, but when looking through topic areas I might end up selecting multiple talks that happen in parallel. Maybe a small infobox would be nice, or a little yellow exclamation mark on the header that shows the info if the user hovers over or clicks it :)
_Copied from original issue: dukecon/dukecon#35_ |
PokemonGoers/Catch-em-all | 180247095 | Title: migrate to Ionic RC0
Question:
username_0: - Migrate to NgModule
- Base on Angular 2.0.0 Final
- Ahead of Time Compiling
- Switch to Rollup
- Switch to the new Lifecycle Hooks
- New button behavior
- Update CSS Link Tags
- Migrate to TS 2.0
- use new typings system
Answers:
username_1: Have you tried running the Docker build? For me, the cordova.js file was missing. I don't know who is supposed to generate it but apparently `ionic build browser` doesn't do the trick.
username_0: I'll have to look into the docker build, ionic build browser is working as intended and everything required ends up in platforms/browser/www directory
username_0: @username_1 Can you look into the API Endpoint stuff? I worked around that to make it work locally, maybe you can reintroduce the env variables?
username_1: Environment variables are fixed. Docker should also work.
Please avoid relying on any globally installed packages (e.g. `npm install -g cordova`) and install them locally instead. I also removed the `rm -rf node_modules` line you've introduced. This is just gonna give us painfully long Travis builds. If there should be any issues with npm installs you can always just clear the cache.
username_0: I had to do it one time to solve an issue with npm. However, keeping it is not necessary, I agree. Same thing goes for cordova. Previously, the builds were failing because n_m/.bin/ionic build would try to get plugins using cordova.
username_2: when is the MVP out?
username_0: This will be merged after the beta probably.
It is in a solid state but we have to take care of enough things so we don't want to add additional complexity for now, since the migration involves a completely new build process and breaking changes in both Ionic and Angular
username_0: Updated
username_3: @sacdallago This issue requires a lot of experience with ionic 2. Should @Georrgi still want to contribute we could find something easier for him. I'm unassigning him here since @username_0 has it covered (See rc1 branch). I hope that's okay.
username_0: I didn't even see this until now, I agree. Me and @username_1 are/will be doing fine here.
username_0: Updated.
Status: Issue closed
|
keybase/client | 351021423 | Title: KBFS does not start (and can't reinstall)
Question:
username_0: Context: Problem began to occur following a hard-shutdown.
Platform: Windows 10
Symptoms:
- KBFS drive is not available.
- GUI fails to load.
- Installer fails with "The specified account already exists"
Rebooting does not help.
my log id: 1e48bc10c2043a64ad978c1c
Status: Issue closed
Answers:
username_0: Fixed by uninstalling before attempting to reinstall.
username_1: I was unable to fix it by uninstalling it first. Now it just says "Gathering required information...", the meter fills up, then it quits.
username_2: CC @username_3 installer broken at least?
username_3: @username_1 What does `keybase version` say? Can you do a `keybase log send`?
username_1: I got it to install successfully by deleting the HKCU\Software\Keybase key, then reinstalling. It also managed to change the KBFS mount to L: instead of K:. |
coding-blocks/coding-blocks.github.io | 265187024 | Title: Debranding: Scrub "Pandora" and officially use "Android Course"
Question:
username_0: We will be having
- Android App Development Course
- Android Development Live Course
scrub all mentions of "Pandora", we are debranding that name.
Answers:
username_0: Create pages like this `/courses/classroom/android.html`
Earlier pandora.html pages can be HTML head redirects (like http://codingblocks.com/boss )
username_1: Will be creating android-development.html and web-development.html. Also working on using course id rather than course name everywhere, so it might take a little more time.
Status: Issue closed
|
standardnotes/forum | 417429583 | Title: [Security bug?] Circumvent privileges settings
Question:
username_0: This initially seemed like it could be a security issue, but on second viewing I think it's mostly inconsequential.
Steps to recreate:
(Having previously set privileges such that a local passcode is required to change the passcode lock settings)
* Fresh log in (all local data previously deleted)
* Set local passcode while the notes/extensions/preferences are still loading
* Although the local passcode is set, I was able to change the autolock timeout setting without needing to re-enter my local passcode (unexpected behavior).
* Once my notes/settings loaded, the privileges/restrictions were applied properly, and I had to enter my passcode to change the setting again (expected behavior).
I haven't tested it, but I wonder if *Manage Privileges* would be similarly unsecured. I also wonder if this would be the case for privileges secured with the account password.
I'm not really sure this is a big concern, but maybe the one situation where this could be a problem is:
1. Authorized user logs in
2. (Sets local passcode or not)
3. Steps away while notes/extensions/etc load - for this scenario, assuming the number of notes is substantial enough for the loading to take some minutes.
4. An unauthorized user steps in and, during this vulnerable loading state, changes privileges settings that would normally require authentication.
Answers:
username_1: Privileges are an account item, just like a note. And items are downloaded from the server, with oldest items coming in first. Since privileges are relatively new, then they might come in towards the end of a sign in session. In that case, you're right, you may be able to perform account protected actions while the data is downloading.
Two solutions I can see are:
1. Don't allow protected action while initial download has not completed.
2. Download privileges from the server before downloading anything else.
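Solution 1 could be sketched roughly like this (a plain-Python illustration of the gating idea, not Standard Notes' actual code; `ProtectedActionGate` is a made-up name):

```python
class ProtectedActionGate:
    """Deny privilege-protected actions until the initial sync has finished."""

    def __init__(self):
        self.initial_sync_complete = False

    def can_change_protected_setting(self) -> bool:
        # While privileges may still be in transit, refuse protected actions.
        return self.initial_sync_complete


gate = ProtectedActionGate()
print(gate.can_change_protected_setting())  # False while data is downloading
gate.initial_sync_complete = True
print(gate.can_change_protected_setting())  # True once privileges are loaded
```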
I'll have to think this one through. |
gocd/gocd | 97411453 | Title: HTML escaping issues in trigger with options
Question:
username_0: The commit message reads `#1334 Replaced 'eval' with 'exec'`
<img width="810" alt="screen shot 2015-07-27 at 1 32 25 pm" src="https://cloud.githubusercontent.com/assets/10598/8901573/9866187a-3467-11e5-8184-96137de28d7d.png">
Answers:
username_0: Related issue #1472 & #1603
username_1: @zabil: Another one (like #304) related to escaping. Is this fixed as well?
username_0: @GaneshSPatil — can you look at this when you have some spare time?
Status: Issue closed
|
dropbox/sqlalchemy-stubs | 483071367 | Title: hybrid_property support
Question:
username_0: Hi, I saw the thread a this topic [here](https://github.com/python/mypy/issues/4430), but can't haven't seen any TODOs in this repo for it, so I'll go ahead and make one.
We need support to infer types for `hybrid_property` similar to `property`, returning different types based on whether it's being accessed on a class or an instance, and support for the `@<property>.expression` decorator.
Answers:
username_0: If anyone's reading this and is curious if there's a quick way to get an easy (but janky) fix here, you can do something like this to basically sub in mypy's type checking for `property` for your `hybrid_property`s by using a third variable called `typed_hybrid_property` who's type changes depending on whether it's runtime.
```python
if TYPE_CHECKING:
    # Use this to make hybrid_property's have the same typing as a normal
    # property until stubs are improved.
    typed_hybrid_property = property
else:
    from sqlalchemy.ext.hybrid import hybrid_property as typed_hybrid_property
```
This also allows mypy to type expression functions, which is nice.
username_1: I tried keeping the name and mypy doesn't recognise it as an alias for property:
```python
if TYPE_CHECKING:
    hybrid_property = property  # type: ignore[misc,assignment]
```
Maybe it's because of the errors raised:
```
error: Cannot assign to a type [misc]
error: Incompatible types in assignment (expression has type "Type[property]", variable has type "Type[hybrid_property]") [assignment]
```
username_2: @username_1 Make sure that you're not re-assigning `hybrid_property` (which is what the error message points out).
The following should work:
```python
if TYPE_CHECKING:
    hybrid_property = property
else:
    from sqlalchemy.ext.hybrid import hybrid_property
# hybrid_property will be Type[hybrid_property] during type checking
# but refer to sqlalchemy.ext.hybrid.hybrid_property at runtime
```
The following won't:
```python
from sqlalchemy.ext.hybrid import hybrid_property

if TYPE_CHECKING:
    hybrid_property = property  # reassignment -> conflict
```
username_1: Thank you. This does indeed work. The `.expression` and `.comparator` decorators also work without issue.
username_3: I'm playing with descriptors today, and shouldn't this be the general approach for a descriptor?
```python
from typing import Any, Union, overload


class Descriptor:
    def some_method(self) -> Any:
        pass

    @overload
    def __get__(self, instance: None, other: Any) -> "Descriptor": ...

    @overload
    def __get__(self, instance: object, other: Any) -> "int": ...

    # we can use -> Any here to fix errors below, but then
    # we get no typing value at all.
    def __get__(self, instance: object, other: Any) -> "Any":
        if instance is None:
            return self
        else:
            return 5


class Foo:
    value = Descriptor()


Foo.value.some_method()

f1 = Foo()
val: int = f1.value
```
that is, the hybrid_property separation of "expression" and "instance" is made apparent by the type of "instance" passed to the descriptor protocol.
can someone comment on this approach? considering this is what I would try to adapt to hybrid properties, which are just descriptors with pluggable class/instance level functions.
username_3: Here's POC 1 for this approach with hybrids:
```python
from typing import Any
from typing import Callable
from typing import Generic
from typing import Optional
from typing import overload
from typing import Type
from typing import TypeVar
from typing import Union

from sqlalchemy import column
from sqlalchemy import Integer
from sqlalchemy.sql import ColumnElement

_T = TypeVar("_T")


class hybrid_property(Generic[_T]):
    def __init__(
        self,
        fget: Callable[[Any], _T],
        expr: Callable[[Any], ColumnElement[_T]],
    ):
        self.fget = fget
        self.expr = expr

    @overload
    def __get__(
        self, instance: None, owner: Optional[Type[Any]]
    ) -> "ColumnElement[_T]":
        ...

    @overload
    def __get__(self, instance: object, owner: Optional[Type[Any]]) -> _T:
        ...

    def __get__(
        self, instance: Union[object, None], owner: Optional[Type[Any]] = None
    ) -> Any:
        if instance is None:
            return self.expr(owner)
        else:
            return self.fget(instance)

    def expression(
        self, expr: "Callable[[Any], ColumnElement[_T]]"
    ) -> "hybrid_property[_T]":
        return hybrid_property(self.fget, expr)


class MyClass:
    def my_thing_inst(self) -> int:
        return 5

    def my_thing_expr(cls) -> "ColumnElement[int]":
        return column("five", Integer)

    my_thing = hybrid_property(my_thing_inst, my_thing_expr)


mc = MyClass()

int_value: int = mc.my_thing

expr: ColumnElement[int] = MyClass.my_thing
```
username_3: Here we go, this is just about the whole thing, how about this
```python
from typing import Any
from typing import Callable
from typing import Generic
from typing import Optional
from typing import overload
from typing import Type
from typing import TypeVar
from typing import Union

from sqlalchemy import column
from sqlalchemy import Integer
from sqlalchemy.sql import ColumnElement

_T = TypeVar("_T")


class hybrid_property(Generic[_T]):
    def __init__(
        self,
        fget: Callable[[Any], Union[_T, ColumnElement[_T]]],
        expr: Optional[Callable[[Any], ColumnElement[_T]]] = None,
    ):
        self.fget = fget
        if expr is None:
            self.expr = fget
        else:
            self.expr = expr

    @overload
    def __get__(
        self, instance: None, owner: Optional[Type[Any]]
    ) -> "ColumnElement[_T]":
        ...

    @overload
    def __get__(self, instance: object, owner: Optional[Type[Any]]) -> _T:
        ...

    def __get__(
        self, instance: Union[object, None], owner: Optional[Type[Any]] = None
    ) -> Any:
        if instance is None:
            return self.expr(owner)
        else:
            return self.fget(instance)

    def expression(
        self, expr: "Callable[[Any], ColumnElement[_T]]"
    ) -> "hybrid_property[_T]":
        return hybrid_property(self.fget, expr)


class MyClass:
    # seems like "use the name twice" pattern isn't accepted by
    # mypy, so use two separate names?
    @hybrid_property
    def _my_thing_inst(self) -> int:
        return 5

    @_my_thing_inst.expression
    def my_thing(cls) -> "ColumnElement[int]":
        return column("five", Integer)


mc = MyClass()

int_value: int = mc.my_thing

expr: ColumnElement[int] = MyClass.my_thing
``` |
open-mmlab/mmsegmentation | 1142316449 | Title: STDC checkpoints have no `meta` key
Question:
username_0: STDC checkpoints have no `meta` key.
### ran script
```
python /home/PJLAB/maningsheng/projects/openmmlab/mmsegmentation/demo/image_demo.py \
    demo/demo.png \
    configs/stdc/stdc2_in1k-pre_512x1024_80k_cityscapes.py \
    checkpoints/stdc2_in1k-pre_512x1024_80k_cityscapes_20211125_220437-d2c469f8.pth
```
### error message
```
load checkpoint from local path: checkpoints/stdc2_in1k-pre_512x1024_80k_cityscapes_20211125_220437-d2c469f8.pth
Traceback (most recent call last):
  File "/home/PJLAB/maningsheng/projects/openmmlab/mmsegmentation/demo/image_demo.py", line 40, in <module>
    main()
  File "/home/PJLAB/maningsheng/projects/openmmlab/mmsegmentation/demo/image_demo.py", line 27, in main
    model = init_segmentor(args.config, args.checkpoint, device=args.device)
  File "/home/PJLAB/maningsheng/projects/openmmlab/mmsegmentation/mmseg/apis/inference.py", line 35, in init_segmentor
    model.CLASSES = checkpoint['meta']['CLASSES']
KeyError: 'meta'
```
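A defensive fallback along these lines would avoid the crash when `meta` is absent (a plain-Python sketch, not mmseg's actual fix; the fallback class tuple here is a made-up stand-in, and a plain dict stands in for the loaded checkpoint):

```python
# Stand-in for the object returned by the checkpoint loader; only the
# 'meta' lookup matters for this sketch.
checkpoint = {"state_dict": {}}  # no 'meta' key, as in the failing checkpoint

FALLBACK_CLASSES = ("road", "sidewalk", "building")  # hypothetical default

# Instead of checkpoint['meta']['CLASSES'], which raises KeyError: 'meta':
classes = checkpoint.get("meta", {}).get("CLASSES", FALLBACK_CLASSES)
print(classes)
```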
Answers:
username_1: This is because after the STDC model was pushed, the PR reviewer made some comments about variable names, so I had to convert the key names to ensure the latest version could directly use the STDC model, which actually uses the older variable names.
The meta info was lost during that key-conversion process. I will try to find the original models; otherwise I will train new STDC models and push them again.
Best,
username_2: Encountered the same problem.
Status: Issue closed
|
yuuis/sweets | 399197704 | Title: [Do not merge] Request authentication and samples
Question:
username_0: # Authentication
## sign_in
endpoint: `base_url/api/v1/auth/sign_in` `POST`
body:
```
{
"email": "<EMAIL>",
"password": "<PASSWORD>"
}
```
## Headers to include on all subsequent requests
* access-token
* client
* uid
# sample
## purchase reserve
endpoint: `base_url/api/v1/purchase/reserve` `POST`
body:
```
[
{
"product_id": 1,
"quantity":2
},
{
"product_id": 2,
"quantity": 1
}
]
```
## purchase create
endpoint: `base_url/api/v1/purchase` `POST`
body:
```
{
"purchase_id": 1,
"idm": "456def"
}
```
Status: Issue closed |
DragonCherry/AssetsPickerViewController | 650763526 | Title: Preselected assets not selecting
Question:
username_0: Hello! 👋🏻
I've noticed that the photos are not preselected after implementing the "drag to select" feature.
Also if you select some photos in a first opened album and then switch to some another album, already selected items are not preselected too.
Here is my fix to solve the problem: #79.
**Steps to reproduce the first bug:**
1. Open AssetsPickerViewController
2. Launch example "Set Selected Assets Before Present"
3. Click "Pick" and select 2 photos
4. Click "Pick" again
**Expected result:**
Selected photos from step 3 selected and having checkmarks
**Actual Result:**
The photos are not preselected
**Steps to reproduce the second bug:**
1. Open AssetsPickerViewController
2. Launch any example
3. Click "Pick" and select some photos in a first opened album
4. Change album to the another that contains the same photos
**Expected result:**
Selected photos from step 3 selected and having checkmarks
**Actual Result:**
The photos are not preselected
Thx 😊
Status: Issue closed
Answers:
username_1: Thank you so much !! |
jeroen/unix | 749387417 | Title: R4: there is no package called ‘unix’
Question:
username_0: Hi,
I ran into an issue installing devtools (as example) for R4. It gives me an error while installing any dependency: `there is no package called ‘unix’`.
I created an issue there ( https://github.com/r-lib/devtools/issues/2300 ) but it seems not to be an issue there. Im not even sure that the unix package is the root cause, maybe I forgot something? I installed and self compiled R as stated in the manual.
Do someone know whats wrong here?
Status: Issue closed
Answers:
username_0: R4 was using R3.6 packages - no error here, sorry |
godotengine/godot | 627954985 | Title: Normal map flipped on Y-axis in 2D mode
Question:
username_0: **Godot version:**
v3.2.1
**OS/device including version:**
I am using a 2020 MacBook with Intel Iris Plus Graphics and Mac OS 10.15.5 Catalina. I have selected GLES3 as backend.
**Issue description:**
The normal map for my sprite is behaving as if the y-axis is flipped, but the texture is right side up (notice the slit at the top of the sprite). In 3D, the normals behave correctly.
**Steps to reproduce:**
Use a normal map designed for a 3D mesh on a sprite.

**Minimal reproduction project:**
[MicroConsole.zip](https://github.com/godotengine/godot/files/4707642/MicroConsole.zip)
Answers:
username_1: Duplicate of #18299.
Status: Issue closed
|
manki11/ExerciserReact | 354042231 | Title: White space on input in CLOZE text
Question:
username_0: When a white space is typed before or after the expected answer of a CLOZE text, the answer is considered wrong. The same issue occurs if the user adds a space within their answer.
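The comparison presumably needs to normalize surrounding whitespace; a minimal, language-agnostic sketch of that idea (shown here in Python, although the project itself is JavaScript, and `is_correct` is a hypothetical helper):

```python
def is_correct(user_answer: str, expected: str) -> bool:
    # Strip leading/trailing whitespace before comparing cloze answers.
    return user_answer.strip() == expected.strip()

print(is_correct("  Paris ", "Paris"))  # True
print(is_correct("Rome", "Paris"))      # False
```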
Answers:
username_1: Issue solved in 4fbbf918fc387a82ea77371860f47d7dfa6a1887
Status: Issue closed
|
xlab/c-for-go | 576574884 | Title: [question] generate Go function which accepts slice (without length) for C function that accepts dynamic array
Question:
username_0: Suppose a header file with the following:
```
typedef struct {
uint8_t x;
} Flarp;
uint64_t foo(const Flarp *input_ptr, size_t input_len);
```
With no `TRANSLATOR` `PtrTips` specified, c-for-go generates the following Go signature:
```
func Foo(inputPtr []Flarp, inputLen uint) uint64 {
```
My question: **Does there exist a configuration setting (in YAML) which will produce the following Go signature?**
```
func Foo(inputPtr []Flarp) uint64 {
```
I would expect that the `size_t input_len` argument would be computed at runtime in the (automatically-generated) body (or some helper) of the `Foo` function.
Answers:
username_1: @username_0 did you ever find a solution to this?
username_0: @username_1 - Alas, I did not. If I recall correctly - and it's been a while now - we just ended up using `func Foo(inputPtr []Flarp, inputLen uint) uint64` and ignoring the `inputLen` parameter.
username_2: Sorry for the late reply lol. It wasn't possible since size is a very tricky thing in C: it's almost impossible to handle correctly automatically. There is a `size` tip but it has no real effect; it's just to mark things for better readability in YAML.
Maybe it will be solved in the future. |
ArctosDB/arctos | 360429648 | Title: Specimen Identification
Question:
username_0: We like to add specimens using the biologic taxonomy that is on the legacy type data cards. We then like to update the taxonomy using up to date resources (eg. WoRMS, Paleobiology Database, GBIF...). Some of the extra specific names/unsure names are editable sometimes, and in other situations they are not editable. Is there anything we should do to make this work for us or something we need to change in Arctos?
Examples:
http://arctos.database.museum/guid/UNM:ES:15360

is editable
http://arctos.database.museum/guid/UNM:ES:14826
is not editable

Answers:
username_1: I'm not sure what you mean - they both look editable to me??
And FWIW many times it's preferable to add an ID rather than edit. "taxonomic revision" (https://arctos.database.museum/info/ctDocumentation.cfm?table=CTNATURE_OF_ID) exists to facilitate that.
username_0: I guess I only know the path of being able to select the name in Identifications on the specimens. Note the purple on Rastellum? sp. and the black letters on Baculites cf. claviformis. If there is another path to do this, what is that path?
username_2: Looks like an error in building the taxon. Should be Baculites claviformis cf. Try rebuilding it and see if it doesn't turn purple.
username_3: Be sure to use the ID formula cf though!
-Derek
username_2: Derek is right. Using the ID formula is the way to be sure it's correct.
username_0: Ok, I'll try that. Problem is that cf. is referring to the species comparison not the genus.
For the formula would it be: Baculites {cf. claviformis} or Baculites {cf.} claviformis ...?
username_1: I think you're using the "A {string}" formula for both of those, in which case the identification string is whatever you feel like typing. For any other formula, you should just select the formula and pick name(s). Formatting is only required for the bulkloader (which necessarily works with strings, not data objects). That's documented here: http://handbook.arctosdb.org/documentation/bulkloader.html#taxonomy
username_2: Choose A cf. and it will take care of itself.

username_0: All of that is not very clear to me. It doesn't make sense to me. I bulkloaded this specimen.
I entered it as: Baculites claviformis {Baculites cf. claviformis}
username_0: I tried the way @username_2 suggested, but it won't let me save it. Says I need to populate all required fields, which are all populated.
username_2: Try just changing the Taxon to _Baculites claviformis cf. Sometimes it accepts that when it doesn't accept a change in the formula. It can be quirky.
username_0: So, if I put it in as Baculites claviformis cf. , it will enter it in as Baculites cf. claviformis...?
username_2: Sorry, I didn't mean to include that underscore. Just enter Baculites claviformis cf.
No, Arctos' format is Baculites claviformis cf. I'm used to seeing it Baculites cf. claviformis so I'm not sure why Arctos chooses to enter the cf. at the end instead of before the species. Here's what one of my entries looks like.

username_1: Consistency - _Bla_ (no matter the rank) and _Bla blah_ and _Bla blah var. blaugh_ all have the same format, and it's the same format as other formulaic IDs. An "A {string}" ID using the same taxa is functionally identical (unless there are typos...) and allows you to format any way you want.
username_4: I had the same issues with cf. and I decided it wasn't important to worry about WHERE the cf. was in the string because if it was referring to the genus, then that would be all you have, correct? I know the tradition is to put the cf between the genus and specific epithet, but in the end I decided that having at the end should still mean the same thing.
username_0: @username_4 I get your point, but I actually, although rarely, do have very strange strings (e.g. cf. XXX aff. yyy or ?XXX cf. yyy?). I understand how weird they are. I am fine entering them in a format that Arctos group prefers and putting it in the notes how it was originally written, but it would be so much nicer to put it in Arctos as is and update it as we go.
My real problem is what I should tell students. Some volunteers/students don't have much background in any of this and I would like to direct them in the best way and still preserve legacy data. I have made it part of my role to update taxonomy and research geology attributes. Not to mention, I have very little background in this, but it's part of my job to get involved. I want students and volunteers to be able to get data in as accurately and smoothly as possible. All the other goofy, complicated stuff is my priority.
In general, biological taxonomy in Arctos should be more malleable in showing the legacy. Maybe it is and I'm not following the rules to help this process. I am currently entering data, then going back and updating the biological taxonomy of a record. A record will be named XXX cf. yyy, but below it will have the Arctos taxonomy with all the updated Linnaean classifications as Kingdom, Phylum, Class, Order, Family, ZZZ cf. qqq.
@username_1 I understand that it lets me format any way I want. I used it that way. But in this specific example, and in others that I have, it did not work. There are other examples that did work though.
username_1: It's infinitely malleable, but you may need your own classification if you want to do something that conflicts with other users.
Status: Issue closed
|
xamarin/AndroidX | 797231768 | Title: VS2019 cannot correctly render Floating Action Button after upgrading to AndroidX
Question:
username_0: ### Version Information
- Visual Studio version (eg. 16.8 or 8.8): 16.8.4
- Xamarin.Android version (eg. 11.1): 192.168.127.12
### Describe your Issue:
It seems that after upgrading to AndroidX (installing/updating all NuGet packages to the latest versions and resolving all namespace issues), VS2019 can no longer render the Floating Action Button correctly in the Design view (though it shows correctly when running), as the following screenshot shows:
Please note that this issue happens ONLY after updating the NuGet packages to the latest versions (after migrating to AndroidX, the NuGet package manager shows that updates are available!)
<img width="1252" alt="Capture" src="https://user-images.githubusercontent.com/6314177/106331316-935dfb80-6239-11eb-9d5e-2132f0487bb2.PNG">
### Steps to Reproduce (with link to sample solution if possible):
1. Create a new Xamarin Android project
2. Migrate to AndroidX and change all namespaces; make sure the project builds and runs
3. Update all NuGet packages in the project to the latest versions; after this, the project won't build and will complain that packages are missing, so install them.
4. After the packages are installed and the project is rebuilt, VS2019 stops showing the FAB correctly in the design window.
Status: Issue closed |
SvetlanaaDanailova/ExamQAFundamentals | 103864083 | Title: <High> There is no deletion message when you delete a picture as an admin
Question:
username_0: Environment: Windows 7 Ultimate, Mozilla Firefox 40.0.3 with installed Firebug.
Steps to reproduce:
1. Go to the admin login
2. Go to an album
3. Go to a picture
4. Click the delete button
Expected result:
-The admin receives a message confirming the deletion.
Actual result:
-The picture is deleted, but the admin does not receive a message and has to check again.
dmo60/JumpToPlaying | 18199252 | Title: No icon / instance, 2.99.1
Question:
username_0: Hi, don't know if this is just me but has anyone else found that by (painstakingly) upgrading to 2.99.1, JTP doesn't exist anywhere any more? It looks normal in the plugins list, but there's no icon. I don't use the context pane as it's buggy and re-opens each time RB starts.
Any thoughts? I'm anecdotally aware that RB 2.99 has broken a lot of backward compatibility with other plugins that the plugin devs are kinda cheesed off about...
Cheers
Answers:
username_1: Hi,
just a quick follow-up - this developer has found a way to access and change the main toolbar
- https://github.com/bmerry/rbtempo
Note - I've also followed this as well to create my own "new compact" toolbar which I suppose could be another vehicle here to add a toolbar option.
- https://github.com/username_1/alternative-toolbar |
bombsimon/wsl | 506701916 | Title: Correct Example syntax triggering "block should not end with a whitespace (or comment)"
Question:
username_0: [Example](https://golang.org/pkg/testing/#hdr-Examples) functions must end with comments, but wsl doesn't like that very much:
Here's a snippet that's failing for me in [go-which](https://github.com/username_0/go-which):
```go
func ExampleWhich() {
	path := Which("sh")
	fmt.Printf("Found sh at: %s", path)
	// Output: Found sh at: /bin/sh
}
```
```console
$ wsl which_test.go
which_test.go:124: block should not end with a whitespace (or comment)
```
I'm thinking wsl should probably ignore this particular check in `Example` tests.
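The exemption boils down to recognizing Go testable examples by name (sketched in Python for brevity; wsl itself is written in Go, and `is_go_example_func` is a hypothetical helper, not wsl's actual implementation):

```python
def is_go_example_func(name: str) -> bool:
    """Go testable examples are functions whose names start with 'Example'."""
    return name.startswith("Example")

# Such functions may legitimately end with an '// Output:' comment,
# so the trailing-comment check could be skipped for them.
for fn in ("ExampleWhich", "TestWhich", "BenchmarkWhich"):
    print(fn, is_go_example_func(fn))
```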
Answers:
username_0: Another thing to note: I'm using golangci-lint to run wsl, and so I _could_ add `// nolint: wsl` to ignore these failures. However, that comment then gets put into the godoc:
<img width="653" alt="Screen Shot 2019-10-14 at 10 50 38" src="https://user-images.githubusercontent.com/2037086/66760677-97adac00-ee70-11e9-99b1-2d327f446afa.png">
username_1: Thank you for this report! I'll look into this asap!
username_0: @username_1 I hacked up a fix which I'll PR soon, though it's a bit messy 😉
username_0: See #39 for a potential fix
Status: Issue closed
username_1: Looks good! Thank you so much for the report and the PR! Merged and I'll try to include it in an existing PR to update `golangci-lint`! |
zdhxiong/mdui | 288383744 | Title: Opening a dialog in some mobile browsers causes the browser to exit fullscreen mode
Question:
username_0: - This is a: bug report.
- Platform: the issue is known to occur on the latest versions of iOS Chrome and iOS Safari; other platforms have not been tested.
### Steps to reproduce
1. Scroll the page so that the browser address bar and navigation bar are hidden;
2. Open an MDUI dialog.
### Expected result
The dialog opens seamlessly and the URL hash is updated.
### Actual result
The dialog opens, but at the same time the browser address bar and navigation bar pop back in. Because the viewport height changes, the dialog can also be seen jumping up and down. |
OxDUG/issue-tracker | 120034231 | Title: Server space - we need to setup a server
Question:
username_0: Where?
Answers:
username_1: We could just grab some free space from Acquia or Pantheon?
username_2: Does the acquia free give us production? Just a quick look implies dev and stage only
http://www.acquia.com/free
Whereas Pantheon appears free for the community websites
https://pantheon.io/free-website-management-platform-beyond-hosting
username_1: Acquia do free for the community too:
https://www.acquia.com/gb/free-hosting-drupal-community-sites
username_1: I expect that many people in the group also have a server kicking around that could host it, but might just be simpler to outsource for now, and revisit when we encounter issues or can't stand the marketing emails any longer.
username_2: I think outsourcing may be easier, as one less thing we have to manage/own whilst learning D8. As for hosting, as both do it for free, I have no real preference. I have used Acquia before, but that isn't necessarily for/against. Has anyone used both for comparison? |
kubernetes/kube-state-metrics | 297894263 | Title: Flag for adjusting log level
Question:
username_0: I'd like to run kube-state-metrics with a warning log level as I don't need to see all the info logs printed every minute. Can we please add a flag for adjusting log level/verbosity?
Answers:
username_1: kube-state-metrics uses [glog](https://github.com/golang/glog) for logging. There is a `--v` command line argument to adjust the log verbosity. If you want fewer info logs, you can tune the `--v` value (e.g. `--v=4`) to control which verbosity levels get printed.
I am not sure which kube-state-metrics version you use. Actually, we have decreased the default log level in https://github.com/kubernetes/kube-state-metrics/pull/354, which is available on the master branch for now.
flutter/flutter | 607186702 | Title: Unable to compile on ios
Question:
username_0: Update all pods
Updating local specs repositories
Analyzing dependencies
[!] CocoaPods could not find compatible versions for pod "file_picker":
In Podfile:
file_picker (from `.symlinks/plugins/file_picker/ios`)
Specs satisfying the `file_picker (from `.symlinks/plugins/file_picker/ios`)` dependency were found, but they required a higher minimum deployment target.
I have added the keys as required in the documentation.
Status: Issue closed
Answers:
username_1: Hi @username_0
For issue related to the 3rd party plugin [file_picker](https://pub.dev/packages/file_picker) you may want to open an issue in the dedicated [github](https://github.com/miguelpruivo/flutter_file_picker/issues).
Closing, as this isn't an issue with Flutter itself,
if you disagree please write in the comments and I will reopen it
Thank you
username_0: I agree. It was file picker. Apologies. Thank you for the clarification.
Best regards, Deepak
autumnmeaning/springboot_blogs | 681917233 | Title: Security vulnerability alert
Question:
username_0: Vulnerability type: email SMTP credential disclosure
Severity: high
Location: https://github.com/autumnmeaning/springboot_blogs/blob/6986f757fd33afabe7bdeab89ba0f7b96c702695/src/main/resources/application.yml
Impact: anyone can send and receive mail with the leaked SMTP account and password, and can then use the mailbox to reset passwords on other platforms
Remediation: reset the SMTP password and check the mailbox for leaked sensitive information (do not just change the code; the repository history is still visible)
This scan result was provided by [ 码小六 ] https://github.com/4x99/code6 (stars welcome)
algorithm006-class01/algorithm006-class01 | 598833742 | Title: 【633-Week09】Graduation Summary
Question:
username_0: Since I started working, I have actually not used algorithms much in business code, but after being in this industry for a while, being weak at algorithms, or rather lacking a basic, systematic understanding of them, always made me feel something was missing, a kind of insecurity about shaky foundations. I normally had no time to study, so this year, while stuck in my hometown during the outbreak of the pandemic, I decisively signed up for this training camp. That was my original motivation for taking the course.
From my cluelessness at the entrance exam, that state of ignorance really was quite frightening; it felt like the industry was about to leave me behind, haha. But everything has to start somewhere, and not knowing is nothing to fear. Watching the videos and doing the exercises session after session, together with Chao's rich experience, I gradually went deeper into algorithms, and overall I would say that by now I have at least gotten through the door.
In the process I also came to feel that algorithms, as a kind of inner skill, are really not that easy to cultivate. Having gotten in the door this time, reaching mastery is bound to be a long process, and going forward I should find ways to bring algorithms into my actual work for them to be truly meaningful. They are also a stepping stone to a good job, so I need to keep reviewing them persistently and strengthen my self-discipline.
Overall the course is quite good and tightly paced. Chao also taught a lot about methodology, all of which is very useful. For me there is still one problem, though: after seeing a problem I may have many ideas, but I cannot precisely pin down which approach is optimal. That part probably needs more practice, summarizing and categorizing problems by type.
One final takeaway: fundamentals are best practiced early.
valantic/vue-template | 1083309330 | Title: Vue 3 branch shows overlay for Linting issues
Question:
username_0: On the Vue 3 branch the error overlay is shown in the browser when linting issues occur. The overlay should not be used to show linting issues. they should only appear in the terminal and console and not block the developer from coding. |
rust-lang/rust | 696114142 | Title: format_args! is slow
Question:
username_0: Like molasses in the Antarctic.
As a consequence, so is any method which depends on its Arguments, like `{fmt, io}::Write::write_fmt`. The microbenchmarks in [this issue about `write!`'s speed][slow_write_macro] demonstrate that merely running the same arguments through `format_args!` and then `write_fmt`, even if it's just a plain string literal without any formatting required, produces a massive slowdown next to just feeding the same through `fmt::Write::write_str` or `io::Write::write_all`.
Unfortunately, `write!`, `format!`, `println!`, and other such macros are a common feature of fluent Rust code. Rust promises a lot of zero-cost abstractions, and on a scale from "even better than you could handwrite the asm" to "technically, booting an entire virtual machine is zero cost if you define the expression as booting a virtual machine..." this is currently "not very". Validating and formatting strings correctly can be surprisingly complex, which is going to increase with features like [implicit named arguments in format_args!][implicit_args], so we can expect increasing speed here may be challenging. However, this should be possible, even if it might require extensive redesign.
### Multiple Problems, Multiple Solutions
- `format_args!`'s internal machinery in the Rust compiler can likely be improved.
- Consumers of Arguments, such as `fmt::{format, write}` and `{fmt, io}::Write::write_fmt`, can be reviewed for runtime performance.
- Macros downstream of `format_args!` often are invoked to do something simple that does not require extensive formatting and can [use the pattern-matching feature of `macro_rules!` to special-case simple patterns][special_case_format] to side-step `format_args!` when it's not needed. This will increase the complexity of those macros and risks breakage if done incautiously, but could be a big gain in itself.
Unfortunately some of these cases may run up against complex situations with types, trait bounds, and method resolutions, because e.g. both `io::Write` and `fmt::Write` both exist and `write!` needs to "serve" both. Fortunately, this is exactly the sort of thing that can benefit from the recent advances in const generics, since it's a lot of compile-time evaluation that could benefit from interacting with types (as opposed to being purely syntactic like macros), and in the future generic associated types and specialization may be able to minimize breakage from type issues as those features come online, so it's a good time to begin reviewing this code.
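To illustrate the special-casing idea concretely, here is a rough sketch: a downstream macro can match the lone-string-literal case before falling back to `format_args!`. The `fast_write!` name and its rules are hypothetical, not part of std, and a real implementation would additionally have to reject literals containing `{`/`}` escapes before taking the shortcut.

```rust
use std::fmt::Write;

// Hypothetical fast_write! macro: a lone string literal skips the
// formatting machinery entirely; anything else goes through the
// usual format_args! path.
macro_rules! fast_write {
    ($dst:expr, $lit:literal) => {
        // NOTE: a real version must first check that the literal
        // contains no `{`/`}` escapes before using this shortcut.
        $dst.write_str($lit)
    };
    ($dst:expr, $($arg:tt)*) => {
        $dst.write_fmt(format_args!($($arg)*))
    };
}

// Build a demo string exercising both arms of the macro.
fn demo() -> String {
    let mut out = String::new();
    fast_write!(out, "plain literal, no formatting; ").unwrap();
    fast_write!(out, "value = {}", 42).unwrap();
    out
}

fn main() {
    println!("{}", demo());
}
```

The second arm keeps full compatibility, so only the trivially simple calls bypass the formatting machinery.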
### Related issues and PRs
- #75742 (and #75894)
- #75301 (and #75358)
- #52804
- #10761
[slow_write_macro]: https://github.com/rust-lang/rust/issues/10761
[implicit_args]: https://rust-lang.github.io/rfcs/2795-format-args-implicit-identifiers.html
[special_case_format]: https://github.com/rust-lang/rust/issues/52804#issuecomment-408628574
Answers:
username_1: Note that the formatting infrastructure in core::fmt is *intentionally* not fast, as it optimizes for code size over speed. There are alternatives, e.g., https://github.com/japaric/ufmt which is smaller/faster and makes some different tradeoffs.
I don't know that a blanket issue like this is useful -- I suspect the overall API cannot change at this point, but individual improvements can be, of course, discussed in T-compiler (as this is a libs impl, not T-libs, concern).
username_2: It's code size is also notoriously poor for embedded systems fwiw
username_1: It's true that it may not meet the code size goal well either - I do think we should try and go for size over speed in general, though the two are not always mutually exclusive.
username_0: Are there other alternatives like ufmt primarily for embedded use?
Size is cache and cache is speed, or rather the not needing it. It is _probably_ the case that many optimizations for speed will help reduce overall size as well, and Arguments itself is sequestered from instantiation or introspection and versioned internally. It's not as obscured as a nameless type, but it is likely easy to change many subtle particulars about it without breaking major APIs.
username_2: We've also recently written https://github.com/knurling-rs/defmt, which does the formatting on the host instead of the device through liberal use of the forbidden arts. It is not compatible with the `core::fmt` syntax though, and can only be used for logging (since the device can't actually use the formatted data). |
matplotlib/matplotlib | 157364141 | Title: new figsize is bad for subplots with fontsize 12
Question:
username_0: Classic: figsize (8, 6)
v2.x: figsize (6.4, 4.8)
fontsize: 12 pts in both cases
Together with the infamous outward ticks, this change in figsize wreaks havoc on label placement and axes positions. For a single default subplot, possibly adequate adjustments have been made, but the situation for multiple subplots looks hopeless. Here is the default 4-panel figure, without even trying to add any labels or titles:

Possible solutions:
1) Restore the old figure size.
2) Reduce the base font size
In either case, of course, all positioning parameters will need to be reviewed and adjusted.
I omitted one option: I think that increasing the subplotparams wspace and hspace enough to make 4 subplots look reasonable would leave too little room for the plots themselves.
I think the change to figsize is also responsible for many broken gallery examples as well.
The rationale for reducing the figsize was that (8, 6) is too large for most publication uses, and that with the increase in figure.dpi to 100 to match savefig.dpi, it is also too large for the screen, making it hard to work with multiple figures, and, depending on the user's screen, possibly even triggering resizing, which destroys any attempt to design a layout with fixed dimensions.
Therefore I think that the best option might be the more painful one: drop fontsize from 12 to 10, and adjust everything else accordingly.
Answers:
username_0: I'm now convinced that building the parameters around the reduced fontsize is the best way to go, and actually likely to be less painful than the alternatives.
In addition, I think that if we keep the outward ticks, then we should turn off the top and right ticks. That will reduce the ugliness and loss of useful space, especially when there is more than one subplot.
username_1: We nominally have some data about this from the analytic on the website which sent me down a kinda fun analysis rabbit hole.
caveat: browsers sometimes lie (see http://whatsmy.browsersize.com/ firefox gets my screens correct, chromium is internally doubling to account for the high resolution screen and reports 1/2 the correct screen size) and this also include the small amount of mobile traffic we get. This is based on data from Mach 2015 till now.


(click the images to see big versions)
If we assume 1280x768 as a screen size, that catches 94.8% of our user base. If we say a figure should take up no more than 1/4 of the screen by default, then our target resolution should be 640x384. Currently our default figure size and dpi ((6.4, 4.8), 100) gives 640x480 which is sort of close (it is the difference between a 4:3 and 5:3 aspect ratio) but is enough that only ~50% of our users can tile 2 figures vertically (and that is not considering the GUI/window manager space).
I think reducing the font size is the right thing to do.
When thinking about font sizes, what really ends up mattering is the ratio between the font height (in pixels) and the figure height (in pixels). Once the image gets embedded in something else it almost always gets scaled uniformly (e.g. a figure that fills a slide in a presentation), so the font size in the figure is not strictly related to the font size in the rest of the document. When we reduced the figure size in inches to restore its size in pixels, we increased the font size relative to the figure size, thus we need to shrink the font a bit to get back to the previous ratio.
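As a quick sanity check of the numbers above (the values are the assumed defaults and screen size from this comment, not queried from matplotlib):

```python
# Current defaults under discussion: figsize in inches * dpi = pixels.
figsize = (6.4, 4.8)          # v2.x default, inches
dpi = 100                     # figure.dpi default
pixels = (figsize[0] * dpi, figsize[1] * dpi)
assert (round(pixels[0]), round(pixels[1])) == (640, 480)

# Assumed "safe" screen covering ~94.8% of visitors, and the proposal
# that a default figure use no more than 1/4 of the screen area.
screen = (1280, 768)
target = (screen[0] / 2, screen[1] / 2)
assert target == (640.0, 384.0)

# The widths already match; the heights differ (4:3 vs 5:3 aspect),
# which is why only ~50% of users can tile two figures vertically.
print(pixels, target)
```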
Related:
https://www.w3counter.com/globalstats.php?year=2016&month=4 gives screen resolution statistics for the general web user.
----
```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from io import StringIO
import re
inp = StringIO('''Screen Resolution,Sessions,% New Sessions,New Users,Bounce Rate,Pages / Session,Avg. Session Duration,Goal Conversion Rate,Goal Completions,Goal Value
1920x1080,"1,958,104",27.54%,"539,352",49.94%,3.01,00:05:10,0.00%,0,$0.00
1366x768,"1,269,788",34.49%,"438,006",51.80%,2.87,00:04:51,0.00%,0,$0.00
1440x900,"834,959",26.92%,"224,773",50.59%,2.87,00:05:04,0.00%,0,$0.00
1280x800,"722,484",27.39%,"197,884",51.19%,2.80,00:05:06,0.00%,0,$0.00
1920x1200,"657,828",21.96%,"144,475",49.50%,2.98,00:05:21,0.00%,0,$0.00
1680x1050,"483,351",25.25%,"122,032",49.74%,2.99,00:05:20,0.00%,0,$0.00
1600x900,"393,355",31.16%,"122,574",50.97%,2.95,00:05:00,0.00%,0,$0.00
1280x1024,"353,511",29.62%,"104,718",50.28%,3.05,00:05:15,0.00%,0,$0.00
2560x1440,"303,764",21.50%,"65,323",49.91%,2.91,00:05:18,0.00%,0,$0.00
1280x720,"93,364",31.96%,"29,837",51.64%,2.86,00:04:56,0.00%,0,$0.00
1536x864,"79,725",32.83%,"26,177",50.14%,2.92,00:05:11,0.00%,0,$0.00
1600x1200,"65,708",23.65%,"15,537",49.32%,3.07,00:05:25,0.00%,0,$0.00
2560x1600,"55,616",21.79%,"12,119",49.70%,2.96,00:05:11,0.00%,0,$0.00
1024x768,"51,606",57.15%,"29,495",62.15%,2.62,00:03:37,0.00%,0,$0.00
360x640,"50,188",60.04%,"30,134",65.56%,2.27,00:02:29,0.00%,0,$0.00
768x1024,"37,930",51.76%,"19,632",59.98%,2.90,00:03:15,0.00%,0,$0.00
1360x768,"33,322",36.22%,"12,070",51.73%,2.97,00:04:55,0.00%,0,$0.00
320x568,"20,236",63.93%,"12,937",72.18%,1.82,00:01:44,0.00%,0,$0.00
375x667,"19,566",62.64%,"12,257",70.84%,1.96,00:01:54,0.00%,0,$0.00
2048x1152,"18,575",23.13%,"4,297",49.98%,2.96,00:05:15,0.00%,0,$0.00
1200x1920,"17,729",19.33%,"3,427",51.01%,2.87,00:05:19,0.00%,0,$0.00
1080x1920,"17,538",22.81%,"4,000",50.30%,3.16,00:05:20,0.00%,0,$0.00
1440x960,"14,178",33.04%,"4,684",52.33%,2.94,00:04:48,0.00%,0,$0.00
2560x1080,"12,294",25.58%,"3,145",50.28%,2.94,00:05:02,0.00%,0,$0.00
1536x960,"10,540",27.93%,"2,944",48.33%,3.20,00:05:40,0.00%,0,$0.00
1024x600,"10,367",44.90%,"4,655",53.70%,2.79,00:04:54,0.00%,0,$0.00
3840x2160,"8,028",26.26%,"2,108",50.29%,2.99,00:05:04,0.00%,0,$0.00
1600x1000,"6,811",15.87%,"1,081",48.50%,3.08,00:05:27,0.00%,0,$0.00
1344x840,"6,800",29.94%,"2,036",47.34%,3.32,00:05:56,0.00%,0,$0.00
1760x990,"6,549",7.63%,500,48.54%,2.82,00:05:46,0.00%,0,$0.00
[Truncated]
im = compute_size_heatmap(df, col)
ax.set_title(col)
ax.set_xlabel('screen width [px]')
ax.set_ylabel('screen height [px]')
return ax.imshow(im)
df = clean_ga_data(inp)
fig_single, (ax_w, ax_h) = plt.subplots(1, 2, figsize=(16, 8))
plot_atleast_fraction(df, 'height', ax_h)
plot_atleast_fraction(df, 'width', ax_w)
fig, ax = plt.subplots(figsize=(12, 6))
im = show_size_heat_map(df, 'Sessions', ax)
im.set_clim(.8, 1)
fig.colorbar(im, ax=ax, extend='min')
ax.set_ylim(1080, 0)
ax.set_xlim(0, 1920)
```
username_1: @username_0 If you are working on this I will work on #6380
username_0: OK. Here is the result of 10 pt font with top and RH ticks turned off, for comparison with the original minimal example above. Big improvement, I think. Is it OK to proceed with this strategy of leaving out the top and RH ticks? With outward ticks, I think it is essential. Users who want more alignment cues will simply have to turn on the grid.

username_1: Despite it being a much bigger change, I think that looks better than out on all sides.
username_0: Good! I am going out for a couple hours, but I can work on this when I return.
username_2: If I may, when using outward ticks, using them only on the left and the bottom spines (as Eric proposes) seems to be rather common among other plotting software (at least judging from their galleries...). See for example:
[Origin ex 1](http://cloud.originlab.com/www/resources/graph_gallery/images_galleries_new/Panel_Graph_with_Inset_Plots_High_Resolution_Electron_Energyloss_Spectra.gif)
[Origin ex 2](http://cloud.originlab.com/www/resources/graph_gallery/images_galleries_new/Symbol_Plot_Color_Transparency_v5.png)
[Igor ex 1](https://www.wavemetrics.com/products/igorpro/gallery/user_jurkus.htm)
[Igor ex 2](https://www.wavemetrics.com/products/igorpro/gallery/user_nmr.htm)
[SigmaPlot ex (1st example of the showcase)](http://www.sigmaplot.co.uk/products/sigmaplot/graph-showcase.php)
username_2: If font height relative to figure height is really what matters, then switching to a 10 pt font for the new default smaller--*in inches*--figure seems to be what the math says: 12 pt * (6.4 in / 8 in) = 9.6 pt
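That scaling can be checked in a couple of lines (values taken straight from the comment above):

```python
old_font_pt = 12.0        # classic default font size
scale = 6.4 / 8.0         # new default width / classic width (4.8/6 gives the same)
new_font_pt = old_font_pt * scale
assert abs(new_font_pt - 9.6) < 1e-9  # matches the 9.6 pt figure in the comment
print(new_font_pt)
```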
Status: Issue closed
username_0: closed by #6500 |
ueberdosis/tiptap | 1013315926 | Title: HorizontalRule extension: inconsistent behavior
Question:
username_0: ### What’s the bug you are facing?
Inserting an HorizontalRule produces different results depending on:
- How is it created, shortcut or command
- Where in the document it is created; first, last, or in between nodes
- The presence of the StarterKit extensions (maybe because of GapCursor?)
### How can we reproduce the bug on our side?
With just the HorizontalRule extension: https://embed.tiptap.dev/preview/Nodes/HorizontalRule
- Type `---` or `___` or `***`, an empty `<p>` is created before the node.
With the StarterKit extensions: https://embed.tiptap.dev/preview/Examples/Default
- Type `---` or `___` or `***`, an empty `<p>` is created before the node.
- Empty the editor, click <kbd>**horizontal rule**</kbd>, this markup gets created:
```html
<hr contenteditable="false">
<p><br></p>
<p><br><br></p>
```
( I am using the tiptap.dev examples but I also can reproduce these issues locally, with all dependencies updated)
### Can you provide a CodeSandbox?
_No response_
### What did you expect to happen?
- Adding an horizontal rule with markdown shortcuts should not add any empty paragraphs **before** it.
- Adding an horizontal rule as the first node with the `setHorizontalRule` command should not add more than one empty paragraph **after** it.
### Anything to add? (optional)
As always, thanks for your time, and this amazing editor!
### Did you update your dependencies?
- [X] Yes, I’ve updated my dependencies to use the latest version of all packages.
### Are you sponsoring us?
- [X] Yes, I’m a sponsor. 💖 |
flutter/flutter | 448181227 | Title: flutter packages upgrade not working in flutter for web project?
Question:
username_0: ## Steps to Reproduce

1. ...
2. ...
3. ...

## Logs

```
```
Answers:
username_1: You're not providing any info or logs whatsoever. Please follow the issue template, as it's difficult to understand what you mean by "not working". There is no plugin system in place for flutter_web yet, which means that any packages depending on flutter will not work at the moment.
Your dependencies should look like this (with the inclusion of any pure dart dependencies you might have):
```yml
environment:
# You must be using Flutter >=1.5.0 or Dart >=2.3.0
sdk: '>=2.3.0-dev.0.1 <3.0.0'
dependencies:
flutter_web: any
flutter_web_ui: any
dev_dependencies:
build_runner: ^1.4.0
build_web_compilers: ^2.0.0
pedantic: ^1.0.0
dependency_overrides:
flutter_web:
git:
url: https://github.com/flutter/flutter_web
path: packages/flutter_web
flutter_web_ui:
git:
url: https://github.com/flutter/flutter_web
path: packages/flutter_web_ui
```
Also, make sure the `webdev` plugin is installed and activated as per the instructions.
username_0: I already have webdev
in fact, `webdev serve` works perfectly fine
it runs the project and I am getting the desired output in small projects that are not using any external libraries.

### This is my pubspec.yaml:

```yml
name: hello_world
description: An app built using Flutter for web

environment:
  # You must be using Flutter >=1.5.0 or Dart >=2.3.0
  sdk: '>=2.3.0-dev.0.1 <3.0.0'

dependencies:
  flutter_web: any
  flutter_web_ui: any
  http: any

dev_dependencies:
  build_runner: ^1.4.0
  build_web_compilers: ^2.0.0
  pedantic: ^1.0.0

dependency_overrides:
  flutter_web:
    git:
      url: https://github.com/flutter/flutter_web
      path: packages/flutter_web
  flutter_web_ui:
    git:
      url: https://github.com/flutter/flutter_web
      path: packages/flutter_web_ui
```

### And this is my terminal output:

```console
F:\Project\WebSite\test\hello_world>flutter packages upgrade
Could not find a file named "packages/flutter_web/pubspec.yaml" in https://github.com/flutter/flutter_web 7a4c33425ddd78c54aba07d86f3f9a4a0051769b.
Running "flutter packages upgrade" in hello_world...
pub upgrade failed (1)

F:\Project\WebSite\test\hello_world>
```
username_2: Please try one of the following:
If you are using the Flutter SDK for Flutter for web development (i.e. you are using the `flutter` command) then run:
```
flutter pub upgrade
```
Or, if you are using the Dart SDK for Flutter for web development (i.e. `pub` and `webdev` commands) then run:
```
pub upgrade
```
Status: Issue closed
username_3: username_3-mbp:samples username_3$ cd web/charts/flutter/
username_3-mbp:flutter username_3$ flutter pub get
Could not find a file named "packages/flutter_web/pubspec.yaml" in https://github.com/flutter/flutter_web 1ed4317aa91038ba99531037ce00c04672c4bee1.
Running "flutter pub get" in flutter...
pub get failed (1)
username_3-mbp:flutter username_3$ flutter pub upgrade
Could not find a file named "packages/flutter_web/pubspec.yaml" in https://github.com/flutter/flutter_web 2e540931f73593e35627592ca4f9a4ca3035ed31.
Running "flutter pub upgrade" in flutter...
pub upgrade failed (1)
username_2: That's a strange error message. The file is definitely there: https://github.com/flutter/flutter_web/blob/master/packages/flutter_web/pubspec.yaml. I'm wondering if you are having a network issue. Perhaps git is unable to download the package from github. If that's the case, you could `git clone` https://github.com/flutter/flutter_web manually and use a `path` dependency instead of a `git` dependency.
username_2: Otherwise, I think we'd need more info to figure this out. Could you please run `flutter pub get -v` and paste the output here?
username_4: The following is what I get when I run `flutter pub get -v`:
[+5075 ms] Could not find a file named "packages/flutter_web/pubspec.yaml" in
https://github.com/flutter/flutter_web 20e59316b8b8474554b38493b8ca888794b0234a.
[ +46 ms] Running "flutter pub get" in platform... (completed in 5.1s)
[ +19 ms] "flutter get" took 5,271ms.
[ ] "flutter get" took 5,271ms.
pub get failed (1)
#0 throwToolExit (package:flutter_tools/src/base/common.dart:28:3)
#1 pub (package:flutter_tools/src/dart/pub.dart:173:5)
<asynchronous suspension>
#2 pubGet (package:flutter_tools/src/dart/pub.dart:106:13)
<asynchronous suspension>
#3 PackagesGetCommand._runPubGet (package:flutter_tools/src/commands/packages.dart:95:13)
<asynchronous suspension>
#4 PackagesGetCommand.runCommand (package:flutter_tools/src/commands/packages.dart:126:11)
<asynchronous suspension>
#5 FlutterCommand.verifyThenRunCommand (package:flutter_tools/src/runner/flutter_command.dart:478:18)
<asynchronous suspension>
#6 FlutterCommand.run.<anonymous closure> (package:flutter_tools/src/runner/flutter_command.dart:383:33)
<asynchronous suspension>
#7 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:29)
<asynchronous suspension>
#8 _rootRun (dart:async/zone.dart:1124:13)
#9 _CustomZone.run (dart:async/zone.dart:1021:19)
#10 _runZoned (dart:async/zone.dart:1516:10)
#11 runZoned (dart:async/zone.dart:1463:12)
#12 AppContext.run (package:flutter_tools/src/base/context.dart:152:18)
<asynchronous suspension>
#13 FlutterCommand.run (package:flutter_tools/src/runner/flutter_command.dart:375:20)
#14 CommandRunner.runCommand (package:args/command_runner.dart:197:27)
<asynchronous suspension>
#15 FlutterCommandRunner.runCommand.<anonymous closure>
(package:flutter_tools/src/runner/flutter_command_runner.dart:396:21)
<asynchronous suspension>
#16 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:29)
<asynchronous suspension>
#17 _rootRun (dart:async/zone.dart:1124:13)
#18 _CustomZone.run (dart:async/zone.dart:1021:19)
#19 _runZoned (dart:async/zone.dart:1516:10)
#20 runZoned (dart:async/zone.dart:1463:12)
#21 AppContext.run (package:flutter_tools/src/base/context.dart:152:18)
<asynchronous suspension>
#22 FlutterCommandRunner.runCommand (package:flutter_tools/src/runner/flutter_command_runner.dart:356:19)
<asynchronous suspension>
#23 CommandRunner.run.<anonymous closure> (package:args/command_runner.dart:112:25)
#24 new Future.sync (dart:async/future.dart:224:31)
#25 CommandRunner.run (package:args/command_runner.dart:112:14)
#26 FlutterCommandRunner.run (package:flutter_tools/src/runner/flutter_command_runner.dart:242:18)
#27 run.<anonymous closure>.<anonymous closure> (package:flutter_tools/runner.dart:62:22)
<asynchronous suspension>
#28 _rootRun (dart:async/zone.dart:1124:13)
#29 _CustomZone.run (dart:async/zone.dart:1021:19)
#30 _runZoned (dart:async/zone.dart:1516:10)
#31 runZoned (dart:async/zone.dart:1500:12)
#32 run.<anonymous closure> (package:flutter_tools/runner.dart:60:18)
<asynchronous suspension>
#33 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:29)
<asynchronous suspension>
#34 _rootRun (dart:async/zone.dart:1124:13)
#35 _CustomZone.run (dart:async/zone.dart:1021:19)
#36 _runZoned (dart:async/zone.dart:1516:10)
#37 runZoned (dart:async/zone.dart:1463:12)
#38 AppContext.run (package:flutter_tools/src/base/context.dart:152:18)
<asynchronous suspension>
#39 runInContext (package:flutter_tools/src/context_runner.dart:56:24)
<asynchronous suspension>
#40 run (package:flutter_tools/runner.dart:51:10)
#41 main (package:flutter_tools/executable.dart:62:9)
<asynchronous suspension>
#42 main (file:///Users/jinliang/workspace/flutter-sdk/packages/flutter_tools/bin/flutter_tools.dart:8:3)
#43 _startIsolate.<anonymous closure> (dart:isolate-patch/isolate_patch.dart:299:32)
#44 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:172:12)
username_4: It works by using it as a path dependency.
username_2: Would a path dependency work for you for now? We're in the middle of moving the workflow to the mainline Flutter SDK, where this issue should be fixed.
username_5: ETA?
username_2: @username_5 It should already work when you use the Flutter SDK rather than `package:flutter_web`. It's just less stable and not documented, so we're not yet recommending it. Current options are:
1. Take the risk and use `flutter run -d chrome` for your web projects, along with all the `flutter` CLI tools. `flutter packages upgrade` should work in this mode.
2. Keep using `package:flutter_web`, and either use the path dependency workaround (https://github.com/flutter/flutter/issues/33309#issuecomment-518335140) or use the non-Flutter Dart SDK and `pub upgrade`.
username_6: Yes, it works. But it makes flutter-web look like a toy.
jaketerrito/speedchallenge | 420759715 | Title: Need to Train Models Separately
Question:
username_0: The Predictor and Decoder Networks should be trained differently since they each require a different optimizer. We'll have to incrementally train each side, similar to our previous work with adversarial autoencoders.
BMellor/EmbLab | 204092190 | Title: 3.5.2: May want to reference the IRQ Handler here
Question:
username_0: 3.5.2: "Your interrupt handlers must be declared to accept no arguments and have no return value." i.e. "void HardFault_Handler(void)" as C is kinda weird. Although it works without the second void, some compilers may complain.
Reference and point out that the second void is there in Figure: "Example USART RXNE Interrupt Handler" |
wvuweb/cleanslate-cms | 54923874 | Title: Sync Theme spins entire link instead of just the icon
Question:
username_0: _From @username_0 on June 16, 2014 17:34_
## Steps to reproduce the issue
1. Log in to CleanSlate.
1. Go to a site, click "Sync Theme".
1. Watch both the icon and "Sync Theme" spin.
## Results
When I added the words "Sync Theme" in 7f5ef460034c4bb5cd63f7848b4d3f2d77c24bb1, the `.fa-spin` class gets applied to the entire `<a>` tag, not just the icon (`<i>`); thus, the entire "[icon] Sync Theme" spins, instead of just the icon.
## Expected Results
The `.fa-spin` class should only be applied to the `<i>` tag, not the entire `<a>` tag. I believe this can be fixed with a simple edit to [`admin_header.js.coffee`](https://github.com/wvuweb/cleanslate/blob/dev/app/assets/javascripts/slate/views/admin_header.js.coffee).
_Copied from original issue: wvuweb/cleanslate#49_
Status: Issue closed
Answers:
username_0: e10a0e65ef091614fdaa6d5580193fcaedc2b31c and a867482d66d20d7cf7cb593c03077e74340f1047 close this issue. |
micronaut-projects/micronaut-core | 933525267 | Title: java.lang.NoClassDefFoundError: javax/inject/Provider with Micronaut 3.0.0-SNAPSHOT and GraphQL example
Question:
username_0: I've migrated micronaut-graphql to Micronaut 3.0.0-SNAPSHOT and then I tried to run one of the examples included in the repository.
The application fails to start with the following error:
```
11:56:34.631 [main] ERROR io.micronaut.runtime.Micronaut - Error starting Micronaut server: Failed to load a service: null
java.lang.RuntimeException: Failed to load a service: null
at io.micronaut.core.io.service.SoftServiceLoader$ServiceInstanceLoader.collect(SoftServiceLoader.java:409)
at io.micronaut.core.io.service.SoftServiceLoader$UrlServicesLoader.collect(SoftServiceLoader.java:367)
at io.micronaut.core.io.service.SoftServiceLoader$ServicesLoader.collect(SoftServiceLoader.java:310)
at io.micronaut.core.io.service.SoftServiceLoader.collectAll(SoftServiceLoader.java:146)
at io.micronaut.context.DefaultBeanContext.resolveBeanDefinitionReferences(DefaultBeanContext.java:1761)
at io.micronaut.context.DefaultApplicationContext.resolveBeanDefinitionReferences(DefaultApplicationContext.java:135)
at io.micronaut.context.DefaultBeanContext.readAllBeanDefinitionClasses(DefaultBeanContext.java:3365)
at io.micronaut.context.DefaultBeanContext.start(DefaultBeanContext.java:245)
at io.micronaut.context.DefaultApplicationContext.start(DefaultApplicationContext.java:181)
at io.micronaut.runtime.Micronaut.start(Micronaut.java:71)
at io.micronaut.runtime.Micronaut.run(Micronaut.java:311)
at io.micronaut.runtime.Micronaut.run(Micronaut.java:297)
at example.Application.main(Application.java:26)
Caused by: java.lang.reflect.InvocationTargetException: null
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at io.micronaut.core.io.service.SoftServiceLoader$ServiceInstanceLoader.compute(SoftServiceLoader.java:396)
at java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
Caused by: java.lang.NoClassDefFoundError: javax/inject/Provider
at io.micronaut.inject.provider.JavaxProviderBeanDefinition.getBeanType(JavaxProviderBeanDefinition.java:44)
at io.micronaut.inject.provider.AbstractProviderDefinition.<init>(AbstractProviderDefinition.java:62)
at io.micronaut.inject.provider.JavaxProviderBeanDefinition.<init>(JavaxProviderBeanDefinition.java:34)
... 10 common frames omitted
Caused by: java.lang.ClassNotFoundException: javax.inject.Provider
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 13 common frames omitted
```
I'm not 100% sure if this is a bug in core or something missing during the upgrade but I haven't found which dependency triggers this error (if any). In other modules the culprit were `micronaut-cache` or `micronaut-micrometer` because those modules still use `javax.inject.Provider` but in graphql those modules are not used.
The workaround is to add the `javax.inject` dependency, but we need to find the root cause and fix it.
Maybe the error is that `JavaxProviderBeanDefinition` is loaded and, since it extends `AbstractProviderDefinition<Provider<Object>>` (the `Provider` here is the javax one), it fails.
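As a sketch of that workaround (Gradle Groovy DSL; `javax.inject:javax.inject:1` is the standard JSR-330 artifact on Maven Central):

```groovy
dependencies {
    // Restores javax.inject.Provider on the classpath until the root cause is fixed
    implementation 'javax.inject:javax.inject:1'
}
```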
### Steps to Reproduce
- Checkout the branch in this PR: https://github.com/micronaut-projects/micronaut-graphql/pull/197
- `./gradlew graphql-example-chat:run`
### Environment Information
- **Operating System**: Linux Mint 20.1
- **Micronaut Version:** `3.0.0-SNAPSHOT`
- **JDK Version:** 8<issue_closed>
Status: Issue closed |
bors-ng/bors-ng | 806280115 | Title: bors-ng silently discards all batches on GitHub 422 errors
Question:
username_0: We have observed the following bors-ng behavior running on our own bors instance:
Crash log in bors:
```
{{:badmatch,
{:error, :push, 422,
"{\"message\":\"Required status check \\\"continuous-integration/jenkins/pr-merge\\\" is expected. At least 1 approving review is required by reviewers with write access.\",\"documentation_url\":\"https://docs.github.com/enterprise/2.22/user/articles/about-protected-branches\"}"}},
[
{BorsNG.Worker.Batcher, :complete_batch, 3,
[file: 'lib/worker/batcher.ex', line: 712]},
{BorsNG.Worker.Batcher, :maybe_complete_batch, 1,
[file: 'lib/worker/batcher.ex', line: 658]},
{BorsNG.Worker.Batcher, :handle_cast, 2,
[file: 'lib/worker/batcher.ex', line: 94]},
{:gen_server, :try_dispatch, 4,
[file: 'gen_server.erl', line: 637]},
{:gen_server, :handle_msg, 6,
[file: 'gen_server.erl', line: 711]},
{:proc_lib, :init_p_do_apply, 3,
[file: 'proc_lib.erl', line: 249]}
]}
```
In this particular case, the target branch has required reviews set, but this particular branch shouldn't be used with bors anyway, so that part is not what I'm worried about; I would say it's working as intended.
The issue that is actually a problem is that bors will silently drop all in-flight batches when it encounters this error. We have not yet taken a closer look into why this happens, but I have seen similar reports in other issues, e.g. #703. The other issues only mention that one batch and not all are dropped.
Any ideas where we could start taking a closer look into the code?
Answers:
username_1: Hey @username_0, I believe this can now be closed as fixed by #1273. Please take a look and verify if it covers your use case, or provide more details if not.
username_0: Hello @username_1,
I've deployed a version of bors that includes your fix and the issue appears to be fixed. Thanks a lot :-)
Status: Issue closed
|
bitfocus/companion-module-requests | 824027165 | Title: Add Internal countdown and/or count up clock to buttons.
Question:
username_0: <!--
USE THE FOLLOWING PAGE IF YOU WANT SUPPORT FOR A NEW DEVICE OR SOFTWARE PROGRAM
https://github.com/bitfocus/companion-module-requests/issues/new
-->
**Describe the feature**
There are currently clock variables to place on a button. A powerful & useful feature would be to include Count Up & Count Down clocks to be displayed on buttons. It would include basic actions such as Set Countdown Clock Time, Start Countdown Clock, Reset Clock, etc.
**Usecases**
When using slow moving shots with multiple PTZ cameras, a countdown displayed on the Preset button indicating when the preset will reach its final position would serve as an excellent guide to the user on when to fade to the next camera.
Programming the feature would be straightforward. In addition to the button having actions to Set the Drive Time & Recall PTZ Preset, it would include Set Countdown & Start Countdown. With the Set Countdown Time matching the PTZ Drive time, and the countdown value available as a variable to label the button - you've got yourself a nice countdown to know when your camera motion will conclude.
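As a sketch of the internal state such a countdown would track (the names here are invented for illustration, not Companion's actual API):

```javascript
// Minimal countdown model: set a duration, tick once per second,
// and read the remaining value for display on a button.
function makeCountdown(seconds) {
  let remaining = seconds;
  return {
    tick() { if (remaining > 0) remaining -= 1; return remaining; },
    reset(to = seconds) { remaining = to; return remaining; },
    get remaining() { return remaining; },
  };
}
```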
In addition to adopting this on individual preset buttons, another great (maybe better now that I think of it) use case would be to have a master Drive Time button that's always showing on your preset page. That button shows the Drive Time of a PTZ cam, and starts counting down anytime a new Preset is recalled for that camera. Then there could be buttons to increase or decrease drive time, so adjustments could be made on the fly. In this scenario the preset recall buttons would not include any drive time actions, leaving that all up to the master drivetime buttons.
In the short run, this countdown can be done using vMix's countdown. There doesn't seem to be a way to grab the vMix countdown value as a variable, so there's no way to put it on the Streamdeck Button - but it can be overlayed on a Preview Monitor, etc. I like the idea of it being an internal countdown, so that it would work independently of all other gear. In addition, there's probably lots of other uses for a countdown or count-up being displayed on buttons. I'm merely thinking selfishly about how I want to use it. Thanks! |
laixintao/iredis | 591766443 | Title: The [TYPE] option for command SCAN is not avaliable until redis `6.0+`
Question:
username_0: As the documentation on [the TYPE option](https://redis.io/commands/scan#the-type-option) says, you can use the `TYPE` option to ask `SCAN` to only return objects that match a given type, as of version `6.0`.
But [commands.json](https://github.com/username_1/iredis/blob/master/iredis/data/commands.json) marks it as available since `2.8.0`, so there is always a `syntax error`.
Answers:
username_1: Sorry about that.
But the commands.json is actually from official docs: https://github.com/antirez/redis-doc/blob/10ca7242a10a5c460cba5bdcc3a61bb80e5ac744/commands.json#L3657
`since` means "since which version this command is available in Redis"; any changes in later versions are not included.
Maybe displaying the bottom hint based on the redis-server version would be the ideal behavior, but that requires iredis to have some kind of versioning system: completions, syntax, hints, and docs would all need to be based on it, which is pretty complicated. So I decided to support only the latest Redis.
You need to keep in mind what's not available in your Redis if you use an older version. Sorry again.
username_1: Hi @username_0 I think I will stick with the latest version of redis. Going to close this issue, feel free to reopen if you have further questions :)
Status: Issue closed
|
rosenfeld/active_record_migrations | 127417669 | Title: Package gem with active_record_migrations
Question:
username_0: Hi,
this is probably more a StackOverflow question, if so, just give me a hint and close this issue.
I want to create a gem (call it `ar_models`) that contains some ActiveRecord-models and provides the means to setup and migrate the database. Then, other gems or projects can require that gem, run e.g. a rake Task to setup (or migrate) the database and happily use the model. (I am not talking about rails projects here).
Now, in `active_record_migrations` the migration files are picked up from `db/migrate` (although it can be changed in the options). Either I know the absolute path to the "installed" gem (`ar_models`) or I copy over the migration files. Is there a cleaner option that I do not see? As is, it renders the otherwise great active_record_migrations gem a bit useless for my purpose.
Thanks for contributing, reading and responding.
Answers:
username_1: @username_0 Sorry for the shameless advertisement, but [I did a system for this for our company’s needs](https://github.com/username_1/multi_ar). We have an internal gem where the migrations live. That gem has a shallow binary which implements MultiAR’s interface class, and a shallow class which sets up the settings (defaults) we use and can be included in projects for convenience.
username_0: @username_1 Great shameless advertisment :) If I understand correctly I would create my `ar_models` gem (it will have a cooler name) which depends on `multi_ar` and provides the migration and model definitions.
Then, the depending gem (`ar_model_user`) requires `ar_models`, has access to the models, means of migrating the database after version jumps and can define the database location itself? Sorry, I have not yet found the time to look carefully at its README and might not be able to play around a lot these days.
Status: Issue closed
username_2: @username_0 you can provide multiple migrations paths actually. From each gem using `ar_models` you can call some code like this:
```ruby
ActiveRecordMigrations.configure{|c| c.migrations_paths << MY_GEM_PATH}
```
Just ensure this is executed before `ActiveRecordMigrations.load_taks` is called.
username_1: @username_0 Basically like that, yes.
Just a word of warning, since this was born from our internal needs (and still evolving that way), there may be some obvious missing features or bugs. I will fix them if you inform about them though. Documentation may be scarce too.
Looks like this project also supports multiple migration dirs; my project’s focus was multiple databases more than multiple migration dirs. So unless you need multiple databases, this may be better; I don’t think you’ll gain much from my project :)
username_0: @username_2 Thanks! I will try to find out how to determine the currently required gem's own path (I guess it's somewhere in `Gem`). It has to work with relative paths from `Gemfile`s as well as installed gems (which I usually pull in via Gemfile->gemspec and bundler)
username_2: Or you can simply use something like this in you `lib/my-gem.rb`:
```ruby
MY_MIGRATIONS_PATH = File.join __dir__, '..', 'migrations'
```
Or something like that.
username_0: @username_2 Awesome, will try that when I find the time for that kind of coziness. It's true that this pattern is used here and there and thus should resolve reliably to the current code's location :).
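For reference, a sketch of that pattern (the constant name is invented); `__dir__` resolves relative to the current source file, so the same code works from a path-based Gemfile entry or an installed gem:

```ruby
# Resolve the gem's migrations directory relative to this source file
# (falling back to the working directory when __dir__ is unavailable, e.g. in a REPL).
base = __dir__ || Dir.pwd
MIGRATIONS_PATH = File.expand_path(File.join(base, "..", "migrations"))
```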
username_0: @username_2 Great, that seems to work, thanks! |
MaxDZ8/M8M | 39343731 | Title: Sent share popup not showing in P2Pool
Question:
username_0: It has been reported that the "sent first share" popup does not appear when using P2P, even though the node reports shares being sent (?).
Answers:
username_0: I have never been able to reproduce this. Besides, that code has been changed a few times by now and it doesn't make much sense to keep this one open.
Status: Issue closed
|
jenkinsci/docker-plugin | 281481295 | Title: 1.1 spins up full amount of containers for single job
Question:
username_0: Similar to original post in: https://github.com/jenkinsci/docker-plugin/issues/569
If you get some troubles with docker-plugin, please report
- [ ] docker-plugin version you use: 1.1.1
- [ ] jenkins version you use: 2.93
- [ ] docker engine version you use: Docker version 17.09.0-ce, build afdb6d4
- [ ] stack trace / logs / any technical details that could help diagnose this issue
Just upgraded from 1.0.4 to 1.1.1. Tried to run a single job, and the plugin spun up 25 slaves, but never ran the job.
<img width="362" alt="screen shot 2017-12-12 at 12 39 06 pm" src="https://user-images.githubusercontent.com/775690/33899662-2a17b970-df3a-11e7-9913-deed2e0b98c8.png">
```
Dec 12, 2017 12:28:37 PM hudson.WebAppMain$3 run
INFO: Jenkins is fully up and running
Dec 12, 2017 12:36:30 PM com.nirima.jenkins.plugins.docker.DockerCloud provision
INFO: Asked to provision 1 slave(s) for: null
Dec 12, 2017 12:36:30 PM com.nirima.jenkins.plugins.docker.DockerCloud provision
INFO: Will provision 'username_0/jenkins-slave', for label: 'null', in cloud: 'platform-ciw1.plab'
Dec 12, 2017 12:36:30 PM com.nirima.jenkins.plugins.docker.DockerCloud addProvisionedSlave
INFO: Provisioning 'username_0/jenkins-slave' number '0' on 'platform-ciw1.plab'; Total containers: '0'
Dec 12, 2017 12:36:30 PM hudson.slaves.NodeProvisioner$StandardStrategyImpl apply
INFO: Started provisioning Image of username_0/jenkins-slave from platform-ciw1.plab with 1 executors. Remaining excess workload: 0
Dec 12, 2017 12:36:30 PM com.nirima.jenkins.plugins.docker.DockerTemplate pullImage
INFO: Pulling image 'username_0/jenkins-slave:latest'. This may take awhile...
Dec 12, 2017 12:36:34 PM com.nirima.jenkins.plugins.docker.DockerTemplate pullImage
INFO: Finished pulling image 'username_0/jenkins-slave:latest', took 3833 ms
Dec 12, 2017 12:36:34 PM com.nirima.jenkins.plugins.docker.DockerTemplate provisionNode
INFO: Trying to run container for username_0/jenkins-slave
Dec 12, 2017 12:36:35 PM com.nirima.jenkins.plugins.docker.utils.PortUtils$ConnectionCheckSSH execute
INFO: SSH port is open on platform-ciw1.plab:34274
Dec 12, 2017 12:36:35 PM hudson.slaves.NodeProvisioner$2 run
INFO: Image of username_0/jenkins-slave provisioning successfully completed. We have now 2 computer(s)
...
...
...
INFO: Image of username_0/jenkins-slave provisioning successfully completed. We have now 25 computer(s)
```
It also keeps spamming the log with
```
INFO: Asked to provision 1 slave(s) for: null
Dec 12, 2017 12:39:06 PM com.nirima.jenkins.plugins.docker.DockerCloud provision
INFO: Will provision 'username_0/jenkins-slave', for label: 'null', in cloud: 'platform-ciw1.plab'
Dec 12, 2017 12:39:06 PM com.nirima.jenkins.plugins.docker.DockerCloud addProvisionedSlave
INFO: Not Provisioning 'username_0/jenkins-slave'; Server 'platform-ciw1.plab' full with '25' container(s)
Dec 12, 2017 12:39:06 PM com.nirima.jenkins.plugins.docker.DockerCloud provision
INFO: Asked to provision 1 slave(s) for: null
```
Answers:
username_1: Those nodes all fail to connect, don't they?
Seems to be same issue as https://github.com/jenkinsci/docker-plugin/issues/568
username_0: You are correct, looking at the node logs...
```
[12/15/17 10:12:28] [SSH] Opening SSH connection to <DOCKER_HOST>
Key exchange was not finished, connection is closed.
java.io.IOException: There was a problem while connecting to <DOCKER_HOST>
at com.trilead.ssh2.Connection.connect(Connection.java:834)
at com.trilead.ssh2.Connection.connect(Connection.java:703)
at com.trilead.ssh2.Connection.connect(Connection.java:617)
at hudson.plugins.sshslaves.SSHLauncher.openConnection(SSHLauncher.java:1302)
at hudson.plugins.sshslaves.SSHLauncher$2.call(SSHLauncher.java:814)
at hudson.plugins.sshslaves.SSHLauncher$2.call(SSHLauncher.java:803)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Key exchange was not finished, connection is closed.
at com.trilead.ssh2.transport.KexManager.getOrWaitForConnectionInfo(KexManager.java:95)
at com.trilead.ssh2.transport.TransportManager.getConnectionInfo(TransportManager.java:237)
at com.trilead.ssh2.Connection.connect(Connection.java:786)
... 9 more
Caused by: java.io.IOException: Cannot read full block, EOF reached.
at com.trilead.ssh2.crypto.cipher.CipherInputStream.getBlock(CipherInputStream.java:81)
at com.trilead.ssh2.crypto.cipher.CipherInputStream.read(CipherInputStream.java:108)
at com.trilead.ssh2.transport.TransportConnection.receiveMessage(TransportConnection.java:232)
at com.trilead.ssh2.transport.TransportManager.receiveLoop(TransportManager.java:706)
at com.trilead.ssh2.transport.TransportManager$1.run(TransportManager.java:502)
... 1 more
[12/15/17 10:12:28] Launch failed - cleaning up connection
[12/15/17 10:12:28] [SSH] Connection closed.
```
Anyways, regarding https://github.com/jenkinsci/docker-plugin/issues/568, I'm fine using docker attach if you want to deprecate SSH because of impl difficulties. I've re-upgraded to `1.1.1` and successfully run a job using docker attach (user `jenkins` for me, not root, as specified in my docker image).
I will say the behavior of spinning up max containers is confusing in the event the spun-up containers fail to connect. I don't know a better way to do it besides maybe halting after one failed connection and logging it. It's an awful cleanup if I have 25 allowed containers or something.
Thank you! Feel free to close this.
Status: Issue closed
|
integration-team-iiith/physical-chemistry-responsive-lab | 245940236 | Title: QA_Nuclear Magnetic Resonance Spectroscopy and Evaluation of Simple 1H NMR Spectra of Select Organic Compounds_Experiment_dropper's-inconsistent-functionality
Question:
username_0: Defect Description :
In the experiment section, the functionality of the dropper is inconsistent. As mentioned in issue #89, on every alternate click the dropper's position moves from the magnetic field to the original position and vice versa.
Actual Result :
In the experiment section, the functionality of the dropper is inconsistent. As mentioned in issue #89, on every alternate click the dropper's position moves from the magnetic field to the original position and vice versa. This does not happen in the original experiment.
Environment :
OS: Windows 7, Ubuntu 16.04, CentOS 6
Browsers: Firefox 42.0, Chrome 47.0, Chromium 45.0
Bandwidth: 100Mbps
Hardware Configuration: 8GB RAM
Processor: i5
Attachments:
**N/A** |
ContinuumIO/anaconda-issues | 257779701 | Title: Navigator Error
Question:
username_0: ## Main error
An unexpected error occurred on Navigator start-up<br>byte indices must be integers or slices, not str
## Traceback
```
Traceback (most recent call last):
File "/Applications/anaconda/lib/python3.6/site-packages/anaconda_navigator/exceptions.py", line 75, in exception_handler
return_value = func(*args, **kwargs)
File "/Applications/anaconda/lib/python3.6/site-packages/anaconda_navigator/app/start.py", line 115, in start_app
window = run_app(splash)
File "/Applications/anaconda/lib/python3.6/site-packages/anaconda_navigator/app/start.py", line 58, in run_app
window = MainWindow(splash=splash)
File "/Applications/anaconda/lib/python3.6/site-packages/anaconda_navigator/widgets/main_window.py", line 160, in __init__
self.api = AnacondaAPI()
File "/Applications/anaconda/lib/python3.6/site-packages/anaconda_navigator/api/anaconda_api.py", line 1205, in AnacondaAPI
ANACONDA_API = _AnacondaAPI()
File "/Applications/anaconda/lib/python3.6/site-packages/anaconda_navigator/api/anaconda_api.py", line 65, in __init__
self._conda_api = CondaAPI()
File "/Applications/anaconda/lib/python3.6/site-packages/anaconda_navigator/api/conda_api.py", line 1622, in CondaAPI
CONDA_API = _CondaAPI()
File "/Applications/anaconda/lib/python3.6/site-packages/anaconda_navigator/api/conda_api.py", line 340, in __init__
self.set_conda_prefix()
File "/Applications/anaconda/lib/python3.6/site-packages/anaconda_navigator/api/conda_api.py", line 489, in set_conda_prefix
self.ROOT_PREFIX = info['root_prefix']
TypeError: byte indices must be integers or slices, not str
```
## System information
```
python: 3.6.1
language: en
os: Darwin;16.7.0;Darwin Kernel Version 16.7.0: Thu Jun 15 17:36:27 PDT 2017; root:xnu-3789.70.16~2/RELEASE_X86_64;x86_64;i386
version: 1.6.2
platform: osx-64
qt: 5.6.2
pyqt: 5.6.0
conda: 4.3.23
```
Status: Issue closed
Answers:
username_1: **See Issue #1837 for more information on how to fix this.**
---
Closing as duplicate of #1837
---
Please remember to update to the latest version of Navigator to include
the latest fixes.
Open a terminal (on Linux or Mac) or the Anaconda Command Prompt (on windows)
and type:
```
$ conda update anaconda-navigator
$ conda update navigator-updater
``` |
tokio-rs/tokio | 555045829 | Title: sync: watch::Receiver::recv() should not return `Option`
Question:
username_0: Now that `broadcast` exists, `watch` should be specialized to track a single value. As such, "sender shutdown" is out of scope and `recv()` can just return `T`.
Answers:
username_1: Are you suggesting that `watch` should be single-message like `oneshot`? I'm using `watch` over `broadcast` because I only need the most recent value (I'm sending time-sensitive but loss-permissive signals from a 'master' task to several other tasks), but I can't use `oneshot` because I need to send multiple values.
I'd like to propose an alternative that won't break current uses of either channel type, but still allows each channel type to fill its niche: making `oneshot` be single-producer multi-consumer. This can be accomplished by adding `oneshot::Sender::subscribe` and/or `impl<T> Clone for oneshot::Receiver<T>`.
username_2: @username_1 I think you may have got the wrong idea?
One thing that would be useful is if there were convenience methods that made it act more like a `Cell`. For example, methods to access the value from the `Sender` side so you don't have to store an extra copy.
username_3: `sync::watch` is more like `sync::broadcast`. Should we consider moving `sync::watch` into `sync::broadcast`, and implementing another version of `watch` that is a single-producer, multi-consumer channel where each value can only be consumed once?
username_3: Or a multi-producer, multi-consumer channel where each value can only be consumed once.
Status: Issue closed
|
google/grinder.dart | 82677750 | Title: Support for running tasks in parallel
Question:
username_0: Does grinder have parallel task support? If not, would it make sense to add it?
I'd like to speed up my tasks. If some of them can be run in parallel, that would be great if grinder could help make that easier.
Thanks for considering!
Answers:
username_1: Are you talking about `runAsync`, which would run external processes in parallel, or about running Dart tasks in parallel (isolates)?
Support for running Grinder tasks in Isolates while still using sync code inside tasks would be quite nice.
username_0: I probably mean isolates. :)
username_1: or both ;-) `@DependsParallel(task1, task2,...)`?
username_0: Something like that, yeah. I want to spin up a bunch of independent tasks,
and when they all complete, move on.
username_2: dupe of #170
Status: Issue closed
|
GCTC-NTGC/TalentCloud | 528986217 | Title: Task - Make users aware that cookies are required
Question:
username_0: # Description
Submitting any form with cookies disabled leads to a 419 error, because a cookie is used to track the session, and session data is used to verify CSRF tokens.
We should add an explanation to the 419 error page that they may need to enable cookies. Also, if the TokenMismatchException occurs on the Login, Registration, or Reset Password forms, we can catch the exception and return a form error message explaining the need for cookies.
Answers:
username_1: Would be nice to flash an error to the form _before_ the user submits the form, notifying them that cookies are required. In addition to adding more information to the 419 error page.
username_2: This ended up being a more specific technology problem. IE on our TBS surface tablets failed, but chrome on the same device works. @username_3 could you investigate a little more?
username_3: Will do! |
quarkusio/quarkus-quickstarts | 990932911 | Title: Implement Oracle datasource quickstart
Question:
username_0: Add a quickstart that shows Quarkus datasource functionality using the Oracle datasource (`quarkus-jdbc-oracle`).
The majority of the quickstarts use PostgreSQL (`quarkus-jdbc-postgresql`), and there is a lack of examples for other datasources.
It could be e.g. an analog of `hibernate-orm-quickstart`/`hibernate-orm-panache-quickstart` with postgresql swapped for oracle. |
github-vet/rangeloop-pointer-findings | 774946011 | Title: natanbc/ssrf-pwnabot: app/app.go; 9 LoC
Question:
username_0: [Click here to see the code in its original context.](https://github.com/natanbc/ssrf-pwnabot/blob/c9b6a21588f49ee0daa34bce58d0f64ae280aad0/app/app.go#L44-L52)
<details>
<summary>Click here to show the 9 line(s) of Go which triggered the analyzer.</summary>
```go
for update := range updates {
if update.Message != nil && update.Message.IsCommand() {
go func() {
SSRFPwnaBot.handleCommand(&update)
}()
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
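For context, the pattern these findings flag is a goroutine closure capturing the range variable, which (before Go 1.22's loop-variable scoping change) can make every iteration observe the same, last value. A minimal, deterministic sketch of the capture behavior (the helper names here are invented, not from the flagged repo):

```go
package main

import "fmt"

// demoCapture contrasts taking the address of the range variable directly
// with shadowing it first; only the shadowed copies reliably keep
// per-iteration values on all Go versions.
func demoCapture(updates []int) (naive, shadowed []int) {
	var naivePtrs, shadowPtrs []*int
	for _, u := range updates {
		naivePtrs = append(naivePtrs, &u) // pre-Go 1.22: every pointer aliases one loop variable
	}
	for _, u := range updates {
		u := u // shadow: a fresh variable per iteration
		shadowPtrs = append(shadowPtrs, &u)
	}
	for i := range updates {
		naive = append(naive, *naivePtrs[i])
		shadowed = append(shadowed, *shadowPtrs[i])
	}
	return naive, shadowed
}

func main() {
	naive, shadowed := demoCapture([]int{1, 2, 3})
	fmt.Println(naive, shadowed) // shadowed is always [1 2 3]
}
```

In the flagged snippet, the analogous fix would be `update := update` before the `go func()`, or passing `update` as an argument to the closure.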
commit ID: c9b6a21588f49ee0daa34bce58d0f64ae280aad0 |
VasquezNathan/CSE_100 | 834394449 | Title: Use of for loop
Question:
username_0: https://github.com/username_1/CSE_100/blob/fa77d1924aa9209ba7efcd6228c6a4cc1c18da08/Lab06/nvasquez12.cpp#L22
Hi Nathan, Dalton here. Your use of a for loop here works, but should be a while loop for correctness. Typically one uses a while true loop with a break statement instead of allocating extra resources in a for loop. The intent of the code is clearer with a while loop.
Good luck learning, happy to see you learning CSCI!
Answers:
username_1: Hi Dalton, I hope all is well. I see that it would be more clear to have a while true loop, but you mentioned for(;;) using more resources. I was under the impression that they are compiled to assembly identically (using gcc at least). I'll be making the switch to while true because sometimes I can write some ugly code, but for curiosity's sake is a while true more optimal than for(;;)? @username_0 thank you again tho, I'm still trying to get used to github so sorry for such a late reply. |
tymondesigns/jwt-auth | 368008870 | Title: Support for RFC 7519
Question:
username_0: Does this library has support for RFC 7519? https://tools.ietf.org/html/rfc7519
Answers:
username_0: @username_1 some answer?
Status: Issue closed
username_1: JSON Web Encryption (JWE) is not supported out of the box right now but you could implement your own JWT provider and use something like https://github.com/web-token/jwt-framework to power it |
morsk/logi-command | 787616211 | Title: Remember item_slot_count
Question:
username_0: Remember the item_slot_count the blueprint was created with. Ideally by putting a tag on the 1st combinator, because this will be preserved if a user cut-and-pastes entities off the ground.
Also check to see if Auto Trash does this, and use its value if it's provided. |
chinchang/web-maker | 256591450 | Title: Infinite loop instrumenting is failing
Question:
username_0: Hi,
So I jhave some code
```
setHeader() {
this.listOptions.rewind();
let temp = "";
let can = this.listOptions.getNext();
debugger;
while (this.listOptions.hasNext() == true)
{
let head = this.listOptions.getNext();
let str = padStr(head.header, head.width);
temp += str;
}
this.headings = temp;
},
```
Which is re-written as (Instrumented)
```
setHeader: function setHeader() {
this.listOptions.rewind();
var temp = "";
var can = this.listOptions.getNext();
debugger;
var _wmloopvar1 = Date.now();
while (this.listOptions.hasNext() == true) {
if (Date.now() - _wmloopvar1 > 1000) { window.top.previewException(new Error("Infinite loop")); break;}
var _head = this.listOptions.getNext();
var str = padStr(_head.header, _head.width);
temp += str;
}
this.headings = temp;
},
```
Even though the condition is true, the loop is flagged as infinite???
Huh???
Answers:
username_0: Oops. I am debugging so it will cause the timer issue. !!!! Enhancement request -- set infinite loop timer value in Settings???
Status: Issue closed
|
Stephan-S/FS19_AutoDrive | 598374911 | Title: [FEATURE REQUEST] Setting for max. silo distance
Question:
username_0: Umstellung des max. Siloabstandes von allg. Einstellungen auf fahrzeugbezogene Einstellungen.
Ich habe gerade das "Problem", dass je nachdem welchen Abstand ich einstelle immer irgendein anderer Kurs Probleme bei den Triggern hat. Und evtl. auch umbenennen in Triggerabstand. Gilt ja nicht nur fürs Silo.
Answers:
username_1: How large did you set the distance, and what "problems" occur?
username_0: I've set anywhere from 10m to 100m... depending on what I currently need. If you have several identical productions in a row, you may sometimes want to visit all the triggers, or only a specific one. If that is supposed to differ between production chains at the same time, with a global setting you always have the problem that one route doesn't work.
username_0: Hello again.
I've now experimented a bit more with the silo distance, and I have another example where it makes sense to make the silo distance adjustable individually for each vehicle.
I have grass or chaff unloaded in a drive-through silo. For that to work optimally, the unloading point is behind the silo and the silo distance is set to 100m.
At the same time, I have several tractors bring different materials from the farm silo to the individual productions on the farm. Because the silo distance is set to 100m, the tractors start jerking at every trigger they even partially pass, until they leave the trigger area.
For the tractors, the silo distance would need to be set to a smaller value to avoid this. But then the drivers in the drive-through silo would have a problem again and would not detect the silo trigger correctly.
I hope I have made myself clear.
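The request above amounts to a per-vehicle override with a global fallback. A minimal sketch of that pattern in JavaScript (the mod itself is written in Lua, and all names here are hypothetical):

```javascript
// Per-vehicle trigger distance with a fallback to the global setting.
// All names are hypothetical; this only illustrates the override pattern.
function getTriggerDistance(vehicle, globalSettings) {
  if (vehicle.triggerDistance != null) {
    return vehicle.triggerDistance; // per-vehicle value wins
  }
  return globalSettings.maxTriggerDistance; // global fallback
}

// Usage: the drive-through silo course keeps 100m, farm tractors get 10m,
// and vehicles without a configured value use the global default.
const globalSettings = { maxTriggerDistance: 50 };
const siloChaser = { triggerDistance: 100 };
const farmTractor = { triggerDistance: 10 };
const unconfigured = {};
```

With this shape, both scenarios in the thread can run at the same time without one global value breaking the other.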
username_1: I have my distance set to 50m and no such problems with a 40m-long drive-through silo, but that is probably because my productions are not all in one line; I approach almost all of them individually.
A screenshot of the problem spots, including the route network, would help in identifying the optimization potential.
username_0: Here is a screenshot.

If I set the silo distance to a higher value and a tractor on the white vehicle's route passes the triggers, it starts to stutter, presumably because it detects the unloading triggers even though it is not supposed to unload or load there.
username_1: It is completely normal for a vehicle to slow down in front of detected triggers.
This also has nothing to do with the max. silo distance, which specifies the distance between the trigger and the destination point.
Status: Issue closed
|
cul-2016/quiz | 202898591 | Title: Hosting
Question:
username_0: At the end of the project we will need to have the site accessible via a simple URL rather than the herokuapp.com URL it is currently on. So I'm just raising this as an issue to make sure we talk about those practicalities before the project ends.
Answers:
username_0: I think we should do this now. I have registered quodl.co.uk, quodl.com, and quodl.uk. At the moment I am only using quodl.co.uk to redirect to the staging version on Heroku (which is fine for our purposes). Presumably we now need to move the staging version to the main app site, and then give me instructions on setting up redirection from quodl.co.uk.
I have a meeting with a user on Tuesday afternoon, so would be ideal to get this done today, or at least by Tuesday lunchtime.
username_1: 👍 I'm on writing this today!
username_1: @username_0
Please add the following records to your DNS:
+ For quodl.co.uk:
+ `CNAME` record, name = `www`, DNS = `www.quodl.co.uk.herokudns.com`
username_0: I'm doing this via a GUI on UK2:

I've updated cname - is that right? Do I also need to update the name servers, or is that it?
username_1: @username_0 This should be it! I've done this for a test website as well to confirm.
Takes a few hours (up to 24h) to propagate, depending on your provider
username_0: Great - thanks! Much less painful than I imagined. Will check tomorrow.
username_1: @username_0 I think this is currently redirecting.
Could you try adding an `ALIAS` or `ANAME` record?
You'd want to have the name either empty or as `@` with the target DNS as above.
https://devcenter.heroku.com/articles/custom-domains#configuring-dns-for-root-domains
username_0: I put a redirect in separately a few days ago, so I'm guessing that's what's causing the redirect - quodl.com and quodl.uk don't have redirects (and they just give the uk2 holding page).
Can't see any reference to ALIAS or ANAME on the DNS configuration pages. This is what I'm able to change:

username_0: As it doesn't seem to have worked (yet), I wanted to check that if for today I redirect students to cul-app.herokuapp.com, that will have the same effect (albeit with the heroku URL visible) as what we're trying to do here. Essentially, can students register and complete their first quiz on cul-app.herokuapp.com and then later log in and see their results when the quodl.co.uk redirect works?
username_1: Yes, absolutely!
I've emailed our own domain provider to confirm things haven't changed since I did this for the dwyl.io domain as well
username_0: Looks like DNS has propagated, as www.quodl.co.uk now gives a DNS error, whereas quodl.co.uk redirects as per my setup on UK2.
Please keep me updated on this, as it would be great if it could be sorted before tomorrow's session, though it's not crucial if it can't. Also, can we have everything set up to point to Heroku rather than just www, so that entering http://quodl.co.uk works in the same way as http://www.quodl.co.uk?

username_1: @username_0 I'm following up with my hosting provider as the same thing has happened on my side.
I'm not sure why this is happening yet as it's *exactly* the same setup we have for dwyl.io
In terms of having everything pointing to Heroku, you just need to contact your service provider and ask for the IP address so that you can add a forwarding `A` record where the name is `@` (the root domain). This will alias http://quodl.co.uk to http://www.quodl.co.uk.
username_1: @username_0 The issue with the test site I'm using is that it's creating a malformed URL so I've now opened a ticket with heroku. The same seems to be happening with quodl so I'll let you know as soon as I hear back!
username_0: @username_1 great - thanks!
username_1: @username_0 Could you try changing that target DNS to `cul-app.herokuapp.com` instead of `www.quodl.co.uk.herokudns.com` please?
username_0: @username_1 I did that last night, and nothing yet - may still be propagating, though I did set TTL to 1 hour for two of the domains a day or two ago.
username_1: I've opened an urgent ticket with heroku, so sorry this is taking longer than expected, it's usually a click-and-go operation that takes 5 minutes!
username_1: @username_0 When I run `curl www.quodl.co.uk` on the command line, it's not coming back with any CNAME records. Could you please add a screenshot of the latest setup here?
username_0: @username_1 This is how it's currently set for quodl.co.uk (and has been over the weekend):

username_0: If it's something at my end, I can raise a ticket with my providers.
username_1: @username_0 Our reply from heroku and question at the end - could you please confirm?
There appears to be something strange going on with the DNS resolution here, as we are seeing different results from http://zone.vision/#/www.quodl.co.uk and from running dig on our individual machines.
```shell
$ dig www.quodl.co.uk
; <<>> DiG 9.8.3-P1 <<>> www.quodl.co.uk
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 40371
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
;; QUESTION SECTION:
;www.quodl.co.uk. IN A
;; AUTHORITY SECTION:
quodl.co.uk. 1799 IN SOA dns1.uk2.net. hostmaster.uk2.net. 1487283419 86400 7200 3600000 86400
;; Query time: 126 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Tue Feb 21 11:51:12 2017
;; MSG SIZE rcvd: 92
```
**Can you clarify which DNS provider you are using and whether you have changed records recently?**
username_0: Trying to find an answer for this, I stumbled on some instructions for solving the problem. For UK2, the CNAME name entry needs to be `www.quodl.co.uk` rather than `www`. It is now working with www.quodl.com. The current issue is that redirecting to cul-app.herokuapp.com gives a security alert:

You should now see it updated on Zone Vision. I've tried using www.quodl.co.uk.herokudns.com for quodl.co.uk, to see if that corrects the https issue, but it doesn't seem to resolve the DNS:

username_0: 
username_1: @sohilpandya @username_2 Is there anything in the app config that is forcing the redirect to `https` instead of `http`?
username_2: Yes, [here](https://github.com/cul-2016/quiz/blob/staging/server/plugins/index.js#L4) we are forcing any `http` requests to redirect to `https`
Using [this](https://github.com/bendrucker/hapi-require-https) module
Would you like this changed?
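The redirect the module performs boils down to inspecting the `x-forwarded-proto` header that Heroku's router sets, since TLS is terminated at the router before the request reaches the dyno. A rough sketch of that logic (not the module's actual source; this is just the idea):

```javascript
// Sketch of the http-to-https redirect decision behind a plugin like
// hapi-require-https. On Heroku, the original protocol arrives in the
// x-forwarded-proto header set by the routing layer.
function httpsRedirectTarget(headers, host, path) {
  const proto = headers["x-forwarded-proto"] || "http";
  if (proto === "https") {
    return null; // already secure, no redirect needed
  }
  return "https://" + host + path; // 301/302 Location value
}
```

Note that the redirect itself can work perfectly and the browser will still show the security warning if there is no valid certificate for the custom domain, which is why the SSL add-on is needed.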
username_0: (For more, see #300.)
username_1: In that case we'll need you to confirm whether this is a security requirement for this iteration @username_0 and if so, you'll need to purchase an SSL certificate to ensure this error message disappears
username_0: If it's a $15/mo one like here
https://elements.heroku.com/addons/expeditedssl
Then yes let's do it. If it's going to be more will need to look into it. Please advise as to what's required.
username_1: @username_2 @sohilpandya Let's activate expedited SSL for now and confirm this is the issue.
We're working on some documentation for a longer term solution 👍
username_2: @username_0, if you are happy to go with ExpeditedSSL, then the following steps should set it up:
* From the link you posted above: https://elements.heroku.com/addons/expeditedssl
* Click the Install link here:

* Select 'cul-app', then continue
* Click 'provision' on the window that follows, here:

* Then after choosing the domain which you would like to have ssl set up on and going through the approval process, it should be installed for that domain
I found [this video](https://www.youtube.com/watch?v=OcyR7Yus4pc) explains the steps to take quite well
Let me know if there are any issues with this setup
username_1: @username_2 Sohil and I were also just discussing Open SSL as a temporary measure - just thought I'd note it here
username_0: Success! www.quodl.co.uk now working as it should - many thanks @username_2.
username_1: @username_0 Excellent!
Thanks for the info @username_2 :tada:
Status: Issue closed
|