square/Valet | 666406198 | Title: Contexts being retained?
Question:
username_0: When I'm running several unit tests for my app, if I have a shared `SinglePromptSecureEnclaveValet` that I use from a "singleton helper", I get several errors like the following...
`2020-07-27 10:10:08.967933-0500 Foo App[69001:6412884] [LAClient] Error Domain=com.apple.LocalAuthentication Code=-10 "Invalidated due to exceeded number of allocated contexts." UserInfo={NSLocalizedDescription=Invalidated due to exceeded number of allocated contexts.}`
But if I use a `SecureEnclaveValet` instead, the issue does not appear.
Are contexts being created and not disposed of?
Here is the first part of my singleton class:
```swift
class ValetUtil {
    static var shared: ValetUtil = ValetUtil()

    var secureEnclave_ERROR: SinglePromptSecureEnclaveValet
    var secureEnclave: SecureEnclaveValet
    var keychain: Valet

    private init() {
        secureEnclave_ERROR = SinglePromptSecureEnclaveValet.valet(with: Identifier(nonEmpty: Constants.General.valet_id.rawValue)!, accessControl: .userPresence)
        secureEnclave = SecureEnclaveValet.valet(with: Identifier(nonEmpty: Constants.General.valet_id.rawValue)!, accessControl: .userPresence)
        keychain = Valet.valet(with: Identifier(nonEmpty: Constants.General.valet_id.rawValue)!, accessibility: .whenUnlocked)
    }
}
```
Answers:
username_1: We only explicitly retain a single `LAContext` per `SinglePromptSecureEnclaveValet` instance. We do put this object into dictionaries that we pass to the keychain, but those dictionaries are transient and only reference the `LAContext` object (reference rather than copy).
username_0: You nailed it! I found another class in our code that was creating LAContext objects and not getting rid of them when it was finished.
Sorry to have opened this without finding that first.
username_1: All good! Thanks for following up. Closing out this issue 🙂
Status: Issue closed
|
clangd/vscode-clangd | 1096899980 | Title: Add option to activate extension as soon as workspace is opened
Question:
username_0: Currently, the extension only activates as soon as a C++ file is opened. This is a problem if you want to use "Go to symbol in workspace" to open the first C++ file; you first have to open an arbitrary C++ file to make that work. I reload my windows fairly often for a number of reasons, so I'm running into this problem a lot.
I'd like to have an option to activate the extension right after opening a workspace. I'm not sure how easy that is to implement.
Another option would be to add `"workspaceContains:**/*.cpp"` to `"activationEvents"`, but I'm not sure if that's desirable for everybody.
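For context, such an activation event would live in the extension's `package.json` manifest; a sketch (the exact event list and glob pattern here are an assumption, not what the extension ships):

```json
{
  "activationEvents": [
    "onLanguage:cpp",
    "onLanguage:c",
    "workspaceContains:**/*.cpp"
  ]
}
```

The `workspaceContains` entry would make VS Code scan the workspace on open and activate the extension if any matching file exists, at the cost of eager startup work.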
Answers:
username_1: Even if running, the language server wouldn't respond to workspace/symbols today until a file is open: it gets results from the index, and it knows which indexes to load based on which files are open.
(This is solvable by searching for indexes based on the workspace path, I think, but we have to decide what to do about background indexing etc)
And such a mode could never be the default: the purpose of activation is to avoid wasting resources on extensions that won't be used, and spawning language servers which load indexes is expensive (maybe the user isn't even going to edit c++ code).
If this isn't quite trivial to add/maintain, and it needs to be opt-in, I'm not sure it's going to benefit enough people to justify it. (I don't think we've had previous requests for this workflow).
username_0: I see, thanks for the quick reply! Guess I'll have to live with it, then.
Status: Issue closed
username_1: Yeah, sorry about that :-( |
nestjs/swagger | 712227315 | Title: Add decorator to allow specification of the schema type name
Question:
username_0: <!--
PLEASE HELP US PROCESS GITHUB ISSUES FASTER BY PROVIDING THE FOLLOWING INFORMATION.
ISSUES MISSING IMPORTANT INFORMATION MAY BE CLOSED WITHOUT INVESTIGATION.
-->
## I'm submitting a...
<!--
Please search GitHub for a similar issue or PR before submitting.
Check one of the following options with "x" -->
<pre><code>
[ ] Regression <!--(a behavior that used to work and stopped working in a new release)-->
[ ] Bug report
[X] Feature request
[ ] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead post your question on Stack Overflow.
</code></pre>
## Current behavior
<!-- Describe how the issue manifests. -->
The Open API schema type name is the same as the type name.
## Expected behavior
<!-- Describe what the desired behavior would be. -->
Add a decorator to allow specification of a schema type name that is different from the type name.
For example:
```
@ApiSchema({ name: 'MyType' })
class MyTypeDto {
}
```
## Minimal reproduction of the problem with instructions
<!-- Please share a repo, a gist, or step-by-step instructions. -->
## What is the motivation / use case for changing the behavior?
<!-- Describe the motivation or the concrete use case. -->
Type names often have suffixes such as `Entity` or `Dto` that represent internal
implementation choices. Allowing explicit specification of the schema type name
would avoid leaking these choices into the public interface.
## Environment
<pre><code>
Nest swagger version: 4.5.12
<!-- Check whether this is still an issue in the most recent Nest version -->
For Tooling issues:
- Node version: 12.13 <!-- run `node --version` -->
- Platform: Mac <!-- Mac, Linux, Windows -->
Others:
<!-- Anything else relevant? Operating system version, IDE, package manager, ... -->
</code></pre>
Answers:
username_1: Would you like to create a PR for this?
username_2: May I try?
username_0: @username_2 - go for it! I likely won't have time to get to it in the near future.
username_3: See the PR here
https://github.com/nestjs/swagger/pull/983
username_0: On review, I'm thinking `name` should actually be `title`, to be consistent with both the OpenAPI and JSON Schema specifications.
And as we're adding that, we should add the other properties that are tied to the schema object that can't be set via annotations on the properties:
- description
- externalDocs
- deprecated
After creating this issue, I found that I really wanted at least `description` where now I can only provide documentation comments on the type.
username_4: Hello, what's the status of this issue?
username_3: The original feature is implemented in my fork, but the additionally requested features are not. I think my PR is still not merged.
username_1: Let's track this here https://github.com/nestjs/swagger/pull/983
Status: Issue closed
|
WoWManiaUK/Blackwing-Lair | 734517315 | Title: [Quest] The Heart of the Matter - Slave Pens
Question:
username_0: **Links:**
quest item https://cata-twinhead.twinstar.cz/?item=72118
https://cata-twinhead.twinstar.cz/?item=72119
**What is happening:**
- The Invader's Claw
- The Slave Master's Eye
_quest item not dropping_
**What should happen:**
quest items should be dropped by the bosses to complete the quest |
jlippold/tweakCompatible | 582027680 | Title: `VoiceChanger XS (iOS 11/12/13) Cracked` working on iOS 13.2.2
Question:
username_0: ```
{
"packageId": "net.pulandres.voicechangerx",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "net.pulandres.voicechangerx",
"deviceId": "iPhone12,5",
"url": "http://cydia.saurik.com/package/net.pulandres.voicechangerx/",
"iOSVersion": "13.2.2",
"packageVersionIndexed": true,
"packageName": "VoiceChanger XS (iOS 11/12/13) Cracked",
"category": "Tweaks",
"repository": "The Pulandres Repo",
"name": "VoiceChanger XS (iOS 11/12/13) Cracked",
"installed": "2.2-51-2",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "net.pulandres.voicechangerx",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.5",
"shortDescription": "Change your voice on calls! Support A12",
"latest": "2.2-51-2",
"author": "<NAME>",
"packageStatus": "Unknown"
},
"base64": "<KEY>",
"chosenStatus": "working",
"notes": ""
}
```
Status: Issue closed |
Nevcairiel/NominalStructures | 776562193 | Title: Transfer Control Unit (TCU) saving issues
Question:
username_0: Hello everyone! I've noticed 3 bugs with the TCU preset storage system. They've annoyed me for months (by making it very hard to save presets and having to retry multiple times to make changes "stick"), but today is when I finally realized how the quirks work:
1. If you don't hit save before switching to another preset in the editing dropdown menu, then the changes to that preset are lost. So the user always has to manually click save before looking at another preset. Maybe this is intentional but it wasn't intuitively logical.
2. If you carry an omni tool in your hand, and then go into "edit presets" on a TCU object, it actually shows the wall presets in the menus, but any time you save, it saves the edits to the given slot into your handheld tool instead of the wall mounted TCU. Maybe this only happens if the omni tool is set to advanced mode.
3. If multiple presets have the same name, it saves to the topmost one. It clearly treats preset names as the unique identifier for each preset, instead of using their dropdown slot location (slot 1/2/3/4/5). This is problematic if you want to fill your "set active transfer" radial menu with presets named "---" to make the empty slots look less distracting than "Preset X". |
ziglang/zig | 346782639 | Title: ability to set enum(union) active field based on comptime known field name
Question:
username_0: Currently we have [@field](https://ziglang.org/documentation/master/#field) for doing field access expressions when the field name is a comptime known slice of bytes. However this cannot be used to set the active field of a tagged union, because the syntax to choose the active union field is:
`x = UnionName { .FieldName = value };`
This is not a field access expression, so it cannot be done based on a comptime `field_name` slice.
I propose `@unionInit` for this purpose:
`fn @unionInit(comptime Union: type, comptime active_field_name: []const u8, value: var) Union`
`x = @unionInit(UnionName, field_name, value);`
Answers:
username_1: Why cant you just say:
```C
var x: UnionName = value;
```
username_0: Because there could be multiple union fields with the same type as `value`. However we do have this related proposal: #995
username_0: I marked this as "help wanted" for people who were asking for smaller, well-defined contribution tasks.
username_2: @username_0 Any chance you can point me in the right direction to start looking at this? I would like to contribute, especially if it is a smaller, well-defined contribution task.
Status: Issue closed
username_0: Thanks @username_2 for the pull request. I know it took a long time for me to get to it.
I added documentation and integrated it with result location semantics, and then merged it into master.
Here's an example where you can see it participating in result location semantics:
```zig
export fn entry() void {
var x = @unionInit(U, "Two", [_]Foo{ foo(), foo() });
}
fn foo() Foo {
return Foo{ .a = 1 };
}
const U = union(enum) {
One: i32,
Two: [2]Foo,
};
const Foo = struct {
a: i32,
};
```
```llvm
define void @entry() #2 !dbg !40 {
Entry:
%x = alloca %U, align 4
%0 = getelementptr inbounds %U, %U* %x, i32 0, i32 1, !dbg !61
store i1 true, i1* %0, !dbg !61
%1 = getelementptr inbounds %U, %U* %x, i32 0, i32 0, !dbg !61
%2 = bitcast { i32, [4 x i8] }* %1 to [2 x %Foo]*, !dbg !61
%3 = getelementptr inbounds [2 x %Foo], [2 x %Foo]* %2, i64 0, i64 0, !dbg !62
call fastcc void @foo(%Foo* sret %3), !dbg !62
%4 = getelementptr inbounds [2 x %Foo], [2 x %Foo]* %2, i64 0, i64 1, !dbg !63
call fastcc void @foo(%Foo* sret %4), !dbg !63
call void @llvm.dbg.declare(metadata %U* %x, metadata !44, metadata !DIExpression()), !dbg !64
ret void, !dbg !65
}
```
You can see that the `foo()` function calls directly write their result directly into the result location. |
tensorflow/tensorflow | 933513550 | Title: How to apply a hierarchical mask in Tensorflow2.0 (tf.keras)?
Question:
username_0: I am trying to build a hierarchical sequence model for time series classification (refer to the paper: hierarchical attention networks for document classification). But I'm very confused about how to mask the hierarchical sequences.
My data is a hierarchical time series. Specifically, each sample is composed of multiple sub-sequences, and each sub-sequence is a multivariate time series (just like word --> sentence --> document in NLP). So I need to pad and mask it twice. This is critical, as a document will often not have the same number of sentences (or all sentences the same number of words). Finally, I get data as follows:
```
array([[[[0.21799476, 0.26063576],
[0.2170655 , 0.53772384],
[0.18505535, 0.30702454],
[0.22714901, 0.17020395],
[0. , 0. ],
[0. , 0. ],
[0. , 0. ],
[0. , 0. ]],
[[0.2160176 , 0.23789616],
[0.2675753 , 0.21807681],
[0.26932836, 0.21914595],
[0.26932836, 0.21914595],
[0. , 0. ],
[0. , 0. ],
[0. , 0. ],
[0. , 0. ]]],
[[[0.03941338, 0.3380829 ],
[0.04766269, 0.3031088 ],
[0. , 0. ],
[0. , 0. ],
[0. , 0. ],
[0. , 0. ],
[0. , 0. ],
[0. , 0. ]],
[[0. , 0. ],
[0. , 0. ],
[0. , 0. ],
[0. , 0. ],
[0. , 0. ],
[0. , 0. ],
[0. , 0. ],
[0. , 0. ]]]], dtype=float32)
```
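For illustration, the sub-sequences that consist entirely of padding (all-zero timesteps) can be identified with plain Python — a hedged sketch, where `subsequence_mask` is a hypothetical helper and not a tf.keras API:

```python
# Hypothetical helper: compute a sub-sequence-level boolean mask from
# zero-padded data shaped [samples][sub_sequences][timesteps][features].
# True marks a sub-sequence that has at least one real (non-zero) timestep.
def subsequence_mask(batch):
    return [
        [any(any(v != 0.0 for v in step) for step in subseq) for subseq in sample]
        for sample in batch
    ]

sample = [
    [[0.2, 0.3], [0.0, 0.0]],  # one real timestep -> keep this sub-sequence
    [[0.0, 0.0], [0.0, 0.0]],  # fully padded -> mask it out
]
print(subsequence_mask([sample]))  # [[True, False]]
```

This is the per-sub-sequence analogue of what the `Masking` layer computes per timestep.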
Then I build a hierarchical model as follows:
```
inputs = Input(shape=(maxlen_event, maxlen_seq, 2))
x = TimeDistributed(
Sequential([
Masking(),
LSTM(units=8, return_sequences=False)
])
)(inputs)
x = LSTM(units=32, return_sequences=False)(x)
x = Dense(16, activation='relu')(x)
output = Dense(16, activation='sigmoid')(x)
```
As my data is padded on both dimensions, I don't know how to mask it correctly. I have two questions about it:
Q1: In TimeDistributed, do I use the masking layer correctly to mask the first padding?
Q2: How to mask the second padding?
Thank you.
Answers:
username_1: @username_0
This is not a Build/Installation or Bug/Performance issue. Please post this kind of support question on the Tensorflow [Forum](https://discuss.tensorflow.org/). There is a big community there to support and learn from your questions. GitHub is mainly for addressing bugs in installation and performance. Please move this to closed status.
Thanks
username_0: Thanks for your advice. I have closed the issue.
Status: Issue closed
|
xtaci/kcp-go | 360806548 | Title: Default mode max rtt is larger than 1 min
Question:
username_0: Change ping count in kcp_test.go from 100 to 10000 and test the default mode:
```
- if next > 100 {
+ if next > 10000 {
break
}
```
The result maxrtt is very large:
```
E:\github-other\kcp-go (master -> origin)
λ go test
beginning tests, encryption:salsa20, fec:10/3
default mode result (285431ms):
avgrtt=51168 maxrtt=87190
PASS
ok _/E_/github-other/kcp-go 308.101s
```
Save all rtt to rtt.txt and plot with gnuplot:
`plot "rtt.txt"`

Code diff:
```
kcp_test.go | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/kcp_test.go b/kcp_test.go
index a4759b0..577a5b5 100644
--- a/kcp_test.go
+++ b/kcp_test.go
@@ -6,6 +6,7 @@ import (
"encoding/binary"
"fmt"
"math/rand"
+ "os"
"sync"
"testing"
"time"
@@ -181,6 +182,8 @@ func test(mode int) {
var hr int32
ts1 := iclock()
+ file, _ := os.Create("rtt.txt")
+ defer file.Close()
for {
time.Sleep(1 * time.Millisecond)
@@ -247,6 +250,7 @@ func test(mode int) {
binary.Read(buf, binary.LittleEndian, &sn)
binary.Read(buf, binary.LittleEndian, &ts)
rtt = uint32(current) - ts
+ file.WriteString(fmt.Sprintf("%d\n", rtt))
if sn != uint32(next) {
// 如果收到的包不连续
@@ -267,7 +271,7 @@ func test(mode int) {
//println("[RECV] mode=", mode, " sn=", sn, " rtt=", rtt)
}
- if next > 100 {
+ if next > 10000 {
break
}
}
@@ -281,8 +285,8 @@ func test(mode int) {
func TestNetwork(t *testing.T) {
test(0) // 默认模式,类似 TCP:正常模式,无快速重传,常规流控
- test(1) // 普通模式,关闭流控等
- test(2) // 快速模式,所有开关都打开,且关闭流控
+ // test(1) // 普通模式,关闭流控等
+ // test(2) // 快速模式,所有开关都打开,且关闭流控
}
func BenchmarkFlush(b *testing.B) {
```
Answers:
username_1: This is a protocol test; the scheduling isn't timely enough. See how scheduling is done in session.go and updater.go.
Status: Issue closed
|
tencentyun/cos-python-sdk-v5 | 1145531941 | Title: get_auth() error in latest version
Question:
username_0: Python Version: 2.7.18
SDK Version: 1.9.15
___
If `Headers` contains a unicode key, `get_auth()` will throw the error `TypeError: descriptor 'lower' requires a 'str' object but received a 'unicode'`.
Code:
```Python
sign_str = COS_CLIENT.get_auth(Method=method,
Bucket=COS_BUCKET,
Key=path,
Headers={u'Content-Length': 10607}, # Cause error
Params=params,
Expired=expird)
```
Log:
<img width="1195" alt="Screen Shot 2022-02-21 at 17 22 18" src="https://user-images.githubusercontent.com/17962902/154925752-ec8f8c65-2283-4d17-9a49-1d821e05b21a.png">
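Until the fix lands, one possible workaround (my assumption, not an official SDK recommendation) is to normalize the header keys and values to native `str` before calling `get_auth()` — sketched below in Python 3 syntax, where `coerce_headers` is a hypothetical helper:

```python
# Hypothetical helper: return a copy of `headers` whose keys and values
# are all native `str`, so get_auth() never sees unicode-typed keys
# (on Python 2 you would .encode('utf-8') instead of calling str()).
def coerce_headers(headers):
    out = {}
    for k, v in headers.items():
        if isinstance(k, bytes):
            k = k.decode('utf-8')
        out[str(k)] = v if isinstance(v, str) else str(v)
    return out

print(coerce_headers({u'Content-Length': 10607}))  # {'Content-Length': '10607'}
```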
---
Old version(1.6.5) works fine. |
vert-x3/vertx-eventbus-bridge-clients | 264000195 | Title: Transport reinitialization
Question:
username_0: Right now, the `Transport` (netty `Channel`) is only being constructed in the `EventBusClient` factory methods (`tcp` and `websocket`). Yet, in the life of an `EventBusClient` instance, it may be necessary to re-construct a `Transport`.
This depends on the actual `Transport` implementation; e.g., a `WebSocketTransport` channel might be closed by the server if a message larger than `maxWebsocketFrameSize` was sent to the server.
An `xhr-polling` or `xhr-streaming` transport additionally might have the requirement to have multiple channels open simultaneously or in series. On the other hand, its `ping` command might be different (my current understanding is that the server issues a heartbeat frame every now and then, instead of the client sending a message of type `ping`).
What do you think about creating an intermediate layer between `EventBusClient` and `Transport` to handle those differences? I would PR soon. ;-)
Answers:
username_0: Implemented in https://github.com/username_0/vertx-eventbus-bridge-clients/tree/java-client-next - will be PRed next. |
inspursoft/board | 770421908 | Title: add repoServicePath check when download service yaml files - [closed]
Question:
username_0: In GitLab by @sokril on Jul 17, 2018, 06:27
_Merges weidev -> dev_
Answers:
username_0: In GitLab by @wknet123 on Nov 13, 2019, 11:15
The test coverage for backend is <a href=http://10.110.18.40:8080/job/goTest/949//TOTAL_REPORT/index.html>FAIL</a> <img src=http://10.110.18.40:8080//userContent/error.jpg width=20 height=20> ,The test coverage for frontend is <a href=http://10.110.18.40:8080/job/goTest/949//UI/index.html>38.11 </a> <img src=http://10.110.18.40:8080//userContent/correct.jpg width=20 height=20> , check <a href=http://10.110.18.40:8080/job/goTest/949//console> consolse log</a>
username_0: In GitLab by @wknet123 on Nov 13, 2019, 11:15
The test coverage for backend is <a href=http://10.110.18.40:8080/job/goTest/950//TOTAL_REPORT/index.html>FAIL</a> <img src=http://10.110.18.40:8080//userContent/error.jpg width=20 height=20> ,The test coverage for frontend is <a href=http://10.110.18.40:8080/job/goTest/950//UI/index.html>38.11 </a> <img src=http://10.110.18.40:8080//userContent/correct.jpg width=20 height=20> , check <a href=http://10.110.18.40:8080/job/goTest/950//console> consolse log</a>
username_0: In GitLab by @wknet123 on Nov 13, 2019, 11:15
closed
Status: Issue closed
|
AzureAD/microsoft-authentication-library-for-dotnet | 480488954 | Title: FIDO2 Support
Question:
username_0: Both [Microsoft](https://www.microsoft.com/en-us/microsoft-365/blog/2018/11/20/sign-in-to-your-microsoft-account-without-a-password-using-windows-hello-or-a-security-key/) and [Azure AD](https://techcommunity.microsoft.com/t5/Azure-Active-Directory-Identity/Announcing-the-public-preview-of-Azure-AD-support-for-FIDO2/ba-p/746362) accounts support FIDO2 based logins using Windows Hello (i.e., FIDO2 platform authenticator supporting resident keys) or USB/Lightning/NFC based security keys. Android is also a certified platform authenticator but I do not believe it supports resident keys yet.
There are also platform APIs that are implemented and/or are in the works:
* [Windows 10](https://github.com/Microsoft/webauthn)
* [Android](https://developers.google.com/android/reference/com/google/android/gms/fido/package-summary) - [Xamarin bindings](https://www.nuget.org/packages/Xamarin.GooglePlayServices.Fido/)
* [Apple WebKit](https://bugs.webkit.org/show_bug.cgi?id=181943)
* [YubiKey lightning](https://www.yubico.com/lightning-project/)
Are there any plans to support FIDO2 in MSAL.NET so I can use my security keys without a webview (using Windows/Android platform APIs and [libfido2](https://github.com/Yubico/libfido2) on Linux/macOS) or using the OS webview (iOS/macOS)? Is there a unified repository for having this as a cross platform feature request for all supported scenarios?
I believe MSAL support should be a part of Microsoft's FIDO2 roadmap. Without MSAL support plans in the near future, I do not see the point of FIDO2 support if the majority of applications can't log in with it.
Answers:
username_1: @username_0 We are considering it. No ETA for now.
username_2: FIDO keys should work fine. On Windows, you need to use the system browser, ideally the new Chromium based Edge, although Chrome also works as far as I know.
In the future, we will extend support to the embedded browser as well. Tracking issue: https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/issues/1398
Status: Issue closed
username_0: A seamless FIDO2 experience across Microsoft applications will require the adoption of WebView2, as mentioned in your linked issue. Given that a lot of Microsoft first party desktop applications use the embedded view, I still cannot use my YubiKey to login anywhere on my personal computer except on the web apps.
username_0: MSAL desktop and mobile libraries should also consider supporting a 100% native login experience on platforms where it is possible (Windows currently) to make the passwordless experience seamless without browser redirects. Native interop with the WebAuthN Win32 API would be required - https://github.com/Microsoft/webauthn
username_2: Thanks for the detailed explanation @username_0 . WebView2 is still in preview so we cannot use it until it is generally available. I have a prototype of MSAL using WebView2 and it's working well.
We are also looking at integrating with WAM https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/issues/643 which should open up the Windows Hello and other native experience.
username_3: @username_2 : adding a link on how to install WebView2: https://docs.microsoft.com/en-us/microsoft-edge/webview2/concepts/distribution#understanding-the-webview2-runtime
username_3: @username_2 : I think we can close this? It would be a dupe of the support for the system browser in ASP.NET Core?
username_3: Status:
- FIDO2 does not work on embedded browsers yet, but we have a solution (IWebView2). It should work with system browser
- it works with WAM, but only for Work or school accounts, not MSA (known issue)
username_3: See also https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/issues/1670
username_2: With MSAL 4.25, apps can use WAM which mostly supports FIDO (it doesn't currently work with MSA accounts)
With WebView2, MSAL + embedded browser should also support FIDO (https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/issues/1398)
With system browser, Chrome and Edge support FIDO
Status: Issue closed
username_2: Closing this as we now support FIDO via;
- system browser
- Win10 broker (WAM)
- WebView2 embedded browser
We are keeping WebBrowser1 (legacy embedded browser) only on .NET Classic targets for backwards compatibility and will probably remove it in MSAL 5 |
barryvdh/laravel-debugbar | 57750116 | Title: Works perfectly but is not visible on the Laravel 5 'Whoops' page
Question:
username_0: Works perfectly, as described, and I love it except it does not appear on the Laravel 5 'whoops' error page which means I have to make do with its horrible and un-descriptive errors. It's as if it is not being injected on that page.
Answers:
username_1: Yes, that's the expected behaviour. You will need to change your HTTP kernel to use the debugbar middleware yourself if you want that.
username_1: NB, I'm not talking about the middleware array here.
username_2: Well, there is an L5 middleware, but it can't be injected automatically. You could try to add it to your middleware array, but I haven't fully tested that yet.
username_1: That won't work. Laravel doesn't go through the same response cycle for error handling.
username_3: It took me some tinkering but eventually I got this working in the Exceptions\Handler.php file:
```php
public function render( $request, Exception $e )
{
$whoops = new Run;
$whoops->allowQuit( false );
$whoops->writeToOutput( false );
$whoops->pushHandler( new PrettyPageHandler() );
$status = $e instanceof HttpExceptionInterface ? $e->getStatusCode() : 500;
$headers = $e instanceof HttpExceptionInterface ? $e->getHeaders() : [ ];
$debugbar = App::make( 'debugbar' );
$debugbar->boot();
return $debugbar->modifyResponse( $request, Response::make( $whoops->handleException( $e ), $status, $headers ) );
}
```
username_2: I have another solution. Will push soon.
username_2: This would make more exceptions caught by the Debugbar: https://github.com/username_2/laravel-debugbar/commit/007534bf62f1415e9ddf9aba5d38ccdfc17180dc
Can you test with `2.x@dev`?
username_0: Just tested now. Appears on the whoops page but does not capture exceptions itself. Is this expected behaviour?

username_2: I think you can add Debugbar::addException to the report function in your error handler.
username_4: Cool, it would be nice if it also worked in `\username_1\Exceptions\ExceptionHandler`
MicrosoftDocs/azure-docs | 346698606 | Title: AAD: Revoke / Invalidate access tokens
Question:
username_0: - Product: Azure Active Directory
- Documentation link: https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-protocols-oauth-code
- Feedback: The document does not describe how an access token can be explicitly revoked / invalidated.
Example: A client application uses the OAuth 2.0 code grant flow to obtain an access token. Once the user is done with their work, the "logout" action needs to invalidate the access token.
Answers:
username_1: @username_0 Thank you for the valuable feedback,we are investigating the issue.
username_2: @username_0 Unfortunately currently we don't have a specific revocation API. However, you can set access token lifetime based on your requirement. Please refer to this document for the same - [Azure Active Directory v2.0 tokens reference](https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-v2-tokens).
Also please upvote below Azure Feedback request regarding Invalidate JWT Token. This will allow the product team to further prioritize it and include into their plans.
https://feedback.azure.com/forums/169401-azure-active-directory/suggestions/19474918-invalidate-jwt-token
username_2: @username_0 We will now proceed to close this thread. If there are further questions regarding this matter, please open a new issue and we will gladly continue the discussion.
Status: Issue closed
username_0: Thank you for your response @username_2
username_3: Thanks for clarifying. Along similar lines, I'm wondering if it's possible for a third party to disconnect their app from a user's account? Will "logging out" have that effect?
username_4: @username_2 What exactly does deleting the OAuth2PermissionGrant entity for the user do? Will it revoke the refresh token?
username_5: No, it will not log out, since I am also facing the same issue where, after logging out from the application, old requests are still valid
Geopras/IdeaWatcher | 187818418 | Title: Testfall 10: Idee folgen und aufhören zu folgen
Question:
username_0: Ein angemeldeter Benutzer kann einer veröffentlichten Idee folgen. Dann wird sie in ihn seiner Liste verfolgter Ideen angezeigt. Wenn er aufhört ihr zu folgen, wird sie
aus der Liste wieder entfernt. |
pikax/gin-downloader | 235916824 | Title: kissmanga.images Error: Malformed UTF-8 data
Question:
username_0: Whenever i try to retrieve images from Kissmanga i get the Error: Malformed UTF-8 data.
Is there something i can do about it?
Here is the code i'm using:
``` javascript
gin.kissmanga.images("Gintama", 1)
.then(x=>{
console.log('starting...')
console.log('getting %d images', x.length);
return x.map(p=>{
p.then(r=>{
//this will run as soon each promise is resolved
//r contains url
console.log(r);
})
});
})
.then(x=>Promise.all(x))
.then(x=>console.log('resolved'))
.catch(console.error);
```
And this is the Error message:
```
Error: Malformed UTF-8 data
    at Object.stringify (evalmachine.<anonymous>:1:7704)
    at init.toString (evalmachine.<anonymous>:1:4960)
    at wrapKA (evalmachine.<anonymous>:2:4746)
    at evalmachine.<anonymous>:4:38
    at ContextifyScript.Script.runInContext (vm.js:53:29)
    at Parser.imagesList
    at KissManga.<anonymous>
    at step
    at Object.next
    at fulfilled
    at <anonymous>
    at process._tickCallback

evalmachine.<anonymous>:4
for (var img in lstImages) imgs.push(wrapKA(lstImages[img]).toString());
                                                            ^
TypeError: Cannot read property 'toString' of undefined
    at evalmachine.<anonymous>:4:60
    at ContextifyScript.Script.runInContext (vm.js:53:29)
    at Parser.imagesList
    at KissManga.<anonymous>
    at step
    at Object.next
    at fulfilled
```
Thanks for your hard work and have a nice day! :)
Answers:
username_1: Hi,
Sorry, I just got time to look into this issue; I can't reproduce it.
Can you check which version you have installed?
```bash
npm list gin-downloader
```
The latest is 1.0.3, please upgrade if you have a different one.
If the issue is still there, let me know.
Thanks for opening the issue :)
username_0: It appears that i am on the latest version.

username_1: does it still happen?
can you go to [Gintama on kissmanga](http://kissmanga.com/Manga/Gintama) and open the first chapter from the website?
username_0: yes, the same problem still occurs as described in the post above.
Opening the first chapter in the web browser works perfectly fine.
username_1: I managed to reproduce it using a proxy from the USA. I published a fix, just update to 1.0.4.
thank you, let me know if it is working
username_0: Yep that fixed it. Thank you :)
Status: Issue closed
|
JuliaMath/QuadGK.jl | 1114087992 | Title: AD compatibility
Question:
username_0: `quadgk` is not compatible with AD (well, with ForwardDiff.jl) because of `cachedrule`. I am trying to solve an optimization problem using JuMP where some nonlinear constraints involve integrals. I can compute them using other methods, but it would be nice if QuadGK "just worked"
Here is the rather uninformative error
```
ERROR: StackOverflowError:
Stacktrace:
[1] cachedrule(#unused#::Type{ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#137#139"{typeof(integratesegmentgk)}, Float64}, Float64, 5}}, n::Int64) (repeats 79984 times)
@ QuadGK C:\Users\beasont\.julia\packages\QuadGK\ENhXl\src\gausskronrod.jl:253
```
The reference to `integratesegmentgk` is my user-defined function in the constraint. It simply calls `quadgk`, but for completeness:
```
function integratesegmentgk(mL,mU,a,b,c)
hfun(x) = exp(-a -b*x -c*x^2)
res,tol = quadgk(hfun,mL,mU)
return res
end
```
I am using FastGaussQuadrature.jl in the meantime.
Answers:
username_0: oops sorry dupe of #13
Status: Issue closed
|
grommet/grommet | 276522886 | Title: Map only supports acyclic dependencies.
Question:
username_0: I had an error with the general map because of a cyclic dependency. I was wondering what it was, until I found out the `toposort` library only supports acyclic graphs. This should at least be documented behavior. Here is the snippet of the faulty set and its call stack.
```
[(children, parent)]
(2) [1936, 1832]
(2) [1672, 1936]
(2) [1832, 1672]
```
```
Uncaught Error: Cyclic dependency: "1832"
at visit (index.js:29)
at visit (index.js:47)
at visit (index.js:47)
at visit (index.js:47)
at toposort (index.js:22)
at module.exports (index.js:10)
at Object.nodesToGrommetMap [as c] (snapshot-map.js:47)
at GeneralMap.render (GeneralMap.js:141)
at GeneralMap.tryRender (index.js:34)
at GeneralMap.proxiedMethod (createPrototypeProxy.js:44)
```

# Entire list
```
(2) [4, 0]
(2) [1712, 4]
(2) [1844, 1832]
(2) [1904, 1712]
(2) [1912, 1904]
(2) [1936, 1832]
(2) [1672, 1936]
(2) [1832, 1672]
(2) [364, 1972]
(2) [2072, 1672]
(2) [2108, 1672]
(2) [2156, 1672]
(2) [1924, 1672]
(2) [2568, 1672]
(2) [1820, 1672]
(2) [3032, 1672]
(2) [3000, 1672]
(2) [2824, 1672]
(2) [3144, 1672]
(2) [3512, 1672]
(2) [3604, 1672]
(2) [3612, 1672]
(2) [3716, 1672]
(2) [4204, 1672]
(2) [4428, 360]
(2) [5588, 5532]
(2) [5348, 1972]
(2) [6048, 2072]
(2) [3864, 2072]
(2) [6000, 1672]
(2) [2916, 1672]
(2) [2612, 1672]
(2) [4352, 1672]
[Truncated]
(2) [6380, 6896]
(2) [5964, 1672]
(2) [6756, 2072]
(2) [6548, 3032]
(2) [6212, 6548]
(2) [7084, 1672]
(2) [8020, 360]
(2) [8128, 360]
(2) [9060, 6232]
(2) [6308, 9060]
(2) [2648, 360]
(2) [8836, 360]
(2) [7864, 3748]
(2) [7512, 1672]
(2) [3764, 3416]
(2) [3680, 3416]
(2) [9008, 9084]
(2) [6932, 9008]
(2) [6332, 9008]
```
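For reference, the first three pairs in the snippet at the top form the cycle that trips the sort: 1936 → 1832 → 1672 → 1936. A dependency-free sketch of why any topological sort has to reject such input (this paraphrases the behavior, it is not the `toposort` package's actual code):

```javascript
// Minimal DFS topological sort that throws on a cycle, mirroring why
// the toposort package rejects the edge list from the report above.
function topoSort(edges) {
  const adj = new Map();
  for (const [from, to] of edges) {
    if (!adj.has(from)) adj.set(from, []);
    if (!adj.has(to)) adj.set(to, []);
    adj.get(from).push(to);
  }
  const sorted = [];
  const state = new Map(); // undefined = unvisited, 1 = in progress, 2 = done
  function visit(node) {
    if (state.get(node) === 2) return;
    if (state.get(node) === 1) throw new Error(`Cyclic dependency: "${node}"`);
    state.set(node, 1);
    for (const next of adj.get(node)) visit(next);
    state.set(node, 2);
    sorted.unshift(node);
  }
  for (const node of adj.keys()) visit(node);
  return sorted;
}

// The acyclic prefix of the list sorts fine...
console.log(topoSort([[4, 0], [1712, 4]])); // → [ 1712, 4, 0 ]

// ...but the three-edge cycle from the report throws:
try {
  topoSort([[1936, 1832], [1672, 1936], [1832, 1672]]);
} catch (e) {
  console.log(e.message); // → Cyclic dependency: "1936"
}
```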
Answers:
username_1: This appears to be fixed now in the latest version of grommet 1.x.
I was just able to create a Map with a cycle in it and it has no errors. It looks like grommet is no longer using the toposort package. This package is no longer found in the node_modules directory after running "npm install".
username_2: hey @username_0 can you try this with the latest grommet?
username_1: This appears to be fixed in the latest grommet.
Status: Issue closed
|
ant-design/ant-design | 321116912 | Title: InputNumber does not trigger Form validation when the value is stepped with the arrows or the keyboard up/down keys
Question:
username_0: - [ ] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate.
### Version
3.5.0
### Environment
win10、Chrome 66.0.3359.139
### Reproduction link
[https://codepen.io/zh4l/pen/ELoyzZ](https://codepen.io/zh4l/pen/ELoyzZ)
### Steps to reproduce
1. Enter a number greater than 31 in the input box
2. The hint "Only integers from 1 to 31 are allowed" appears; no problem at this point
3. Decrease the number using the arrows on the right of the InputNumber or the keyboard down arrow; even once it drops below 31, the hint "Only integers from 1 to 31 are allowed" is still shown
There is also another case:
Since I set max and min on the InputNumber, entering 32 shows "Only integers from 1 to 31 are allowed", and after the input loses focus the value automatically becomes 31, but the hint is still there
### What is expected?
Manual input, stepping the value with the arrows, and blur should all trigger the form's validation
### What is actually happening?
Stepping with the arrows and blur do not trigger validation
<!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
Answers:
username_1: @username_0 Is it that the onblur event is not triggered only when clicking the up/down arrows with the mouse?
username_0: @username_1 You can look at my example. Since I didn't write an onblur handler I don't know whether it fires, but in any case, the form validation is not triggered after blur.
username_2: This case is indeed hard to handle, but my suggestion is that if the input is already constrained with min and max, the validation is no longer necessary. Just use the constraint text as the default hint.
Status: Issue closed
|
firebase/FirebaseUI-Android | 244443765 | Title: Slower data retrieval using FirebaseUI RecyclerView
Question:
username_0: * Android device: Redmi 3S Prime
* Android OS version: M
* Google Play Services version: 11
* Firebase/Play Services SDK version: 11.0.1
* FirebaseUI version: 2.0.1
The data retrieval is very slow when using the FirebaseUI RecyclerView; it takes a long time to load the data.
#### Relevant Code:
```java
@Override
public void onViewCreated(View view, @Nullable Bundle savedInstanceState) {
super.onViewCreated(view, savedInstanceState);
mdatabase = FirebaseDatabase.getInstance().getReference().child("Book");
recyclerView = (RecyclerView) view.findViewById(R.id.book_recycle);
recyclerView.setHasFixedSize(true);
LinearLayoutManager linearLayoutManager = new LinearLayoutManager(getActivity());
recyclerView.setLayoutManager(linearLayoutManager);
prodialog = new ProgressDialog(getContext());
/* floatingActionButton = (FloatingActionButton)view.findViewById(R.id.crop_fab);
floatingActionButton.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
CropAddFragment cropAddFragment = new CropAddFragment();
FragmentTransaction fragmentTransaction = getActivity().getSupportFragmentManager().beginTransaction();
fragmentTransaction.replace(R.id.frame,cropAddFragment,"fragment");
fragmentTransaction.commit();
}
});*/
}
// TODO: Rename method, update argument and hook method into UI event
public void onButtonPressed(Uri uri) {
if (mListener != null) {
mListener.onFragmentInteraction(uri);
}
}
/* @Override
public void onAttach(Context context) {
super.onAttach(context);
if (context instanceof OnFragmentInteractionListener) {
mListener = (OnFragmentInteractionListener) context;
} else {
throw new RuntimeException(context.toString()
+ " must implement OnFragmentInteractionListener");
}
}*/
@Override
public void onStart() {
super.onStart();
prodialog.setMessage("Loading");
prodialog.show();
FirebaseRecyclerAdapter<Book,BookViewHolder> firebaseRecyclerAdapter = new FirebaseRecyclerAdapter<Book, BookViewHolder>(
Book.class,
[Truncated]
textView3.setText(college);
}
public void setBranch(String branch) {
TextView textView4 = (TextView) mView.findViewById(R.id.branch);
textView4.setText(branch);
}
public void setImage(Context ctx,String image) {
ImageView imageView = (ImageView) mView.findViewById(R.id.postimage);
Picasso.with(ctx).load(image).into(imageView);
Log.d("Image","inmahgeg");
}
}
```
Answers:
username_1: @username_0 we need a lot more information to debug this:
1. How slow is "slow"?
2. Slower than what? Are you able to load the data faster when you don't use the RecyclerAdapter?
username_0: @username_1 When I was using firebase-database/firebase-storage 10.0.1 and firebase-ui-database 1.1.1 it was faster; it loaded within seconds. But since I upgraded both the database and the UI dependencies to the latest Gradle versions, it takes minutes to load initially.
username_1: @username_0 can you enable debug logging for Firebase Database and share the logs?
Status: Issue closed
username_2: Same problem here for me.
As I have updated libraries |
intelsdi-x/snap | 177044123 | Title: Simplify `snapctl plugin unload` to use positional arguments
Question:
username_0: Proposal: Instead of accepting flags or a formatted string to unload plugins, use positional arguments
Before: `snapctl plugin unload collector:mock:1` or `snapctl plugin unload -t collector -n mock -v 1`
After: `snapctl plugin unload collector mock 1`<issue_closed>
Status: Issue closed |
ccpgames/sso-issues | 398787777 | Title: [SSOv2] Authorization Failure on verify endpoint
Question:
username_0: # Bug
When making a GET request to https://esi.evetech.net/verify it responds with { "error": "authorization" }
### Reproduction Steps
I've asked on the Slack and on Reddit but I haven't yet found anyone with the same issue.
1. Get a JWT token
2. Send a GET request with "Authorization: Bearer JWT_token" header
### Actual Behaviour
Returns
```json
{ "error": "authorization" }
```
### Expected Behaviour
Returning my character information
Answers:
username_1: I know it's not an answer to your issue, but why don't you extract all the data and validate the token yourself if you have the JWT token, instead of using the /verify endpoint?
username_0: @username_1 I am extracting / validating the token myself now. I was just very confused that the /verify endpoint doesn't work for me but does work for others.
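To sketch that local approach: the claims can be read by base64url-decoding the JWT's payload segment in Node (>= 16). Note this is only decoding; a real validation must also verify the token's signature against the SSO's published signing keys, which is omitted here:

```javascript
// Sketch: read a JWT's claims without a library. Decoding alone is NOT
// validation; the signature must still be checked before trusting claims.
function decodeJwtPayload(token) {
  const parts = token.split('.');
  if (parts.length !== 3) throw new Error('not a JWT');
  return JSON.parse(Buffer.from(parts[1], 'base64url').toString('utf8'));
}

// Build a throwaway example token (only header and payload matter here):
const b64 = obj => Buffer.from(JSON.stringify(obj)).toString('base64url');
const token = `${b64({ alg: 'none', typ: 'JWT' })}.${b64({ sub: 'CHARACTER:EVE:123', name: 'Some Pilot' })}.sig`;

console.log(decodeJwtPayload(token).sub); // → CHARACTER:EVE:123
```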
username_2: I've asked him to make the issue here since it's supposed to be possible to drop-in replace v1 with v2 urls and to have it documented here. It's a really weird behaviour that I wasn't able to replicate and I though we might want to track it somewhere.
username_0: I have no idea how the API is setup. MOST third parties are still using SSOv1 it seems. SSOv1 works perfect for me. I found a /verify endpoint for SSOv2 for anyone interested: https://login.eveonline.com/oauth/verify
I still haven't found a solution to why my JWT token isn't accepted at any endpoint but I'll continue to look at github documentation of SSOv2 auths and third parties that use SSOv2 because it has better documentation and is less confusing.
Status: Issue closed
|
kubermatic/kubermatic | 944416732 | Title: Allow cluster creation without an initial machine deployment
Question:
username_0: **User Story**
I would like to create a cluster without an initial machine deployment (esp. via API), since it keeps my automation setup simpler and with a clean scope. The current API endpoint for cluster creation is mixing unrelated functionality.
**Acceptance criteria**
The API offers a new endpoint, or a flag on the existing endpoint (`/api/v2/projects/{project_id}/clusters`), to create a cluster without an inital machine deployment.
Answers:
username_0: Please consider this issue on hold, I've realized that one can simply omit the NodeDeploymentSpec when creating a cluster.
Once I've verified that works as described above, I'll close this issue.
Status: Issue closed
username_0: I was able to verify that omitting `NodeDeploymentSpec` works as expected. However, it might still be nice to have this option in the GUI. |
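The verified behavior above amounts to sending the create-cluster request body to `POST /api/v2/projects/{project_id}/clusters` without any node deployment section. A sketch of such a payload (field names here are illustrative and should be double-checked against the current API spec):

```json
{
  "cluster": {
    "name": "my-cluster",
    "spec": {
      "version": "1.21.0",
      "cloud": { "dc": "example-dc" }
    }
  }
}
```

The point is simply that the optional `nodeDeploymentSpec`-style section is omitted entirely, so no initial machine deployment is created.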
AlexBacich/sticky-headers-table | 1140880541 | Title: add border for table, change background for header
Question:
username_0: I can't add border for table, also i can't change background for header. Please help m
<img width="370" alt="border" src="https://user-images.githubusercontent.com/5988428/154408830-b31f5906-f1e2-4b7e-a767-6180cb738a9f.png">
e |
firebase/firebase-admin-node | 1047922436 | Title: using a custom bucket name causes errors
Question:
username_0: When I create a bucket instance with its default name it works fine, but it causes errors when I provide a custom bucket name.
```
import { storage, app as _app } from 'firebase-admin';
import { Bucket } from '@google-cloud/storage';
let bucket: Bucket = storage().bucket('custom-bucket-name')
bucket.upload('./file.jpg');
```
using a custom bucket name causes an error:
```
The project to be billed is associated with an absent billing account
```
but using `.bucket()` without any name makes it work fine and `.upload()` works
Answers:
username_1: This might be because the Cloud Storage bucket you are trying to access does not exist. Please make sure a bucket with the custom name exists in your project. To create a new storage bucket using the SDK you should use the `storage.createBucket(...)` API.
username_2: Yep, please verify that the bucket you're trying to access exists and you have permission to access it.
Also note that creating new buckets is only supported for Blaze projects.
Status: Issue closed
|
JackyXiong/jackyxiong.github.io | 624578907 | Title: Readed List
Question:
username_0: A record of some technical resources I have read
Answers:
username_0: - https://blog.cloudflare.com/graceful-upgrades-in-go/
Programs that handle TCP traffic, such as Nginx, need graceful upgrades, because there is a window between the moment the new version starts handling requests and the moment the old version stops. During that window the listen queue can fill up and start rejecting new requests.
Nginx handles restarts through its master process: after receiving the restart signal, the master creates a new master and new worker processes and stops accepting new requests, and the old workers exit once they have finished their in-flight requests.
Many Go libraries handle this kind of restart at the HTTP layer, but they cannot handle TCP-level information.
- https://stackoverflow.com/questions/45396155/iptables-do-not-block-ip-with-ipset-immediately
iptables cannot immediately block an IP that was just added to an ipset, because for performance reasons iptables only inspects the packets involved in TCP connection setup (SYN, SYN+ACK, ACK) and lets packets of already-established connections through. If an attacker establishes a connection first and then launches a DDoS attack with a flood of data packets, iptables may not match them in time.
- https://openacid.github.io/tech/algorithm/slimtrie-design/#
  - Two basic approaches to building an index: the unordered hash map type, and ordered tree structures.
  - hash map: a single in-memory lookup locates the data, O(1) time complexity. Keys only support exact matches, not < or > range operations; large memory overhead; unordered.
  - tree: keys are sorted and can be scanned in order, with mature implementations (B, B+, skiplist). Large memory overhead.
Piotrekol/StreamCompanion | 849810355 | Title: StreamCompanion crashing very often and giving the same error report
Question:
username_0: "There was an unhandled problem with the program and it needs to exit. Error report was sent to username_2." This pops up very frequently, followed by this error report:

This has been a problem for about a month now. Is this something that is an issue only with me, and if so, what do I need to do to fix it?
Answers:
username_1: Having the same problem for about 2 weeks, couldn't find anything on it :(
username_2: Already fixed in pre-release. Will close once the next release is published
Status: Issue closed
|
evandixon/Online-PKM-DB | 170073129 | Title: Pokemon editing - Part 2
Question:
username_0: - [ ] Pokemon aren't modified, but another version is added, making it appear like Pokemon were modified
- [ ] Let users besides the uploader edit a Pokemon by making a copy of it first.
- [ ] Display that a Pokemon has been copied from another, complete with a link to the original Pokemon |
gcanyon/navigator | 362232945 | Title: Update readme
Question:
username_0: Readme.md refers to some files and folders (e.g. "commands") that are not included.
Answers:
username_0: The readme says it should be copied to the LC Plugins folder, but obviously it shouldn't be since it doesn't exist until it's created by Navigator
Status: Issue closed
|
hyperion-project/hyperion.ng | 161175202 | Title: cleanup: remove smoothing from colors array
Question:
username_0: Smoothing should be its own component like blackborder, kodicheck, ...
This is more of a small todo note in general :)
Answers:
username_1: ... the config part is boring - how about a rewrite with smoothing integrated into the priomuxer? Then we're halfway to idle detection, but this is a complex task and shouldn't be done in a hurry
username_0: and because of this I just waited for some suggestions :D
Status: Issue closed
|
ga-dc/project2-gallery | 544615296 | Title: Kelly - Api
Question:
username_0: Production (deployed) URL: Still need to deploy to Heroku
Repository: https://github.com/username_0/studio-ghibli
Screencast:
Things you'd like specific feedback on:
The search component shows all api names instead of one searching.
Answers:
username_1: For the API, I would need a link to the repo of the backend. Also, there needs to be a README about the project.
Let me know when this part is done.
<img src="https://www.all-about-psychology.com/images/bobo.gif"/>
Status: Issue closed
|
ibm-js/decor | 100507026 | Title: has("object-is-api") always false, Object.is() never used
Question:
username_0: decor/features.js has this line:
```js
has.add("object-is-api", Object.is);
```
It's not operating as expected because `Object.is` is a function, and `has.add(string, function)` has different behavior than `has.add(string, boolean)`.
So, even on Chrome, `has("object-is-api")` evaluates to false, and the builtin `Object.is()` is **not** used.
Should probably just change the above code to:
```js
has.add("object-is-api", !!Object.is);
```
or alternately
```js
has.add("object-is-api", "is" in Object);
```
etc. But of course it needs to be tested.<issue_closed>
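For context on why registering the function itself reports false: in the dojo-style `has()` implementations, a function value is treated as a lazy feature test and later invoked with host objects (global, document, element), and `Object.is` applied to two distinct objects returns false. A paraphrased sketch (this mimics the `has.add` semantics, it is not decor's actual code):

```javascript
// Paraphrased semantics: hasAdd(name, test) stores a function as a lazy
// test and calls it later with host objects; a boolean is stored as-is.
const tests = new Map();
function hasAdd(name, test) { tests.set(name, test); }
function has(name) {
  let value = tests.get(name);
  if (typeof value === 'function') {
    // lazy feature detection: the test is invoked with (global, document, element)
    value = value(globalThis, {}, {});
    tests.set(name, value);
  }
  return value;
}

hasAdd('object-is-api', Object.is);        // the buggy registration
hasAdd('object-is-api-fixed', !!Object.is); // the proposed fix

console.log(has('object-is-api'));       // → false (Object.is compares two different host objects)
console.log(has('object-is-api-fixed')); // → true
```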
Status: Issue closed |
rauenzi/BetterDiscordAddons | 849759727 | Title: [Bug] RemoveMinimumSize not working
Question:
username_0: **Which plugin/theme is this about?**
RemoveMinimumSize
**Describe the Bug**
RemoveMinimumSize v0.0.1 could not be started. TypeError: Cannot read property 'getCurrentWindow' of undefined
at RemoveMinimumSize.start (<anonymous>:14:29)
at Object.startPlugin (<anonymous>:4:161590)
at Object.startAddon (<anonymous>:4:161385)
at Object.enableAddon (<anonymous>:4:122468)
at Object.toggleAddon (<anonymous>:4:122711)
at Object.togglePlugin (<anonymous>:4:159557)
at Xe.onChange (<anonymous>:4:135239)
at Ie.onChange (<anonymous>:4:128454)
at Object.E (89812792459e55f80f96.js:2)
at I (89812792459e55f80f96.js:2)
at 89812792459e55f80f96.js:2
at O (89812792459e55f80f96.js:2)
at C (89812792459e55f80f96.js:2)
at Array.forEach (<anonymous>)
at R (89812792459e55f80f96.js:2)
at L (89812792459e55f80f96.js:2)
at On (89812792459e55f80f96.js:2)
at ce (89812792459e55f80f96.js:2)
at Dn (89812792459e55f80f96.js:2)
at Mn (89812792459e55f80f96.js:2)
at Pn (89812792459e55f80f96.js:2)
at t.unstable_runWithPriority (89812792459e55f80f96.js:2)
at zi (89812792459e55f80f96.js:2)
at fu (89812792459e55f80f96.js:2)
at Cn (89812792459e55f80f96.js:2)
at HTMLDocument.r (4be545bff08dc07c3545.js:2)
**To Reproduce**
Enable the plugin for le big red text to appear/for it to not work as it previously did
**Expected Behavior**
For the plugin to work lol
**Discord Version**
Stable
**Additional Context**
Seems like the plugin broke after re-installing BetterDiscord through the new installer, but could also be a change from Discord's devs
Answers:
username_1: That doesn't seem to be the case. Uninstalling the new BBD and installing again using the old installer makes it work once again.
username_2: It is kinda both. There is a change coming from the Discord devs that is already out to a good portion of the users. That change requires a big change in BD which is installed by the new installer. The original change by Discord will mean plugins like this will break. However this one specifically will be integrated into BD
username_2: I have added a notice of being discontinued and can confirm this will be included with the next release of BD!
Status: Issue closed
|
gluonhq/client-samples | 592898932 | Title: Please add examples not using Maven
Question:
username_0: Hello.
It would very useful if you could add some examples of how to build native client applications using only the command-line. Not all projects use Maven nowadays.
Having Gradle samples would be nice too, but once you show how to build on the command-line, it's easy for people to write a Gradle plugin or just code it in the gradle file. Same with Ant, SBT and any other build tool.
Answers:
username_1: Hi
I agree with @username_0. Also it would be nice to have a possible migration guide from the old plugin :-) |
sammi/bootstrap-jss | 358774344 | Title: Too much good stuff!
Question:
username_0: Hello, there's lots of good stuff in here. I would suggest separating some stuff into a utility package, like the color functions, and the mixins, etc. So they are easier to find and consume individually.
F.e. maybe the color functions would be a package called `color-functions`, etc.
Answers:
username_1: Cool, thanks for the review. I will port the full feature set first, then reorganize it like you said.
Status: Issue closed
|
mcoope13/f1-3-c2p1-colmar-academy | 252790384 | Title: Summary
Question:
username_0: Overall Excellent, outside of two minor items I really do not have any issues with this site.
Color scheme is great, love your choices in styling for both desktop and mobile view.
Your html was easy to read, commented properly, manageable. Your css was equally good, and I didn't see any major or unneeded repetition. This is one of the better Colmar projects I have reviewed.
Now let's talk about the future of web development. As a final challenge, I'd like you to look into PWAs (progressive web apps). https://developers.google.com/web/fundamentals/getting-started/codelabs/your-first-pwapp/
Apple is starting to support this, and all recent Android devices already support it. This makes your website work like an app on Android, with offline viewing, push notifications, and the ability to even add an icon to the home screen.
See if you can make Colmar a PWA. |
pxgrid/ui-spec-md | 529701241 | Title: Pasting an image from the clipboard without naming it results in undefined.png
Question:
username_0: When editing in the textarea in the browser and pasting while an image is on the clipboard,
a dialog appears asking for a file name.
The default is `undefined.png`, but I thought it might be nicer to assign a unique id the way esa does.
That said, since these are presumably files that will be committed, another option would be to make the file name required.
Answers:
username_1: For now, I addressed it with an implementation like this: #85
I'll keep an eye on how it works in practice and improve it further if needed.
Status: Issue closed
|
gatsbyjs/gatsby | 364538902 | Title: Remark plugin does not prefix reference links
Question:
username_0: <!--
Please fill out each section below, otherwise your issue will be closed. This info allows Gatsby maintainers to diagnose (and fix!) your issue as quickly as possible.
Useful Links:
- Documentation: https://www.gatsbyjs.org/docs/
- How to File an Issue: https://www.gatsbyjs.org/docs/how-to-file-an-issue/
Before opening a new issue, please search existing issues: https://github.com/gatsbyjs/gatsby/issues
-->
## Description
Links in Markdown files using the reference format (`[text][label]` with a separate `[label]: url` definition) are not currently rendered with the specified path prefix.
### Steps to reproduce
For a project using markdown files as a data source and the `gatsby-transformer-remark` plugin, author a link using the reference style, e.g.:
```
[link text][1]

[1]: /some/internal/page
```
### Expected result
Reference-style links should receive the configured path prefix, just like inline links.
### Actual result
Reference-style links are rendered without the path prefix.
Answers:
username_0: This is due to the plugin not currently listening for `reference` nodes, only `link` nodes:
https://github.com/gatsbyjs/gatsby/blob/master/packages/gatsby-transformer-remark/src/extend-node-type.js#L141
Status: Issue closed
|
thyyppa/fluent-fm | 528714151 | Title: auto_id should be editable
Question:
username_0: The auto_id field of the FluentFMRepository should be public or have a setter method so it can be set to false
In my use case the responsibility of setting the id lies with the database, so we do not want to set the id when creating a new dataset.
Status: Issue closed
Answers:
username_1: Good point! Added in 1.0.27 |
technojam/TechnoJam-App | 719993095 | Title: Home screen implementation
Question:
username_0: This issue is tracking the following tasks -
- Correct creation of home activity package inside the already existing UI package.
- Creation of an Activity and a corresponding viewModel class of the activity.
- Correct XML implementation of the screen. The whole XML should be surrounded with <layout> tags for DataBinding implementation.
- Setting up of dataBinding and viewModel inside the activity class. For example - See SplashScreenActivity.
Remember - There will be a viewpager kind of implementation of the home activity which will contain 4 fragments. Make the XML implementation accordingly. |
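For the DataBinding point above, a minimal sketch of what the home screen layout could look like. All names here (file, package, class, view ids) are assumptions for illustration, not the project's actual files:

```xml
<!-- activity_home.xml: wrapping the whole layout in <layout> enables DataBinding -->
<layout xmlns:android="http://schemas.android.com/apk/res/android">

    <data>
        <!-- assumed package/class name -->
        <variable
            name="viewModel"
            type="com.technojam.ui.home.HomeViewModel" />
    </data>

    <!-- ViewPager2 host for the 4 fragments mentioned above -->
    <androidx.viewpager2.widget.ViewPager2
        android:id="@+id/home_view_pager"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />
</layout>
```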
MicrosoftDocs/windows-itpro-docs | 660039910 | Title: 怎么开软件费发票-本地宝
Question:
username_0: How to get an invoice for software fees - Bendibao. Invoice issuing 【█1 З 5-tel-ЗЗ45-WeChat-З429█】 Mr. Yang 【QQ/WeChat 1З00█507█З60】 Regular tax agency business. 100% genuine invoices. "This information is valid permanently." Issued by a real company / full range of items / can be verified before payment. No need to open anything, contact directly by clicking "Baidu snapshot" above. Live footage: the US "quasi-carrier" burned for 4 days and is still smoking; aircraft made 1,500 water drops (original title: the US "quasi-carrier" has burned for 4 days and is still smoking; helicopters made 1,500 water drops; scene footage released). Haiwainet, July 16 - The US Navy amphibious assault ship Bonhomme Richard has been burning for four days and the fire is still not out. The military released the latest footage of the rescue scene on the 15th. According to RT, firefighters have been working on the blaze continuously since the ship exploded and caught fire on July 12. In a statement on the 15th, the US Navy said that helicopters had made more than 1,500 water drops to keep the fire from spreading, and that the larger flames had been extinguished. Firefighters are now going all out to put out the remaining smoldering spots aboard the ship. A total of 63 people have been injured and received treatment, including 40 crew members and 23 civilians. The Navy said that despite the fire and explosions the hull avoided irreparable damage, stating that "the fuel tanks are not threatened, the ship is stable, and the structure is safe."<issue_closed>
Status: Issue closed |
OneDrive/onedrive-api-docs | 99206472 | Title: Forbidden response when downloading chunks
Question:
username_0: I'm trying to download a file in chunks.
I'm making the following request:
```
GET https://api.onedrive.com/v1.0/drive/items/82A4826DAA392C28!5146/content
Headers:
Authorization: bearer EwCA...ZxAQ==
Range: bytes=0-2097152
```
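A side note while debugging: HTTP byte ranges are inclusive on both ends (RFC 7233), so `bytes=0-2097152` actually requests 2 MiB plus one byte. A small sketch for computing non-overlapping chunk ranges (the chunk size is arbitrary here):

```javascript
// Sketch: build inclusive, non-overlapping Range header values for a
// chunked download. HTTP byte ranges are inclusive on both ends.
function chunkRanges(totalSize, chunkSize) {
  const ranges = [];
  for (let start = 0; start < totalSize; start += chunkSize) {
    const end = Math.min(start + chunkSize, totalSize) - 1; // inclusive end
    ranges.push(`bytes=${start}-${end}`);
  }
  return ranges;
}

console.log(chunkRanges(5000000, 2097152));
// → [ 'bytes=0-2097151', 'bytes=2097152-4194303', 'bytes=4194304-4999999' ]
```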
And I'm getting a valid 302 Found response:
```
StatusCode: 302
Headers:
X-WLSPROXY: BN1302____PAP237
X-MSNSERVER: BL3301____PAP110
X-AsmVersion: UNKNOWN; 172.16.17.32
X-AsmVersion-ProxyApp: UNKNOWN; 172.16.17.32
Date: Wed, 05 Aug 2015 13:31:52 GMT
Location: https://public.bl3301.livefilestore.com/y3m9aaOZd0wyyDVSJDL_DI_v0K1mU8NwXMjZ37sB6bOF7_Vq2L4AwcfuqFgO3IeJdZkp_U1q9Dq7O26O0lXJrExCDzXcv1kmex-tapAWQ8M6v298Mb7_jVzda2YBZnTTwtlCHVSY95mM6h1W7mVWaESLczOzfUTyHbP3dui4OSu8f0co8M_4TyE4w6poveTN3O0YXfGBEGwFvLBwFvAb9eyx9wEfknlXlhdWKYlKrFG6LE/Document%20with%20spaces.tmp
P3P: CP="BUS CUR CONo FIN IVDo ONL OUR PHY SAMo TELo"
Server: Microsoft-HTTPAPI/2.0
Server: Microsoft-HTTPAPI/2.0
Via: 1.1 BN1302____PAP237 (wls-colorado)
```
After this, I'm making another request to the URL from the Location header:
```
GET https://public.bl3301.livefilestore.com/y3m9aaOZd0wyyDVSJDL_DI_v0K1mU8NwXMjZ37sB6bOF7_Vq2L4AwcfuqFgO3IeJdZkp_U1q9Dq7O26O0lXJrExCDzXcv1kmex-tapAWQ8M6v298Mb7_jVzda2YBZnTTwtlCHVSY95mM6h1W7mVWaESLczOzfUTyHbP3dui4OSu8f0co8M_4TyE4w6poveTN3O0YXfGBEGwFvLBwFvAb9eyx9wEfknlXlhdWKYlKrFG6LE/Document with spaces.tmp
Authorization: bearer EwCA...ZxAQ== (SAME AS BEFORE)
Range: bytes=0-2097152
```
Only this time, I'm getting a 403 Forbidden response:
```
StatusCode: 403
X-MSNSERVER: BL3301____PAP212
Strict-Transport-Security: max-age=31536000; includeSubDomains
X-Virus-Infected: ScanError
X-MSDAVEXT_Error: 9830409
X-QosStats: {"ApiId":0,"ResultType":2,"SourcePropertyId":0,"TargetPropertyId":42}
X-ThrowSite: 1e99.cd8a
X-ClientErrorCode: VirusError
X-AsmVersion: UNKNOWN; 172.16.17.32
X-MSEdge-Ref: Ref A: 33218739EA894F05BC2F8FF1CAC09ACE Ref B: 1755B82AA6FC9880B766E46DD4BB5FF8 Ref C: Wed Aug 05 06:35:30 2015 PST
Accept-Ranges: bytes
Date: Wed, 05 Aug 2015 13:35:29 GMT
P3P: CP="BUS CUR CONo FIN IVDo ONL OUR PHY SAMo TELo"
Server: Microsoft-IIS/8.5
```
Trying to download the file manually from OneDrive gives me a warning message, saying "Unable to scan Document with%20spaces.tmp for viruses".
Answers:
username_1: Hi @username_0, does this still repro for you?
username_0: Yes, this is still happening.
URL:
```
https://public.bl3301.livefilestore.com/y3mh67UfVK7UMe8I4-3li2FPGDa6dW6GRvKT8Kg8jh1MzvR1VnYf1v6HL1E1O1dCxVcVXHAgEkShMAc9ALeGJtZQAJKjWqH2osB7arapuvzJGR71GV_I9jXlPACTQNcnG8oB32bZtlAS874kQbYml4ZGP3PkMU00iILUVvQH5MNGOfJyvAwtSMml01BGaaygxlhq1AKysSFtJrwg7Wr6JL9LlTaBQiheAUUyUJGjThdo6Q/Document with spaces.tmp
```
username_1: Thanks for confirming - we'll need to investigate this one a little more to figure out what's going wrong. Do you see this occurring on other files as well, or is this the only one?
username_0: Not sure if you guys changes anything, but this seems to be working fine now...
username_1: It looks transient - I think it works for a while until some event occurs that makes it fail consistent for a period of time. Definitely a strange issue... I'll make sure it gets redirected to the right people.
username_2: This is happening for me as well - I think it has to do with the first file in chunks looking like a valid archive that OneDrive couldn't scan and thus thinks is malware. Is there some kind of override param like with Google Drive that allows you to download it anyway?
username_0: I'm afraid I'm getting this issue again. Same file:
GET https://public.by3302.livefilestore.com/y3m1AflA9ZVa7XqoHrxITSRrpDvHpz-Rj_PhefGL4st4E8BZC9p4wHZZt6jHCCV1b-40AAf4e2oZetgeMeHsXY2sVM5X6M3hWoBLET6sj6pxhQry1Mwxc8ckHxrhoAZIhtQyC7RzWTy517jjDqso5e7VDHbpyKtuLsukx5_XtRd_HEO20Ku0sb8OjwlseYm2tRGXHuxnw2aqOQpofq4UXdBk9chvyXBbVr1Avqnrpk2LiI/Document%20with%20spaces.tmp HTTP/1.1
Range: bytes=0-42649
Host: public.by3302.livefilestore.com
Accept-Encoding: gzip, deflate
Connection: Keep-Alive
And the response I'm getting is:
HTTP/1.1 403 Forbidden
Content-Length: 0
Accept-Ranges: bytes
Server: Microsoft-IIS/8.5
P3P: CP="BUS CUR CONo FIN IVDo ONL OUR PHY SAMo TELo"
X-MSNSERVER: BY3302____PAP016
Strict-Transport-Security: max-age=31536000; includeSubDomains
X-Virus-Infected: ScanError
X-MSDAVEXT_Error: 9830409
X-QosStats: {"ApiId":0,"ResultType":2,"SourcePropertyId":0,"TargetPropertyId":42}
X-ThrowSite: 1e99.cd8a
X-ClientErrorCode: VirusError
X-AsmVersion: UNKNOWN; 1192.168.3.11
X-MSEdge-Ref: Ref A: 3D629D7DE4A545A39F6EDDEE867A4FEF Ref B: 8D134593FB2216F1DA8144C3A5890DAD Ref C: Tue Feb 02 01:04:19 2016 PST
Date: Tue, 02 Feb 2016 09:04:19 GMT
Any chance this is related to `X-ClientErrorCode: VirusError`?
username_0: It appears that the API cannot download files that are not scanned:

Is there any flag that I can set in the request to automatically accept the above warning?
username_0: Apparently, adding `AVOverride=1` to the requested URL works.
I had to look at the requests performed by the browser, since this is not documented anywhere.
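For anyone automating that workaround, the parameter can be appended with the WHATWG URL API. Keep in mind `AVOverride=1` is undocumented and was only observed in browser traffic, so it may change without notice:

```javascript
// Sketch: append the undocumented AVOverride=1 parameter to the
// pre-authenticated download URL returned in the 302 Location header.
function withAvOverride(locationHeader) {
  const url = new URL(locationHeader);
  url.searchParams.set('AVOverride', '1');
  return url.toString();
}

const loc = 'https://public.bl3301.livefilestore.com/y3m9aaO/Document%20with%20spaces.tmp';
console.log(withAvOverride(loc));
// the result carries ?AVOverride=1; any existing query parameters are preserved
```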
username_1: Just to update this, we have tracked down a class of issues relating to an issue with the virus scanner and we're working on change that should negate the need to use AVOverride=1.
username_1: There have been a few AV related fixes that have shipped recently, given that has anyone had this issue occur in the past couple of months?
username_3: We are encountering this with the Cloudberry Backup application.
`2016-07-25 19:00:31,120 [Base] [19] ERROR -
CloudBerryLab.Base.HttpUtil.Light.LightWebException
The remote server returned an error: (403) Forbidden.
at po.A(pj )
at po.B(pj )
System.Net.WebException
The remote server returned an error: (403) Forbidden.
at System.Net.HttpWebRequest.GetResponse()
at po.A(pj )
2016-07-25 19:00:31,120 [Base] [19] ERROR - Header X-MSNSERVER: BN1304____PAP175
2016-07-25 19:00:31,120 [Base] [19] ERROR - Header Strict-Transport-Security: max-age=31536000; includeSubDomains
2016-07-25 19:00:31,120 [Base] [19] ERROR - Header X-Virus-Infected: ScanError
2016-07-25 19:00:31,120 [Base] [19] ERROR - Header X-MSDAVEXT_Error: 9830409
2016-07-25 19:00:31,120 [Base] [19] ERROR - Header X-QosStats: {"ApiId":0,"ResultType":2,"SourcePropertyId":0,"TargetPropertyId":42}
2016-07-25 19:00:31,120 [Base] [19] ERROR - Header X-ThrowSite: 1e99.cd8a
2016-07-25 19:00:31,120 [Base] [19] ERROR - Header X-ClientErrorCode: VirusError
2016-07-25 19:00:31,120 [Base] [19] ERROR - Header X-AsmVersion: UNKNOWN; 172.16.58.3
2016-07-25 19:00:31,120 [Base] [19] ERROR - Header X-MSEdge-Ref: Ref A: 5F6B4B4BE17D4409AECB633D4FA3697D Ref B: C945FD480CCA130AE398A0448A5F43B0 Ref C: Mon Jul 25 12:00:38 2016 PST
2016-07-25 19:00:31,120 [Base] [19] ERROR - Header Accept-Ranges: bytes
2016-07-25 19:00:31,120 [Base] [19] ERROR - Header Content-Length: 0
2016-07-25 19:00:31,120 [Base] [19] ERROR - Header Date: Mon, 25 Jul 2016 19:00:38 GMT
2016-07-25 19:00:31,120 [Base] [19] ERROR - Header P3P: CP="BUS CUR CONo FIN IVDo ONL OUR PHY SAMo TELo"
2016-07-25 19:00:31,120 [Base] [19] ERROR - Header Server: Microsoft-IIS/8.5
2016-07-25 19:00:31,135 [CL] [19] ERROR - Command::Run failed:
Copy; Source:/####01/IMG_1742.JPG; Destination:####/IMG_1742.JPG:/20151217002833/
CloudBerryLab.Base.Exceptions.Status403Exception: The remote server returned an error: (403) Forbidden. ---> CloudBerryLab.Base.HttpUtil.Light.LightWebException: The remote server returned an error: (403) Forbidden. ---> System.Net.WebException: The remote server returned an error: (403) Forbidden.
at System.Net.HttpWebRequest.GetResponse()
at po.A(pj )
--- End of inner exception stack trace ---
at po.A(pj )
at po.B(pj )
--- End of inner exception stack trace ---
at po.B(pj )
at po.b(pj )
at po.C(pj )
at pp.HL(pj )
at oI.A(op )
at oI.A(String , QA , ICancelable )
at Se.hA(Int64 , Int64 )
at Se.a()
at oI.a(String , QA , ICancelable )
at ok.A(QA , rZ )
at th.Gv(tA , String , TT )
at TV.DQ()
at TU.hV()
2016-07-25 19:00:31,135 [PL] [19] INFO - Search thread continued
2016-07-25 19:00:31,135 [PL] [19] ERROR - Copy creator interrupted
2016-07-25 19:00:31,151 [PL] [19] INFO - Cloud operations has been canceled
2016-07-25 19:00:31,213 [Base] [6] INFO - The request was canceled
2016-07-25 19:00:31,213 [PL] [19] FATAL - Fatal error occurred during Upload operation. Cloud path: ###/IMG_1742.JPG:/20151217002833/IMG_1742.JPG. IsSimple: False. Modified date: 12/17/2015 12:28:33 AM. Size: 4.2 MB (4374049)
CloudBerryLab.Base.Exceptions.Status403Exception
The remote server returned an error: (403) Forbidden.
at po.B(pj )
at po.b(pj )
at po.C(pj )
at pp.HL(pj )
at oI.A(op )
[Truncated]
at Se.a()
at oI.a(String , QA , ICancelable )
at ok.A(QA , rZ )
at th.Gv(tA , String , TT )
at TV.DQ()
at TU.hV()
at uL.D()
CloudBerryLab.Base.HttpUtil.Light.LightWebException
The remote server returned an error: (403) Forbidden.
at po.A(pj )
at po.B(pj )
System.Net.WebException
The remote server returned an error: (403) Forbidden.
at System.Net.HttpWebRequest.GetResponse()
at po.A(pj )
2016-07-25 19:00:31,213 [CL] [19] INFO - The queue worker thread 19 finished.
2016-07-25 19:00:31,213 [CL] [6] ERROR - Command::Run failed:
Copy; Source:/####/IMG_1746.JPG:/20151217002748/
The operation was canceled.
username_1: Is anyone still seeing issues like this? There have been a few fixes in relation to virus scanning that hopefully resolved such repros.
Status: Issue closed
username_3: Still getting this error.
2017-02-22 12:15:05,764 [Base] [4] ERROR - Header X-MSNSERVER: BN2BAP25420FC5B
2017-02-22 12:15:05,771 [Base] [4] ERROR - Header Strict-Transport-Security: max-age=31536000; includeSubDomains
2017-02-22 12:15:05,778 [Base] [4] ERROR - Header X-MSDAVEXT_Error: 9830409
2017-02-22 12:15:05,785 [Base] [4] ERROR - Header X-Virus-Infected: ScanError
2017-02-22 12:15:05,791 [Base] [4] ERROR - Header X-QosStats: {"ApiId":0,"ResultType":2,"SourcePropertyId":0,"TargetPropertyId":42}
2017-02-22 12:15:05,798 [Base] [4] ERROR - Header X-ThrowSite: 1e99.cd8a
2017-02-22 12:15:05,810 [Base] [4] ERROR - Header X-ClientErrorCode: VirusError
2017-02-22 12:15:05,817 [Base] [4] ERROR - Header X-AsmVersion: UNKNOWN; 192.168.3.11
2017-02-22 12:15:05,823 [Base] [4] ERROR - Header X-MSEdge-Ref: Ref A: DE3D1241C7C54CF29A9FA5025F9F37C9 Ref B: WSTEDGE0409 Ref C: Wed Feb 22 12:15:05 2017 PST
2017-02-22 12:15:05,829 [Base] [4] ERROR - Header Accept-Ranges: bytes
2017-02-22 12:15:05,835 [Base] [4] ERROR - Header Content-Length: 0
2017-02-22 12:15:05,842 [Base] [4] ERROR - Header Date: Wed, 22 Feb 2017 20:15:05 GMT
2017-02-22 12:15:05,850 [Base] [4] ERROR - Header P3P: CP="BUS CUR CONo FIN IVDo ONL OUR PHY SAMo TELo"
2017-02-22 12:15:05,857 [Base] [4] ERROR - Header Server: Microsoft-IIS/8.5
2017-02-22 12:15:05,881 [CL] [4] ERROR - Command::Run failed:
File: /IPKs/ventana/kernel-dev_3.10.17-12_ventana.ipk; Destination:ventana
username_3: 6 months later... still getting that error on a load of files.
username_1: @username_3 It shouldn't be possible for that error to be returned any more. If you're still seeing it could you provide a new set of response data similar to what you provided above?
username_3: Unfortunately I had just deleted the job to "start from scratch" and of course now everything is working perfectly without any errors and of course it deleted the logs for the previous job when I deleted the job. #EveryTime.
If you say the error is dead I'll believe you. Maybe I accidentally missed the end of the previous logs from months ago.
Thanks though for the follow up! I'll post specific logs if through some horrible miracle it starts breaking again.
username_1: Thanks @username_3, definitely let us know if you see something like this again. In the past a `ScanError` would fail the request, but we made a change to make sure that doesn't happen so hopefully it's resolved :). |
rancher/rancher | 177777754 | Title: Rancher ingress and Traefik: How to properly disable `rancher-ingress-controller`
Question:
username_0: **Rancher Version:**
v1.2.0-pre2
**Docker Version:**
1.12.1
Let's try to be simple and comprehensive :wink: It will be about Rancher with Kubernetes. I am a beginner in k8s and Rancher, so please correct me if I have misunderstood some points.
I want to use Rancher+k8s for deploying a bunch of different apps. I will use namespaces to isolate them from each other.
Hosts are in a private cloud and public traffic will be routed by a historic nginx, configured by hand.
For exposing the apps, I am thinking of creating one or more Ingresses in each namespace/app that should be exposed.
With Rancher ingress, it will create a new container binding a host port for each `Ingress`.
I would use host-based rules to bring all the rules together (all rules from all namespaces), expose a single port, and scale to the number of hosts.
Is this possible to accomplish with Rancher Ingress?
I was able to produce that with [Træfɪk, a modern reverse proxy](https://github.com/containous/traefik). I simply deployed a template like this [k8s.rc.yaml](https://raw.githubusercontent.com/containous/traefik/master/examples/k8s.rc.yaml).
But it conflicts with the Rancher Ingress controller: for each Ingress created, Rancher creates a load balancer and binds a host port.
Is it possible to disable Rancher Ingress?
I tried to stop `rancher-ingress-controller` in the system stack but, 1. it seems hacky, 2. it makes the Rancher UI display the "Setting up Kubernetes..." view.
The question on the forum (asked by my colleague): https://forums.rancher.com/t/rancher-ingress-and-traefik/3930
Thanks
Status: Issue closed
Answers:
username_1: @username_0 stopping an individual container won't help as service reconcile will restart it. You can stop the ingress-controller service, and it will disable the Rancher ingress controller. Note that if you decide to upgrade the k8s system stack to the latest template version, the service will be restarted, and you might need to stop it again.
username_2: But if I disable the Rancher ingress controller, the Kubernetes view disappears and the Kubernetes installation becomes active because this service is not running.
username_3: Does Rancher ingress obey `kubernetes.io/ingress.class`? Perhaps that could be used? https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx#running-multiple-ingress-controllers
https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/BETA_LIMITATIONS.md#disabling-glbc
username_3: Looks like in v.1.1.4 it does not.
username_3: I think this should be revisited -- it appears there is no way to disable Rancher's k8s ingress integration without causing Catalog to fail (it returns to the "Setting up Kubernetes" screen)
Status: Issue closed
username_5: In the latest k8s catalog (k8s v1.6.2), we added a configuration option to remove the ingress controller.
username_6: @username_5 I see an option "Enable Kubernetes Add-ons", that disables all services.
Is there another option with finer granularity to disable only the ingress controller?
username_5: @username_6 This will be available when our template for k8s 1.6.x is released.
username_7: However the rancher-ingress-controller is still started by default.
username_8: @username_7 - support for disabling the ingress controller is in Rancher 1.6.3, released 4 days ago (and not yet tagged stable). This issue is closed, so continuing conversation in it does not notify the Rancher team. If you find that you're still having issues when running 1.6.3 or later, please open a new issue.
username_9: How can we set the option?
username_6: Just want to confirm that this works for me.
You can edit the Kubernetes environment template.
- Click the first dropdown with your environments and click "Manage Environments".
- On the environments view, at the bottom you see the Environment Templates section, where you click edit on the Kubernetes one.
- On this view you can rename your template, change the orchestration type, etc. Click "Edit Config".
- Make sure it says "Kubernetes 1.6.6" on the top. You need Rancher v1.6.3 for this. Then find the "Enable Rancher Ingress Controller" radiobutton.
Once you've updated this, the next deployment or your next upgrade will adhere to this setting.
If you watch what the browser sends to the Rancher API you will see that it's the ENABLE_RANCHER_INGRESS_CONTROLLER field in the /v2-beta/projectTemplates/#{kubeTemplateId} endpoint.
username_7: Thx! Yes, I was testing with Rancher 1.6.2 initially. In Rancher 1.6.4 I cannot edit the default Kubernetes template; I see the option, but I get a 404 when saving the template with a PUT at https://ranchertest.io.nuxeo.com/v2-beta/projecttemplates/1pt1.
However I have created a new k8s template, and I can disable the Rancher ingress controller in the new template.
username_10: @username_7 - I can't reproduce your 404 problem. Which browser were you using? Did you change anything else in the config or template beside setting ingress to false?
username_11: @username_6 It's unclear to me how I would apply an updated template to an existing environment. I never understood the "templating" stuff in Rancher. Maybe I haven't found the documentation.
Anyway, I have followed your instructions to disable the Rancher ingress controller. How do I get my current k8s environment updated so that these changes will take effect?
username_6: Go to Kubernetes / Infrastructure Stacks. Under the "kubernetes" collapsible group, click on the "Up to date" or "Upgrade available" buttons on the right side.
It lets you pick a newer template if there is any, or allows you to fiddle with the settings of the current one. Now, I have only upgraded in the past, so I don't know if changing the existing template's params will trigger a reconfiguration, but an upgrade does. Hope it will turn the ingress off for you. You might have to delete the ingress controller manually after changing the template. If you do it without changing the template, it will always redeploy it :)
username_11: Thanks! What threw me is that the button is actually called "up to date". I'd never thought of that being an entrypoint into changing things up. I'll add a separate bug for that, but I managed to remove the Rancher controller so I'm happy. |
firebase/firebase-android-sdk | 671479752 | Title: Firebase Authentication - Email field becomes empty
Question:
username_0: 1. Users sign in as anonymous
`mAuth.signInAnonymously()`
2. At a certain point, the user sets email/pw
`AuthCredential credential = EmailAuthProvider.getCredential(email, password);`
`mAuth.getCurrentUser().linkWithCredential(credential)`
3. Most of them are OK. But the email field becomes empty for some users' accounts.
When this problem occurs, the user is neither anonymous nor non-anonymous.
The following is what these users look like in the firebase console.

And the following is what an ordinary anonymous user looks like.

This only happens in Android. Not on iOS.
And the most important part is that I CAN'T REPRODUCE THIS BUG.
I've become aware of this bug through some crash logs.
Any help would be appreciated.
Answers:
username_1: Thanks for the super clear report!
username_0: Maybe this is happening on iOS too.
I'm not 100% sure.
username_2: Tracking this internally (b/162959460). Thanks for raising this issue!
username_0: Any update? It's been more than 2 weeks.
username_0: @username_2 @malcolmdeck @username_1 Any news? Please.
username_2: Hi, unfortunately no update yet. We're still tracking this internally
username_3: Hi @username_0, just bumping this thread. Since it's been a while, may I ask if you're still seeing this issue on the latest SDK? |
itzg/docker-minecraft-server | 1160768290 | Title: Allow delayed start by using env varibles
Question:
username_0: ### Enhancement Type
A completely new feature
### Describe the enhancement
Hey there!
It would be really great to have some environment variable that allows the server to start with a given delay in seconds using environment variables.
For example:
Setting `START_DELAY=5` would make the server hold for 5 seconds and then start.
## Why this would be useful?
Imagine we have a stack with docker-compose where we run a database and a Redis server; there might be a situation where the Minecraft server starts faster than the DB, and some plugins would complain that they can't access the DB.
Having the delay enabled for servers would mitigate this issue seamlessly.
This could also be applied in the [docker-bungeecord](https://github.com/username_2/docker-bungeecord) project, as it could benefit from this as well.
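Concretely, the requested behavior is just a small entrypoint wrapper. A sketch in Python (the `START_DELAY` name comes from this proposal; the wrapper itself is hypothetical and not part of the image):

```python
import os
import subprocess
import sys
import time

def delayed_start(argv):
    """Sleep START_DELAY seconds if the variable is set, then run the
    real server command and return its exit code."""
    delay = float(os.environ.get("START_DELAY", "0"))
    if delay > 0:
        print("START_DELAY set, waiting %gs before starting the server" % delay)
        time.sleep(delay)
    return subprocess.call(argv)

# e.g. START_DELAY=5 plus delayed_start(["java", "-jar", "server.jar"])
# would hold for 5 seconds and then start the server.
```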
Answers:
username_1: This should already be controlled by depends_on and proper health checks on the containers themselves. AKA if the DB has a proper health check it'll let docker know when it's "healthy", and it can start minecraft when set in "depends_on". More information can be found on the docker compose website. https://docs.docker.com/compose/startup-order/
username_0: For MariaDB/MySQL the container is marked as "healthy" even when it is not yet accepting connections, which leads to the issue I describe in the request.
username_2: I agree with @username_1's suggestion. This is a broader aspect of Docker that is already solved, and adding an arbitrary startup delay is a brittle solution.
The official MariaDB, etc. images tend not to provide health checks out of the box, since defining them is left to the end user. So, you'll need to declare the health checks yourself.
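For concreteness, the health-check arrangement described above looks roughly like this in a compose file (image tags, the `mysqladmin` probe, and all service names are illustrative, not taken from this repo's docs):

```yaml
services:
  db:
    image: mariadb:10.6
    environment:
      MYSQL_ROOT_PASSWORD: example
    healthcheck:
      # mark the container healthy only once the server answers,
      # not merely once the process exists
      test: ["CMD", "mysqladmin", "ping", "-h", "127.0.0.1"]
      interval: 5s
      timeout: 3s
      retries: 10

  mc:
    image: itzg/minecraft-server
    environment:
      EULA: "TRUE"
    depends_on:
      db:
        condition: service_healthy
```

With `condition: service_healthy`, compose delays starting `mc` until the probe passes, which is what makes a fixed sleep unnecessary.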
Status: Issue closed
|
greiman/SdFat | 1028442003 | Title: SD_CARD_ERROR_INIT_NOT_CALLED error at begin()
Question:
username_0: Hi !
I made an experiment with SDFat on a Nano, but each time I try to `SD.begin(SD_CONFIG)` I obtain the SD_CARD_ERROR_INIT_NOT_CALLED error code
My definition at the top of my Arduino file is:
```
#include <SdFat.h>
#define SD_FAT_TYPE 0
SdFat SD;
int SD_PIN = A0;
#define SD_CONFIG SdSpiConfig(SD_PIN, SHARED_SPI)
```
And I have this in my setup():
```
// Initialize the SD.
if (!SD.begin(SD_CONFIG)) {
Serial.println("BAD");
SD.initErrorHalt(&Serial);
}
```
SDFat version : 2.10
SDInfo example works without problem
Thanks !<issue_closed>
Status: Issue closed |
spcl/rFaaS | 955106219 | Title: Add testing framework
Question:
username_0: We need a custom system to execute integration and system tests without mocking.
- [ ] Add device configuration (#1)
- [ ] Add user-based configuration of testing endpoints
- [ ] Integrate gtest with CMake
- [ ] Add test: basic allocation
- [ ] Add test: random-based allocation
- [ ] Add test: simple invocation
- [ ] Add test: warm invocations
- [ ] Add test: async invocations
- [ ] Add test: parallel invocations |
nodejs/TSC | 1066014303 | Title: Node.js Technical Steering Committee (TSC) Meeting 2021-12-02
Question:
username_0: ## Time
**UTC Thu 02-Dec-2021 22:00 (10:00 PM)**:
| Timezone | Date/Time |
|---------------|-----------------------|
| US / Pacific | Thu 02-Dec-2021 14:00 (02:00 PM) |
| US / Mountain | Thu 02-Dec-2021 15:00 (03:00 PM) |
| US / Central | Thu 02-Dec-2021 16:00 (04:00 PM) |
| US / Eastern | Thu 02-Dec-2021 17:00 (05:00 PM) |
| EU / Western | Thu 02-Dec-2021 22:00 (10:00 PM) |
| EU / Central | Thu 02-Dec-2021 23:00 (11:00 PM) |
| EU / Eastern | Fri 03-Dec-2021 00:00 (12:00 AM) |
| Moscow | Fri 03-Dec-2021 01:00 (01:00 AM) |
| Chennai | Fri 03-Dec-2021 03:30 (03:30 AM) |
| Hangzhou | Fri 03-Dec-2021 06:00 (06:00 AM) |
| Tokyo | Fri 03-Dec-2021 07:00 (07:00 AM) |
| Sydney | Fri 03-Dec-2021 09:00 (09:00 AM) |
Or in your local time:
* https://www.timeanddate.com/worldclock/fixedtime.html?msg=Node.js+Foundation+Technical%20Steering%20Committee%20(TSC)+Meeting+2021-12-02&iso=20211202T2200
* or https://www.wolframalpha.com/input/?i=10PM+UTC%2C+Dec+02%2C+2021+in+local+time
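The table is one UTC instant rendered in each zone; for example, with Python's `zoneinfo` (zone names chosen for illustration):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# the meeting time from the table: 22:00 UTC on 2021-12-02
meeting = datetime(2021, 12, 2, 22, 0, tzinfo=timezone.utc)

for zone in ["America/Los_Angeles", "Europe/Berlin", "Asia/Tokyo"]:
    local = meeting.astimezone(ZoneInfo(zone))
    print(zone, local.strftime("%a %d-%b %H:%M"))
```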
## Links
* Minutes Google Doc: <https://docs.google.com/document/d/1N_WAgFmguv5mmVLGyp3Rx6rOZA5MK-rS7luO12mf69k/edit>
## Agenda
Extracted from **tsc-agenda** labelled issues and pull requests from the **nodejs org** prior to the meeting.
### nodejs/node
* stream: remove thenable support [#40773](https://github.com/nodejs/node/pull/40773)
* docs: Clarification around real world risks and use cases of VM module [#40718](https://github.com/nodejs/node/issues/40718)
* Rename default branch from "master" to "main" [#33864](https://github.com/nodejs/node/issues/33864)
* Migration of core modules to primordials [#30697](https://github.com/nodejs/node/issues/30697)
### nodejs/TSC
* add security triaging to core repo GOVERNANCE.md and/or charter? [#1100](https://github.com/nodejs/TSC/issues/1100)
### nodejs/admin
* move bnb/devenv to nodejs/devcontainer [#641](https://github.com/nodejs/admin/issues/641)
## Invited
* <NAME> @aduh95 (TSC)
* <NAME> @apapirovski (TSC)
* <NAME> @BethGriggs (TSC)
* <NAME> @username_3 (TSC)
* <NAME> @ChALkeR (TSC)
* <NAME> @cjihrig (TSC)
* <NAME> @codebytere (TSC)
* <NAME> @danielleadams (TSC)
* <NAME> @fhinkel (TSC)
* <NAME> @gabrielschulhof (TSC)
* <NAME> @username_2 (TSC)
[Truncated]
Zoom link: <https://zoom.us/j/611357642>
Regular password
## Public participation
We stream our conference call straight to YouTube so anyone can listen to it live, it should start playing at **<https://www.youtube.com/c/nodejs+foundation/live>** when we turn it on. There's usually a short cat-herding time at the start of the meeting and then occasionally we have some quick private business to attend to before we can start recording & streaming. So be patient and it should show up.
---
**Invitees**
Please use the following emoji reactions in this post to indicate your
availability.
* :+1: - Attending
* :-1: - Not attending
* :confused: - Not sure yet
Answers:
username_1: No Moderation Team activity this week.
@nodejs/tsc @nodejs/moderation
username_2: @username_3 - I cannot make it to tomorrow's sitting, as it is 3.30 AM for me. Request you to review the refined voting options for primordials, documented in https://github.com/nodejs/TSC/issues/1104#issue-1029188071 , and provide your views - in concurrence, or any alternative views you may have, if you are attending today. If not, we could defer it to the next sitting.
username_3: @username_2 thanks for the ping, I'll have a look at it before the meeting.
Status: Issue closed
username_0: PR for minutes: https://github.com/nodejs/TSC/pull/1134 |
rust-lang/rust | 17601792 | Title: Move LLVM bindings to their own crate
Question:
username_0: Right now our LLVM bindings are tightly integrated into trans. It would be good, for abstraction's sake, to fence LLVM off into its own crate. It would also make LLVM more available to other Rust projects - Rust is a good language for writing compilers.
cc #8274
Answers:
username_1: I think this is about as addressed as it's going to get. We have the `rustc_llvm` crate for internal use, and people can use Cargo if they want to make better / nonrustc specific bindings.
@username_0, if you disagree, feel free to re-open :)
Status: Issue closed
|
slimkit/plus | 368525928 | Title: [Admin backend - Music] - One track missing, one track attributed to the wrong album
Question:
username_0: **Describe the bug**
A clear and concise description of what the bug is.
On the test server we added a singer and albums in the admin backend, then added music. After adding, when the frontend fetches the data, one track is missing and one track is attributed to the wrong album.

Track #2 has the wrong album attribution (the album shown in the admin backend is correct, but it is wrong when fetched from the frontend); track #3 is missing.
- Test server: test-plus
- Server version: 2.0.4
Album API endpoint: http://test-plus.zhibocloud.cn/api/v2/music/specials/2
**Additional context**
Add any other context about the problem here.
Answers:
username_1: The audio feature is so useful. Why was it removed? There are more and more private car owners nowadays, and listening to audio in the car is very convenient.
veyeimaging/raspberrypi | 598857486 | Title: Please add newer Pi 4 board revision 'c03112'
Question:
username_0: https://github.com/veyeimaging/raspberrypi/blob/2b088f23d69d6ea10799191a9b731f030c680ca6/i2c_cmd/bin/camera_i2c_config#L102
Please add `c03112` to the list of boards here for newer Raspberry Pi 4 4GB revisions.
Answers:
username_1: done, thank you!
Status: Issue closed
|
pandas-dev/pandas | 392749038 | Title: CI: Failing geopandas test
Question:
username_0: https://travis-ci.org/pandas-dev/pandas/jobs/470161644#L2601
Answers:
username_0: Here are the differing packages between
https://travis-ci.org/pandas-dev/pandas/builds/470122716 and
https://travis-ci.org/pandas-dev/pandas/builds/470140296
package | old version | old hash | new version | new hash
----------------------- | -------------- | -------------- | --------------- | --------------
cryptography | 2.3.1 | py36hc365091_0 | 2.4.2 | py36h1ba5d50_0
curl | 7.61.0 | h84994c4_0 | 7.63.0 | hbc83047_1000
fiona | 1.7.12 | py36h3f37509_0 | 1.7.10 | py36h48a52f0_0
hdf5 | 1.10.2 | hba1933b_1 | 1.8.18 | h6792536_1
json-c | 0.13.1 | h1bed415_0 | 0.12.1 | ha6a3662_2
libcurl | 7.61.0 | h1ad7b7a_0 | 7.63.0 | h20c2e04_1000
libgdal | 2.2.4 | h6f639c0_1 | 2.2.2 | h6bd4d82_1
libnetcdf | 4.6.1 | h10edf3e_1 | 4.4.1.1 | h97d33d9_8
libpq | 10.5 | h1ad7b7a_0 | 11.1 | h20c2e04_0
openssl | 1.0.2p | h14c3975_0 | 1.1.1a | h7b6447c_0
proj4 | 5.0.1 | h14c3975_0 | 4.9.3 | hc8507d1_7
psycopg2 | 2.7.5 | py36hb7f436b_0 | 2.7.6.1 | py36h1ba5d50_0
python | 3.6.6 | h6e4f718_2 | 3.6.7 | h0371630_0
qt | 5.9.6 | h8703b6f_2 | 5.9.7 | h5867ecd_1
some packages also had their build number incremented, but the version didn't change.
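A diff like the table above can be produced mechanically from two `{package: (version, build)}` maps; a quick sketch (package data abbreviated and illustrative, not the full build lists):

```python
def diff_packages(old, new):
    """Return {name: (old, new)} for packages present in both maps
    whose (version, build) pair changed."""
    common = old.keys() & new.keys()
    return {name: (old[name], new[name])
            for name in sorted(common) if old[name] != new[name]}

old = {"openssl": ("1.0.2p", "h14c3975_0"),
       "python":  ("3.6.6",  "h6e4f718_2"),
       "pytest":  ("4.0.2",  "py36_0")}
new = {"openssl": ("1.1.1a", "h7b6447c_0"),
       "python":  ("3.6.7",  "h0371630_0"),
       "pytest":  ("4.0.2",  "py36_0")}

for name, (was, now) in diff_packages(old, new).items():
    print("%s: %s -> %s" % (name, was[0], now[0]))
```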
username_0: In https://github.com/pandas-dev/pandas/pull/24360 we pinned Python to 3.6.6. We should remove that pin once libcurl and openssl have been updated for 3.6.7.
username_1: Looks like the pin went away in the travis builds. Closing
Status: Issue closed
|
ant-design/ant-design | 246936571 | Title: input has a avatar in safari
Question:
username_0: <!--
IMPORTANT: Please use the following link to create a new issue:
http://new-issue.ant.design
If your issue was not created using the app above, it will be closed immediately.
-->
If the input placeholder is English it is OK, but if the placeholder is Chinese, the end of the input shows an avatar; in Chrome it works well.
<img width="279" alt="2017-08-01 10 16 36" src="https://user-images.githubusercontent.com/12936800/28806475-ea4ee23c-76a2-11e7-8f52-2d245a531cdc.png">
<img width="303" alt="2017-08-01 10 16 52" src="https://user-images.githubusercontent.com/12936800/28806476-ebd8b6a0-76a2-11e7-8d00-0472224e2935.png">
7552-2C-2018/App-Server | 372226161 | Title: ID of get individual post
Question:
username_0: The GET for an individual post asks for the facebookID to check authorization but not the id that is part of the key, so it does not find the post. The current composite key should be replaced with a simpler one to avoid errors.
Status: Issue closed |
mpv-android/mpv-android | 264219259 | Title: Android kitkat support
Question:
username_0: Hello,
I wanted to know if Kitkat could be supported, or if mpv-android depends on Android 5 too much for this to happen.
Answers:
username_1: i think mpv-android does not actually use much from 5, the reason it was targeted in the beginning was the support for OpenGL ES 3.0 IIRC
oh also arm64 and x86_64 both require at least android 5
username_2: Hi,
Yes, it would be really great if one could build mpv on Android KitKat (and even Jelly Bean :-))!
username_1: Here's a quick test build targeting KitKat ([branch](https://github.com/mpv-android/mpv-android/tree/username_1/kitkat_test)):
[mpv-android_kitkat-test.zip](https://github.com/mpv-android/mpv-android/files/1882054/mpv-android_kitkat-test.zip)
We certainly don't plan to support this officially.
Status: Issue closed
username_2: @username_1 Thanks
Actually my Android KitKat device is down because its filesystem is full, but I don't want to do a factory reset because that would erase all my data.
I have another device with Android 4.1.2; can you please rebuild this test build for Android 4.1.2?
It would be really great :)
username_2: @username_1 I've just tried it. I barely had time to browse (with the mpv file manager?) to my `/Removable/MicroSD/` directory before Android said:
`Unfortunately, mpv has stopped.`
username_1: Our file manager library is probably using some too-new API.
¯\\\_(ツ)_/¯
username_1: Maybe, but I'm not going to bother.
You can directly open videos in mpv from your native file app anyway.
username_2: @username_1 If I use another file manager such as `Amaze` to do an "Open with" and then select the `mpv` application, mpv crashes with the same error message:
`Unfortunately, mpv has stopped.`
Any idea why ?
username_1: I'd need a logcat to answer that
username_2: @username_1 I don't know how to generate a logcat.
I think my mpv problem has something to do with my RAM, because 96% to 98% of it (1 GB) is already used.
Do you know a good program to find out which applications are consuming the most RAM?
username_3: A "legacy" branch/fork of mpv-android similar to NewPipe-Legacy would be quite nice as a youtube-dl player. I can't even use NewPipe-Legacy on my older devices because the kernel source is no longer available to enable swap and it eats too much memory. Right now I've resorted to VLC 3.0.13 as that is the latest version with minsdk that allows 4.1. |
joel16/3DShell | 342223309 | Title: Searching freezes 3DSX version
Question:
username_0: Confirming or canceling the prompt with any input string freezes the system (holding power required to turn off) on the 3DSX version, running on Luma3DS 9.0 via Download Play on a New 3DS XL (11.7). This is a fresh install (old files removed beforehand).
Answers:
username_1: Crap, knew I forgot to check something. Will look into it.
username_1: Fixed https://github.com/username_1/3DShell/commit/795d14597b157f2027a251d1af9033893df03cd3
You can update using the nightly option.
Status: Issue closed
username_1: Reopening as this was wrongly reported as working.
username_1: Fixed for sure now :P https://github.com/username_1/3DShell/commit/6657bb05207e3a939c9d4f129c8e03efe66aec18
Status: Issue closed
|
UECIDE/UECIDE | 371136166 | Title: Editor popup menu improvements
Question:
username_0: Please don't swear :) but could you add additional items to the editor popup menu? I would like to see two more items (context-dependent):
- open file (at cursor position, for example at ``#include`` section)
- go to declaration/definition (for variables, including external references)
I believe UECIDE would become "IDE-ish" with these features, and it's not too hard to implement them ('cause you already have the full references list, right?)
P.S. It would be good to add an additional search-results window (to the bottom pane) and "Find all references" to the popup menu, but this is a little bit harder.
P.P.S. If you feel that I'm submitting too many feature requests, please let me know...
Answers:
username_1: How's this for you?

Can't really do the "open file" thing - it only makes sense for header files that are part of your sketch, and those you have in the tree on the left (and now in the Sketch menu as well).
username_1: I've also created a "minimalist" mode:

username_0: Thanks! I'll try today or tomorrow and let you know.
username_1: I haven't released it yet. It's in my *highly experimental* new version with many many changes to the way the UI works.
I guess I could build a pre-release version...
username_1: Ok, there's a beta release now: https://github.com/UECIDE/UECIDE/releases/tag/0.10.0-beta1
username_0: The new "go to" menu is working, I confirm. The commenting shortcut is not working (for me). Also, commenting one line is of limited use - it only makes sense when you can quickly comment/uncomment a whole selected block of code; non-default (non-WordStar-compatible) shortcuts are also completely useless for me.
By the way, you should try (at least once) Visual Studio. It's an industry standard used for decades around the world. Your own "bicycle invention" isn't too helpful, sorry.
P.S. I finally ended up with [Visual Micro](https://www.visualmicro.com/): it has (mostly) all the features I'm using and have used for years (because it's a plugin to VS). It also has a nice ability to keep and properly handle projects for different platforms in one solution. And it's also free (but not open source); however, I paid them (to remove the nag screen :) ).
Sorry for disturbing you! I really appreciate your work but unfortunately UECIDE can't compete with Visual Studio, it's a different kind of software, practically.
username_1: I have been working on the other things today. In the next release, comments will be toggled with ctrl-shift-c and work either on the current line or the selected text. If the first line is commented then comments are removed for any selected lines. If the first is not commented then comments are added to all selected lines that don't have comments.
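That first-line-decides rule fits in a few lines. A sketch in Python (UECIDE itself is Java; this is just the logic, and the `//` prefix is an assumption):

```python
def toggle_comments(lines, prefix="//"):
    """Toggle line comments using the first-line-decides rule described
    above: if the first selected line is commented, uncomment every
    commented line; otherwise comment every line that isn't already."""
    def is_commented(line):
        return line.lstrip().startswith(prefix)

    if not lines:
        return []
    if is_commented(lines[0]):
        out = []
        for line in lines:
            if not is_commented(line):
                out.append(line)
                continue
            indent = line[:len(line) - len(line.lstrip())]
            body = line.lstrip()[len(prefix):]
            if body.startswith(" "):
                body = body[1:]  # drop the one space the prefix added
            out.append(indent + body)
        return out
    return [line if is_commented(line) else prefix + " " + line
            for line in lines]
```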
I can't (and won't) use VS since I don't use Windows. I do have a copy somewhere around, but never use it unless I have to write a piece of Windows GUI software (and then under duress) - so that means never. And UECIDE is not supposed to compete with VS. As you say - it's a different kind of software.
* Arduino IDE: Robin reliant
* UECIDE: Volvo S90
* VS: MiG-29
I have found a way of doing the "open include file" thing - I have been playing with the token parser of RSyntaxTextArea. I've added all sorts of things now: open a header file if it's part of the sketch, open the library folder if it's not part of the sketch, go to the manual page for a standard Arduino function, go to the definition of globals and defines, etc. All sorts of funky things.
* c35acd6: Advanced token parser allows more context-aware options in popup menu
Status: Issue closed
|
MicrosoftDocs/azure-docs | 792955937 | Title: The Japanese translation is incorrect.
Question:
username_0: "Expected network bandwidth (Mbps)" in the size table has been translated as "Required network bandwidth".
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 908096d8-f3ad-8402-56c0-381e1be906cf
* Version Independent ID: aaef6dd2-65c8-9ce6-f3d0-3f1ffa03bb86
* Content: [Memory optimized Dv2 and DSv2-series VMs - Azure Virtual Machines](https://docs.microsoft.com/en-us/azure/virtual-machines/dv2-dsv2-series-memory#size-table-definitions)
* Content Source: [articles/virtual-machines/dv2-dsv2-series-memory.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/virtual-machines/dv2-dsv2-series-memory.md)
* Service: **virtual-machines**
* Sub-service: **sizes**
* GitHub Login: @joelpelley
* Microsoft Alias: **jushiman**
Answers:
username_1: @username_0
Thanks for the feedback! I have assigned the issue to content author to check and update the document as appropriate.
username_1: @joelpelley
Can you please check and add your comments on this doc update request as applicable.
username_2: @username_0 Can you confirm that this has been fixed in more recent versions of the doc? If it is fixed, I'd like to close this issue. Thanks!
username_0: @username_2 I checked the document, but it has not been fixed yet.
username_2: @username_0 Thanks for the update! I've sent this off to the localization team. I'll update this issue when I hear back :) |
scikit-learn-contrib/imbalanced-learn | 377084539 | Title: I don't think the conversion to dense array is required here as self.ohe_.inverse_transform() will also work on sparse matrix. It can cause a MemoryError for large datasets and/or if there are too many categorical values!
Question:
username_0: https://github.com/scikit-learn-contrib/imbalanced-learn/blob/f17107efa56199d66523bd4253c80be7fb60a6ec/imblearn/over_sampling/_smote.py#L986
Answers:
username_0: I don't think the conversion to dense array is required here as self.ohe_.inverse_transform() will also work on sparse matrix. It can cause a MemoryError for large datasets and/or if there are too many categorical values!
username_1: You are right. I probably make a mistake when dealing with the deprecation warning raised by numpy.
Status: Issue closed
|
MatthieuHernandez/StraightforwardNeuralNetwork | 869286538 | Title: Improve Max-pooling layer
Question:
username_0: [Max-pooling](https://jefkine.com/general/2016/09/05/backpropagation-in-convolutional-neural-networks/) - the error is just routed back to where it came from - the "winning unit" - because the other units in the previous layer's pooling blocks did not contribute to the output; hence all the others are assigned a gradient of zero.
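A toy sketch of this routing rule (plain Python for illustration, not the project's C++ API): the forward pass remembers each window's argmax, and the backward pass scatters each output gradient back to that winning input only.

```python
def maxpool_forward(xs, k=2):
    """1-D max-pooling: return pooled values and the index of each window's winner."""
    outs, argmax = [], []
    for i in range(0, len(xs), k):
        window = xs[i:i + k]
        j = max(range(len(window)), key=lambda t: window[t])
        outs.append(window[j])
        argmax.append(i + j)
    return outs, argmax

def maxpool_backward(grad_out, argmax, n):
    """Scatter each output gradient back to its winning input; everyone else gets zero."""
    grad_in = [0.0] * n
    for g, j in zip(grad_out, argmax):
        grad_in[j] += g
    return grad_in

xs = [3.0, 1.0, 0.5, 4.0]
outs, argmax = maxpool_forward(xs)   # outs == [3.0, 4.0], winners at indices 0 and 3
grad_in = maxpool_backward([1.0, 1.0], argmax, len(xs))
print(grad_in)                       # [1.0, 0.0, 0.0, 1.0]
```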
ethercreative/seo | 548470731 | Title: Incorrect title format
Question:
username_0: I'm not sure if I'm doing anything wrong, but when I use the default title format, which is this:
```
{title} - {{sitename}}
```
My title for entries is generated like this:
```
site name - site name
```
And when I switch the lock icon on `{title} -` so it's locked, the title changes to this:
```
- sitename
```
Shouldn't the default title config use the current entry title?
Answers:
username_1: any news on this?
username_2: I have the same issue. It seems to happen on all installations now.
username_3: can confirm
username_4: Also having this issue.
username_4: Anyone figure this out?
username_5: Same issue here.
username_4: This is definitely a bug right? I have this exact problem and am still not sure if I'm doing something wrong or not.
username_4: I have found perhaps the issue I was having is not the same as this. I understood that I only needed to add the SEO field if I wanted to be able to set custom attributes per page. I did not need to so I just installed the plugin and added the "{% hook "seo" %}" part into my template. The result was instead of "Site name - page title" I just got "Site name - Site name". I have now realised that you need to add the SEO field to entries for it to work correctly, even if you do not want/need to be able to set page titles in the entry edit page.
username_6: The title never shows up for me. I've done all that is listed above including what @username_4 stated. Any ideas?
username_7: Figured it out. There's a difference between the Field settings and the SEO settings. It's a bit confusing.
username_8: @username_6-opi Can you elaborate? I'm still getting the same site name - site name for everything even if I edit it on an entry itself.
username_8: Nm. I'm an idiot. I had the handle set different than 'seo' from {% hook 'seo' %}
username_9: See @username_4 answer. You need to create SEO field and add it to entries. Even if You have no need to adjust seo per entry.
username_10: I'm getting this error too. I've added the SEO field to all my entries. The title is set to {title} - {{siteName}} but the title is output as " - siteName".
Any ideas what I've missed?
username_11: Can those of you who found solutions to this provide a bit more detail on how you solved it?
@username_6-opi
@username_8
Currently I have a _base.twig file and in the <head> section I have word for word: {% hook "seo" %}
In SEO config I have the title set to: {title}
I have also confirmed that the entry for which I am testing has a title of "About us"
username_12: @username_11 The `{% hook 'seo' %}` tag will look for a field with the handle `seo` on the entry you're viewing. So, you'll want to go to the **field** settings to adjust the variables included in the meta.
If you don't have a field called `seo` on the entry, you'll want to create that first. Here's an example from one of our sites.

username_11: ugh... my bad. I feel like this a solid RTFM moment for me. Thank you for taking the time and effort to post this @username_12 !! |
SzFMV2020-Tavasz/AutomatedCar-A | 571926949 | Title: Add System Component
Question:
username_0: Add the system component.
- [ ] Put it on the `dev-team-3` branch
Answers:
username_1: move it to done
Status: Issue closed
username_0: Add the system component.
Tasks:
- [x] create the component on its own branch (#59-add-system-component)
- [x] create the PowerTrain class
- [x] integrate a PowerTrain instance into the AutomatedCar class
Status: Issue closed
|
JFXtras/jfxtras | 394589771 | Title: Error setting calendar for CalendarPicker
Question:
username_0: If I set a UTC Calendar with hours, minutes and seconds set to 0 on the CalendarPicker, the clock will start from 2 instead of 0.
Answers:
username_1: Show me some code. :-)
username_0:
```java
Calendar calendar = Calendar.getInstance(TimeZone.getTimeZone("UTC"), Locale.US);
calendar.setTimeInMillis(DateTimeDialog.getInstance().getTimeToSet());
mCalendarPicker.setCalendar(calendar);
```
username_0: 

username_0: thanks 4 help dude
username_1: You are setting a time in the calendar in the second line, no idea what value that is.
But, CalendarPicker is not doing anything with time zones, it uses merely a locale to get the day names right. But it just renders the day, month, year, hour, minute, seconds that are in the calendar. Very curious what this line inserted before the setCalendar gives as a result:
```java
System.out.println(new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS").format(calendar.getTime()));
```
username_0: as u can see in the photos above, the clock starts from 2 and ends to 1:59. That seems a bug to me since it must start from 0 and end to 23:59
username_1: ohhhh, I see!
username_0: It happens when i switch the calendar from default to UTC and set it to CalendarPicker.
username_1: Indeed. Interesting. I can reproduce it now.
username_0: so when i should expect a patch? :))
username_1: This is open source, so never :-) But I am looking into it now
username_1: So the cause is in Calendar, but I have to think about why. If you execute this code:
```java
Calendar calendar = Calendar.getInstance(TimeZone.getTimeZone("UTC"), Locale.US);
System.out.println("H1=" + calendar.get(Calendar.HOUR_OF_DAY) + " " + new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS").format(calendar.getTime()));
GregorianCalendar calendar2 = new GregorianCalendar();
System.out.println("H2=" + calendar2.get(Calendar.HOUR_OF_DAY) + " " + new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS").format(calendar2.getTime()));
```
I get this output (it's just noon here):
```
H1=11 2018-12-28T12:14:45.468
H2=12 2018-12-28T12:14:45.476
```
What you see is that both format to the same time (12 o'clock), but if you query for the 24-hour value, they differ.
username_0: that's strange indeed. I figured out a workaround for me: I get the default time in millis and then extract the timezone raw offset. It works.
username_1: The formatter uses Date, which is not timezone aware, and thus always renders in the current timezone. The hour of day returns the value in the calendar's own timezone. Have to think about how to solve this best, because date and time picking is done timezone-unaware.
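For what it's worth, the mismatch can be reproduced in isolation. Below is a minimal, self-contained sketch (the class name and the pinned `Europe/Amsterdam` zone are mine, chosen only to make the output deterministic):

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;
import java.util.TimeZone;

public class HourOfDayDemo {
    // Returns {hour seen by a UTC calendar, hour seen by a default-zone calendar}
    // for the same instant (the 1970 epoch).
    static int[] hours() {
        TimeZone.setDefault(TimeZone.getTimeZone("Europe/Amsterdam")); // pin for determinism
        long instant = 0L; // 1970-01-01T00:00:00Z
        Calendar utc = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
        utc.setTimeInMillis(instant);
        Calendar local = Calendar.getInstance(); // default zone, UTC+1 at that instant
        local.setTimeInMillis(instant);
        return new int[] { utc.get(Calendar.HOUR_OF_DAY), local.get(Calendar.HOUR_OF_DAY) };
    }

    public static void main(String[] args) {
        int[] h = hours();
        // SimpleDateFormat formats via Date, i.e., always in the default zone,
        // so both calendars "look" identical even though HOUR_OF_DAY differs.
        SimpleDateFormat fmt = new SimpleDateFormat("HH:mm");
        System.out.println("utc=" + h[0] + " local=" + h[1] + " formatted=" + fmt.format(new Date(0L)));
        // prints: utc=0 local=1 formatted=01:00
    }
}
```

Both calendars format identically because formatting goes through `Date`, but `HOUR_OF_DAY` answers in each calendar's own zone - which is exactly the H1/H2 discrepancy.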
username_1: I think I have fixed it, but the release needs to pass all tests first. It is a fairly large change.
username_0: i see
Status: Issue closed
|
sgur/vim-editorconfig | 278029990 | Title: Path is not relative to root .editorconfig
Question:
username_0: Let's say the root directory name is `test`. This is minimal `test/.editorconfig`:
``` editorconfig
root = true
[test/foo.rb]
trim_trailing_whitespace = true
```
Open `test/foo.rb`, and try to save with trailing whitespace.
Since `test` is the root, trimming should be applied only to `test/test/foo.rb`, but `test/foo.rb` is also affected now.<issue_closed>
Status: Issue closed |
flutter/flutter | 635887881 | Title: Unhandled error MissingPluginException
Question:
username_0: I installed a qrcode plugin, and it said i have to update minsdkVersion to 21, add firebase and after i tried to run my app the message started to appear, since that change my app was working perfectly
[ +477 ms] I/flutter (26042): Error: AuthenticationBloc MissingPluginException(No implementation found for method getAll on channel
plugins.flutter.io/shared_preferences),
[ +6 ms] E/flutter (26042): [ERROR:flutter/lib/ui/ui_dart_state.cc(157)] Unhandled Exception: Unhandled error MissingPluginException(No implementation found for method
getAll on channel plugins.flutter.io/shared_preferences) occurred in bloc Instance of 'AuthenticationBloc'.
[ +1 ms] E/flutter (26042):
[ +1 ms] E/flutter (26042): #0 Bloc.onError.<anonymous closure> (package:bloc/src/bloc.dart:146:7)
[ +1 ms] E/flutter (26042): #1 Bloc.onError (package:bloc/src/bloc.dart:147:6)
[ +1 ms] E/flutter (26042): #2 _rootRunBinary (dart:async/zone.dart:1204:38)
[ ] E/flutter (26042): #3 _CustomZone.runBinary (dart:async/zone.dart:1093:19)
[ ] E/flutter (26042): #4 _CustomZone.runBinaryGuarded (dart:async/zone.dart:995:7)
[ ] E/flutter (26042): #5 _BufferingStreamSubscription._sendError.sendError (dart:async/stream_impl.dart:358:15)
[ +1 ms] E/flutter (26042): #6 _BufferingStreamSubscription._sendError (dart:async/stream_impl.dart:376:16)
[ +1 ms] E/flutter (26042): #7 _BufferingStreamSubscription._addError (dart:async/stream_impl.dart:275:7)
[ ] E/flutter (26042): #8 _SyncBroadcastStreamController._sendError.<anonymous closure> (dart:async/broadcast_stream_controller.dart:393:20)
[ ] E/flutter (26042): #9 _BroadcastStreamController._forEachListener (dart:async/broadcast_stream_controller.dart:327:15)
[ +1 ms] E/flutter (26042): #10 _SyncBroadcastStreamController._sendError (dart:async/broadcast_stream_controller.dart:392:5)
[ ] E/flutter (26042): #11 _BroadcastStreamController._addError (dart:async/broadcast_stream_controller.dart:294:5)
[ ] E/flutter (26042): #12 _rootRunBinary (dart:async/zone.dart:1204:38)
[ ] E/flutter (26042): #13 _CustomZone.runBinary (dart:async/zone.dart:1093:19)
[ ] E/flutter (26042): #14 _CustomZone.runBinaryGuarded (dart:async/zone.dart:995:7)
[ ] E/flutter (26042): #15 _BufferingStreamSubscription._sendError.sendError (dart:async/stream_impl.dart:358:15)
[ ] E/flutter (26042): #16 _BufferingStreamSubscription._sendError (dart:async/stream_impl.dart:376:16)
[ ] E/flutter (26042): #17 _BufferingStreamSubscription._addError (dart:async/stream_impl.dart:275:7)
[ ] E/flutter (26042): #18 _ForwardingStreamSubscription._addError (dart:async/stream_pipe.dart:139:11)
[ +1 ms] E/flutter (26042): #19 _ForwardingStream._handleError (dart:async/stream_pipe.dart:104:10)
[ ] E/flutter (26042): #20 _ForwardingStreamSubscription._handleError (dart:async/stream_pipe.dart:170:13)
[ +1 ms] E/flutter (26042): #21 _rootRunBinary (dart:async/zone.dart:1204:38)
[ ] E/flutter (26042): #22 _CustomZone.runBinary (dart:async/zone.dart:1093:19)
[ ] E/flutter (26042): #23 _CustomZone.runBinaryGuarded (dart:async/zone.dart:995:7)
[ ] E/flutter (26042): #24 _BufferingStreamSubscription._sendError.sendError (dart:async/stream_impl.dart:358:15)
[ ] E/flutter (26042): #25 _BufferingStreamSubscription._sendError (dart:async/stream_impl.dart:376:16)
[ ] E/flutter (26042): #26 _DelayedError.perform (dart:async/stream_impl.dart:605:14)
[ ] E/flutter (26042): #27 _StreamImplEvents.handleNext (dart:async/stream_impl.dart:710:11)
[ ] E/flutter (26042): #28 _PendingEvents.schedule.<anonymous closure> (dart:async/stream_impl.dart:670:7)
[ ] E/flutter (26042): #29 _rootRun (dart:async/zone.dart:1180:38)
[ ] E/flutter (26042): #30 _CustomZone.run (dart:async/zone.dart:1077:19)
[ ] E/flutter (26042): #31 _CustomZone.runGuarded (dart:async/zone.dart:979:7)
[ ] E/flutter (26042): #32 _CustomZone.bindCallbackGuarded.<anonymous closure> (dart:async/zone.dart:1019:23)
[ +2 ms] E/flutter (26042): #33 _rootRun (dart:async/zone.dart:1184:13)
[ ] E/flutter (26042): #34 _CustomZone.run (dart:async/zone.dart:1077:19)
[ +2 ms] E/flutter (26042): #35 _CustomZone.runGuarded (dart:async/zone.dart:979:7)
[ +1 ms] E/flutter (26042): #36 _CustomZone.bindCallbackGuarded.<anonymous closure> (dart:async/zone.dart:1019:23)
[ ] E/flutter (26042): #37 _microtaskLoop (dart:async/schedule_microtask.dart:43:21)
[ ] E/flutter (26042): #38 _startMicrotaskLoop (dart:async/schedule_microtask.dart:52:5)
[ +1 ms] E/flutter (26042):
name: staff
description: Hamco Staff App
# The following line prevents the package from being accidentally published to
# pub.dev using `pub publish`. This is preferred for private packages.
publish_to: 'none' # Remove this line if you wish to publish to pub.dev
# The following defines the version and build number for your application.
# A version number is three numbers separated by dots, like 1.2.43
# followed by an optional build number separated by a +.
# Both the version and the builder number may be overridden in flutter
# build by specifying --build-name and --build-number, respectively.
[Truncated]
# fonts:
# - family: Schyler
# fonts:
# - asset: fonts/Schyler-Regular.ttf
# - asset: fonts/Schyler-Italic.ttf
# style: italic
# - family: Trajan Pro
# fonts:
# - asset: fonts/TrajanPro.ttf
# - asset: fonts/TrajanPro_Bold.ttf
# weight: 700
#
# For details regarding fonts from package dependencies,
# see https://flutter.dev/custom-fonts/#from-packages
I'm using `fast_qr_reader_view`; the plugin appears in the GeneratedPluginRegistrant file.
Answers:
username_1: Hi @...
From what I can see, the issue is related to the 3rd party plugin [fast_qr_reader_view](https://pub.dev/packages/fast_qr_reader_view)
rather than to Flutter itself. Please open the issue in the dedicated [repository](https://github.com/facundomedica/fast_qr_reader_view/issues).
Closing, as this isn't an issue with Flutter itself. If you disagree, please write in the comments, possibly providing a minimal reproducible code sample that does not use 3rd party plugins, and I will reopen it.
Thank you
Status: Issue closed
|
cake-build/website | 225598643 | Title: Link Update - Visual Studio Extension
Question:
username_0: The link to the Visual Studio extension on [this page](http://cakebuild.net/docs/editors/visualstudio) points to a working, but outdated [link](https://visualstudiogallery.msdn.microsoft.com/). Instead something like [this](https://marketplace.visualstudio.com/items?itemName=vs-publisher-1392591.CakeforVisualStudio) should be used
Answers:
username_1: @username_0 are you in a position to send a PR through for this?
username_0: I'll draft something after work, sure
Status: Issue closed
|
jlippold/tweakCompatible | 349374647 | Title: `iCleaner Pro 64bit` working on iOS 11.3.1
Question:
username_0: ```
{
"packageId": "com.repo.xarold.com.icleanerpro",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.repo.xarold.com.icleanerpro",
"deviceId": "iPhone7,1",
"url": "http://cydia.saurik.com/package/com.repo.xarold.com.icleanerpro/",
"iOSVersion": "11.3.1",
"packageVersionIndexed": true,
"packageName": "iCleaner Pro 64bit",
"category": "Utilities",
"repository": "Xarold Repo",
"name": "iCleaner Pro 64bit",
"installed": "7.7.0k",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "com.repo.xarold.com.icleanerpro",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.0",
"shortDescription": "The first real iOS system cleaner & optimizer",
"latest": "7.7.0k",
"author": "<NAME>",
"packageStatus": "Unknown"
},
"base64": "<KEY>",
"chosenStatus": "working",
"notes": ""
}
``` |
Praqma/helmsman | 728642137 | Title: extractChartName fails when only devel versions of a chart are published
Question:
username_0: Helmsman fails to get the chart information when only devel version (such x.y.z-rc0) are published.
```
2020-10-22 19:56:51 INFO: Adding helm repository [ my-repo ]
2020-10-22 19:56:54 INFO: Validating [ my-repo/my-chart ] chart's version [ 0.22.0-SNAPSHOT.15806 ] availability
6932020-10-22 19:56:54 DEBUG: helm search repo my-repo/my-chart --version 0.22.0-SNAPSHOT.15806 -l
2020-10-22 19:57:25 INFO: Extracting chart information for [ my-repo/my-chart ]
8172020-10-22 19:57:25 DEBUG: helm show chart my-repo/my-chart
8182020-10-22 19:57:26 INFO: Getting latest non-local chart's version my-repo/my-chart-0.22.0-SNAPSHOT.15806
8192020-10-22 19:57:26 DEBUG: helm search repo my-repo/my-chart --version 0.22.0-SNAPSHOT.15806 -o json
2020-10-22 19:57:28 CRITICAL: While getting chart information: Error: failed to download "my-repo/my-chart" (hint: running `helm repo update` may help)
```
This doesn't happen when an X.Y.Z version of the chart is available in the repo.
It seems to come from the `extractChartName(releaseChart string)` function, which uses the `helm show chart` command without the `--devel` argument.
Status: Issue closed |
AmericanAirlines/Hangar | 1031687258 | Title: `Prizes` component
Question:
username_0: ### Pre-requisites:
- [x] I looked through both [open and closed issues](../issues?utf8=✓&q=is%3Aissue) and did not find another request for the same or similar feature.
## Description
<!-- A clear and concise description of what you want to happen. Consider user story format. -->
Pull the prize data from #388 and display prize data, including a header and rows for each item. If items with `isBonus == true` are included, display a separate section below the other prizes. Each item should display a description if one is provided.
```
/components/Prizes/
index.ts (only exports `Prizes`)
PrizeRow.tsx (with `variant = primary` prop, `secondary` for bonus prizes)
Prizes.tsx
```
Dependent on #390<issue_closed>
Status: Issue closed |
w3c/ttml1 | 194649728 | Title: Typos in Examples
Question:
username_0: In Appendix O [1], there are a number of typos in the examples, e.g., bad </p end tag, bad xmlns declaration (for ttp).
[1] https://www.w3.org/TR/ttaf1-dfxp/#common-styling
Reported by <NAME>.
Status: Issue closed
Answers:
username_0: Fixed in master branch. Need to follow-up with errata (adding label).
username_1: Please could you add a commit reference? Reopening since more work is required (i.e. the errata update).
username_1: In Appendix O [1], there are a number of typos in the examples, e.g., bad </p end tag, bad xmlns declaration (for ttp).
[1] https://www.w3.org/TR/ttaf1-dfxp/#common-styling
Reported by <NAME>.
username_0: See 2977bd78cb53ae535111e1fed24fbe9e10ff5c80 and c099a534b7ec03f38e196c8bbd02a4da813ab638.
username_1: Thanks!
username_0: Reclosing. See #226.
username_1: OK fine as long as this is tracked. I'll _actually_ close it now...
Status: Issue closed
|
libreyextremo/kaona | 264836933 | Title: Map component
Question:
username_0: A map component will help users to see where workarts are.
Improvements:
1.- In the first version, it will show a map.
2.- In the second version, the browser will read the GPS position and the map will show the area referenced by these coordinates.
3.- In the third version, the map will show a workart spot on the map.
4.- In the fourth version, the map will show all the workart spots that it gets from the result table.
71stSOG/71stSOG-Insurgency | 87511203 | Title: JWC_CASFS not workign 100%
Question:
username_0: suppose to have 3 cas calls per team leader but unless i mistaken it only allows 1 currently
Answers:
username_1: Why not let clients use ALiVE's CAS calls?
username_0: This is a separate CAS call; it calls in an A-10 to launch JDAM strikes. Much easier, and it can't be done with ALiVE.
username_0: this is lower on the list of things to get fixed tho
username_0: username_0 aka roy: hey man, gotta ask you how the heck you made that basass add on for the jwc_casfs spent hours trying to figure
Jigsor=BMR=: hey man, don't know what you mean by addon exactly. All I've done was add a repawn eventhandler that starts the script again on guys listed in array INS_W_PlayerJTAC. Other than that the script is in original form except class type of plane was changed
Jigsor=BMR=: have a look at function JIG_p_actions_resp in client_fncs.sqf and how the funtion is called initially and then with repaswn eventhandler in init_player.sqf
we need to mimic that for our mission
username_0: Its been so long since I really got itno inner workings of the script. Before BMR insurgency even. I just can't remember. Just had a look thruogh the scripts, I may be missing it but can't seem to find a refference to time/cool down period. I did however replace mapclick with stackedeventhandler mapclick for mod compatibility though I've not released this version yet.
Jigsor=BMR= is currently offline, they will receive your message the next time they log in.
username_0 aka roy: rgr
Jigsor=BMR=: I think it may not have a time refference, but instead just waits for script to complete maybe before next call available. Not really sure. Its just not obvious
username_0:
```sqf
waitUntil{_buzz distance _object >= 2000 || !alive _buzz};
{
    _num = _num - 1;
    deleteVehicle vehicle _x;
    deleteVehicle _x;
    sleep 300; // delay / cool-off period timer
    [_object, _distance, _doLock, _num] execVM "JWC_CASFS\addAction.sqf"
} forEach units _grp;
```
OpenNMT/OpenNMT-py | 326455859 | Title: Documentation out of date?
Question:
username_0: I am trying to build a simple network and train it by following this tutorial: http://opennmt.net/OpenNMT-py/Library.html
But I am getting this error:
```
ValueError Traceback (most recent call last)
<ipython-input-4-513b7ee0d962> in <module>()
1 optim = onmt.Optim(method="sgd", lr=1, max_grad_norm=2)
----> 2 optim.set_parameters(model.parameters())
/Users/rahul/code/OpenNMT-py/onmt/Optim.py in set_parameters(self, params)
70 self.params = []
71 self.sparse_params = []
---> 72 for k, p in params:
73 if p.requires_grad:
74 if self.method != 'sparseadam' or "embed" not in k:
ValueError: too many values to unpack (expected 2)
```
when I execute `optim.set_parameters(model.parameters())`
Answers:
username_1: Indeed.
Use instead: `optim.set_parameters(model.named_parameters())`.
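The traceback can be reproduced with a few lines of plain Python (toy stand-ins, not the actual OpenNMT classes): `set_parameters` unpacks `(name, param)` pairs, which is what `named_parameters()` yields, so feeding it bare parameter values makes the tuple unpacking fail.

```python
# Toy stand-in for Optim.set_parameters: it iterates (name, param) pairs.
def set_parameters(params):
    kept = []
    for k, p in params:  # fails if items aren't 2-element pairs
        kept.append((k, p))
    return kept

named = [("encoder.weight", [0.1, 0.2, 0.3]), ("decoder.bias", [0.4])]
print(set_parameters(named))  # works: named_parameters()-style input

try:
    # parameters()-style input: bare values, no names
    set_parameters([[0.1, 0.2, 0.3]])
except ValueError as e:
    print("ValueError:", e)  # too many values to unpack (expected 2)
```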
username_0: @username_1 thanks for your quick reply. That solved the problem, but I ran into couple of others.
Running `data = torch.load("../../data/data.train.pt")` gave me an error saying `FileNotFoundError: [Errno 2] No such file or directory: '../../data/data.train.pt'`. When I went into the data directory, I saw that the pre-processed file is named `data.train.1.pt` and not `data.train.pt`. A simple fix.
But, I was not able to solve the below problem.
Running `trainer.train(epoch, report_func)` gives me the following error:
```
TypeError Traceback (most recent call last)
<ipython-input-10-2a4f8612446d> in <module>()
7
8 for epoch in range(2):
----> 9 trainer.train(epoch, report_func)
10 val_stats = trainer.validate()
11
/Users/rahul/code/OpenNMT-py/onmt/Trainer.py in train(self, train_iter, epoch, report_func)
153 try:
154 add_on = 0
--> 155 if len(train_iter) % self.grad_accum_count > 0:
156 add_on += 1
157 num_batches = len(train_iter) / self.grad_accum_count + add_on
TypeError: object of type 'int' has no len()
```
username_1: It seems that you are passing `epoch` as the first parameter when it should be `train_iter`.
username_0: Then it says this:
```
TypeError Traceback (most recent call last)
<ipython-input-14-2a4f8612446d> in <module>()
7
8 for epoch in range(2):
----> 9 trainer.train(epoch, report_func)
10 val_stats = trainer.validate()
11
/Users/rahul/code/OpenNMT-py/onmt/Trainer.py in train(self, train_iter, epoch, report_func)
153 try:
154 add_on = 0
--> 155 if len(train_iter) % self.grad_accum_count > 0:
156 add_on += 1
157 num_batches = len(train_iter) / self.grad_accum_count + add_on
TypeError: object of type 'int' has no len()
```
Therefore, I changed the arguments to adhere to [this](https://github.com/OpenNMT/OpenNMT-py/blob/master/onmt/Trainer.py#L137) function.
username_1: Your code is still `trainer.train(epoch, report_func)`, it should be `train(train_iter, epoch, report_func)` where `train_iter` is an iterator over the training batches.
Status: Issue closed
|
caarlos0/env | 820797429 | Title: Need help understanding how to use the custom parser
Question:
username_0: I have a Metadata type that is defined by pulling in values from different environment variables. There's one field on the struct that is of a custom Tag type, which consists of a key and a value.
```
(root) ~ # echo $ENV_TAGS
[{"foo1":"bar1"},{"foo2":"bar2"}]
```
```go
type Metadata struct {
ID string `env:"ENV_ID" json:"id,omitempty"`
Tags []Tag `env:"ENV_TAGS" json:"tags,omitempty"`
}
type Tag struct {
Key string `json:"key,omitempty"`
Value string `json:"value,omitempty"`
}
func (i *Metadata) Parse() error {
metadataParser := env.ParserFunc(parseMetadata)
metadataType := reflect.TypeOf((*Metadata)(nil))
customParsers := env.CustomParsers{}
customParsers[metadataType] = metadataParser
if err := env.ParseWithFuncs(i, customParsers); err != nil {
return err
}
return nil
}
func parseMetadata(metadataJSON string) (Metadata, error) {
return Metadata{}, nil
}
```
On the line where `metadataParser` is defined, I'm getting the following error message: `cannot convert ParseMetadata (value of type func(metadataJSON string) (Metadata, error)) to env.ParserFunc`. Any help would be greatly appreciated! I need to get past the error message; I simply need to use the custom parsers to set the environment variables on the proper fields and convert the map[string]string(s) of key-value pairs into a slice of Tags on the Metadata type.
Answers:
username_1: There is an example in the tests:
https://github.com/username_1/env/blob/master/env_test.go#L732-L769
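For illustration, here is a self-contained sketch of what a compatible parser can look like. The key point is the return type: judging from the compiler error, `env.ParserFunc` has the shape `func(string) (interface{}, error)`, so a function returning a concrete `(Metadata, error)` cannot be converted to it. `ParserFunc` is re-declared locally below so the snippet runs on its own; check the linked tests for the library's authoritative form. The `Tag` shape and sample JSON follow the question.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ParserFunc mirrors the env package's parser signature (assumption based on
// the error message): it must return (interface{}, error), not a concrete type.
type ParserFunc func(v string) (interface{}, error)

type Tag struct {
	Key   string `json:"key,omitempty"`
	Value string `json:"value,omitempty"`
}

// parseTags turns `[{"foo1":"bar1"},{"foo2":"bar2"}]` into a []Tag.
// Because it returns interface{}, it converts cleanly to ParserFunc.
func parseTags(v string) (interface{}, error) {
	var raw []map[string]string
	if err := json.Unmarshal([]byte(v), &raw); err != nil {
		return nil, err
	}
	var tags []Tag
	for _, m := range raw {
		for k, val := range m {
			tags = append(tags, Tag{Key: k, Value: val})
		}
	}
	return tags, nil
}

func main() {
	var p ParserFunc = parseTags // compiles: signatures match exactly
	out, err := p(`[{"foo1":"bar1"},{"foo2":"bar2"}]`)
	if err != nil {
		panic(err)
	}
	fmt.Println(out.([]Tag)) // [{foo1 bar1} {foo2 bar2}]
}
```

With the real library, the idea would then be to register the parser for the slice type, e.g. `customParsers[reflect.TypeOf([]Tag{})] = parseTags`, so that the `Tags []Tag` field is handled (the registration shape is an assumption; the linked tests show the exact API).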
username_1: closing as no response since months ago...
Status: Issue closed
|
ContinualAI/avalanche | 1046905252 | Title: Omniglot dataset using MLP model got shape error during training.
Question:
username_0: 🐛 **Describe the bug**
A clear and concise description of what the bug is.
🐜 **To Reproduce**
Steps / minimal snipped of code to reproduce the issue.
🐝 **Expected behavior**
A clear and concise description of what you expected to happen.
🐞 **Screenshots**
If applicable, add screenshots to help explain your problem.
🦋 **Additional context**
Add any other context about the problem here like your python setup.
Answers:
username_0: 
Attach is the colab file
[https://colab.research.google.com/drive/1Xdh56wVxrym0nC8rxZscg2sLcBxFQ8e6#scrollTo=_pbWXbiZYGo1](url)
username_1: We need authorization to access the colab. Please, check that the input size of your model matches the Omniglot input size (e.g., if you are using the `SimpleMLP` you may want to change the input size when creating it).
username_0: Hi, please try it again; you should be able to access it now. I did try to change the input size to 28 x 28, but I'm not sure about the batch size needed for a total input of size 352800.
itchio/itch | 711323198 | Title: Html5 Unity WebGL not working.
Question:
username_0: Unity WebGL games won't load when using the itch.io app.
On Windows 10 using itch.io app, and using game [digidream-1989](https://elijahwilliams.itch.io/digidream-1989)
Answers:
username_1: Likely related to #2496
username_2: Similar thing happening with [Krali](https://marquesrj.itch.io/krali). But in our case it doesn't just hang, it gives an error.
I confirmed and both [Krali](https://marquesrj.itch.io/krali) and digidream-1989 work in Kitch.

Cheers |
OpenExoplanetCatalogue/open_exoplanet_catalogue | 53403015 | Title: Add mass of Kepler-78
Question:
username_0: http://arxiv.org/abs/1501.00369
Answers:
username_1: Also check the paper referenced in #284 (probably best to resolve both of these in one go)
username_2: Just saw your comment after committing. That's what I was thinking of however they mentioned in the paper Hanno posted in this issue that the reason they don't rely so heavily on previous papers is due to the deviation being too big. That's the same reason they barely used any of the properties given by Sanchis-Ojedas et al.
Status: Issue closed
username_0: http://arxiv.org/abs/1501.00369
username_0: I've closed #284, but we still need to complete the entry for this system. Specifically, go over the paper http://arxiv.org/abs/1407.0853
Why did we not add this one in 2013?
Status: Issue closed
|
dotnet/docs | 863430884 | Title: Suggestion to hyperlink "App secrets" to relevant article
Question:
username_0: The same way it is done here:
https://docs.microsoft.com/en-us/aspnet/core/fundamentals/configuration/?view=aspnetcore-5.0
```1. [App secrets](xref:security/app-secrets) when the app runs in the `Development` environment.```
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: b458a209-339d-e896-4e85-6d1c6a7aa1f9
* Version Independent ID: 0d5de37c-8f89-b6a5-3132-2c5214196bfd
* Content: [Configuration in .NET](https://docs.microsoft.com/en-us/dotnet/core/extensions/configuration)
* Content Source: [docs/core/extensions/configuration.md](https://github.com/dotnet/docs/blob/main/docs/core/extensions/configuration.md)
* Product: **dotnet-fundamentals**
* GitHub Login: @username_1
* Microsoft Alias: **dapine**<issue_closed>
Status: Issue closed |
taichi-dev/taichi | 1037500883 | Title: Support frontend type check
Question:
username_0: **Concisely describe the proposed feature**
Currently, Taichi doesn't have frontend type check. All type related stuff are not handled until Taichi kernels are turned into CHI IRs. This causes the following issues:
1. Type error messages are not readable. It is very hard for users to distinguish between a user error and a compiler internal error, as they always see a bunch of purple stuff. Even for an experienced user, in some cases it is almost impossible to figure out the connections between the reported CHI IR-level errors and the original Taichi kernels.
2. It is not possible to support local variable definition requiring type inference. For example, we may want a local array with only initial values (which may or may not be constants) provided.
To solve these issues, I hope to make Taichi support frontend type check.
**Describe the solution you'd like (if any)**
Add `ret_type` (return type) member variable and `type_check` member function for the `Expression` class (which represents frontend expressions) in Taichi core, and call `type_check` when a Python-side `Expr` is constructed. Note that the type inference/check rules need to be consistent with the `type_check` pass of CHI IR. Our goal is to infer all frontend types and report all type errors at this stage, and the `type_check` pass of CHI IR should only serve the compiler itself.
Progress:
- [x] ArgLoadExpression
- [x] RandExpression
- [x] UnaryOpExpression
- [x] BinaryOpExpression
- [x] TernaryOpExpression
- [x] InternalFuncCallExpression
- [x] ExternalFuncCallExpression
- [x] ExternalTensorExpression
- [x] GlobalVariableExpression
- [x] GlobalPtrExpression
- [x] TensorElementExpression
- [x] <del>EvalExpression</del>
- [x] RangeAssumptionExpression
- [x] LoopUniqueExpression
- [x] IdExpression
- [x] AtomicOpExpression
- [x] SNodeOpExpression
- [x] LocalLoadExpression
- [x] GlobalLoadExpression
- [x] ConstExpression
- [x] ExternalTensorShapeAlongAxisExpression
- [x] FuncCallExpression
Status: Issue closed
fluxcd/flux | 635417774 | Title: Invalid revision range after force-pushing to branch
Question:
username_0: **Describe the bug**
After rebasing and force-pushing to the git branch that flux is configured for, flux complains about an
`Invalid revision range`, stops working, and doesn't recover from this failed state.
**To Reproduce**
Steps to reproduce the behaviour:
1. Install flux via helm:
`helm upgrade --install --repo "https://charts.fluxcd.io" --namespace oas --version 1.3.0 --set git.url="https://open.greenhost.net/username_0/openappstack " --set git.branch="username_0" --set git.path="flux" --set git.readonly=true --set registry.excludeImage='*' --set sync.state="secret" --set syncGarbageCollection.enabled=true --set manifestGeneration=true --set git.pollInterval=1h flux flux `
2. Provide a GitHub repository with Kubernetes manifests: See https://open.greenhost.net/username_0/openappstack
3. Rebase the configured git branch and force push
**Expected behavior**
The rebased, force-pushed branch should get checked out without any problem and the flux resources should get applied.
**Logs**
If applicable, please provide logs of `fluxd`. In a standard stand-alone installation of Flux, you'd get this by running `kubectl logs deploy/flux -n flux`.
```
Flag --git-verify-signatures has been deprecated, changed to --git-verify-signatures-mode, use that instead
ts=2020-06-09T12:51:42.103797283Z caller=main.go:259 version=1.19.0
ts=2020-06-09T12:51:42.103946496Z caller=main.go:303 warning="configuring any of {git-user, git-email, git-set-author, git-ci-skip} has no effect when --git-readonly is set"
ts=2020-06-09T12:51:42.10410687Z caller=main.go:412 msg="using kube config: \"/root/.kube/config\" to connect to the cluster"
ts=2020-06-09T12:51:42.160811998Z caller=main.go:498 host=https://10.43.0.1:443 version=kubernetes-v1.18.3+k3s1
ts=2020-06-09T12:51:42.161113171Z caller=main.go:510 kubectl=/usr/local/bin/kubectl
ts=2020-06-09T12:51:42.163756801Z caller=main.go:527 ping=true
ts=2020-06-09T12:51:42.170570349Z caller=main.go:666 url=https://@open.greenhost.net/username_0/openappstack user="<NAME>" email=<EMAIL> signing-key= verify-signatures-mode=none sync-tag=flux-sync state=secret readonly=true registry-disable-scanning=false notes-ref=flux set-author=false git-secret=false sops=false
ts=2020-06-09T12:51:42.177066044Z caller=main.go:772 upstream="no upstream URL given"
ts=2020-06-09T12:51:42.179815008Z caller=main.go:795 addr=:3030
ts=2020-06-09T12:51:42.180623224Z caller=loop.go:61 component=sync-loop info="Repo is read-only; no image updates will be attempted"
ts=2020-06-09T12:51:42.180929767Z caller=loop.go:107 component=sync-loop err="git repo not ready: git repo has not been cloned yet"
ts=2020-06-09T12:51:42.922683297Z caller=checkpoint.go:24 component=checkpoint msg="up to date" latest=1.19.0
ts=2020-06-09T12:51:50.349362748Z caller=loop.go:133 component=sync-loop event=refreshed url=https://@open.greenhost.net/username_0/openappstack branch=username_0 HEAD=dff92ab3eb73a84ab0f1dd9049eef24da5d08239
ts=2020-06-09T12:51:50.548001206Z caller=loop.go:107 component=sync-loop err="fatal: Invalid revision range 02f68e40212176ecf0da8955de178f75c5213f98..dff92ab3eb73a84ab0f1dd9049eef24da5d08239, full output:\n fatal: Invalid revision range 02f68e40212176ecf0da8955de178f75c5213f98..dff92ab3eb73a84ab0f1dd9049eef24da5d08239\n"
```
`dff92ab3eb73a84ab0f1dd9049eef24da5d08239` is the current HEAD of the branch, whereas `02f68e40212176ecf0da8955de178f75c5213f98` is probably the old HEAD before rebasing.
What makes me wonder is where the old ref is stored: I deleted both the flux and flux-memcached pods, so I don't understand where any state could be persisted.
**Additional context**
- Flux version: 1.19.0
- Kubernetes version: v1.18.3+k3s1
- Git provider: self-hosted gitlab
- Container registry provider: several (doesn't apply here)
Answers:
username_0: There are also #696 and #859, which are related.
username_0: So something is really weird here. I changed the branch back to master and the git repo url to the upstream one (https://open.greenhost.net/openappstack/openappstack/), deleted all flux-related pods (`kubectl -n oas delete pods -l 'app in (helm-operator, flux, flux-memcached)'`), but flux still complains:
```
root@oas:~# kubectl -n oas logs flux-dd5849c44-zfxpv
Flag --git-verify-signatures has been deprecated, changed to --git-verify-signatures-mode, use that instead
ts=2020-06-09T14:55:18.366656036Z caller=main.go:259 version=1.19.0
ts=2020-06-09T14:55:18.366810638Z caller=main.go:303 warning="configuring any of {git-user, git-email, git-set-author, git-ci-skip} has no effect when --git-readonly is set"
ts=2020-06-09T14:55:18.366998283Z caller=main.go:412 msg="using kube config: \"/root/.kube/config\" to connect to the cluster"
ts=2020-06-09T14:55:18.417295073Z caller=main.go:498 host=https://10.43.0.1:443 version=kubernetes-v1.18.3+k3s1
ts=2020-06-09T14:55:18.41751348Z caller=main.go:510 kubectl=/usr/local/bin/kubectl
ts=2020-06-09T14:55:18.419152028Z caller=main.go:527 ping=true
ts=2020-06-09T14:55:18.422770359Z caller=main.go:666 url=https://@open.greenhost.net/openappstack/openappstack user="<NAME>" email=<EMAIL> signing-key= verify-signatures-mode=none sync-tag=flux-sync state=secret readonly=true registry-disable-scanning=false notes-ref=flux set-author=false git-secret=false sops=false
ts=2020-06-09T14:55:18.437519917Z caller=main.go:772 upstream="no upstream URL given"
ts=2020-06-09T14:55:18.437849626Z caller=loop.go:61 component=sync-loop info="Repo is read-only; no image updates will be attempted"
ts=2020-06-09T14:55:18.438012705Z caller=loop.go:107 component=sync-loop err="git repo not ready: git repo has not been cloned yet"
ts=2020-06-09T14:55:18.439842592Z caller=main.go:795 addr=:3030
ts=2020-06-09T14:55:19.669361686Z caller=checkpoint.go:24 component=checkpoint msg="up to date" latest=1.19.0
ts=2020-06-09T14:55:26.967553335Z caller=loop.go:133 component=sync-loop event=refreshed url=https://@open.greenhost.net/openappstack/openappstack branch=master HEAD=6810ea7782d810bd45c0c3f32bd63ed7d240eb32
ts=2020-06-09T14:55:27.171488323Z caller=loop.go:107 component=sync-loop err="fatal: Invalid revision range 02f68e40212176ecf0da8955de178f75c5213f98..6810ea7782d810bd45c0c3f32bd63ed7d240eb32, full output:\n fatal: Invalid revision range 02f68e40212176ecf0da8955de178f75c5213f98..6810ea7782d810bd45c0c3f32bd63ed7d240eb32\n"
```
We have the same repo, using the master branch, configured for flux in exactly the same way, and there it works fine. What's going on?
username_1: Your setup uses `sync.state: secret`, which means that the "high water mark", which tracks up to which git revision flux followed your repo, is stored as an annotation on a Kubernetes secret on your cluster. So even if you removed all flux pods, that piece of information still remains, and it refers to a git commit that flux can probably no longer access.
I'm not sure what the easiest recovery would be. If you can get back the git repo in a state that includes the commit flux knows about, that would be my first bet. Otherwise you could manually change the annotation, though you'd have to think about the commit you replace it with: flux will try to apply the diffs between that commit and the latest one on your branch. If you're OK with a more radical solution, you could also remove everything flux-related, including the secret, and reinstall it.
username_0: Awesome, thanks!
Editing the secret and restarting the flux* pods afterwards fixed it!
```
kubectl -n oas edit secret flux-git-deploy
kubectl -n oas delete pods -l 'app in (helm-operator, flux, flux-memcached)'
```
username_0: I would still like flux to recover from force pushes without manual intervention.
username_2: Flux v1 has issues with force push, especially when using the secret sync-store. That is something I'm coming to understand is a common report in Flux v1. Maybe it can be added to the FAQ. Thanks for the report.
Closing, as it sounds like this was resolved for the submitter. Flux v1 is in maintenance mode and we are recommending all users start upgrading to the completely rewritten and redesigned Flux v2; check out https://toolkit.fluxcd.io for more info!
Thanks for using Flux. 🎶
Status: Issue closed
danielgtaylor/node-desktop-uploader | 83082817 | Title: support of file chunk and resume
Question:
username_0: does it support sending files in chunks, and can I resume posting a file to the server in case of an internet interruption?
Answers:
username_1: @username_0 it does not support chunked multipart uploads (this requires special code on the server to concatenate the parts into the final file). In the case of Internet interruption the file will be retried until all retries are exhausted, then it will fail. You can try to detect network interruption and pause the uploader from your code, or listen for error events and track the files which didn't upload that way. Hope this helps!
username_0: thanks.
Status: Issue closed
username_1: No problem. Let me know if you need anything else!
tobi-wan-kenobi/bumblebee-status | 651336508 | Title: Read theme & modules from the config file
Question:
username_0: ### Feature Request
<!-- Fill in the relevant information below to help triage your issue. -->
Currently the config file only supports configuring individual modules for bumblebee-status.
But I wanted to also specify my theme & list of modules right in the config file. I tried searching for a way to do it but couldn't find any. Having these values in a default config file saves the effort of maintaining consistency if the user invokes "bumblebee-status" from different application scripts.
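For illustration, the kind of file this request envisions could look like the following. The section and key names here are my assumptions for the sake of the example, not the actual implemented format:

```ini
# ~/.config/bumblebee-status/config -- hypothetical layout
[core]
modules = cpu memory battery datetime
theme = solarized-powerline

[module-parameters]
datetime.format = %H:%M
```

With something like this in place, every script that launches bumblebee-status would pick up the same theme and module list without repeating command-line flags.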
Answers:
username_0: @username_1 : I could not find any developer documentation for contributing to the project itself. Can you at least specify the minimal steps for generating a build?
username_1: That's a nice FRQ, thanks!
OK, so that is true, I didn't document that. You don't need a build, just check out the git and run bumblebee-status :)
Status: Issue closed
username_2: Can someone add some documentation for this?? I am really excited for this feature.
username_1: should be in the "Features" section now |
heroku/cli | 226105179 | Title: Can't suspend single accounts. Bulk suspend works.
Question:
username_0: When suspending single accounts, we get a `Forbidden` error. Reinstalling the CLI does not solve the issue. We believe it may be related to [this](https://github.com/heroku/heroku-sudo/pull/104)
Answers:
username_1: I have a feeling that this is because the single user suspend is essentially masquerading as the user and self-suspending... but that user won't have access to the bulk suspend route.
All that said, @username_0 can you run the command that's failing with `HEROKU_DEBUG=1 HEROKU_DEBUG_HEADERS=1` and share the output?
username_0: ```
MacBook-Pro:~ uholmes$ HEROKU_DEBUG=1 hs user:suspend -u <EMAIL> --notes "https://github.com/heroku/secops/issues/405"
heroku-cli/5.9.0-1b8deac (darwin-amd64) go1.7.5 /usr/local/Cellar/heroku/5.9.0/libexec/bin/heroku cmd: sudo
running heroku user:suspend --notes https://github.com/heroku/secops/issues/405 with
HEROKU_HEADERS={"X-Heroku-Sudo":"true","X-Heroku-Sudo-User":"<EMAIL>"}
HEROKU_SUDO=1
HEROKU_API_KEY=<KEY>
HEROKU_USER=<EMAIL>
heroku-cli/5.9.0-1b8deac (darwin-amd64) go1.7.5 /usr/local/Cellar/heroku/5.9.0/libexec/bin/heroku cmd: user:suspend
--> PUT /abuse/bulk-suspend
--> users%5B%5D=<EMAIL>%4<EMAIL>¬es=https%3A%2F%2Fgithub.com%2Fheroku%2Fsecops%2Fissues%2F405
Suspending <EMAIL>... <-- 403 Forbidden
<-- {"id":"forbidden","message":"Forbidden"}
!!!
▸ Forbidden
Error: Expected response to be successful, got 403
at Request.handleFailure (/Users/uholmes/.local/share/heroku/plugins/node_modules/heroku-client/lib/request.js:284:9)
at /Users/uholmes/.local/share/heroku/plugins/node_modules/heroku-client/lib/request.js:163:14
at IncomingMessage.<anonymous> (/Users/uholmes/.local/share/heroku/plugins/node_modules/heroku-client/lib/request.js:392:5)
at emitNone (events.js:91:20)
at IncomingMessage.emit (events.js:188:7)
at endReadableNT (_stream_readable.js:975:12)
at _combinedTickCallback (internal/process/next_tick.js:80:11)
at process._tickCallback (internal/process/next_tick.js:104:9)
From previous event:
at Object.action (/Users/uholmes/.local/share/heroku/plugins/node_modules/heroku-cli-util/lib/action.js:13:18)
at run (/Users/uholmes/.local/share/heroku/plugins/node_modules/heroku-sudo/commands/user/suspend.js:54:15)
at run.next (<anonymous>)
at onFulfilled (/Users/uholmes/.local/share/heroku/plugins/node_modules/co/index.js:65:19)
at /Users/uholmes/.local/share/heroku/plugins/node_modules/co/index.js:54:5
at co (/Users/uholmes/.local/share/heroku/plugins/node_modules/co/index.js:50:10)
at createPromise (/Users/uholmes/.local/share/heroku/plugins/node_modules/co/index.js:30:15)
at createPromise (/Users/uholmes/.local/share/heroku/plugins/node_modules/co/index.js:30:29)
MacBook-Pro:~ uholmes$ HEROKU_DEBUG_HEADERS=1 hs user:suspend -u <EMAIL> --notes "https://github.com/heroku/secops/issues/405"
accept=application/json
content-type=application/x-www-form-urlencoded
user-agent=heroku-cli/5.9.0-1b8deac (darwin-amd64) go1.7.5 heroku-sudo/4.4.9 node-v7.8.0
range=id ]..; max=1000
x-heroku-sudo-user=<EMAIL>
x-heroku-sudo=true
host=api.heroku.com
authorization=REDACTED
Suspending <EMAIL>... content-length=40
content-type=application/json
date=Wed, 03 May 2017 21:16:29 GMT
oauth-scope=global
oauth-scope-accepted=global
request-id=d9826ff9-22f9-4f86-610b-2e61a1687980,e3c08c5b-98da-78fc-d302-d3cb774cda8a
vary=Accept-Encoding
via=1.1 spaces-router (98e0b14051f4), 1.1 spaces-router (98e0b14051f4)
x-content-type-options=nosniff
x-runtime=0.100922498
!!!
▸ Forbidden
```
username_0: @username_2
username_2: @chadbailey59 following the lunch discussion - halp?
username_3: I want to bump this issue. It hasn't been touched in 18 days and it is really impairing our ability to do our work. Is there any way that we can partner on this and try to get it fixed in the next couple days?
The secops team suspends anywhere from 7 to 20 accounts every day. And right now instead of just typing our suspend command we have to first write a file with the user name and then issue the suspend command. I know it doesn't seem like much, and it's fine for a week or so while it's being worked on, but it is a big pain in the butt and we really need it fixed.
username_4: @username_3 @username_1 worked through this yesterday and arrived at a solution that should hopefully work. Could you please run `heroku update`, then `heroku plugins`, verify that you have version `4.4.12`, and then try again?
username_3: @username_4 cli is being weird for me. I know that I have more plugins than this
```
kthomp-ltm:~ kevin.thompson$ heroku update
heroku-cli: Updating heroku-slugs... done
kthomp-ltm:~ kevin.thompson$ heroku plugins
=== Installed Plugins
heroku-lunch
heroku-repo
```
username_4: @username_3 can you paste in the output of `heroku version`?
username_3: ```
heroku version
heroku-toolbelt/3.43.9999 (x86_64-darwin10.8.0) ruby/1.9.3
heroku-cli/5.9.9-b7d5539 (darwin-amd64) go1.7.5
=== Installed Plugins
heroku-lunch
heroku-repo
```
username_4: @username_0 you need to delete your authorization, I can see it in the debug output.
username_4: @username_3 What is the output of `cat ~/.local/share/heroku/plugins/plugins.json | jq .\[\].name` ? If you do not have `jq` installed, just send me the `name` attributes of the array.
username_3: @username_4
```
kevin.thompson$ cat ~/.local/share/heroku/plugins/plugins.json | jq .\[\].name
"heroku-sudo"
"heroku-event-log"
"heroku-cli-oauth"
"heroku-repo"
"heroku-slugs"
```
username_4: @username_3 If you do not depend on home-built Ruby plugins, could you re-install the CLI from https://cli.heroku.com ? I can see in the version output that you still have the old Ruby CLI bits installed and I want to eliminate that from the possibilities while debugging (and your CLI will also be faster)
username_0: @username_4 During troubleshooting (before placing this issue) I reinstalled the CLI, and I'm pretty sure at least one other SecOps member did as well. Just FYI.
username_5: @username_4 the toolbelt no longer displays plugins from v5 or v6; this is a known issue we will not be fixing.
username_3: ok I've reinstalled CLI and I've got sudo v 4.4.12. Now I just need to find someone that needs suspending. This always happens. Usually I'm swimming in people that need suspending, but when I need to test something everything goes quiet.
username_3: ok, it seems like it's working for me.
Status: Issue closed
CyclopsMC/EvilCraft | 169657566 | Title: [1.10.2] Crash on Startup
Question:
username_0: Forge log link: http://pastebin.com/a5j4U9jD
Crash-Report: http://pastebin.com/uz2q4ktg
Problem: Crash on Startup
Expected behaviour: No crash on startup
Steps to reproduce the problem:
- Start Minecraft
- Wait
- See Crash
Answers:
username_1: Did you disable the box of eternal closure?
Are you using a mod that is removing/changing the ender chest and/or potions?
username_0: I didn't disable anything from Evilcraft.
I have a mod that lets you change the vanilla ender chest into a color-coded ender chest. But it changes nothing about the recipe, and is done by shift-right-clicking on the chest with an item.
I think I have no mod which changes potions, but I'm not 100% sure of that.
Could you tell me the name of that mod?
I'll look into it then.
Note to self: make infobook recipe baking safer when recipes are not found.
username_0: The mod is called "Enderthing"
[Cursepage](https://minecraft.curseforge.com/projects/enderthing)
[Github page](https://github.com/gigaherz/Enderthing/)
username_0: Okay, I removed Evilcraft, and it's not starting up.
Now It's Vampirism, I think (I have no clue how to read crash reports)
Here is the new crash-Report: http://pastebin.com/VKMXCvcz
I'll go to Vampirisms Github and post it there.
username_1: Yes, that's Vampirism.
But that issue has nothing to do with the crash you posted earlier though.
username_0: Oh, okay. So I'll leave this issue open.
~~Why am I always a Bug Magnet? D:~~
username_0: Update: It works when you disable the box of eternal closure in the config.
```
B:boxOfEternalClosure=false
```
(Just a temporary fix, if anyone else encounters that error :) )
username_1: Ok, this is useful information for other people encountering this issue.
I assume the ender chest mod is messing up the box of eternal closure recipe, which my system does not handle very well.
username_0: Seems not to be Ender-Thing.
I removed it and enabled the box again.
Report: http://pastebin.com/bAremr8W
username_1: Fixed in CyclopsMC/CyclopsCore@4b1049ae95c053a1f23ee3e02740fd94edee6bbe
Will be released as CyclopsCore 0.7.5 somewhere this week (hopefully).
Status: Issue closed
LeHack/Docker-network-research | 233393662 | Title: Request for comments (RFC#1)
Question:
username_0: Summoning for review:
@bieron @jabbas @piotr-piatkowski @Sahadar @MadaooQuake @somnam @StarLightPL @bjakubski
I'd be really grateful if you could have a look and comment on what could be improved/rephrased.
DakotaNelson/sneaky-creeper | 94601171 | Title: Pass
Question:
username_0: Until PR #19, decode did not take parameters as an argument, and thus threw an error claiming that required parameters were not provided whenever the decoder required params.
The above issue was resolved by PR #19, but the receiver does not pass the params argument when calling decode. It seems it should. Comments?
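To make the suggestion concrete, here is a toy Python sketch of the shape of the fix. The class and method names are illustrative only, not the project's real API:

```python
# Hypothetical minimal shapes, for illustration only.

class ReverseDecoder:
    requiredParams = ["key"]

    def decode(self, data, params):
        # a real decoder would use params["key"] here
        assert "key" in params, "required parameter missing"
        return data[::-1]  # stand-in transformation

class Receiver:
    def __init__(self, decoder, params):
        self.decoder = decoder
        self.params = params

    def receive(self, data):
        # The fix: forward params instead of calling decode(data) alone,
        # which raised "required parameters not provided" for decoders
        # that declare requiredParams.
        return self.decoder.decode(data, self.params)

r = Receiver(ReverseDecoder(), {"key": "hunter2"})
print(r.receive("olleh"))  # hello
```

The point is simply that the receiver forwards the same `params` dict it was configured with, so decoders that declare required parameters no longer fail.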
Status: Issue closed
Answers:
username_1: Yep, it should. This is resolved in 8e793b59f5fd033faf586f4b84e0d61c89593ed5, I believe.
username_0: Thanks for nabbing that.
iDerekLi/amap-js | 622274467 | Title: Error when importing the module via ES modules
Question:
username_0: ```
ERROR Failed to compile with 8 errors friendly-errors 14:07:52
These dependencies were not found: friendly-errors 14:07:52
friendly-errors 14:07:52
* @babel/runtime-corejs2/core-js/promise in ./node_modules/amap-js/es/load.js, ./node_modules/amap-js/es/loaders/amap-jsapi-loader.js
* @babel/runtime-corejs2/core-js/reflect/construct in ./node_modules/amap-js/es/loaders/amap-jsapi-loader.js
* @babel/runtime-corejs2/core-js/symbol in ./node_modules/amap-js/es/loaders/amap-jsapi-loader.js
* @babel/runtime-corejs2/helpers/esm/extends in ./node_modules/amap-js/es/loaders/amap-jsapi-loader.js
* @babel/runtime-corejs2/helpers/esm/getPrototypeOf in ./node_modules/amap-js/es/loaders/amap-jsapi-loader.js
* @babel/runtime-corejs2/helpers/esm/inheritsLoose in ./node_modules/amap-js/es/loaders/amap-jsapi-loader.js
* @babel/runtime-corejs2/helpers/esm/possibleConstructorReturn in ./node_modules/amap-js/es/loaders/amap-jsapi-loader.js
```
Were the related dependencies not bundled?
Answers:
username_1: Thanks for the feedback! This issue has been fixed. Just update to v2.0.2 or later (v2.0.2 included).
Status: Issue closed
spring-projects/spring-framework | 398218574 | Title: Circular dependency scenario that fails (contains some debug info) [SPR-16181]
Question:
username_0: **[<NAME>](https://jira.spring.io/secure/ViewProfile.jspa?name=cristian.spiescu)** opened **[SPR-16181](https://jira.spring.io/browse/SPR-16181?redirect=false)** and commented
I have the following case: A -> B -> C -> D -> B. So we have a circular dependency/loop.
If all my objects have `@Service` + `@Transactional`, then everything works fine.
If B has `@Repository`, then an error occurs, with the message:
`Error creating bean with name 'b': Bean with name 'b' has been injected into other beans [d] in its raw version as part of a circular reference, but has eventually been wrapped. This means that said other beans do not use the final version of the bean. This is often the result of over-eager type matching - consider using 'getBeanNamesOfType' with the 'allowEagerInit' flag turned off, for example.`
I was curious to see what happens, and why the first case worked well but not the second. Here is the stack trace, at the moment when D has been created and populated with a proxy of B (common to the 2 cases):
(see attached screenshot: spring-bug-1.png)
B is proxified by AnnotationAwareAspectJAutoProxyCreator.getEarlyBeanReference(), cf. the image above.
And here is how the cases differ. In AbstractAutowireCapableBeanFactory.doCreateBean("b") we have the following code:
(see attached screenshot: spring-bug-2.png)
if "B" has `@Repository`, initializeBean() will return a proxy of B, not B itself; hence further down in the code the error will be thrown. If there is no `@Repository`, initializeBean() returns the same object, so the code underneath doesn't complain.
And the above happens because initializeBean() delegates to the post processors, i.e. AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsAfterInitialization(). More specifically to PersistenceExceptionTranslationPostProcessor, which proxifies our bean because it sees `@Repository`.
---
**Attachments:**
- [spring-bug-1.png](https://jira.spring.io/secure/attachment/25212/spring-bug-1.png) (_221.83 kB_)
- [spring-bug-2.png](https://jira.spring.io/secure/attachment/25211/spring-bug-2.png) (_27.08 kB_)
Answers:
username_1: Working as designed. Bean cycles are also rejected in recent Spring Boot versions.
Status: Issue closed
|
vueComponent/ant-design-vue | 735813548 | Title: Tabs component: documentation not updated, cannot be used
Question:
username_0: - [ ] I have searched the [issues](https://github.com/vueComponent/ant-design-vue/issues) of this repository and believe that this is not a duplicate.
### Version
2.0.0-beta.13
### Environment
3.0
### Reproduction link
[https://2x.antdv.com/components/tabs-cn/](https://2x.antdv.com/components/tabs-cn/)
### Steps to reproduce
The Tabs component cannot be used
### What is expected?
The Tabs component cannot be used
### What is actually happening?
The Tabs component cannot be used
<!-- generated by issue-helper. DO NOT REMOVE -->
Answers:
username_1: Same here, it doesn't work properly, but a-tabs works fine in another component of my project, which is very strange.
username_1: I suspect it's a problem with `animated`; after setting it to false it displays correctly.
username_2: Please provide a reproduction demo
Status: Issue closed
harm-less/angular-sticky | 276516767 | Title: None
Question:
username_0: Hey man,
I'm doing fine, and you? 😄
That's odd, but how come `mattosborn` is there in the URL?
For me the `bower install angular-sticky-plugin` command works just fine:
```
bower angular-sticky-plugin#* not-cached https://github.com/username_0/angular-sticky.git#*
bower angular-sticky-plugin#* resolve https://github.com/username_0/angular-sticky.git#*
bower angular-sticky-plugin#* download https://github.com/username_0/angular-sticky/archive/v0.4.2.tar.gz
bower angular-sticky-plugin#* extract archive.tar.gz
bower angular-sticky-plugin#* resolved https://github.com/username_0/angular-sticky.git#0.4.2
bower angular-sticky#^0.4.2 install angular-sticky#0.4.2
angular-sticky#0.4.2 bower_components\angular-sticky
└── angular#1.5.3
```
Hope this helps,
Harm |
jeremylong/DependencyCheck | 444615758 | Title: Use local node analysis insteed of npm audit
Question:
username_0: **Is your feature request related to a problem? Please describe.**
My problem is :
- In my CI, I can't finish a dependency check, because npm return http 503 error before the end of the check ...
**Describe the solution you'd like**
After some research, it seems the node audit results are directly available in json on GitHub .
So, I think it will be really better, to download this vulnerabilities in the local database, and search on it, instead of calling npm audit for each package .
Here you can get it : https://github.com/nodejs/security-advisories
In the ecosystem folder, separate by folder for each dependency, and by vulnerabilities, with semver version vulnerable, and bonus, A cvsscore !!!
Just some more information on this repository, it's a work in progress, because as I read they wants to create a GitHub pages website, to search a package ... But not important in our case .
Other things, this repository, is an extract of this repository : https://github.com/nodejs/security-wg , but better separated ( not all the json in same folder, but separated by package for example ), and As you can see in this issue : https://github.com/nodejs/security-wg/issues/494 they planned to migrate vulnerabilities to a separate repository.
So, if you plan to use this, this link can be helpful :
- github api to get last master commit : `https://api.github.com/repos/nodejs/security-advisories/commits/master`
- url to download the zip : `https://github.com/nodejs/security-advisories/archive/master.zip`
**Describe alternatives you've considered**
I've discuss about it ( a little ), with verdaccio-audit ( npm audit compatible api ), about using this local datas insteed of proxying audit request to npmjs, but this will not apply to users doesn't using verdaccio ( like users using nexus ) .
Answers:
username_1: Do you know if npmjs uses that database for their api used by the command npm audit?
username_0: @username_1 I didn't read anything about this.... But, the nodejs/security-wg, is the official repository from nodejs ... So I think they contains all the "open source" vulnerabilities ( and maybe npmjs contain some proprietary informations ? )
username_1: I was asking because it would be nice if local node analysis produces the same result than npm audit (only possible if it is the same database or at least that contains the same vulnerabilities information I think).
username_0: In fact, I think it's the same . But without any informations, the only way
to confirm is to try with lot of dependencies
username_2: It looks like Node Audit Analyzer is not working at all due to this spamming of the site for every single package. Is there any hope of getting this fixed? It is labeled as an enhancement but the NodeAuditAnalyzer is completely non-working. Why is this not a bug?
username_1: @username_2 the bug is here: https://github.com/jeremylong/DependencyCheck/issues/1895 |
f-list/fserv | 23831812 | Title: Subscriber connection limit increase?
Question:
username_0: We’ve discussed the idea of allowing subscribers to have a few more characters online at once; while it’s obviously not pressing, it might be worth keeping in mind as an option for the future. Kira: Easily done inside current code base. Two line change.
Answers:
username_1: With subscriptions no longer being a possibility, I think this is dead.
Status: Issue closed
in2code-de/luxletter | 526671706 | Title: Exception if unsubscribe page in ext configuration is empty or page not exists
Question:
username_0: If the page defined in the extension configuration for unsubscribing from the newsletter is wrong or empty, then I get an exception when creating a new newsletter in step 3 (review).
(1/1) #1521716622 TYPO3\CMS\Core\Exception\SiteNotFoundException
No site found in root line of page 6
I thought that something was wrong with the newsletter page id defined in the previous step, but it wasn't. The setting for the newsletter unsubscribe page was wrong: it pointed to a page that doesn't exist in my project :-).
Maybe a better description in the exception would help with fixing this in the configuration.
Answers:
username_1: Yes, I think an exception is correct - especially in the current times with GDPR and so on...
Nevertheless there should be a better message.
I will search for a solution.
username_1: Fixed and will be released with the next version
Status: Issue closed
tjdevries/fi_re_planner | 155049460 | Title: TODO's
Question:
username_0: This issue keeps track of things that I come across that I would like to add to the planner. It will be updated and sorted as time goes on.
# TODO's
## Unsorted
- Success rates for different time horizons (20 years vs 40 years vs forever)
- Potential for side income in retirement
- Different Stock/Bond allocation strategies
- The option to include social security when it kicks in
- Accounting for lifestyle inflation
- Statistical variability in market returns, inflation, side income, and withdrawal
SAP/fundamental-ngx | 428846553 | Title: Timepicker: changing time with the spinner or dropdown inputs does not trigger form validator
Question:
username_0: ```<fd-time-picker
required
formControlName="startTime"
[displaySeconds]="false">
</fd-time-picker>```
In this example, `startTime` has a validator, which is properly triggered when the main input is changed, but not when using the spinners or updating the inputs in the dropdown.<issue_closed>
Status: Issue closed |
scottohara/loot | 550558932 | Title: Cypress not clearing session storage between tests
Question:
username_0: Needed to add a manual `cy.window().then(win => win.sessionStorage.clear());`
According to https://github.com/cypress-io/cypress/issues/686, session storage _should_ be cleared, but isn't.
Answers:
username_1: This is not currently implemented. See https://github.com/cypress-io/cypress/issues/413
package-url/packageurl-dotnet | 561282808 | Title: CI status
Question:
username_0: Currently it is showing unknown in the README for both AppVeyor and TravisCI.
* TravisCI: per discussion on [this thread](https://travis-ci.community/t/status-badge-stuck-at-unknown-build/5313/3), `Build pushed branches` toggle needs to be enabled to get the badge for branches (in this case `master`): https://travis-ci.com/package-url/packageurl-dotnet/settings
* AppVeyor: please sign in with GitHub organization account which has access to settings, so the project lights up at: https://ci.appveyor.com/project/package-url/packageurl-dotnet/branch/master
Answers:
username_1: Would you be interested in a pull request that does more and prepares for the package to be pushed to NuGet.org?
username_0: Publishing packages to NuGet is possible. We would need an official nuget.org [API key](https://docs.microsoft.com/en-us/nuget/quickstart/create-and-publish-a-package-using-visual-studio?tabs=netcore-cli#acquire-your-api-key) and then encrypt it using the TravisCI/AppVeyor UI. Could you open a separate issue for package publishing, as it is a valid ask on its own?
ps: I think we could switch to CirrusCI, which supports Linux, macOS, and Windows (even FreeBSD), all of which we can target from one YAML file.
username_2: CC @sschuberth @pombredanne
Are you interested in a PR for solving this issue and also solving #6 ? |
encode/databases | 1053580331 | Title: Database connection issue where sqlalchemy connection works
Question:
username_0: I'm deploying a FastAPI application on Google Cloud Run which connects to a Cloud SQL instance using this package. The crux of the issue is that connecting with:
```python
db = databases.Database(url)
await db.connect()
```
fails whereas connecting through sqlalchemy's `create_engine` with
```python
engine = create_engine(url)
engine.connect()
```
works.
The connection url uses the unix-socket form (docs [here](https://docs.sqlalchemy.org/en/14/dialects/postgresql.html#unix-domain-connections)) rather than the regular host/port sqlalchemy connection url, something like this:
```python
# all these urls work fine when connecting with sqlalchemy create_engine
"postgresql://user:pass@/db_name?host=/path/to/sock"
"postgresql+psycopg2://user:pass@/db_name?host=/path/to/sock"
"postgresql+pg8000://user:pass@/db_name?unix_sock=/path/to/sock/.s.PGSQL.5432"
```
I'm unsure whether this would be an issue with using async in the Google Cloud environment or something about how connection urls like the one above get translated in this package to work with sqlalchemy. I've posted on Stack Overflow about it [here](https://stackoverflow.com/questions/69963202/issues-connecting-to-a-google-cloud-sql-instance-from-google-cloud-run?noredirect=1#comment123679622_69963202) but thought I'd raise an issue here as well in case it was the latter.
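For reference, in URLs of this shape the host lives in the query string rather than the netloc, which is easy to see with the standard library alone (this is just a parsing illustration, not databases/sqlalchemy code):

```python
from urllib.parse import urlsplit, parse_qs

url = "postgresql://user:pass@/db_name?host=/path/to/sock"
parts = urlsplit(url)

# The netloc carries only the credentials, so there is no hostname.
print(parts.hostname)         # None
print(parts.path)             # /db_name  (the database name)
print(parse_qs(parts.query))  # {'host': ['/path/to/sock']}
```

So any parser that only looks at the netloc will come up with an empty host for these URLs.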
Answers:
username_1: @username_0 Thanks for reporting this.
Can you provide the failure details? And if you could provide a complete example, it would be great.
username_0: @username_1 yes sorry i should have included that in the original message. traceback:
```
async with self.lifespan_context(app):
File "/opt/pysetup/.venv/lib/python3.9/site-packages/starlette/routing.py", line 518, in __aenter__
await self._router.startup()
File "/opt/pysetup/.venv/lib/python3.9/site-packages/starlette/routing.py", line 598, in startup
await handler()
File "/{app_name}/app/main.py", line 46, in startup
await database_.connect()
File "/opt/pysetup/.venv/lib/python3.9/site-packages/databases/core.py", line 88, in connect
await self._backend.connect()
File "/opt/pysetup/.venv/lib/python3.9/site-packages/databases/backends/postgres.py", line 70, in connect
self._pool = await asyncpg.create_pool(**kwargs)
File "/opt/pysetup/.venv/lib/python3.9/site-packages/asyncpg/pool.py", line 407, in _async__init__
await self._initialize()
File "/opt/pysetup/.venv/lib/python3.9/site-packages/asyncpg/pool.py", line 435, in _initialize
await first_ch.connect()
File "/opt/pysetup/.venv/lib/python3.9/site-packages/asyncpg/pool.py", line 127, in connect
self._con = await self._pool._get_new_connection()
File "/opt/pysetup/.venv/lib/python3.9/site-packages/asyncpg/pool.py", line 477, in _get_new_connection
con = await connection.connect(
File "/opt/pysetup/.venv/lib/python3.9/site-packages/asyncpg/connection.py", line 2045, in connect
return await connect_utils._connect(
File "/opt/pysetup/.venv/lib/python3.9/site-packages/asyncpg/connect_utils.py", line 790, in _connect
raise last_error
File "/opt/pysetup/.venv/lib/python3.9/site-packages/asyncpg/connect_utils.py", line 776, in _connect
return await _connect_addr(
File "/opt/pysetup/.venv/lib/python3.9/site-packages/asyncpg/connect_utils.py", line 676, in _connect_addr
return await __connect_addr(params, timeout, True, *args)
File "/opt/pysetup/.venv/lib/python3.9/site-packages/asyncpg/connect_utils.py", line 720, in __connect_addr
tr, pr = await compat.wait_for(connector, timeout=timeout)
File "/opt/pysetup/.venv/lib/python3.9/site-packages/asyncpg/compat.py", line 66, in wait_for
return await asyncio.wait_for(fut, timeout)
File "/usr/local/lib/python3.9/asyncio/tasks.py", line 481, in wait_for
return fut.result()
File "/opt/pysetup/.venv/lib/python3.9/site-packages/asyncpg/connect_utils.py", line 586, in _create_ssl_connection
tr, pr = await loop.create_connection(
File "uvloop/loop.pyx", line 2024, in create_connection
File "uvloop/loop.pyx", line 2001, in uvloop.loop.Loop.create_connection
ConnectionRefusedError: [Errno 111] Connection refused
```
complete example is a little tricky but essentially i have this in a FastAPI application:
```python
from fastapi import FastAPI
import databases
database = databases.Database(settings.sqlalchemy_database_uri) # mapped from an env var
app = FastAPI()
app.state.database = database
@app.on_event("startup")
async def startup() -> None:
database_ = app.state.database
if not database_.is_connected:
await database_.connect()
```
and the application is started with:
```bash
gunicorn --bind :$PORT --workers 1 --worker-class uvicorn.workers.UvicornWorker --threads 4 app.main:app
```
where `$PORT` is injected by Google Cloud Run deploying a revision. hopefully that's enough context but do let me know if there's any other info i can provide
username_1: I think the issue is how `DatabaseUrl` parses the url [here](https://github.com/encode/databases/blob/master/databases/core.py#L422):
With this database_uri: `"postgresql://user:password@/dbname?host=/var/run/postgresql/.s.PGSQL.5432"`
I can see that I get the following parsed data from DatabaseUrl:
```python
{'host': None, 'port': None, 'user': 'user', 'password': '<PASSWORD>', 'database': 'dbname'}
```
which has an invalid host, since the host is only available in the query part and should be read from the `options` part of the url.
This shouldn't be too complicated. Feel free to create a PR for it.
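A minimal sketch of the fallback the parser needs (the function name and shape here are illustrative, not the actual `DatabaseURL` internals): prefer the netloc host, and fall back to the `host` query option when the netloc has none:

```python
from urllib.parse import urlsplit, parse_qs

def connection_host(url: str):
    """Return the host for a database URL, falling back to the
    ``host`` query option used for unix-socket connections."""
    parts = urlsplit(url)
    if parts.hostname:
        return parts.hostname
    options = parse_qs(parts.query)
    # parse_qs returns lists of values; take the last one given.
    return options.get("host", [None])[-1]

print(connection_host("postgresql://user:pass@/db?host=/var/run/postgresql"))
# /var/run/postgresql
print(connection_host("postgresql://user:pass@localhost:5432/db"))
# localhost
```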
username_0: ah ok that's interesting, seems like that would be the issue then. will try and get a pr raised for that
username_1: Thanks. I'll update the PR to be more precise then.
username_1: I think that should be ok for now.
asyncpg mentions a few common places [here](https://magicstack.github.io/asyncpg/current/api/index.html). Which covers the one in our example. Please do double check.
username_0: interesting they have quite a few fallbacks. would you like me to add them to this pr? or should that come as a separate piece of work when the time comes
username_1: Can you explain what the fallbacks are?
username_0: for asyncpg?
if the host can't be parsed from the dsn (either the regular hostname part of the dsn or a `host=` query option), asyncpg falls back to:
- the value of the `PGHOST` environment variable,
- on Unix, common directories used for PostgreSQL Unix-domain sockets: `"/run/postgresql"`, `"/var/run/postgresql"`, `"/var/pgsql_socket"`, `"/private/tmp"`, and `"/tmp"`,
- "localhost"
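Roughly sketched (this is a simplification, not asyncpg's actual implementation; the directory list is the one from its docs):

```python
import os

# Socket directories asyncpg probes, per its documentation.
UNIX_SOCKET_DIRS = [
    "/run/postgresql",
    "/var/run/postgresql",
    "/var/pgsql_socket",
    "/private/tmp",
    "/tmp",
]

def resolve_host(parsed_host=None):
    """Simplified sketch of asyncpg-style host resolution."""
    if parsed_host:
        return parsed_host
    env_host = os.environ.get("PGHOST")
    if env_host:
        return env_host
    # Probe common unix-socket directories for a PostgreSQL socket file.
    for directory in UNIX_SOCKET_DIRS:
        if os.path.exists(os.path.join(directory, ".s.PGSQL.5432")):
            return directory
    return "localhost"
```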
username_1: Well for the first one I don't think we can do much, as we need to cover more than just asyncpg.
For the second one though, I think we should be fine if `asyncpg` accepts `host=None`, by which I mean it will try the fallbacks when `host=None`. If the fallbacks are ignored with `host=None`, we need to omit that key from the input.
I think it's probably not worth it.
username_0: Ok makes sense. Is my approach in the pr alright or is it missing the mark?
username_1: I think it's pretty good and what we want.
I just need to test it locally and make sure it does what we want.
Because we only test DatabaseUrl, we don't test the integration with postgres.
Status: Issue closed
|
ongakukan-co-ltd/PracticeProject | 590712807 | Title: 23102_Takaku_Test
Question:
username_0: 全体画像.jpg
<!-- Below is the template for model creation requests. Copy and paste it if you need more. -->
---

## Fields for the model placer
Number:
Name of the missing model:
Name of the actor used as a substitute:
Spline placement (yes/no):
Collision (yes/no):
Notes:
## Fields for the model creator
Texture types (B, M, N, R, O, E):
Number of materials (1 or 2):
Lightmap resolution (256 or less):
LOD (number created):
- [ ] UV overlap check <!-- change this to - [x] when the task is complete -->
<!-- End of template -->