cuba-platform/documentation | 465822137 | Title: Mention new Add-Ons window
Question:
username_0: ### Environment
- Documentation version: 7.0
- Part of the documentation: Manual
- Existing page: 4.5.1. Using Public Add-ons
### Description of the bug or enhancement
Studio 11 has been released. In this version, the installation procedure for most add-ons has changed; it is now done via the Add-Ons window.
We should add this information to the manual, while keeping the existing info in a "previous Studio versions" section.
Status: Issue closed |
AlightYoung/AlightYoung.github.io | 584131010 | Title: Netty | A·Y
Question:
username_0: https://alightyoung.gitee.io/blog/2019/11/20/netty/
Introduction to Netty: Netty is a Java open-source framework provided by JBOSS. It provides an asynchronous, event-driven network application framework and tools for rapidly developing high-performance, highly reliable network servers and clients. In other words, Netty is a client/server programming framework based on NIO; with Netty you can quickly and simply develop a network application, such as a client or server implementing some protocol. Netty considerably simplifies and streamlines network application programming |
PeteHaitch/MethylationTuples | 117073739 | Title: parallel + multicore processing
Question:
username_0: See https://twitter.com/henrikbengtsson/status/651535275298918400 RE https://twitter.com/henrikbengtsson/status/651535275298918400 and https://github.com/HenrikBengtsson/future/blob/develop/tests/rng.R |
libunwind/libunwind | 448533194 | Title: glibc: _U and _UL are reserved names
Question:
username_0: There are two ways to fix this issue:
1. rename `_U` and `_UL` to something else (will also require changes in `tests/check-namespace.sh.in` file and other places?).
2. just define `_U` explicitly:
```diff
--- a/include/libunwind-common.h.in
+++ b/include/libunwind-common.h.in
@@ -23,6 +23,8 @@ LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */
+#define _U _U
+
#define UNW_VERSION_MAJOR @PKG_MAJOR@
#define UNW_VERSION_MINOR @PKG_MINOR@
#define UNW_VERSION_EXTRA @PKG_EXTRA@
```
Which approach is safer/better? If \#1 does not break ABI, I would go for it (because \#2 looks quite hacky).
Answers:
username_1: Unfortunately I think #1 breaks ABI. #2 does seem hacky, but I can't think of anything better.
Status: Issue closed
username_0: Thank you, I think we can close this. Solaris initial changes are merged with `undef _U`: https://github.com/libunwind/libunwind/blob/0efe1db0ebf03ed8b7004c75c15cfd286854137f/include/libunwind-common.h.in#L33-L44 |
coding-blocks/codingblocks.online.projectx | 470640875 | Title: Background of logout is not proper.
Question:
username_0: ### Is this issue regarding CSS?
This project uses motley(https://github.com/coding-blocks/motley/) as CSS framework, consider opening an issue there instead.

Answers:
username_0: @championswimmer Please add BOSS
username_1: Is this issue still open? Can I work on it?
Status: Issue closed
|
flutter/flutter | 755530192 | Title: Implement an auto-roller for test Chromium and Firefox versions
Question:
username_0: Currently our test browser versions are specified in two places:
- The [browser_lock.yaml](https://github.com/flutter/engine/blob/master/lib/web_ui/dev/browser_lock.yaml#L20) file.
- CIPD packages.
The upgrade process is tedious and manual. As a result we frequently test on severely out-of-date browsers because nobody bothers to upgrade. For example, as of this writing we test on Firefox 72 while the current stable Firefox is 83, i.e. 11 versions ahead. Current stable Chrome is 87, but we test on 84. On top of that, few team members are familiar with CIPD and the upgrade process.
An auto-roller would improve our testing story considerably. Unfortunately, we cannot roll Safari because it's locked to the OS version, and therefore requires a coordinated effort with the infra team. However, Chromium and Firefox should be straightforward to auto-roll. |
richardgirges/express-fileupload | 374375208 | Title: Different way to enable it then app.use()?
Question:
username_0: Handle HTTP requests.
@param handler
A function that takes a request and response object, same signature as an Express app.
I know I can use `exports.main = functions.https.onRequest(app)` but just wondering, will save some time sometimes.
Status: Issue closed
Answers:
username_1: Hi @username_0, the only supported way to use express-fileupload right now is with `app.use`.
I suppose you could also try to chain it directly in a specific route, like so (beware this isn't officially supported):
```javascript
app.get('/foo', fileUpload(), (req, res) => {
// do stuff
});
``` |
dotnet/runtime | 536017434 | Title: Fix all tests under test\src\readytorun to work with the crossgen2 run
Question:
username_0: These tests are disabled (ex: see https://github.com/dotnet/runtime/pull/741) because they have explicit crossgen.exe commands that produce .ni.dll images.
The presence of these images is tripping crossgen2.
Also, these tests are not giving the coverage we need with crossgen2 currently, so that needs to be fixed.
We can fix them by doing something like:
```
if (RunCrossgen2 is set)
    compile using crossgen2.exe
else
    compile using crossgen.exe
```
Answers:
username_1: There is also a bug in the test scripts that expects the test assemblies to be .exe instead of .dll. So the invocation of the old crossgen fails due to that. I assume this is a remnant from the time before we've migrated the test projects to use SDK style project files.
I actually wonder what is the right way to make these tests work with crossgen2. The problem is that the tests themselves always crossgen their components even when no RunCrossGen or RunCrossGen2 are set. This is needed as otherwise they would not make sense. Now the question is how to make them choose between crossgen and crossgen2 for this case.
username_2: This isn't going to happen in .NET 5
username_3: @username_1 / @username_2, could you please help me understand the current status here? I believe that, as of today,
https://github.com/dotnet/runtime/blob/45287ed696a4bea906cc15a4bc870960a4b0e623/src/tests/Common/CLRTest.CrossGen.targets#L70
has full support for CG / CG2 switching on both Windows and Linux. I went over the 20 CoreCLR tests defining `<CrossGenTest>false</CrossGenTest>` and I haven't found any remaining ones with hardwired CG1 logic -
https://github.com/dotnet/runtime/blob/45287ed696a4bea906cc15a4bc870960a4b0e623/src/tests/JIT/Regression/JitBlue/GitHub_25027/GitHub_25027.ilproj#L4
and the remaining negative tests like
https://github.com/dotnet/runtime/blob/45287ed696a4bea906cc15a4bc870960a4b0e623/src/tests/Loader/classloader/generics/Instantiation/Negative/abstract02.ilproj#L7
and
https://github.com/dotnet/runtime/blob/45287ed696a4bea906cc15a4bc870960a4b0e623/src/tests/Regressions/coreclr/16354/notimplemented.ilproj#L8
anecdotally claim that they are not intended for CG testing while the rest reside under the `readytorun` subfolder and typically contain specific provisions for running CG1 / CG2 in special ways on purpose. Do we still need this work item?
Status: Issue closed
|
rmosolgo/graphql-ruby | 277118051 | Title: Arguments on scalar fields?
Question:
username_0: Looking at the docs, it appears that if I define my own scalar types, there is no way to define arguments for them to take. The GraphQL documentation gives examples of using arguments with scalar types (like here: http://graphql.org/learn/queries/#arguments), is there any way to do this in graphql-ruby?
Status: Issue closed
Answers:
username_1: Yes, those are _field_ arguments, for example:
```ruby
field :phoneNumber, types.String do
  argument :includeCountryCode, types.Boolean, default_value: true
  resolve ->(obj, args, ctx) { ... }
end
```
could be used as:
```graphql
phoneNumber(includeCountryCode: false)
# => "555-1234"
```
For custom scalars, the only change is to switch `types.String` for your custom `Types::PhoneNumberType`:
```ruby
field :phoneNumber, Types::PhoneNumberType do
  argument :includeCountryCode, types.Boolean, default_value: true
  resolve ->(obj, args, ctx) { ... }
end
```
Usage is the same as above:
```graphql
phoneNumber(includeCountryCode: false)
# => "555-1234"
```
But your custom `PhoneNumberType` will be used to format the value.
Hope that helps, feel free to follow up here if you have any more trouble!
username_0: Thanks for the reply @username_1, that's helpful! In my case I'd like to have a TimeType that takes a `format` argument to determine how to render the time, but I would prefer to not have that format switching logic occurring in multiple places around my schema. Is there any recommendation on the proper way to encapsulate that logic?
username_1: It sounds like there would be many different fields (`Event.startsAt`, `Room.reservedUntil`, `Notification.dismissedAt`) that would each take a `format:` argument, right? And they should all perform the same string formatting based on the `format:` value.
It seems like a [resolve wrapper](http://graphql-ruby.org/fields/resolve_wrapper.html) might do the job. The inner `resolve` function could return the proper Ruby `DateTime`, then the wrapper could perform formatting into a String, based on the argument value.
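A plain-Ruby sketch of that wrapper idea, for illustration only: the names `FORMATS` and `formatted_time` are made up here and are not graphql-ruby's API. The wrapper takes an inner resolve proc that returns a `Time` and centralizes the format-switching logic in one place.

```ruby
require "time"

# Hypothetical format registry; keys would be the allowed `format:` values.
FORMATS = {
  "iso8601" => ->(t) { t.iso8601 },
  "unix"    => ->(t) { t.to_i.to_s },
}

# Wrap an inner resolve proc: the inner proc returns a Time, the wrapper
# turns it into a String based on the format argument.
def formatted_time(inner_resolve)
  ->(obj, args, ctx) do
    time = inner_resolve.call(obj, args, ctx)
    (FORMATS[args["format"]] || FORMATS["iso8601"]).call(time)
  end
end
```

Each field (`Event.startsAt`, `Room.reservedUntil`, etc.) would then pass its own inner proc through `formatted_time` instead of duplicating the formatting logic.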
username_2: Hi @username_1, after updating to `1.10` a project I work with started raising the exception described in this issue: https://github.com/jetruby/apollo_upload_server-ruby/issues/24
```
Failures:
1) ActivityLog userActivityLog returns activity for the current user
Failure/Error: mutation(Types::MutationType)
ArgumentError:
Can't add legacy type: Upload (GraphQL::ScalarType)
# /home/david/.rvm/gems/ruby-2.6.5/gems/graphql-1.10.0/lib/graphql/schema.rb:1770:in `add_type'
# /home/david/.rvm/gems/ruby-2.6.5/gems/graphql-1.10.0/lib/graphql/schema.rb:1814:in `block in add_type'
# /home/david/.rvm/gems/ruby-2.6.5/gems/graphql-1.10.0/lib/graphql/schema.rb:1811:in `each'
# /home/david/.rvm/gems/ruby-2.6.5/gems/graphql-1.10.0/lib/graphql/schema.rb:1811:in `add_type'
# /home/david/.rvm/gems/ruby-2.6.5/gems/graphql-1.10.0/lib/graphql/schema.rb:1806:in `block (2 levels) in add_type'
# /home/david/.rvm/gems/ruby-2.6.5/gems/graphql-1.10.0/lib/graphql/schema.rb:1803:in `each'
# /home/david/.rvm/gems/ruby-2.6.5/gems/graphql-1.10.0/lib/graphql/schema.rb:1803:in `block in add_type'
# /home/david/.rvm/gems/ruby-2.6.5/gems/graphql-1.10.0/lib/graphql/schema.rb:1799:in `each'
# /home/david/.rvm/gems/ruby-2.6.5/gems/graphql-1.10.0/lib/graphql/schema.rb:1799:in `add_type'
# /home/david/.rvm/gems/ruby-2.6.5/gems/graphql-1.10.0/lib/graphql/schema.rb:1673:in `block in add_type_and_traverse'
# /home/david/.rvm/gems/ruby-2.6.5/gems/graphql-1.10.0/lib/graphql/schema.rb:1673:in `each'
# /home/david/.rvm/gems/ruby-2.6.5/gems/graphql-1.10.0/lib/graphql/schema.rb:1673:in `add_type_and_traverse'
# /home/david/.rvm/gems/ruby-2.6.5/gems/graphql-1.10.0/lib/graphql/schema.rb:972:in `mutation'
# ./app/graphql/app_schema.rb:3:in `<class:HolrSchema>'
# ./app/graphql/app_schema.rb:1:in `<main>'
# /home/david/.rvm/gems/ruby-2.6.5/gems/bootsnap-1.4.4/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:22:in `require'
# /home/david/.rvm/gems/ruby-2.6.5/gems/bootsnap-1.4.4/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:22:in `block in require_with_bootsnap_lfi'
# /home/david/.rvm/gems/ruby-2.6.5/gems/bootsnap-1.4.4/lib/bootsnap/load_path_cache/loaded_features_index.rb:92:in `register'
# /home/david/.rvm/gems/ruby-2.6.5/gems/bootsnap-1.4.4/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:21:in `require_with_bootsnap_lfi'
# /home/david/.rvm/gems/ruby-2.6.5/gems/bootsnap-1.4.4/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:30:in `require'
# /home/david/.rvm/gems/ruby-2.6.5/gems/bootsnap-1.4.4/lib/bootsnap/load_path_cache/core_ext/active_support.rb:48:in `block in require_or_load'
# /home/david/.rvm/gems/ruby-2.6.5/gems/bootsnap-1.4.4/lib/bootsnap/load_path_cache/core_ext/active_support.rb:16:in `allow_bootsnap_retry'
# /home/david/.rvm/gems/ruby-2.6.5/gems/bootsnap-1.4.4/lib/bootsnap/load_path_cache/core_ext/active_support.rb:47:in `require_or_load'
# /home/david/.rvm/gems/ruby-2.6.5/gems/bootsnap-1.4.4/lib/bootsnap/load_path_cache/core_ext/active_support.rb:60:in `block in load_missing_constant'
# /home/david/.rvm/gems/ruby-2.6.5/gems/bootsnap-1.4.4/lib/bootsnap/load_path_cache/core_ext/active_support.rb:16:in `allow_bootsnap_retry'
# /home/david/.rvm/gems/ruby-2.6.5/gems/bootsnap-1.4.4/lib/bootsnap/load_path_cache/core_ext/active_support.rb:59:in `load_missing_constant'
# ./spec/support/contexts/graphql_test.rb:7:in `block (2 levels) in <main>'
# ./spec/graphql_schema/activity_log_spec.rb:48:in `block (3 levels) in <top (required)>'
```
The `resolve wrapper` link is broken and I can't seem to find anything with that name in the docs. Is there an equivalent? I tried using the gem's scalar as an argument in an [input type](https://graphql-ruby.org/type_definitions/input_objects) with the same result: the schema tried to evaluate the input type argument and the `Can't add legacy type` exception was raised.
Do you have any recommendations on how to wrap this scalar so it can work as an argument?
username_1: The problem is that the gem provides a GraphQL type called `Upload` which is defined using a legacy API. Here's the problematic source in v1.0.0:
https://github.com/jetruby/apollo_upload_server-ruby/blob/3cdcc120d28da804d5b57deb5d512a196e8eec85/lib/apollo_upload_server/upload.rb#L1-L8
In order to work with the new interpreter runtime, that scalar should be rewritten as a class, inheriting from `GraphQL::Schema::Scalar`, as shown here:
https://graphql-ruby.org/type_definitions/scalars.html#custom-scalars
Actually, it looks like someone already did that update, and v2.0.0 includes a class-based definition:
https://github.com/jetruby/apollo_upload_server-ruby/blob/d802a5be6942da23d1bdeeaf1f63a7f3487503b7/lib/apollo_upload_server/upload.rb#L1-L17
What version of apollo_upload_server-ruby are you using? Does it work if you update to 2.0.0?
username_2: @username_1 Pointing the `apollo_upload_gem` gem to repo right after that change was introduced fixed my problem. :)
Thanks so much for your help!
username_1: Glad to hear it, thanks for following up! |
launchdarkly/c-client-sdk | 1149026863 | Title: After LDFree(listener), it still access listener which will cause segment crash.
Question:
username_0: **Describe the bug**
```c
void
LDi_storeUnregisterListener(
    struct LDStore *const store, const char *const flagKey, LDlistenerfn op)
{
    struct LDStoreListener *listener, *previous;
    LD_ASSERT(store);
    LD_ASSERT(flagKey);
    LD_ASSERT(op);
    previous = NULL;
    LDi_rwlock_wrlock(&store->lock);
    for (listener = store->listeners; listener; listener = listener->next) {
        if (listener->fn == op && strcmp(flagKey, listener->key) == 0) {
            if (previous) {
                previous->next = listener->next;
            } else {
                store->listeners = listener->next;
            }
            LDFree(listener->key);
            LDFree(listener);
        } else {
            previous = listener;
        }
    }
    LDi_rwlock_wrunlock(&store->lock);
}
```
There is a bug here: after `LDFree(listener)`, the for-loop update still accesses `listener` (`listener = listener->next`), which will cause a segmentation fault.
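A common fix for this class of bug, sketched below on a generic list rather than the SDK's actual code, is to save the `next` pointer before any node may be freed and advance through the saved copy:

```c
#include <stdlib.h>
#include <string.h>

struct node { char *key; struct node *next; };

/* Helper for building test lists; allocates a copy of the key. */
static struct node *make_node(const char *key, struct node *next) {
    struct node *n = malloc(sizeof *n);
    n->key = malloc(strlen(key) + 1);
    strcpy(n->key, key);
    n->next = next;
    return n;
}

/* Remove every node whose key matches. `next` is captured before the node
 * may be freed, so the loop never dereferences freed memory. */
static struct node *remove_key(struct node *head, const char *key) {
    struct node *cur = head, *prev = NULL, *next;
    while (cur) {
        next = cur->next;                /* save BEFORE a possible free */
        if (strcmp(cur->key, key) == 0) {
            if (prev) prev->next = next; else head = next;
            free(cur->key);
            free(cur);
        } else {
            prev = cur;
        }
        cur = next;                      /* advance via the saved pointer */
    }
    return head;
}
```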
Answers:
username_1: Hi @username_0, thank you for the report. We will take a look.
_Filed internally as 143438_.
username_1: Hi @username_0 , this should be fixed in [2.4.4](https://github.com/launchdarkly/c-client-sdk/releases/tag/2.4.4).
Status: Issue closed
|
zhengshipeng/Web-learning-exchange | 123761613 | Title: JS triangle type detection
Question:
username_0: ```javascript
window.onload = function(){
    //Given three number arguments (side lengths), return a number representing the triangle type
    //Hint: you could first compute the triangle's 3 angles and then compare them
    /*
     *Returns the triangle type
     * 0: cannot form a triangle
     * 1: acute
     * 2: right
     * 3: obtuse
     * */
    //Method: a+b>c; a²+b²>c² => acute, a²+b²=c² => right, a²+b²<c² => obtuse
    //Write the three numbers into an array and sort ascending with sort(fn); without a compare function, sort converts values to strings via toString() and orders them by ASCII
    /*function compact(num1,num2){
        if(num1<num2){
            return -1;
        }else if(num1>num2){
            return 1;
        }else{
            return 0;
        }
    }*/
    function compact(num1,num2){
        return num1-num2;
    }
    function triangleType(a,b,c){
        var nums = [a,b,c];
        var numbers = nums.sort(compact);
        if(numbers[0]+numbers[1]>numbers[2]){
            var one = numbers[0]*numbers[0];
            var two = numbers[1]*numbers[1];
            var three = numbers[2]*numbers[2];
            var sum = one+two-three;
            if(sum>0){
                return 1;
            }else if(sum<0){
                return 3;
            }else{
                return 2;
            }
        }else{
            return 0;
        }
    }
    console.log(triangleType(5,4,6));//1
    console.log(triangleType(2,4,6));//0
    console.log(triangleType(6,4,6));//1
    console.log(triangleType(3,4,5));//2
    console.log(triangleType(6,4,4));//3
}
``` |
veganaut/veganaut | 106203587 | Title: dots of ?-locations should be as big & opaque as 3-twig locations
Question:
username_0: Location dots of locations without an offer quality rating are now shown minimally small and opaque. This does not make sense since most of those locations are actually quite good, vegan-wise, and it's just that people didn't bother to rate them. Therefore, I think we should show them as big as 3-twig locations, i.e. with middle size and opacity.
Status: Issue closed
Answers:
username_1: One could also do something like this:
 |
nodejs/help | 186014097 | Title: DeprecationWarning: Calling an asynchronous function without callback is deprecated.
Question:
username_0: * **Version**: 7.0.0
* **Platform**: Windows 10 x64
* **Subsystem**: fs
<!-- Enter your issue details below this comment. -->
I'm getting this warning:
`(node:7512) DeprecationWarning: Calling an asynchronous function without callback is deprecated.`
The offending line of the stack trace:
`at Object.fs.write (fs.js:698:14)`
Using this code:
```
static write(fileDescriptor, buffer, offset, length, position) {
  return new Promise(function(resolve, reject) {
    fs.write(fileDescriptor, buffer, offset, length, position, function(error, written, buffer) {
      if(error) {
        reject(error);
      }
      else {
        resolve({
          'written': written,
          'buffer': buffer,
        });
      }
    });
  });
}
```
I'm not sure why it is complaining about the callback missing, as there is one provided. Anyone know what I am doing wrong?
Answers:
username_1: take the comma `,` away after buffer
username_2: Got this solved here:
https://github.com/nodejs/node/issues/9346
Status: Issue closed
|
swc-project/swc | 935338949 | Title: Optional commonjs dependency (in try/catch) causes bundling to fail
Question:
username_0: **Describe the bug**
When using `chokidar` as a dependency, the optional `fsevents` module causes bundling to fail on Linux, as `fsevents` cannot be installed there.
**Input code**
An MVP that reproduces this is here: https://github.com/username_0/spack-optional-dependency-mvp
**Config**
N/A
**Expected behavior**
I expect the bundler to detect the `try/catch` context for the `require()` and allow the missing dependency.
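For reference, this is a sketch of the optional-dependency pattern in question: the platform-specific module is loaded inside try/catch and the feature degrades gracefully when it is missing.

```javascript
// Optional-dependency pattern: if the platform-specific module is not
// installed (e.g. fsevents on Linux), fall back to null instead of crashing.
let fsevents = null;
try {
  fsevents = require("fsevents");
} catch (err) {
  // module unavailable on this platform; the feature is simply disabled
}
```

A bundler that statically resolves every `require()` fails here unless it treats a `require` inside try/catch as allowed to be missing.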
**Version**
The version of @swc/core: 1.2.62
**Additional context**
N/A |
UCSC-MedBook/MedBook-Wrangler | 164954449 | Title: Make it obvious that 1 Submission can encompass multiple files
Question:
username_0: The UI for editSubmission doesn't make it obvious that more than 1 file can be uploaded & processed in parallel.
Any suggestions for how to improve? Here's my thoughts, but I'm not satisfied with this solution:
Above import form (which has From File / Url / Blob), text "Add your files to the submission:"
When the first file is added, text appears "You can add more files with the buttons above!" |
nose-devs/nose2 | 45871772 | Title: changelog.rst does not describe differences in 0.5.0
Question:
username_0: PyPI has 0.5.0 but the changelog.rst only describes changes up to 0.4.7. What's new in 0.5.0?
Answers:
username_1: I'm also interested in the changes introduced in v0.5.0.
If I'm not mistaken, `changelog.rst` still hasn't been updated since July 2013.
Status: Issue closed
|
cole-trapnell-lab/cicero-release | 433927371 | Title: Error in preprocessing
Question:
username_0: Hi Hannah,
I have successfully run the cicero pipeline on 10x-generated data, but for one of the samples I get the following error:
```
Error in if (any(i < 0L)) { : missing value where TRUE/FALSE needed
Calls: [<- -> [<- -> [<- -> [<- -> int2i
In addition: Warning message:
In int2i(as.integer(i), n) : NAs introduced by coercion to integer range
```
Does this error point to anything about how the data is structured for this sample? I tried to check where this error is raised, and I could see that it comes from the preprocess/reduceDimension functions, but I cannot figure out why.
Can you help in troubleshooting this?
Thank you
Sasi
Answers:
username_1: There've been a couple of reports of this problem but I haven't been able to reproduce it with the publicly available 10x datasets. A couple of questions: is this human data? Is there any possibility of NAs, Infs or NaNs in your expression matrix?
username_0: This is mouse data; I will check if and how the NAs are introduced.
username_1: Ok, let me know if possible. Also happy to debug myself if you can share the data
username_0: Hi Hannah,
Looking into the data, it looks like an issue similar to https://github.com/sdparekh/zUMIs/issues/105
This happens when we have data from a larger number of cells (in my case 14,000 cells failed while 10,000 succeeded).
https://bitbucket.org/hrue/r-inla/issues/1/logical-indexing-for-large-matrices-fails
Thank you
Sasi
username_0: `> traceback()
7: int2i(as.integer(i), n)
6: `[<-`(x, i, value = value)
5: `[<-`(x, i, value = value)
4: `[<-`(`*tmp*`, ncounts != 0, value = 1)
3: `[<-`(`*tmp*`, ncounts != 0, value = 1)
2: normalize_expr_data(cds, norm_method, pseudo_expr)
1: reduceDimension(input_cds, max_components = 2, num_dim = 6, reduction_method = "tSNE",
norm_method = "none")`
In normalize_expr_data for the binomialff family:
```
FM <- exprs(input_cds)
ncounts <- FM > 0
ncounts[ncounts != 0] <- 1
```
The last assignment is causing issues with larger matrices.
Status: Issue closed
username_1: Thanks for the traceback and info. I just pushed a fix to monocle3, so if you rerun the installation described here: http://cole-trapnell-lab.github.io/monocle-release/monocle3/ It should work now!
username_0: hi @username_1 ,
I was trying Cicero with the latest monocle3 installed and i am still facing the same issue with the error
`Error in if (any(i < 0L)) { : missing value where TRUE/FALSE needed Calls: [<- -> [<- -> [<- -> [<- -> int2i In addition: Warning message: In int2i(as.integer(i), n) : NAs introduced by coercion to integer range`
What is the current recommendation for running Cicero on large datasets? Which monocle version?
Thank you
Sasi
username_2: Hi
Thanks for creating this amazing tool. Unfortunately I'm also having the same issue.
```
Warning message in int2i(as.integer(i), n):
"NAs introduced by coercion to integer range"
Error in if (any(i < 0L)) {: missing value where TRUE/FALSE needed
Traceback:
1. reduceDimension(input_cds, max_components = 2, num_dim = 15,
. reduction_method = "tSNE", norm_method = "none")
2. normalize_expr_data(cds, norm_method, pseudo_expr)
3. `[<-`(`*tmp*`, ncounts != 0, value = 1)
4. `[<-`(`*tmp*`, ncounts != 0, value = 1)
5. as(.TM.repl.i.mat(as(x, "TsparseMatrix"), i = i, value = value),
. "CsparseMatrix")
6. .class1(object)
7. .TM.repl.i.mat(as(x, "TsparseMatrix"), i = i, value = value)
8. `[<-`(x, i, value = value)
9. `[<-`(x, i, value = value)
10. int2i(as.integer(i), n)
```
In my case I have ~12,000 cells
username_1: Cicero should now be up to date with the new Monocle3 beta (https://cole-trapnell-lab.github.io/monocle3/) on it's monocle3 branch. Can you check out the latest instructions here https://cole-trapnell-lab.github.io/cicero-release/docs_m3/ and let me know if that solves your issue? |
keybase/client | 1110428315 | Title: Keybase stuck on 'Loading' page in windows 11
Question:
username_0: I have Keybase installed on my old machine. When I try to install it on my new machine, it gets stuck on the loading page for a long time. I did run `keybase log send` from my terminal.
Do I need to make any other changes?

Answers:
username_1: Would you agree to say it's dead? https://github.com/keybase/client/issues/24577
username_2: For me this was caused by a missing file:
%localappdata%\keybase\keybaserq.exe
Keybase works properly again on my windows 11 if I either manually extract that from the .msi with 7zip, or uninstall/reinstall Keybase. A missing file should cause an overall installation failure, not sure what happened there and I can't reproduce it. |
dnnsoftware/Dnn.AdminExperience.Extensions | 331889872 | Title: Pages: last page in the treeview shows extra white space and scroll if user enabled scheduling
Question:
username_0: The problem is that the date picker reserves some space before it even appears. Moving the date picker to show on top should resolve this.
## Steps to reproduce
1. Open the installation
2. Open PB > Manage > Pages
3. in the pages options - content section
4. Toggle enable schedule button
5. Scroll the page and see the content and scrollbar
## Expected results
The page container is not deformed.
## Results
extra white space appears https://www.screencast.com/t/BpVHxIS7
Answers:
username_0: Fixed by #504
Status: Issue closed
|
clc/eyes-free | 59718273 | Title: [TalkBack] Progressbar focus issue.
Question:
username_0: ```
What steps will reproduce the problem?
1. PlayStore-> App Download start
2. progressbar touch,Focus is revoked when Refresh
What is the expected output? What do you see instead?
Focus isn't revoked when Refresh
What version of the product are you using? On what operating system?
Android 4.2.1, talkback 3.3.*
Please provide any additional information below.
```
Original issue reported on code.google.com by `<EMAIL>` on 24 Apr 2013 at 8:46
Status: Issue closed
Answers:
username_0: ```
Please provide a screenshot of where this issue is occurring.
```
Original comment by `<EMAIL>` on 24 Apr 2013 at 7:08
username_0: ```
[deleted comment]
```
username_0: ```
No reply in over 1 month. Closing this report. Please file a new issue with
the information requested.
```
Original comment by `<EMAIL>` on 29 May 2013 at 7:44
* Changed state: **Invalid** |
symfony/symfony | 148853132 | Title: CliDumper Call to undefined function iconv_strlen()
Question:
username_0: I don't know why I see this error in a Laravel project when I write
```
dd('Ahmed');
```
```
FatalThrowableError in CliDumper.php line 172:
Fatal error: Call to undefined function Symfony\Component\VarDumper\Dumper\iconv_strlen()
```
Answers:
username_1: You need to install the iconv extension. It's surprising that it's not already installed on your PHP.
Alternatively, you can `composer require symfony/polyfill-iconv`
Status: Issue closed
username_0: Oh, I installed php70-iconv and it works fine now. Thanks for the reply.
ssbc/scuttlebutt-protocol-guide | 940641056 | Title: Order of JSON keys
Question:
username_0: Possible resolutions:
- Do nothing: "we know what we mean by JSON, it's not what RFC says but we don't care"
- Explicitly say that this is "ordered JSON" and some additional constraints apply to the ones specified in the RFC
- Specify how the keys are to be ordered for the canonical serialization and remove the constraint regarding the position of the signature. This could be done with a transition phase or exception for old content in which the order provided in the message is used. |
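For context on why this matters: JavaScript's `JSON.stringify` serializes non-numeric keys in insertion order, so two semantically equal objects can yield different byte strings, and therefore different signatures. The key names below are illustrative, not actual scuttlebutt message fields.

```javascript
// Same data, different key insertion order.
const a = JSON.stringify({ previous: null, author: "@abc" });
const b = JSON.stringify({ author: "@abc", previous: null });
console.log(a);       // {"previous":null,"author":"@abc"}
console.log(a === b); // false: a canonical serialization must pin the order
```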
labstack/echo | 776104120 | Title: "any" routing bug if path and any route share prefix
Question:
username_0: ### Issue Description
#1563 introduced a bug where the wrong route is selected if there's an any route for a folder e.g. `/users/*` and a requested path has the folder as a prefix e.g. `/users_prefix/`.
Having the following two any routes registered:
`/*`
`/users/*`
### Expected behaviour
GET `/users` => `/*`
GET `/users/` => `/users/*`
GET `/users_prefix` => `/*`
GET `/users_prefix/` => `/*`
### Actual behaviour
GET `/users` => `/*`
GET `/users/` => `/users/*`
GET `/users_prefix` => `/*`
GET `/users_prefix/` => `/users/*` which is wrong
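The failure mode can be illustrated with a minimal sketch (this is not echo's actual router code, just the bug class): a matcher that only checks the static prefix selects the any route for paths that merely share the prefix, while a matcher that requires a path-segment boundary reproduces the expected table.

```go
package main

import "strings"

// buggyMatch only checks that the path starts with the route's static
// prefix, so "/users_prefix/" incorrectly resolves to "/users/*".
func buggyMatch(path string) string {
	if strings.HasPrefix(path, "/users") && strings.HasSuffix(path, "/") {
		return "/users/*"
	}
	return "/*"
}

// segmentMatch adds the boundary check: the byte after the static prefix
// must be '/' for the "/users/*" route to apply.
func segmentMatch(path string) string {
	const p = "/users"
	if strings.HasPrefix(path, p) && len(path) > len(p) && path[len(p)] == '/' {
		return "/users/*"
	}
	return "/*"
}
```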
### Version/commit
6119aecb16b39eae121ef620d665ca0ff00b21ba
Status: Issue closed |
weibocom/motan | 155891513 | Title: Does motan support asynchronous invocation? Is there any sample code?
Question:
username_0: Looking at motan's configuration manual, it seems asynchronous invocation is supported. Is there any sample code? Thanks.
Answers:
username_1: The transport layer is asynchronous and needs no configuration, but the business-side calling code is blocking.
For example, if the method `hello.say("world");` takes 200ms, then after making it an RPC with motan, the client-side call will also take 200ms (ignoring network overhead); but when a network timeout or similar occurs, the client throws an exception after the timeout and aborts the request instead of blocking forever.
We are also considering how to more elegantly make business-code calls to client methods asynchronous; ideas and discussion are welcome.
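As a generic sketch of what such an async client call could look like (the names `AsyncHelloClient` and `say` are made up here, this is not motan's API): the stub returns a future immediately, and the caller decides when, or whether, to block, similar in spirit to dubbo's `RpcContext.getContext().getFuture()`.

```java
import java.util.concurrent.CompletableFuture;

class AsyncHelloClient {
    // Returns immediately; in a real RPC client this would dispatch a
    // request on the transport layer and complete the future from the
    // response handler rather than computing locally.
    CompletableFuture<String> say(String name) {
        return CompletableFuture.supplyAsync(() -> "hello " + name);
    }
}
```

The caller can then do other work and call `future.join()` only when the result is actually needed.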
username_0: I assume you have studied dubbo; dubbo writes it like this:
```
demoService.hello("world");
fooFuture = RpcContext.getContext().getFuture();
```
Does motan have any plans regarding this?
username_1: @username_0 see #112
Status: Issue closed
|
cfpb/hmda-frontend | 736342079 | Title: Add link for "For data collected in 2018 incorporating the 2018 HMDA rule"
Question:
username_0: https://s3.amazonaws.com/cfpb-hmda-public/prod/help/2018-hmda-fig-2018-hmda-rule.pdf
Answers:
username_0: Hey @chynnakeys @username_1, what do you think of the following options? Also happy to draft more options if neither of these feels right.
## Option 1
<img width="443" alt="Screen Shot 2020-11-05 at 10 08 44 AM" src="https://user-images.githubusercontent.com/2592907/98284779-dcfeb900-1f5e-11eb-81d6-8f74b64a1e33.png">
## Option 2

username_1: I prefer Option 2, since it more clearly shows they both fall under 2018 data collection.
Status: Issue closed
|
GEOS-ESM/MAPL | 1036561479 | Title: Load imbalance from new profiler?
Question:
username_0: Query for @tclune or @weiyuan-jiang, not sure. The old MAPL timers are by default the MINMAX style:
```
Times for TURBULENCE
TOTAL : 4.086 4.948 5.400
-RUN1 : 2.614 3.203 3.570
--DIFFUSE : 1.360 1.677 1.858
--REFRESHKS : 1.224 1.494 1.683
---PRELIMS : 0.083 0.103 0.123
...
```
Where I think the three columns are min/mean/max (@bena-nasa might know if I'm right). So we can get a rough idea of load imbalance.
The new timers show:
```
Time for TURBULENCE
Inclusive Exclusive
================ ================
Name #-cycles T (sec) % T (sec) %
-------- --------- ------ --------- ------
TURBULENCE 579 5.416 100.00 0.001 0.03
--SetService 1 0.008 0.15 0.008 0.15
----GenSetService 1 0.000 0.00 0.000 0.00
--Initialize 1 0.056 1.04 0.000 0.00
----GenInitialize 1 0.056 1.04 0.000 0.00
------GenInitialize_self 2 0.056 1.04 0.056 1.04
--Record 192 0.001 0.02 0.001 0.02
----Record_self 192 0.000 0.00 0.000 0.00
--Run 192 3.573 65.98 3.573 65.98
--Run2 192 1.561 28.82 1.561 28.82
--Finalize 1 0.215 3.97 0.000 0.00
----Final_self 1 0.215 3.97 0.215 3.97
```
Which seems to be just the max node. I was wondering: is there a flag/option one can pass to the profiler to get some sense of load imbalance?
Answers:
username_0: The new timers seem to provide this with min/mean/max. Closing
Status: Issue closed
|
sequelize/sequelize | 624045308 | Title: There's no way to know alias for nested entity field
Question:
username_0: ## Issue Description
### What are you doing?
I need to apply MySQL's CONCAT to my nested entity fields, but when I try to use the col() function to pass the fields as arguments, they don't have aliases and the query crashes.
Here is a small demo of the situation:
```js
const sequelize = new Sequelize('sqlite::memory:');
class A extends Model {}
A.init({
a1: DataTypes.STRING,
a2: DataTypes.STRING,
}, { sequelize, modelName: 'A' });
class B extends Model {}
B.init({
b1: DataTypes.STRING,
b2: DataTypes.STRING
}, { sequelize, modelName: 'B' });
class C extends Model {}
C.init({
c1: DataTypes.STRING,
c2: DataTypes.STRING
}, { sequelize, modelName: 'C' });
class D extends Model {}
D.init({
d1: DataTypes.STRING,
d2: DataTypes.STRING
}, { sequelize, modelName: 'D' });
A.hasOne(B, { as: 'B' });
B.hasOne(C, { as: 'C' });
C.hasOne(D, { as: 'D' });
sequelize.sync()
.then(() => A.findAll({
include: [{
model: B,
as: 'B',
include: [{
model: C,
as: 'C',
include: [{
model: D,
as: 'D',
where: {
[Op.and]: [{ d1: 'test' }, fn('CONCAT', col('d1'), col('d2'))]
}
}]
}]
}]
}))
.then(result => {
console.log(result);
});
```
[Truncated]
## Issue Template Checklist
<!-- Please answer the questions below. If you don't, your issue may be closed. -->
### How does this problem relate to dialects?
<!-- Choose one. -->
- [ ] I think this problem happens regardless of the dialect.
- [ ] I think this problem happens only for the following dialect(s): <!-- Put dialect(s) here -->
- [x] I don't know, I was using mysql2, with connector library version 2.1.0 and database version 8.0.18
### Would you be willing to resolve this issue by submitting a Pull Request?
<!-- Remember that first contributors are welcome! -->
- [ ] Yes, I have the time and I know how to start.
- [ ] Yes, I have the time but I don't know how to start, I would need guidance.
- [ ] No, I don't have the time, although I believe I could do it if I had the time...
- [x] No, I don't have the time and I wouldn't even know how to start.
Answers:
username_1: Hello. Question: `CONCAT` returns a string, right? Why are you performing an `AND` on it in a `where` clause, as if you were checking a condition?
username_0: Thanks @username_1, updated my query to make it more understandable
username_2: Duplicate of https://github.com/sequelize/sequelize/issues/10732
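A workaround note for readers who land here: one commonly suggested direction is to reference the joined columns through the full association path instead of the bare column name. The sketch below is illustrative only; the `->` separator reflects how Sequelize aliases nested includes in its generated SQL, so double-check it against the SQL your query actually produces.

```javascript
// Hypothetical helper: build the column reference for a nested include chain.
// Sequelize names joined tables by joining association aliases with "->",
// e.g. the include chain B -> C -> D above becomes the table alias "B->C->D".
function nestedCol(path, column) {
  return path.join('->') + '.' + column;
}

console.log(nestedCol(['B', 'C', 'D'], 'd1')); // → B->C->D.d1
```

With the models above, `col('d1')` would become `col(nestedCol(['B', 'C', 'D'], 'd1'))`, i.e. `col('B->C->D.d1')`. The `$B.C.D.d1$` string syntax inside `where` objects is a related mechanism that may be worth trying as well.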
Status: Issue closed
|
holoviz/holoviews | 505343599 | Title: datashaded image not zooming/panning properly
Question:
username_0: Using https://daac.ornl.gov/daacdata/cms/CMS_DARTE_V2/data/onroad_2017.tif and the following code, I can show a geotiff:
```
import xarray as xr, holoviews as hv, colorcet as cc
from holoviews.operation.datashader import datashade
hv.extension('bokeh', logo=False)
emissions = xr.open_rasterio('CMS_DARTE_V2_1735/data/onroad_2017.tif').load()[0]
datashade(hv.Image(emissions, ['x','y']), cmap=cc.fire).opts(width=500, invert_yaxis=True)
#hv.Image(emissions, ['x','y']).opts(cmap=cc.fire, width=500, invert_yaxis=True)
```

However, zooming and panning don't work properly; the image jumps around and gets cut off, and in some scenarios shows only garbage:



Panning in the x direction seems to work fine, so I'm suspecting that it's to do with the inverted y axis; panning in y leaves the data in the same spot in the plot axes but the bounding box of the data moves up. If I uncomment the non-datashaded `hv.Image` line, zooming and panning work fine, but the image is only barely visible and of course it's quite slow.
I'm on git master for the various HoloViz projects:
holoviews=1.13.0a10.post6+gc2f0c7bf6
datashader=0.8.0a1.post4+ge9a2889
geoviews=1.6.4a3
hvplot=0.5.0a1.post6+g8784838
xarray=0.13.0
Answers:
username_1: Hello, a simple reproducer if it can help:
```python
import numpy as np
import holoviews as hv
from holoviews.operation.datashader import rasterize
hv.extension('bokeh')
ls = np.linspace(0, 10, 2000)
xx, yy = np.meshgrid(ls, ls)
bounds=(-1,-1,1,1) # Coordinate system: (left, bottom, right, top)
img = hv.Image(np.sin(xx)*np.cos(yy), bounds=bounds)
rasterize(img).opts(hv.opts.Image(invert_axes=True))
```
username_2: At least the simple reproducer example appears to be working now.
username_2: Don't have access to the geotiff either, so please confirm @username_0.
username_3: With the latest holoviews release, this seems broken again. 😢
It's broken with the simple reproducer.
It's also broken with my usual demos like
https://nbviewer.jupyter.org/gist/username_3/df96011574d2bc86b04a8e9f7c4393a2

```
# Name Version Build Channel
holoviews 1.13.5 pyh9f0ad1d_0 conda-forge
datashader 0.11.1 pyh9f0ad1d_0 conda-forge
geoviews-core 1.8.2 py_0 conda-forge
hvplot 0.6.0 pyh9f0ad1d_0 conda-forge
bokeh 2.2.3 py38h32f6830_0 conda-forge
```
username_2: That's super bizarre, the latest release didn't touch any of that code. It really just fixed a very small number of things.
username_2: Looking into it now.
username_2: Can't reproduce the issue for some reason, but the example works if you "pre-project" it with `project=True`, it doesn't seem to like projecting the image after the fact.
```python
ds.Significant_height_of_combined_wind_waves_and_swell_surface[-42:,:,:].hvplot(x='lon', y='lat',
cmap='rainbow', rasterize=True, projection=crs, project=True, coastline=True,
widget_type='scrubber', widget_location='bottom')
```
username_2: Also I don't believe the simple reproducer ever worked correctly, as this issue was never closed, but your example does not use `invert_axes` or `invert_yaxis`.
username_3: @username_2 , okay, I didn't realize these were separate issues.
It does look like the simple reproducer worked on June 7? https://github.com/holoviz/holoviews/issues/4052#issuecomment-640336446
username_2: @username_3 I don't believe what you are seeing is a regression from previous version, the example you posted never seems to have worked correctly, almost certainly due to projection issues in GeoViews. The only way of handling that correctly for the time being is to use `project=True` which projects the mesh before it is rasterized with datashader. I've gone back to master on the day I reported the simple reproducer working and it didn't in fact work, so I was lying :/
Handling inverted axes correctly does not seem like it would be hard so I'll assign to the next milestone.
username_2: Actually there's one mystery left, which is that I was not able to reproduce the axes jumping around like in your gif.
username_3: @username_2, do you have access to https://staging.aws-uswest2.pangeo.io?
This is the environment I'm testing in.
username_0: The original geotiff file is from https://daac.ornl.gov/cgi-bin/dsviewer.pl?ds_id=1735 , which is freely accessible but requires registering and logging in, so I sent it privately to @username_2.
username_3: [](https://aws-uswest2-binder.pangeo.io/v2/gh/reproducible-notebooks/HRRR_Dashboard/binder?urlpath=git-pull?repo=https://github.com/reproducible-notebooks/HRRR_Dashboard%26amp%3Bbranch=master%26amp%3Burlpath=lab/tree/HRRR_Dashboard/wavewatch3.ipynb%3Fautodecode)
username_0: I've just tested the original issue again with holoviews=1.13.5, hvplot=0.6.0, datashader=0.10.0, and found no change; still pans properly in x but pans "invertedly" in y.
Status: Issue closed
username_2: I'm suspecting this issue is actually a bug in datashader but ran out of time investigating it.
username_4: using dask.array raises an error in datashader with invert_axes. When not using dask, no error is raised, but the output is incorrect.
This example is perhaps not strictly minimal, but it raises the error:
```python
import xarray as xr
import dask.array
import holoviews as hv
hv.extension('bokeh')
from holoviews.operation.datashader import rasterize
da = xr.DataArray(dask.array.ones((838,1259)), dims=['atrack','xtrack'])
da = da.assign_coords(atrack=da.atrack.astype(float), xtrack = da.xtrack.astype(float))
dasel = da.isel(atrack=slice(323, 519), xtrack=slice(529, 729))
rasterize(hv.Image(dasel).opts(invert_axes=False)) # OK
rasterize(hv.Image(dasel.compute()).opts(invert_axes=True)) # no error, incorrect output
rasterize(hv.Image(dasel).opts(invert_axes=True)) # ValueError: range() arg 3 must not be zero
WARNING:param.dynamic_operation: Callable raised "ValueError('range() arg 3 must not be zero')".
Invoked as dynamic_operation(height=400, scale=1.0, width=400, x_range=(528.5, 728.5), y_range=(322.5, 518.5))
WARNING:param.dynamic_operation: Callable raised "ValueError('range() arg 3 must not be zero')".
Invoked as dynamic_operation(height=400, scale=1.0, width=400, x_range=(528.5, 728.5), y_range=(322.5, 518.5))
Traceback (most recent call last):
File "/home/username_4/anaconda3/envs/xsar/lib/python3.9/site-packages/holoviews/plotting/util.py", line 275, in get_plot_frame
return map_obj[key]
File "/home/username_4/anaconda3/envs/xsar/lib/python3.9/site-packages/holoviews/core/spaces.py", line 1341, in __getitem__
val = self._execute_callback(*tuple_key)
File "/home/username_4/anaconda3/envs/xsar/lib/python3.9/site-packages/holoviews/core/spaces.py", line 1110, in _execute_callback
retval = self.callback(*args, **kwargs)
File "/home/username_4/anaconda3/envs/xsar/lib/python3.9/site-packages/holoviews/core/spaces.py", line 714, in __call__
ret = self.callable(*args, **kwargs)
File "/home/username_4/anaconda3/envs/xsar/lib/python3.9/site-packages/holoviews/util/__init__.py", line 1042, in dynamic_operation
key, obj = resolve(key, kwargs)
File "/home/username_4/anaconda3/envs/xsar/lib/python3.9/site-packages/holoviews/util/__init__.py", line 1031, in resolve
return key, map_obj[key]
File "/home/username_4/anaconda3/envs/xsar/lib/python3.9/site-packages/holoviews/core/spaces.py", line 1341, in __getitem__
val = self._execute_callback(*tuple_key)
File "/home/username_4/anaconda3/envs/xsar/lib/python3.9/site-packages/holoviews/core/spaces.py", line 1110, in _execute_callback
retval = self.callback(*args, **kwargs)
File "/home/username_4/anaconda3/envs/xsar/lib/python3.9/site-packages/holoviews/core/spaces.py", line 714, in __call__
ret = self.callable(*args, **kwargs)
File "/home/username_4/anaconda3/envs/xsar/lib/python3.9/site-packages/holoviews/util/__init__.py", line 1043, in dynamic_operation
return apply(obj, *key, **kwargs)
File "/home/username_4/anaconda3/envs/xsar/lib/python3.9/site-packages/holoviews/util/__init__.py", line 1035, in apply
processed = self._process(element, key, kwargs)
File "/home/username_4/anaconda3/envs/xsar/lib/python3.9/site-packages/holoviews/util/__init__.py", line 1017, in _process
return self.p.operation.process_element(element, key, **kwargs)
File "/home/username_4/anaconda3/envs/xsar/lib/python3.9/site-packages/holoviews/core/operation.py", line 194, in process_element
return self._apply(element, key)
File "/home/username_4/anaconda3/envs/xsar/lib/python3.9/site-packages/holoviews/core/operation.py", line 141, in _apply
ret = self._process(element, key)
File "/home/username_4/anaconda3/envs/xsar/lib/python3.9/site-packages/holoviews/operation/datashader.py", line 1477, in _process
element = element.map(op, predicate)
File "/home/username_4/anaconda3/envs/xsar/lib/python3.9/site-packages/holoviews/core/data/__init__.py", line 205, in pipelined_fn
result = method_fn(*args, **kwargs)
File "/home/username_4/anaconda3/envs/xsar/lib/python3.9/site-packages/holoviews/core/data/__init__.py", line 1222, in map
return super(Dataset, self).map(*args, **kwargs)
File "/home/username_4/anaconda3/envs/xsar/lib/python3.9/site-packages/holoviews/core/dimension.py", line 710, in map
return map_fn(self) if applies else self
File "/home/username_4/anaconda3/envs/xsar/lib/python3.9/site-packages/holoviews/core/operation.py", line 214, in __call__
return self._apply(element)
File "/home/username_4/anaconda3/envs/xsar/lib/python3.9/site-packages/holoviews/core/operation.py", line 141, in _apply
ret = self._process(element, key)
File "/home/username_4/anaconda3/envs/xsar/lib/python3.9/site-packages/holoviews/operation/datashader.py", line 939, in _process
rarray = cvs.raster(xarr, upsample_method=interp,
File "/windows_shared/Develop/datashader/datashader/core.py", line 1058, in raster
data = resample_2d_distributed(
File "/windows_shared/Develop/datashader/datashader/resampling.py", line 243, in resample_2d_distributed
chunk_map = map_chunks(src.shape, (h, w), temp_chunks)
File "/windows_shared/Develop/datashader/datashader/resampling.py", line 112, in map_chunks
xchunks = list(range(0, outx, cxs)) + [outx]
ValueError: range() arg 3 must not be zero
```
username_2: @username_4 Thanks for the example, I can reproduce. For now I'd recommend just switching to an hv.QuadMesh instead of the Image but I'll make the appropriate fixes as well. |
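A footnote on the traceback above: the final failure is easy to reproduce in isolation. `map_chunks` builds chunk boundaries with `range(0, outx, cxs)`, so if the computed chunk step `cxs` ever comes out as 0 (which the inverted axes apparently provoke upstream), `range()` itself raises:

```python
# Minimal illustration of the failing call in datashader's resampling.py.
# The concrete numbers are made up; only the step of 0 matters.
outx, cxs = 200, 0
try:
    xchunks = list(range(0, outx, cxs)) + [outx]
except ValueError as exc:
    print(exc)  # range() arg 3 must not be zero
```

So the underlying fix concerns how that step is computed when axes are inverted, not `range()` itself.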
googleapis/python-access-context-manager | 909797302 | Title: Generate proto-plus types for this library
Question:
username_0: This library currently generates [`_pb2.py`](https://github.com/googleapis/python-access-context-manager/blob/master/google/identity/accesscontextmanager/v1/access_level_pb2.py) types via protoc.
I would like to move this set of protos to be generated via bazel. The repository will look like a full GAPIC library but only have types (no services). If Access Context Manager adds a service in the future the change will be additive (non-breaking). See [this repo](https://github.com/googleapis/python-iam-logging/tree/master/google/cloud/iam_logging_v1) for an example of the desired end state.
This is a breaking change, but I believe the blast radius will be small - this library is only installed as a dependency of `google-cloud-asset` (see [setup.py](https://github.com/googleapis/python-asset/blob/27ac4fb2456c2ff7da3b69b6e7657f7db1dfc8d5/setup.py#L33)).
CC @username_2 @username_1
Answers:
username_1: Sounds like the right direction to me.
For clarity, will this be a breaking change for _this_ package only, or will also require a break in Cloud Asset?
A break here only seems fine, but google-cloud-asset is published stable so we should think more about that if necessary.
username_0: Now that I think about this more, some `google-cloud-asset` usage will probably be impacted (it depends on how they instantiate the message). Perhaps there's a way to hide the change from end users. 🤔
Will hold on this for now, and address when/if a service is created for Access Context Manager.
username_2: @username_0 A service appears to be available for the v1 client [here](https://github.com/googleapis/googleapis-gen/tree/master/google/identity/accesscontextmanager/v1/identity-accesscontextmanager-v1-py/google/identity/accesscontextmanager_v1/services/access_context_manager)
username_0: 👍 let's discuss how to proceed tomorrow. |
pubkey/rxdb | 427422759 | Title: (future) bug / Server plugin: incompatibility with Express 5 alpha
Question:
username_0: ## Case
(future) Bug / Server plugin: incompatibility with Express 5 (still alpha at the time of writing)
## Issue
I faced some issues trying to run the RXDB server on Express 5.
The same happened with _pouchdb-express-router_ and _express-pouchdb_, the latter being used by the RxDB server plugin.
For reference, please see the tickets I opened on the PouchDB repos:
- [for _pouchdb-express-router_](https://github.com/pouchdb/pouchdb-express-router/issues/13)
- [for _express-pouchdb_](https://github.com/pouchdb/pouchdb-server/issues/381)
Express 5 is still alpha, and the issue seems to be located at the _express-pouchdb_ level, not RxDB.
But it impacts the RxDB server plugin, so I thought you may want to be informed about it, in order to anticipate if needed.
## Info
- Environment: Electron / NodeJS running Express 5
- Adapter: any
- Stack: any
- RxDB component: server plugin
## Investigations
Starting from Express 5, `req.query` now becomes read-only.
Both pouchdb libs are overwriting the `req.query` object (after parsing and other stuff), and this object is directly passed to the PouchDB methods.
This is convenient, and works with Express <= 4.x, but will not work with Express >= 5.x.
If `req.query` is read-only, weird things start happening: API calls with array params will fail, DB changes will not be detected and propagated to clients, ...
I quickly patched the _pouchdb-express-router_ files, and it seems to work fine now. That was quite straightforward, as this lib is very light.
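The shape of that patch, roughly (the names below are illustrative, not the router's actual code): instead of assigning parsed values back onto `req.query`, build a shallow copy, transform the copy, and hand that to PouchDB:

```javascript
// Sketch of the patch idea. Express 5 exposes req.query through a read-only
// getter, so mutating or reassigning it no longer works; operate on a
// shallow copy instead and never touch req.query itself.
function parsedQuery(req) {
  const query = Object.assign({}, req.query);
  if (typeof query.keys === 'string') {
    query.keys = JSON.parse(query.keys); // e.g. array params arrive JSON-encoded
  }
  return query; // pass this copy to the PouchDB method
}
```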
I couldn't do the same for _express-pouchdb_, so I guess you will have to wait for the PouchDB team to correct it.
## Correction
Wait for _express-pouchdb_ to be corrected...
For people who just want a simple RxDB Express router, with no Fauxton UI and so on, here is how I did it:
- create your own plugin _rxdb-express-router_, derived from the server plugin
- remove any link to _express-pouchdb_, and replace it with a patched version of _pouchdb-express-router_.
I'm not very experienced with RxDB / PouchDB, and don't use the RxDB server plugin myself, but I'll be happy to help if someone needs more information on all of this.
Hope this will help!
And, most of all: thanks for your work, I'm new to RxDB but I'm already quite impressed I must say ;)
Cheers.
Answers:
username_1: Hi @username_0
Thank you for opening this.
I normally do not have the time to fix things in the dependency-tree. So let's hope someone at pouchdb fixes this :)
I will leave this open so others can find it when they have a problem with express@5
Please ping here when this issue is fixed. |
netlify/build | 493804986 | Title: use jest-validate to print friendly messages for configuration screwups
Question:
username_0: **- Do you want to request a *feature* or report a *bug*?**
feature
**- What is the current behavior?**
hard crash on misconfiguration
**- What is the expected behavior?**
https://www.npmjs.com/package/jest-validate<issue_closed>
Status: Issue closed |
square/leakcanary | 184433111 | Title: NullPointerException
Question:
username_0: Device Nexus 9
OS : Android N [7.1]
```
In com.philips.platform.appframework:0.3.0:1.
* FAILURE:
java.lang.NullPointerException: Attempt to invoke virtual method 'boolean java.lang.String.equals(java.lang.Object)' on a null object reference
at com.squareup.leakcanary.HeapAnalyzer.findLeakingReference(HeapAnalyzer.java:160)
at com.squareup.leakcanary.HeapAnalyzer.checkForLeak(HeapAnalyzer.java:95)
at com.squareup.leakcanary.internal.HeapAnalyzerService.onHandleIntent(HeapAnalyzerService.java:57)
at android.app.IntentService$ServiceHandler.handleMessage(IntentService.java:67)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:154)
at android.os.HandlerThread.run(HandlerThread.java:61)
* Reference Key: 9e849138-524d-49ba-b652-bf44accd0ef0
* Device: htc google Nexus 9 volantisg
* Android Version: 7.0 API: 24 LeakCanary: 1.3.1
* Durations: watch=5022ms, gc=146ms, heap dump=2921ms, analysis=4695ms
```
Answers:
username_1: Please upgrade to the [latest release](https://github.com/square/leakcanary/blob/master/CHANGELOG.md) of LeakCanary.
Status: Issue closed
|
Cloudkibo/ChatBot | 218401154 | Title: Create a sample BOT on API.AI
Question:
username_0: ok. yes the credentials are also not working for me
Answers:
username_1: I have signed up api.ai
please login with the following Google login
<EMAIL>
Cloudkib0Cloudkib0
username_2: These credentials didn't work for me. It said invalid credentials.
@username_0 see if you can export your current bot or add me to your team there.
username_0: ok. yes the credentials are also not working for me
username_2: The sample bot for banking can be used here on our cloudkibo page and also on their own demo page. I have used the widget given by them here. Putting it here for record:
https://api.cloudkibo.com/#/login
https://bot.api.ai/f18e49e2-e74d-4f93-a20c-ff533a810cb6
@username_0 is working on the widget that I created yesterday. That widget doesn't contain powered by api.ai. She would soon put it here.
Next, I am integrating slack with our bot.
username_1: Thanks for the update.
For the credentials to work you have to log in to your Google account and then sign into API.ai with Google authentication
username_1: 
username_2: without creating slack app, for quick test, i have integrated the test bot with slack.. you can find it in direct messages room with name apiai_bot
@apiai_bot
When i create slack app, it would be ready to export to other teams and will appear in slack integrations gallery. Therefore, I used this quick integration method.
<img width="918" alt="screen shot 2017-04-04 at 1 13 30 pm" src="https://cloud.githubusercontent.com/assets/5811465/24647264/ae302f9e-1938-11e7-849c-6b46738929cb.png">
username_1: @username_0
Two things
1) The text does not wrap
2) Can we name the bot something like "Bearbal" or some wise man? He should do a welcome message and ask for a name, and we use these two names for the conversation.
username_0: Worked on this.
I have resolved the aforementioned issues. The revised code has been pulled into production
username_1: @username_0 wrapping txt is still an issue

username_1: is this still an issue?
I have signed up api.ai
please login with the following Google login
<EMAIL>
Cloudkib0Cloudkib0
username_0: please check now. It's fixed now

username_1: thanks |
zhengnianli/zhengnianli.github.io | 459539390 | Title: [C Language Notes] How do you check a data type's range? | 正念君的博客
Question:
username_0: https://username_0.github.io/2018/12/07/c-yu-yan-bi-ji-ru-he-cha-kan-shu-ju-lei-xing-fan-wei/
Knowledge point 1: checking integer ranges. Under your current build environment you may not know what the range of int is, or you may not remember whether the range of unsigned short is 0~65535 or 0~65536. In that case you can print the ranges with a program like the following:
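The listing below breaks off mid-include in this excerpt. A complete helper of the kind the post describes (my reconstruction, not the original listing) reads the ranges straight from `<limits.h>`:

```c
#include <stdio.h>
#include <limits.h>

/* Print the ranges of a few integer types for the current build environment.
   Call this from main(). */
void print_integer_ranges(void)
{
    printf("int           : %d .. %d\n", INT_MIN, INT_MAX);
    printf("unsigned short: 0 .. %u\n", (unsigned int)USHRT_MAX);
    printf("long          : %ld .. %ld\n", LONG_MIN, LONG_MAX);
}
```

On a typical platform with a 16-bit `unsigned short`, this settles the question from the post: `USHRT_MAX` is 65535, not 65536.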
#include <stdio.h>
#include |
DHTMLX/scheduler-helper-php | 164507200 | Title: Problem when "count" is "NaN" for the rec_type field
Question:
username_0: For a currently unknown reason we have database entries with `rec_type = week_NaN___1,2,3,4#` (just an example).
The consistent part across the broken entries is that `count` is always `NaN`.
I tried to figure out the best way to fix this but I'm somewhat lost.
Here is one row that causes the error:
| start_date | end_date | rec_type | event_pid | event_length |
|------------|----------|----------|-----------|--------------|
| 2016-06-29 00:00:00 | 9999-02-01 00:00:00 | week_NaN___1,2,3,4# | 0 | 86400 |
Thanks for your help in advance!
Answers:
username_1: Hello,
I'm not sure whether it's the best solution, at least while php-helper does not have any error logging.
NaN values come from the client side, which may mean that there is something wrong with the recurring control of the lightbox ('rec_type' is generated there).
Skipping such records on a backend will hide this issue until it pops up somewhere else.
rec_type string is generated in this method of dhtmlxscheduler_recurring.js
https://github.com/DHTMLX/scheduler/blob/v4.3.1/codebase/sources/ext/dhtmlxscheduler_recurring.js#L152
and week recurring from your example is created here
https://github.com/DHTMLX/scheduler/blob/v4.3.1/codebase/sources/ext/dhtmlxscheduler_recurring.js#L268
Here is the line which gets NaN:
`code.push(Math.max(1, get_value(els, "week_count")));`
'get_value' does not guarantee to return a numeric value, and that is probably the case here: when it returns an undefined value (the form does not have a 'week_count' control), you'll get NaN in the rec_type string -
```
Math.max(1, undefined);
--> NaN
```
There should be a check like the following:
`code.push(Math.max(1, (get_value(els, "week_count") * 1) || 0));`
There are several other places where such checking might be useful
https://github.com/DHTMLX/scheduler/blob/v4.3.1/codebase/sources/ext/dhtmlxscheduler_recurring.js#L303
https://github.com/DHTMLX/scheduler/blob/v4.3.1/codebase/sources/ext/dhtmlxscheduler_recurring.js#L253
https://github.com/DHTMLX/scheduler/blob/v4.3.1/codebase/sources/ext/dhtmlxscheduler_recurring.js#L256
We'll make a correction to a dev version of dhtmlxScheduler. If you can open a ticket in our support system we'll provide you with the latest build of the scheduler. Or you can patch dhtmlxscheduler_recurring.js locally; the next version of the component will contain a fix.
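The failure mode and the guard above can be checked in isolation, outside the scheduler:

```javascript
// get_value() can return undefined when the form has no 'week_count' control;
// Math.max(1, undefined) is then NaN, which is how "week_NaN" lands in the
// rec_type string. The guard coerces to a number and falls back to 0 first.
function safeCount(raw) {
  return Math.max(1, (raw * 1) || 0);
}

console.log(Math.max(1, undefined)); // NaN (the bug)
console.log(safeCount(undefined));   // 1   (guarded)
console.log(safeCount('3'));         // 3
```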
username_0: Hello @username_1!
Thanks for the fast reply!
My ticket id is "XJK-721971".
I'm closing my pull request #29 now because this solution is only temporary (like you said).
Status: Issue closed
|
spaghettidba/WorkloadTools | 487444415 | Title: Replay doesn't work with Azure SQL
Question:
username_0: I tested SqlWorkload against a standard Azure SQL database today and found something weird. None of the commands replayed because it kept giving the error that "USE" statements are not allowed in Azure SQL. When I filtered through the events in the SQLite replay db, none of them actually contained a "USE" statement.
Is SqlWorkload inserting the "USE" statement? If so, it is a bug.
Answers:
username_1: I'll look into it, thanks
Status: Issue closed
username_1: Can't reproduce
Status: Issue closed
|
opencontainers/image-tools | 192704551 | Title: ReadMe should include description of how to install
Question:
username_0: The ReadMe should include a description (or link to a description) for how to install and run the OCI Image Tools.
Answers:
username_0: Opening per discussion on OCI Dev ConCall. /cc @username_4
username_1: Can I copy some examples from the existing markdown to ReadMe?
[oci-create-runtime-bundle](https://github.com/opencontainers/image-tools/tree/master/cmd/oci-create-runtime-bundle)
[oci-image-validate](https://github.com/opencontainers/image-tools/blob/master/cmd/oci-image-validate/oci-image-validate.1.md)
[oci-unpack](https://github.com/opencontainers/image-tools/blob/master/cmd/oci-unpack/oci-unpack.1.md)
username_2: I'd rather link to those man pages to stay DRY, although very brief examples being copied over are ok (like runtime-tools did in opencontainers/runtime-tools#47). Another approach to this portion of getting-started is an all-command man page (linked from the README) like the one I floated in #5 [1].
#92 looks useful too.
[1]: https://github.com/opencontainers/image-tools/pull/5/files#diff-fc4edae5c4391cf1575e6630b1ba33b5R1
username_3: Definitely something should be done, either way
username_2: I think #92 covers everything folks want from this issue.
username_3: It might, when it's closed. Thanks for pointing it out, anyway.
username_2: Closable with #92 landed?
Status: Issue closed
|
vrkansagara/LaraOutPress | 403406595 | Title: Question on how to use this package
Question:
username_0: Hi, just a quick question on how to use this package. Do I need to set middleware, and if so which middleware? Do I need to add ->middleware('laraoutpress') in the route file web.php? Do I need to modify anything in app/http/kernel.php? etc. etc.
Thanks
Answers:
username_1: Hello, can you carefully follow the readme file first and implement it? If it doesn't work I will definitely help!
username_1: @username_0 Hope this issue is solved !
username_1: @username_0 , it looks like you solved your problem, so I'm closing this issue!
Status: Issue closed
|
danielkrupinski/Osiris | 688798930 | Title: cheat not working.
Question:
username_0: injecting through process hacker 2.
cheat won't start
Answers:
username_1: Have you tried using another Injector? Perhaps ExtremeInjector with manual map and secure mode. https://github.com/master131/ExtremeInjector/releases/tag/v3.7.3
username_0: csgo crashed.
username_2: process hacker not injecting right now (or injecting when u type -insecure on starts options)
use injector!
u can try this (I'm not the author) https://www.cshacked.pl/csh-injector-09072020-t156933/?tab=comments#comment-1022060
to inject u must:
1. add cheat and injector to one folder
2. change name of cheat to "cshacked.pl.dll"
3. open the injector as administrator
username_3: Do we need to use VACbypass first?
username_4: use sazzinjector, ive been cheating on osiris for 160 hours with sazz, no ban. |
LLNL/llnl.github.io | 447326599 | Title: RADIUSS Page
Question:
username_0: This is a top-level issue for tracking integration of RADIUSS documentation / resources.
Current idea:
- RADIUSS overview computation.llnl.gov "Computation stands behind RADIUSS"
- Sub-page under software.llnl.gov that has all the dynamic details for "how to integrate with / use RADIUSS" (software.llnl.gov/radiuss ?)
- RADIUSS project / team details on internal dev.llnl.gov site.
Answers:
username_1: Do we want the overview and the details to be at the same URL?
username_2: seems we could have a `llnl/radiuss` project to host website source and website, etc?
username_0: I was thinking it would just live in this repo.
username_2: hmm, i think in the future we will have sphinx docs, code examples - for things that showcase using multiple radiuss products. They may even need CI checks, etc. Not sure it would make sense to have all of that in this repo.
username_0: True. Some of those I would think might live in the "real" repos (like blt, spack, etc).
username_0: @username_3 --
https://github.com/LLNL/llnl.github.io/blob/master/explore/github-data/labReposInfo.json is the "master" json file that gets updated from @LRWeber 's script based on all of the orgs and repos from https://github.com/LLNL/llnl.github.io/blob/master/_explore/input_lists.json
I also created https://github.com/LLNL/llnl.github.io/tree/radiuss with a new `radiuss/index.html` that you should be able to modify to make work with the static data for RADIUSS projects.
username_2: @username_0 many projects have those types of materials, however we will need a place that hosts integration examples that will be more cross cutting. We can still create that repo for that purpose of course.
username_3: And reminder to myself to update the nav
username_3: @username_4 would you be able to help me this week or next? i have this partially figured out.
username_4: FYI: a repo can have up to 20 tags.
username_3: Bullet 2, PR #157 will result in software.llnl.gov/radiuss
Bullet 1, RADIUSS overview computation.llnl.gov "Computation stands behind RADIUSS" - I drafted content for comp external, comp internal, & hpc.llnl.gov (linking to software.llnl.gov/radiuss) - sending to Rob for review
In other words, the portal and "marketing" pieces are nearly done. That leaves Bullet 3 re: dev.llnl.gov or new repo or however it should be handled.
username_3: Bullet 1 is done:
https://hpc.llnl.gov/radiuss
https://computation.llnl.gov/projects/radiuss
username_3: Revised plan as of 6/19 (see also issue #17):
1. Redirect radiuss.llnl.gov to software.llnl.gov/radiuss - thus keeping radiuss work on the portal and within this repo
2. Update Comp & HPC pages with new URL
3. Update /radiuss "home page" with new design (6 categories but no "open source" header)
4. Pull in 'radiuss'-tagged repos by category
5. Build out radiuss sub-pages according to LC Confluence content (e.g., About RADIUSS, Best Practices)
username_0: @username_3 -- Looks like we got all the topics added.
username_5: In the intermediate/long term, the page referenced by bullet #2 (software.llnl.gov/radiuss) will need to be modified to somehow reference specific RADIUSS releases. It's handy to have links to the various repos, but there's no way to guarantee that arbitrary versions of all RADIUSS products will work together.
Relevant examples are CEED (https://ceed.exascaleproject.org/ceed-2.0/) and xSDK (https://xsdk.info/releases/)
As @username_2 points out, a separate repo will likely be needed to manage releases.
username_0: @username_5 - I'm still not convinced that a separate repo is necessary or a good idea for this. I really feel strongly that we should start by building those examples (and versioning how RADIUSS as a collection fits together) on this site under the `/radiuss/` subtree.
For instance you could build a page like that CEED 2.0 page at something like `https://software.llnl.gov/radiuss/v1/` showing how all the RADIUSS products fit together.
username_6: If this is helpful, the CEED 2.0 is just one markdown file in the CEED web repo, which is generated by MkDocs, similarly to https://github.com/mfem/web.
I can give you access if you want to see the source.
username_5: @username_0 - I agree, let's use the existing repo unless things evolve and a need to create a new one arises.
username_2: @username_0 @username_5 that sounds good - especially if this starts as documentation.
For context, here is the dream scenario that I think would require using another repo:
- clone a radiuss repo
- build all tpls with magic script that works on lc platforms
- hop into dirs with ready made examples, build and go
This would involve revision controlling many non-documentation things and also extensive CI testing, all of which wouldn't make sense to hoist upon the software portal website repo.
username_7: Hi all,
Thank you for participating in this debate, as it provides me with useful information to help design the RADIUSS guidelines and release plan.
Different aspects of the project have been mentioned so far, and I think this is why it may seem complicated to find an immediate and satisfying solution.
The short (though far from complete) version of the following comments could be :
**I agree with the idea of having the policy and project status versioned here, and the work done building and testing the coherency in another place.**
Now the long version
_Keypoints :_
K1. As of today, **some of the RADIUSS member projects are mature in their development and release processes** (let’s take Spack and MFEM as examples).
These projects may have their own policies, or **conform to existing policies** (xSDK/IDEAS).
These projects may be part of sdks/toolkits/bundles providing **convenient and coherent installations** (xSDK/E4S/CEED).
K2. One of the hot features expected with the RADIUSS project is to enhance continuous integration with the opportunity to **automate testing on LC clusters**.
A related goal is to ease the deployment on LC clusters.
K3. Another important goal is to provide help to projects with less mature development and release processes so that they can benefit from and share good practices.
K4. Tools within the RADIUSS scope may not share the same usage perimeter (development, benching, research) or the same build context (station, cluster, python, compiler), and focusing on an unique RADIUSS instance built under the almighty coherency constraint may have us missing our point.
_Remarks :_
R1. There are several practice examples that could easily be shared and have a positive impact.
Projects relying on BLT use **host-config files in development context**.
**In a deployment context, Uberenv** is an interesting attempt at automating the usage of host-config files (also in a spack installation context).
_Suggestions :_
S1. In my opinion, and based on the little knowledge I have of the situation, the actions taken now should aim to **ease and document the process of getting some projects into GitLab CI** in order to complete their development and release mechanisms with architecture-aware automated testing.
S2. The information displayed on the RADIUSS pages should reflect **the status of the project** rather than the final goal. Status information is useful in order to identify the next steps to focus on, and gives practical information on the tools and libs.
In this sense I like the "categories" on the RADIUSS page and the "pull request" and "issues" statistics of the explore page.
I personally think that build system, packaging, membership in a public SDK, and dependencies on other RADIUSS tools and libs would be useful information to have.
Of course, automated gathering of this information would be a plus.
Status: Issue closed
|
erickt/rust-zmq | 175804337 | Title: Creating context in scope hangs program when scope ends
Question:
username_0: The following minimal example hangs and blocks execution of the program; the message “Socket returned” is never printed. I suspect this is because the context gets destroyed at the end of the scope, while the socket built on top of that context still lives on. No idea how to fix it, but it would be great if this would throw some serious error message.
```rust
extern crate zmq;

fn main() {
    let address = Some("tcp://127.0.0.1:5556".to_string());
    // let mut context; // Uncomment to make it work
    let mut publisher = match address.as_ref() {
        Some(address) => {
            println!("Setting up socket");
            let mut context = zmq::Context::new(); // Comment to make it work
            // context = zmq::Context::new(); // Uncomment to make it work
            let mut publisher = context.socket(zmq::PUB).unwrap();
            assert!(publisher.bind(address).is_ok());
            println!("Returning socket");
            Some(publisher)
        }
        None => None,
    };
    println!("Socket returned");

    let data: u8 = 1;
    loop {
        let s = format!("data {}", data);
        println!("Sending {}", s);
        publisher.as_mut().unwrap().send_str(&s, 0).unwrap();
    }
}
```
Answers:
username_1: There was a PR quite a while ago that tied a socket's lifetime to the context: https://github.com/erickt/rust-zmq/pull/65. In this case, your code would not compile as the publisher outlives its context.
If this is a deal breaker, I'd suggest forking and patching your fork with jdpage's PR.
username_2: There's also #96, in which @tcosprojects suggests a solution involving using Arc<> inside the zmq bindings itself, which seems to be the most general/practical solution to me.
username_2: I've now merged #96, which makes this example code both compile and run.
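The `Arc`-based approach boils down to the following ownership pattern. This is a minimal, std-only sketch; the type and field names here are hypothetical illustrations, not the actual rust-zmq API, and the real bindings wrap the pointer returned by `zmq_ctx_new` instead of a dummy struct. Each socket keeps a clone of a reference-counted handle to the raw context, so the underlying context is only terminated when the last handle (context or socket) is dropped, regardless of which scope each lives in.

```rust
use std::sync::Arc;

// Stand-in for the raw 0MQ context; in the real bindings, dropping the
// last handle would call zmq_ctx_term on the wrapped pointer.
struct RawContext;

/// Cheap, clonable handle to the shared raw context.
#[derive(Clone)]
pub struct Context {
    raw: Arc<RawContext>,
}

/// Each socket holds its own Arc clone, keeping the raw context alive
/// for as long as the socket exists.
pub struct Socket {
    context: Arc<RawContext>,
}

impl Context {
    pub fn new() -> Context {
        Context { raw: Arc::new(RawContext) }
    }

    pub fn socket(&self) -> Socket {
        Socket { context: Arc::clone(&self.raw) }
    }
}

impl Socket {
    /// How many handles (contexts + sockets) still share the raw context.
    pub fn context_refcount(&self) -> usize {
        Arc::strong_count(&self.context)
    }
}
```

With this layout, the example from the issue no longer deadlocks: when the inner scope ends, only the `Context` handle is dropped, and context termination is deferred until the socket itself goes away.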
Status: Issue closed
|
rust-openvr/rust-openvr | 182318249 | Title: DLL placement
Question:
username_0: Where exactly do I need to be placing the DLL?
I've placed it in the project root (where Cargo.toml is), where the .exe is, and everywhere else I can think of and I'm still getting a "test.exe has stopped working" from Windows when calling `openvr::init()` =/
Answers:
username_1: Can you maybe also check the openvr logs? You can find them in the SteamVR developer settings. There should be a log for your exe in that folder (for instance test.exe.log)
username_0: There doesn't appear to be a SteamVR log for any of the executables I tried
username_0: This includes any of the examples, eg `cargo run --example test`
username_1: Now that's weird, I just realized that actually the dll is now (or should be) built into the openvr-sys crate.
Even without an hmd plugged in, the test example starts but prints an error and exits.
username_1: Maybe you are using a dll that is too new? It is supposed to be compatible but who knows? The dll that I'm using is part of openvr-sys
username_0: I tried switching it out with both 64 and 32 bit versions from openvr-sys, didn't work. I get the same message if I remove the dll entirely too.
username_1: Are you sure you are using the MSVC abi version of rust?
username_0: How would I confirm that? I'm using whatever the rustlang installer gives me `rustc 1.11.0 (9b21dcd6a 2016-08-15)`
username_1: https://www.rust-lang.org/en-US/downloads.html there are 2 versions, openvr will only work using the MSVC ABI. Unfortunately I don't know how to check what you currently have.
username_0: Oh, looks like the default is the GNU abi version, I'll try installing the MSVC one instead and see if that works
Status: Issue closed
username_0: Yep, that did it, thanks for the help!
username_1: Yeah well, sorry for the confusion. I need to add it to the readme |
ray-project/ray | 224645951 | Title: Spurious error message about killed worker when script exits
Question:
username_0: I sometimes see the following message when a script exits (despite the fact that the script runs to completion without any issues).
```
A worker died or was killed while executing a task.
You can inspect errors by running
ray.error_info()
If this driver is hanging, start a new one with
ray.init(redis_address="127.0.0.1:25182")
```
I think the problem is that the script, e.g.,
```python
import ray

@ray.remote
def f():
    return 1

ray.get([f.remote() for _ in range(10)])
```
is returning as soon as the object is put in the object store, which may happen before the local scheduler realizes that the task has finished, so the local scheduler will still push an error message about killing a worker that is running a task.
One simple fix would be to suppress this message during cleanup.
Answers:
username_0: This should be addressed by #492.
Status: Issue closed
|
digitalcoyote/NuGetDefense | 818973775 | Title: NuGetDefense.Tool throws an exception on a `Solution Items` project from the solution file
Question:
username_0: **Describe the bug**
NuGetDefense.Tool throws an exception on a `Solution Items` project from the solution file.
**To Reproduce**
Steps to reproduce the behavior:
1. Have a project file with a `Solution Items` folder
2. Install the tool: `dotnet tool install NuGetDefense.Tool -g`
3. Run the tool: `nugetdefense .\MySolution.sln Release`
4. Fails with exception message
## Example project file part that causes this:
```
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "Solution Items", "Solution Items", "{DAC32E4B-F605-4584-A629-554E7523B1FD}"
ProjectSection(SolutionItems) = preProject
.editorconfig = .editorconfig
.gitlab-ci.yml = .gitlab-ci.yml
common\GenerateBuildInfo.sh = common\GenerateBuildInfo.sh
README.md = README.md
EndProjectSection
EndProject
```
## Exception:
```
`dotnet list` Errors:
`dotnet list` Errors:
`dotnet list` Errors:
: Error : Encountered a fatal exception while checking for Dependencies in . Exception: System.IO.FileNotFoundException: Could not find file 'C:\Users\RobBos\source\repos\MyRepoName\Solution Items'.
File name: 'C:\Users\RobBos\source\repos\MyRepoName\Solution Items'
at System.IO.FileStream.ValidateFileHandle(SafeFileHandle fileHandle)
at System.IO.FileStream.CreateFileOpenHandle(FileMode mode, FileShare share, FileOptions options)
at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options)
at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize)
at System.Xml.XmlDownloadManager.GetStream(Uri uri, ICredentials credentials, IWebProxy proxy)
at System.Xml.XmlUrlResolver.GetEntity(Uri absoluteUri, String role, Type ofObjectToReturn)
at System.Xml.XmlTextReaderImpl.FinishInitUriString()
at System.Xml.XmlTextReaderImpl..ctor(String uriStr, XmlReaderSettings settings, XmlParserContext context, XmlResolver uriResolver)
at System.Xml.XmlReaderSettings.CreateReader(String inputUri, XmlParserContext inputContext)
at System.Xml.XmlReader.Create(String inputUri, XmlReaderSettings settings)
at System.Xml.Linq.XDocument.Load(String uri, LoadOptions options)
at System.Xml.Linq.XDocument.Load(String uri)
at ByteDev.DotNet.Project.DotNetProject.Load(String projFilePath)
at NuGetDefense.Program.LoadMultipleProjects(String TopLevelProject, String[] projects, Boolean specificFramework, String targetFramework, Boolean solutionFile) in /home/codingcoyote/Code/NuGetDefense/Src/NuGetDefense/Program.cs:line 193
at NuGetDefense.Program.Main(String[] args) in /home/codingcoyote/Code/NuGetDefense/Src/NuGetDefense/Program.cs:line 73
```
**Expected behavior**
Not fail :-). Perhaps detect the type of project and skip it.
**Tools (please complete the following information):**
- dotnet global tools
- Windows 10 19042
**Additional context**
Found the link to the source code here:
https://github.com/username_1/NuGetDefense/blob/4f01373b1bddb84688d5d239ca1436fdf03cffb8/Src/NuGetDefense/Program.cs#L193
Unfortunately you only push in the names of the projects to this method.
I see three options:
1. Eat up the exception on Project load and log it
2. Only load valid projects on [solution file load](https://github.com/username_1/NuGetDefense/blob/4f01373b1bddb84688d5d239ca1436fdf03cffb8/Src/NuGetDefense/Program.cs#L66)
3. Add option to skip certain projects during the analysis
Answers:
username_0: Seems like I found how to fix this. Setting up a PR for this fix
Status: Issue closed
username_1: v2.1.1-pre0001 has the fix for this, but I didn't have time on lunch to test the packaging, so I threw it in a pre-release. |
vslavik/poedit | 208709106 | Title: Checkboxes in Replace dialog should be enabled
Question:
username_0: Poedit 2.0beta3
Windows 10

The checkboxes are enabled in the Search dialog, but not in the Replace dialog. To me, this doesn't make sense since when replacing text, I also want to restrict the search to certain source fields.
Answers:
username_1: You can't replace in source text — it's *read-only.* That's why the checkbox is disabled, it makes no sense.
You could, in theory, replace in (translator's) comments, but it's not currently supported. Do you have a *practical*, real-life, actually encountered need to replace text *in comments*? If so, could you elaborate?
username_0: No, I don't want to replace text in comments.
My practical use case is: The GCC translation switched from using `»%s«` to using `%qs`. Therefore I want to find all instances where the source text contains `%qs` and the translated text contains `»%s«`, and replace that with `%qs`. The dialog doesn't allow this currently, so the best approach I could think of was to _search for `»%s«`, but only in the target text_.
Do you mean the checkboxes to the right are useless? Then they should not be shown at all.
But the _Case insensitive_ checkbox to the left should be enabled.
username_2: I disagree. Showing and hiding widgets is more confusing for users, who have to work out why a control has vanished in certain situations. This is the case in e.g. LibreOffice and it annoys me greatly. Another disadvantage is that it makes dialog layout much more difficult.
username_0: Point taken. I had not noticed that in Poedit there is a unified Find/Replace dialog. I thought they were two separate dialogs. Still, the checkboxes suggest that the text is searched for in the source text and in the comments, since they are checked.
When selecting `Replace` in the dropdown:
* the checkbox for `Search in source texts` should be disabled and unchecked
* the checkbox for `Search in translations` should be disabled and checked
* the checkbox for `Search in comments` should be disabled and unchecked
* the state of these checkboxes should not be remembered for the next search
username_1: Let me put it this way: you're welcome to submit patches to that effect, but I'm unlikely to address it anytime soon: it's just not worth spending time on. The controls are *disabled*, it's obvious that don't do anything, and I plan to redo Find & Replace to be inline anyway. |
gems-uff/sapos | 57588423 | Title: Allow associating a course with MORE THAN ONE research area
Question:
username_0: To accommodate changes in the curriculum, we now need a course to be able to be associated with more than one research area.
Answers:
username_1: Should I add the db/migrate/201502... file when making the commit?
username_2: I'm not sure I understood the question, but the migration is essential. Let me know if I'm stating the obvious: the migration is what moves the data from the old schema to the new one. Without it, we would have to adjust each course by hand, since they will no longer have the area ID (foreign key) and will instead be related to N areas (a new relationship table will appear in the DB to act as the go-between). The purpose of this migration is precisely to migrate the data automatically. For each course, it will have to take the existing area and put it into the set of areas. Try writing that migration and paste it here so we can take a look. Since this is error-prone, despite being simple, I'll ask @JoaoFelipe to weigh in.
username_2: Ah, even though this is an important issue, it can wait until after Carnival! Take this time to rest a little!
username_1: Sorry. I think it was my question that wasn't clear enough.
To complete this issue, I had to create a new file named "201502..." in the "db/migrate/" directory, besides editing other files in the "app/" folder.
My question is: should I also add this newly created migration file when making the commit? That is: "git add db/migrate/201502...". In addition, of course, to the other files.
username_2: Yes, because without it we wouldn't be able to migrate the production DB. My question is whether your migration "only" changes the schema or whether it actually migrates the data. But go ahead! If anything comes up, you can make further commits later to adjust.
Status: Issue closed
|
saltstack/salt | 115111965 | Title: FX2 proxy minion regression
Question:
username_0: Sometime after revision ``a25ce38`` the FX2 proxy minion broke. I don't know where, but it went from returning detailed output to empty data.
Working: (``a25ce38``)
```
cptbc15en1.da2.cpt.adobe.net:
----------
chassis:
----------
Main Board:
----------
fqdd:
System.Chassis.1#Infrastructure.1
fw_version:
1.10.A00.201410066
name:
Main Board
cmc:
----------
cmc:
----------
cmc_version:
1.30.200.201508062451
name:
cmc
updateable:
Y
server:
----------
server-1:
----------
blade_type:
PowerEdge FC630
gen:
iDRAC8
idrac_version:
2.20.20.20 (41)
name:
server-1
updateable:
Y
server-2:
----------
blade_type:
PowerEdge FC630
gen:
iDRAC8
idrac_version:
172.16.17.32 (41)
name:
server-2
updateable:
Y
server-3:
----------
blade_type:
PowerEdge FC630
gen:
iDRAC8
idrac_version:
172.16.17.32 (41)
[Truncated]
name:
switch-2
```
Not working: (seen in ``2015.8.1-848-g3a729c2`` and ``2015.8.1-846-gfed4c6f``)
```
cptbc1en1.da2.cpt.adobe.net:
----------
chassis:
----------
cmc:
----------
server:
----------
switch:
----------
```
Looks like a job for @username_3 .
Answers:
username_1: Thanks @username_0! We'll get this fixed up.
username_2: ZD-509
username_3: We reverted at least one bad PR today. Proxies work fine for me in my test configuration with the current head of 2015.8 after that reversion. Are you in a position to test the current head to see if it's fixed for you now?
username_0: I just installed ``2015.8.1-980-gdecc31a`` using bootstrap (should be HEAD of 2015.8) and the problem persists. I installed using:
```
sh /tmp/install_salt.sh -S -M -P git 2015.8
```
I restarted master, minion, syndic and proxies.
```
[[email protected] ~]# salt cptbc1en1.sin2.cpt.adobe.net chassis.cmd inventory
cptbc1en1.sin2.cpt.adobe.net:
----------
chassis:
----------
cmc:
----------
server:
----------
switch:
----------
```
username_0: The primary issue(s) outlined here was fixed with 2015.8.2, yes. We can close this issue.
Status: Issue closed
|
YutaroOgawa/pytorch_advanced | 984412086 | Title: [Chapter 8] On BERT's pre-training
Question:
username_0: Dear Ogawa-sama,

Regarding the Masked Language Model, p.398 of the book describes it as "a task that masks several words, estimates each masked word using all of the remaining unmasked words in the sentence, and obtains a feature vector for that word". Please allow me to ask about the following three points. Apologies for the long post.

Question 1
When training with the Masked Language Model, in the initial input to BERT, certain word IDs are replaced with the word ID for "[MASK]".
In the subsequent Embeddings module as well, the distributed representation corresponding to the "[MASK]" word ID is assigned out of the 30522-word vocabulary.
In that case, the distributed representation that the Embeddings module holds for the pre-mask word never appears in that forward pass. For example, if the word "dog" is masked, the Embeddings module's representation of "dog" does not appear in the forward pass.
If so, when computing gradients in backpropagation, the gradient of the Embeddings module's representation of the pre-mask word, which never appeared in the forward pass, cannot be computed (and hence that parameter is not adjusted).
In other words, restricting the discussion to the "parameters held by the Embeddings module": is it correct to think that in a forward pass where a given word is masked, the parameters corresponding to the word being predicted are not updated, and that instead the parameters corresponding to the "surrounding words used to predict the masked word" are the ones updated?

Question 2
Which part exactly is the "feature vector of the masked word" acquired through training?
If my understanding in Question 1 above is correct, it would not be the "row of the Embeddings module corresponding to the masked word" but the "row of the BERT Layer output (512, 768) corresponding to the masked word" that constitutes the feature vector (although, by running this many times while changing what is masked, the Embeddings module's parameters should eventually acquire good feature vectors as well).
The distributed representation in the "row of this output corresponding to the masked word" becomes, while passing through Attention, a representation that reflects "the other words' representations" with certain coefficients; so, under the distributional hypothesis that a word's meaning is determined by the meanings of other words, we can expect an appropriate distributed representation to be obtained (conversely, doesn't this mean the idea only holds if the unmasked surrounding words already hold correct distributed representations?). Is this correct?

Question 3
My understanding is that what the final loss function uses is only "the rows corresponding to the masked words" out of the output (512×30522) that has passed through the Masked Language Model module.
Even though only the masked words' rows are used, can we still call this a task that "estimates the masked words using all of the remaining unmasked words in the sentence" because, as in Question 2 above, the row of the BERT Layer output (512, 768) corresponding to the masked word has become, while passing through Attention, a representation reflecting "the other words' representations" with certain coefficients?

Thank you for taking a look at the above.
Answers:
username_1: Yes, that is exactly right.
<br>
Thank you very much for engaging with the material so carefully; I sincerely appreciate this question as well.
It will also serve as a helpful reference for other readers, for which I am very grateful.
Best regards.
username_0: Dear Ogawa-sama,

Thank you for the thorough and prompt reply.
Actually, I did go through the book you recommended, but it does not cover pre-training in that much depth, so I turned to you, since you always respond so kindly...

Question 1
I think my question was poorly phrased: I have no objection to the parameters of the MaskedWordPredictions module (768×30522) being updated by backpropagation from the masked word's output values.
The parameters of the "Embeddings module", on the other hand, are a different matter, I believe.
The Embeddings module's parameters are (30522×768), but not all of these parameters are used during a forward pass; only the rows (768-element vectors) corresponding to the specific word IDs are extracted.
For example, given the sentence ["dog", "cat", "person"], only a (3×768) matrix of the Embeddings module's parameters is used in the forward pass.
If part of the sentence is replaced, as in ["mask", "cat", "person"], the Embeddings parameter row for "mask" (a 768-element vector) is used in the forward pass, and consequently the parameter row for "dog" (a 768-element vector) never appears in the forward pass at all, so no gradient can be computed for it during backpropagation.
Hence, restricting the discussion to the "parameters held by the Embeddings module", I reasoned that in a forward pass where a word (e.g. "dog") is masked, the parameters corresponding to the word being predicted are not updated; rather, the parameters corresponding to the surrounding words used to predict it (e.g. "cat" and "person") are the ones updated.

Question 2
Following the reasoning of Question 1 above, in a task that predicts a certain word (e.g. "dog"), once that word is replaced with a mask, the parameters the Embeddings module holds for that word cannot be trained.
Changing perspective and asking what is needed in order to predict a masked word (e.g. "dog"), I believe it comes down to the following two things:
(1) Appropriate distributed representations of the "words surrounding the masked word" (e.g. "cat" and "person"), i.e., appropriate parameters in the Embeddings module for those surrounding words.
(2) A mechanism that appropriately reflects the surrounding words' (e.g. "cat" and "person") distributed representations into the "row" of the BERT module output (512, 768) corresponding to the word being predicted (e.g. "dog"). In BERT, this is Attention.
Conversely, if the masked word is being predicted correctly, then (1) and (2) above can be considered satisfied.
After all, isn't it more consistent to view the task of making a masked word (e.g. "dog") correctly predictable as an adjustment not of that word's own distributed representation, but of "the Embeddings module's parameters for the surrounding words (e.g. 'cat' and 'person')"?
username_1: I have come to think that your interpretation is correct as well.
I am sorry that my previous reply did not properly answer your question; I had also been interpreting this incorrectly myself.
I judged that the key point is when the MASK is applied while doing MaskedLM:
at input time, or after passing through BERT-base...
Checking the implementations,
[1] Hugging Face's MaskedLM implementation and its inputs/outputs [[link]](https://huggingface.co/transformers/model_doc/bert.html#bertformaskedlm)
[2] The MaskedLM implementation in my code [[near the end of 8-2-3_bert_base.ipynb]](https://github.com/username_1/pytorch_advanced/blob/master/8_nlp_sentiment_bert/8-2-3_bert_base.ipynb)
in both, the replacement with [mask] already happens at the point of input to BERT-base, so as a result the input to BERT carries no information for extracting the embedding module's vector for the word in question.
Therefore, as you pointed out, I now believe the correct view is that
"the surrounding words' vectors, and BERT's Attention weights and so on, are trained so that the masked word can be predicted well".
Indeed, if only "the weights of the masked word itself" were trained so as to predict the masked word,
every word in the vocabulary would have to be masked exhaustively, and moreover,
the number of words that could be learned from in a single forward-backward pass would be very small.
That would be inefficient, which is why I believe it is designed the way it is.
Thank you for the valuable opportunity and questions.
The above is how I interpreted it.
username_0: Dear Ogawa-sama,
I am relieved that our understandings of this now agree.
Thank you for taking the time to respond despite your busy schedule!
username_1: @username_0
Yes, thank you as well.
It was a great learning experience for me too.
I look forward to continuing to work with you.
berndonline/debian-router-vagrant | 721570314 | Title: cisco-iosxe hinting?
Question:
username_0: Hi,
Any tips how to obtain a rightfull Cisco-iosxe image? Maybe via VIRL?
How to provision that image on the Box provider Virtualbox nicely?
~~
Bringing machine 'cisco-iosxe' up with 'virtualbox' provider...
==> cisco-iosxe: Box 'iosxe' could not be found. Attempting to find and install...
cisco-iosxe: Box Provider: virtualbox
cisco-iosxe: Box Version: >= 0
==> cisco-iosxe: Box file was not detected as metadata. Adding it directly...
==> cisco-iosxe: Adding box 'iosxe' (v0) for provider: virtualbox
cisco-iosxe: Downloading: iosxe
An error occurred while downloading the remote file. The error
message, if any, is reproduced below. Please fix this error and try
again.
Couldn't open file src/debian-router-vagrant/iosxe |
chromebookdatascience/images | 232341486 | Title: Dropbox
Question:
username_0: 
Answers:
username_1: 
username_0: 
username_0: 

username_0: 
username_0:  |
linjam/linjam | 62313133 | Title: ALSA config char* is ugly and cumbersome
Question:
username_0: NOTE: this is mainly a libninjam issue
the function create_audioStreamer_ALSA() prototype as declared in audiostream.h requires a char* config_string - this is intended to be passed in verbatim via command line args to the curses client so runtime configuration was not considered - gNinjam implemented runtime configuration simply as a single textbox - this is obviously not the best implementation one could imagine - also JUCE is not happy exposing its innerds like that and its String methods only produce const char*
possible remedies:
* modify the existing initializer in libninjam to require a const char* // <-- current dev solution
* override initializer in libninjam to accept separate args like its cousins // <-- better solution
Answers:
username_0: NJClient accepts the following config_strings (original cli params)
```
win =>
-noaudiocfg
-jesusonic <path to jesusonic root dir>
mac =>
-audiostr device_name[,output_device_name]
nix =>
-audiostr "option value [option value ...]"
ALSA audio options are:
in hw:0,0 -- set input device
out hw:0,0 -- set output device
srate 48000 -- set samplerate
nch 2 -- set channels
bps 16 -- set bits/sample
bsize 2048 -- set blocksize (bytes)
nblock 16 -- set number of blocks
```
username_0: done 7a5ff18
using tentative overridden create_audioStreamer_ALSA()
```c
audioStreamer *create_audioStreamer_ALSA(SPLPROC on_samples_proc ,
const char* input_device = "hw:0,0" ,
const char* output_device = "hw:0,0" ,
int n_channels = 2 ,
int sample_rate = 44100 ,
int bit_depth = 16 ,
int n_buffers = 16 ,
int buffer_size = 1024 ) ;
```
Status: Issue closed
|
joachimrussig/GI_Forum_2017_Paper | 201834810 | Title: Write the main part (Routing / Optimal Time)
Question:
username_0: # Structure / Content
The main part should cover the key ideas of our approach as well as how we achieved our goals:
- Routing
- modelling the routing network
- edge weighting
- complexity
- mapping raster to the road network (later?)
- Finding the optimal point in time
- modelling as optimzation problem / constrains
- objective function
- optimization
Answers:
username_1: seems good.
Status: Issue closed
|
matmodlab/matmodlab2 | 245571665 | Title: Prescribed deformation gradient
Question:
username_0: When you prescribe the deformation gradient, it's immediately recast in terms of strain (defined by the current kappa) and then that strain is linearly interpolated over the step. At each point along the interpolation, the strain is converted to deformation gradient.
This gives a non-linear behavior when you look at the deformation gradient components in the output. When you would expect a nice straight line over the step, you get a curve.
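A sketch of why the curve appears (my reading, assuming the default logarithmic Seth-Hill strain measure, kappa = 0, and a pure stretch with no rotation): the prescribed target gradient F_1 is first converted to the strain

    eps_1 = ln(U_1),

the strain is then interpolated linearly over a step of duration T,

    eps(t) = (t / T) * eps_1,

and converting back to the deformation gradient gives

    F(t) = exp(eps(t)) = exp((t / T) * ln(U_1)) = U_1^(t / T),

which is exponential in t rather than linear. So a straight line in strain space necessarily comes out as curved deformation-gradient components.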
See the test in /tests/swan/test_defgrad_basic.py. |
soldair/node-s3-npm | 451173865 | Title: Error when installing locally
Question:
username_0: Hi, I've encountered this issue when I've tried to install it locally.
Don't know if it is intentional or not :)
Reproduction steps:
1. `npm install s3npm -D`
2. `./node_modules/.bin/s3npm configure`
```
internal/modules/cjs/loader.js:583
throw err;
^

Error: Cannot find module 'internal/util/types'
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:581:15)
    at Function.Module._load (internal/modules/cjs/loader.js:507:25)
    at Module.require (internal/modules/cjs/loader.js:637:17)
    at require (internal/modules/cjs/helpers.js:22:18)
    at evalmachine.<anonymous>:44:31
    at Object.<anonymous> (/home/pavel/git-repos/nate/nate.chromeextension/node_modules/npm/node_modules/graceful-fs/fs.js:11:8)
    at Module._compile (internal/modules/cjs/loader.js:689:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:700:10)
    at Module.load (internal/modules/cjs/loader.js:599:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:538:12)
```
username_1: this is a super old project so if you figure it out feel free to make a pull request. otherwise i'm unlikely to work on it. |
lesshint/lesshint | 228248908 | Title: Variables in class names picked up as qualifying elements
Question:
username_0: <!--
Please include as much information as possible about your issue/request.
Not all headers below are relevant for all issues, if something's irrelevant, just remove it.
-->
**Which version of `lesshint` are you using?**
`3.3.1`
**How are you running `lesshint`? CLI, Node.js API, Grunt/Gulp plugin?**
CLI:
`lesshint 'src/less/'`
**What's your `.lesshintrc` configuration?**
Only part that's relevant:
```json
"qualifyingElement": {
    "enabled": true,
    "severity": "error",
    "allowWithAttribute": false,
    "allowWithClass": false,
    "allowWithId": false
},
```
**If you're reporting a bug, please show us some code that's failing.**
```less
.generate-columns (@prefix, @counter: 1) when (@counter < @grid-divisions) {
    &.@{prefix}-@{counter} {
        width: floor(percentage(1 / @grid-divisions * @counter) * 100) / 100;
    }
    &.@{prefix}-offset-@{counter} {
        margin-left: floor(percentage(1 / @grid-divisions * @counter) * 100) / 100;
    }
    &.@{prefix}-fill-@{counter} {
        margin-right: floor(percentage(1 / @grid-divisions * @counter) * 100) / 100;
    }
    &.@{prefix}-push-@{counter} {
        left: floor(percentage(1 / @grid-divisions * @counter) * 100) / 100;
    }
    &.@{prefix}-pull-@{counter} {
        left: floor(percentage(1 / @grid-divisions * @counter) * 100) / 100 * -1;
    }
    .generate-columns(@prefix, @counter + 1);
}

.column {
    .generate-columns(xs);
[Truncated]
**What's the actual result?**
Appears to pick up variables in class names as qualifying elements. As a side note, it also appears to print the errors twice, but this may be for the different sub-rules e.g. `allowWithAttribute: false` and `allowWithClass: false`.
```shell
Error: grid.less: line 4, col 4, qualifyingElement: Class selectors should not include a qualifying element.
Error: grid.less: line 4, col 4, qualifyingElement: Class selectors should not include a qualifying element.
Error: grid.less: line 8, col 4, qualifyingElement: Class selectors should not include a qualifying element.
Error: grid.less: line 8, col 4, qualifyingElement: Class selectors should not include a qualifying element.
Error: grid.less: line 12, col 4, qualifyingElement: Class selectors should not include a qualifying element.
Error: grid.less: line 12, col 4, qualifyingElement: Class selectors should not include a qualifying element.
Error: grid.less: line 16, col 4, qualifyingElement: Class selectors should not include a qualifying element.
Error: grid.less: line 16, col 4, qualifyingElement: Class selectors should not include a qualifying element.
Error: grid.less: line 20, col 4, qualifyingElement: Class selectors should not include a qualifying element.
Error: grid.less: line 20, col 4, qualifyingElement: Class selectors should not include a qualifying element.
```
**Would you be interested in submitting a PR for this issue?**
Yup. No problem, though this is likely another issue with the `postcss-less` plugin.
Answers:
username_1: Thanks for the issue!
It looks like an issue in `postcss-less`, yes. I'll try to create a reduced test case and post there if you don't beat me to it, @username_0.
username_1: Actually, when investigating closer I think this is an issue with [postcss-selector-parser](https://github.com/postcss/postcss-selector-parser) and not `postcss-less`.
username_0: Ah, kay. Thanks for doing some research. If I have some time I'll take a look into this, but I'm pretty swamped at the mo.
username_1: Created an issue for it over at postcss-selector-parser: https://github.com/postcss/postcss-selector-parser/issues/109
username_1: Hi!
I know it's a bit late, but we finally got a new release of `postcss-selector-parser` where this issue is fixed. I've just tagged a `[email protected]` including that.
Status: Issue closed
username_0: @username_1 Awesome news. Looking forward to trying it out. :) |
dsietz/test-data-generation | 1052789027 | Title: Feature Request: Be able to specify delimiter
Question:
username_0: Right now, the delimiter is set to ",".
Would it be possible to have an additional parameter in `generate_csv` and `analyze_csv` to specify an alternate optional delimiter?
If the optional argument is not set, then "," is used; otherwise, the alternate delimiter is passed to the csv::ReaderBuilder.
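A minimal stdlib sketch of the requested defaulting behavior (the function name and signature here are illustrative, not the crate's actual API; a real implementation would pass the chosen byte, e.g. `b'|'`, to `csv::ReaderBuilder`'s `delimiter` setting instead of splitting manually):

```rust
// Illustrative only: fall back to ',' when no delimiter is supplied.
fn split_record(line: &str, delimiter: Option<char>) -> Vec<String> {
    let d = delimiter.unwrap_or(',');
    line.split(d).map(|field| field.to_string()).collect()
}

fn main() {
    // No delimiter given: default to ","
    assert_eq!(split_record("a,b,c", None), vec!["a", "b", "c"]);
    // Alternate delimiter supplied by the caller
    assert_eq!(split_record("a|b|c", Some('|')), vec!["a", "b", "c"]);
    println!("ok");
}
```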
Answers:
username_1: I'll see what I can do.
username_1: All tests passing in development. Will be added to version 0.3.0
Status: Issue closed
username_1: version 0.3.0 has been published to [crates.io](https://crates.io/crates/test-data-generation)
This enhancement is resolved. |
calmPress/calmpress | 391979774 | Title: Deprecate press-this related code
Question:
username_0: in 4.9.0 press-this was removed from core into a plugin, but some code was left behind for backward compatibility.
About 2 years later, the plugin has only 10k users, which shows that people are not interested in the concept of easily sharing content from the web in the first place. Might be something to do with the implementation being lacking, a chicken and egg problem....
Whatever the reason, the code as it is right now is almost pointless bloat.
Status: Issue closed |
pburls/dewey | 182464211 | Title: Deploy command fails for components with multiple dependencies
Question:
username_0: ```
Unhandled Exception: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.InvalidOperationException: Collection was modified; enumeration operation may not execute.
at System.ThrowHelper.ThrowInvalidOperationException(ExceptionResource resource)
at System.Collections.Generic.List`1.Enumerator.MoveNextRare()
at System.Collections.Generic.List`1.Enumerator.MoveNext()
at Dewey.Deploy.DeployCommandHandler.Execute()
at Dewey.Deploy.DeployCommandHandler.Execute(DeployCommand command)
--- End of inner exception stack trace ---
at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor)
at System.Reflection.RuntimeMethodInfo.UnsafeInvokeInternal(Object obj, Object[] parameters, Object[] arguments)
at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
at Dewey.Messaging.CommandProcessor.Execute(ICommand command)
at Dewey.CLI.Program.Main(String[] args)
```
Status: Issue closed |
jlippold/tweakCompatible | 342284312 | Title: `TapTapFolder` partial on iOS 11.3.1
Question:
username_0: ```
{
"packageId": "me.qusic.taptapfolder",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "me.qusic.taptapfolder",
"deviceId": "iPhone8,4",
"url": "http://cydia.saurik.com/package/me.qusic.taptapfolder/",
"iOSVersion": "11.3.1",
"packageVersionIndexed": true,
"packageName": "TapTapFolder",
"category": "Tweaks",
"repository": "BigBoss",
"name": "TapTapFolder",
"packageIndexed": true,
"packageStatusExplaination": "This package version has been marked as Working based on feedback from users in the community. The current positive rating is 87% with 7 working reports.",
"id": "me.qusic.taptapfolder",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.0.7",
"shortDescription": "Single tap folder to open first app",
"latest": "0.3.8",
"author": "Qusic",
"packageStatus": "Working"
},
"base64": "<KEY>
"chosenStatus": "partial",
"notes": "iP SE on 11.3.1, Opening the first app through given options is broken, can only double tap to open folder (vice-versa)"
}
``` |
kubernetes/ingress-nginx | 623197766 | Title: Documentation: Multiple Ingress Controllers
Question:
username_0: **NGINX Ingress controller version**: master/0.32.0
**Kubernetes version** (use `kubectl version`): 1.18/NA
**What happened**:
The Multiple Ingress Controllers documentation hasn't been updated with the new ingressClass specs.
https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/
/kind documentation
Answers:
username_1: For other people stumbling upon this: the `ingressclass.spec.controller` field is a well-known, domain-prefixed value that is hard-coded into `nginx-ingress`. [See here](https://github.com/kubernetes/ingress-nginx/pull/5410/files#diff-885a515d98c2c89a7f6a2f10768a21d8R130).
Here's an example IngressClass. Note that the `name` must match the `--ingress-class` flag in your deployment.
```yaml
apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
# this name must match the --ingress-class flag
name: my-example-class
annotations:
    # optional: flag this as the default ingress class
ingressclass.kubernetes.io/is-default-class: "true"
spec:
controller: "k8s.io/ingress-nginx" # this is a hard-coded into nginx-ingress
```
username_2: By the way, it is still not clear how to configure IngressClass for multiple nginx controllers. I assume it should somehow be done by linking the actual controller via the `spec.parameters` section:
```
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
name: external-lb
spec:
controller: example.com/ingress-controller
parameters:
apiGroup: k8s.example.com
kind: IngressParameters
name: external-lb
```
But is there some more clear example? Linking nginx-ingress deployments didn't work for me:
```
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
name: nginx-internal
spec:
controller: k8s.io/ingress-nginx
parameters:
apiGroup: apps
kind: Deployment
name: nginx-ingress-controller-internal
```
```
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
name: nginx-public
spec:
controller: k8s.io/ingress-nginx
parameters:
apiGroup: apps
kind: Deployment
name: nginx-ingress-controller-public
```
username_3: Looking for same clarification as @username_2
username_4: parameters are not supported.
```
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
name: nginx-public
spec:
controller: k8s.io/ingress-nginx
```
Status: Issue closed
username_3: How do you define for multiple ingresses/ingress classes? Pre-v19 I have an internal ingress and an external ingress with ingress classes `nginx-internal` and `nginx-external` respectively. With annotations this meant using `kubernetes.io/ingress.class: nginx-internal` and `kubernetes.io/ingress.class: nginx-external`
Does this simply translate to two `IngressClass` resources, one with `metadata.name: nginx-internal` and one with `metadata.name: nginx-external` both with `spec.controller: k8s.io/ingress-nginx`?
username_1: Hey @username_3! yes, that's how it works. I have the exact same use-case. The `metadata.name` and `--ingress-class` flag in your ingress deployment must match, see https://github.com/kubernetes/ingress-nginx/issues/5593#issuecomment-647538272.
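To make the two-class setup concrete, it would look something like this (class names are illustrative; each `metadata.name` must match the `--ingress-class` flag of the corresponding controller deployment, and `spec.controller` is the hard-coded value discussed above):

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx-internal    # matches --ingress-class=nginx-internal
spec:
  controller: k8s.io/ingress-nginx
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx-external    # matches --ingress-class=nginx-external
spec:
  controller: k8s.io/ingress-nginx
```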
username_5: @username_1 is the creation of the IngressClass supported in the nginx ingress Helm chart?
username_6: Same question. Can this please be added to the Helm chart to avoid followup manual configuration? |
symfony/symfony | 863297208 | Title: [RFC] Branching in Symfony 6
Question:
username_0: ### Status Quo
Following the discussion in #37331 we have decided to rename our `master` branch to `5.x`. The main argument was that this model has been tried out successfully on the Twig repository and in fact it has served us well. However, with 5.4 being the last minor version of the 5.x series, the end of the `5.x` branch is approaching and I'd like to take the opportunity to revisit that decision.
### Problems with `5.x`
The `5.x` strategy had one big drawback: We were unable to target 5.3 in version constraints.
* Composer was unable to resolve version constraints like `~5.3.0` and `5.3.*`. Especially when introducing experimental components, we sometimes need the ability to pin a minor version.
https://github.com/symfony/symfony/blob/d4844ef28f79bec9fe4e2306e4cfc52e219b0cf5/src/Symfony/Component/Notifier/Bridge/Slack/composer.json#L22
* Flex allows an application to pin all Symfony packages to a certain branch:
https://github.com/symfony/skeleton/blob/5215ec9738586590ae5ffa62f13aa823e3d090d7/composer.json#L57-L62
If a developer wanted to try out their application with the latest 5.3 snapshots, the obvious way to do that would be to change `"symfony": "5.2.*"` into `"symfony": "5.3.*"`. But that did not work for the same reason. A workaround was to use `>=5.3` instead.
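For reference, in a standard Flex application (as in the skeleton linked above) this pin lives under `extra.symfony` in the application's `composer.json`; a fragment might look like:

```json
{
    "extra": {
        "symfony": {
            "require": "5.3.*"
        }
    }
}
```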
### Twig Vs. Symfony
Twig's development process works differently than Symfony's. A new minor release is tagged when one or more new features have been merged. For bugfixes, a patch release is tagged. Once a new minor version is tagged, the previous minor version is obsolete. The minor version changes often and irregularly. And there is (at least in theory) no upper boundary for the minor version number.
I can totally understand that the `1.x`/`2.x`/`3.x` branches work so well for Twig.
In Symfony, however, it's a different story. We do have one branch where we develop the features for the next biannual minor release. And we pretty much know what the version number will be. The minor version is bumped only twice a year, and after a minor release has been tagged, it is maintained in a dedicated branch while development for the next minor release continues.
To put it in a nutshell: I don't think that the current branching model matches our development model well.
### Proposal
* Next month (exact date is TBD), we will branch out `5.3` to prepare and stabilize the 5.3.0 release.
* Shortly after, we rename `5.x` to `5.4`. Since there won't be a 5.5 release, we don't need a `5.x` branch anymore.
* For Symfony 6, we create a `6.0` branch.
* Around 7 months from now, when 5.4 and 6.0 enter the stabilization phase, we create a `6.1` branch.
* And so on. There won't be a `6.x` branch.
### Advantages
* An upcoming minor release can be targeted properly again.
* When visiting a merged feature PR, GitHub will tell us which branch it has been merged into, which will match the target version of the feature. Currently, we only see that it was some 5.x version:
<img width="502" alt="Bildschirmfoto 2021-04-21 um 00 36 38" src="https://user-images.githubusercontent.com/1506493/115472265-d24c8d80-a239-11eb-9e92-212ac924f5ce.png">
* **All** branch names match a version that people can look up on https://symfony.com/releases
Answers:
username_1: In order to make this happen, we need to make the CI independent of the `.x`-suffix convention.
Right now, we have a few `$TRAVIS_BRANCH = *.x` (or similar) checks there. We might rely on the existing API instead, see either https://flex.symfony.com/versions.json, https://symfony.com/all-versions.json, https://symfony.com/versions.json or https://symfony.com/maintained-versions.json
PR welcome :)
There is another thing we must validate before: retargeting PRs.
Now that GitHub allows renaming branches, creating a new branch might be:
1. merge all PRs that are really for 5.3 into 5.3
2. rename 5.3 to 5.4
3. create branch 5.3
We might also need to decide how we'll use milestones. Using `5.x` as a milestone has the advantage to make it clearer that the PR might not be merged in the very next feature set, but maybe the one after. If we go with 5.4 instead of 5.x, step 1. above will also require the milestone to be updated. Maybe something for carsonbot? PR welcome also :)
Then, I agree that 5.x not being compatible with 5.3.* is a pain that I'd like to remove from our processes.
username_2: I fully agree. However, this does require some sort of automation to retarget PRs to the new feature branch. I believe that was one of the reasons to use the 5.x naming last year. However, this automation doesn't have to be very difficult, we might be able to borrow the implementation from https://github.com/laminas/automatic-releases . Optionally, we can also let GitHub do all the work by renaming the branch (https://github.com/github/renaming#renaming-existing-branches), which is arguably a bit more hacky.
Anyway, :100: for `6.0`
username_2: PR welcome :)
Let me, based on https://github.com/laminas/automatic-releases, make another suggestion: Use `6.0.x` as the naming strategy
Status: Issue closed
|
grommet/grommet | 825265117 | Title: Attempted import error: 'Spinner' is not exported from 'grommet'.
Question:
username_0: I cannot use the Spinner component because the module is not found
### Expected Behavior
Show a Spinner component
### Actual Behavior
Importing Spinner throws an error
### URL, screen shot, or Codepen exhibiting the issue
here's the screenshot of the error from my app

and here's the error from CodeSandbox when you click on CodeSandbox from the documentation page

[CodeSandbox using the template](https://codesandbox.io/s/grommet-v2-template-forked-nj0dq?file=/index.js)
### Steps to Reproduce
1. Import Spinner on your app
or
1. Go to [v2.grommet.io/spinner](https://v2.grommet.io/spinner)
2. Click CodeSandbox
### Your Environment
- Grommet version: 2.16.3
- Browser Name and version: Google Chrome 89.0.4389.82
- Operating System and version (desktop or mobile): Ubuntu 18.04
Answers:
username_1: @username_0 The FileInput, Pagination and Spinner components will be officially released in the next grommet release, v2.17.0. At the moment we are still exploring those and it's open to the community to send feedback on https://github.com/grommet/grommet/wiki/What-is-grommet-stable-and-how-to-use-it%3F. You will need to just change your version of Grommet! Hope that helps
Status: Issue closed
|
Garderoben/MudBlazor | 991366680 | Title: Tabs in dialog have unexpected Behaviour
Question:
username_0: When using tabs in a dialog, behaviour is very fragile and inconsistent.

Here is a simple example:
https://try.mudblazor.com/snippet/cEmbuXuCgcRTQiXY |
mongojack/mongojack | 221238247 | Title: Release for Mongo Driver 3.4?
Question:
username_0: Since the last release is almost a year old and there's no current release available when using the mongodb-driver 3.4.2, could you please provide a new release with the bumped dependency as already available on the master branch?
Or is it safe to use version 2.6.1 together with mongodb-driver 3.4.2 in production?
Answers:
username_1: Done
Status: Issue closed
username_0: TYVM @username_1 ! |
CocoaPods/CocoaPods | 189969149 | Title: [tvOS] The key UIDevicesRequiredCapabilities is invalid for frameworks
Question:
username_0: It seems Apple changed the validation process and PR #4539 now causes the error `ITMS-90689 - The Key UIRequiredDevicesCapabilities in bundle *** is invalid for frameworks`
Confirmed by https://forums.developer.apple.com/thread/68032
Answers:
username_1: I was able to upload a build to iTunes Connect by manually deleting the UIRequiredDevicesCapabilities key out of all of the Info.plists included by the Pods project.
username_2: I managed to upload the build making use of this post_install workaround in my Podfile
```ruby
post_install do |installer|
  plist_buddy = "/usr/libexec/PlistBuddy"
  installer.pods_project.targets.each do |target|
    plist = "Pods/Target Support Files/#{target}/Info.plist"
    puts "Deleting UIRequiredDeviceCapabilities from #{target} to make it pass iTC verification."
    `#{plist_buddy} -c "Delete :UIRequiredDeviceCapabilities" "#{plist}"`
  end
end
```
username_0: Almost the same here :)
```ruby
post_install do |installer|
  installer.pods_project.targets.each do |target|
    target.build_configurations.each do |config|
      # Temporary fix for https://github.com/CocoaPods/CocoaPods/issues/6193
      # TODO: should only run once per target
      # WARN: use carefully if your pods really have UIRequiredDeviceCapabilities
      if target.platform_name.to_s == "tvos"
        plist = "Pods/" + config.build_settings['INFOPLIST_FILE']
        puts "Found a tvOS target #{target}"
        # Escape the spaces in "Target Support Files" for the shell command below
        plist_path = plist.gsub('Target Support Files', 'Target\ Support\ Files')
        puts "Patching UIRequiredDeviceCapabilities in plist: #{plist_path}"
        plist_buddy = '/usr/libexec/PlistBuddy'
        `#{plist_buddy} -c "Delete :UIRequiredDeviceCapabilities" #{plist_path}`
      end
    end
  end
end
```
username_3: Thanks for letting us know! The fix here seems pretty straightforward, and we'd appreciate it if someone closer to the root issue could open a PR to ensure the fix is well-tested. Thanks!
Status: Issue closed
|
nuke-build/nuke | 1069580044 | Title: Nuke support plugin with version 2021.2.0 crashes in Rider build version 212.5284.64
Question:
username_0: ### Usage Information
Nuke support version 2021.2.0 and Rider version 212.5284.64
### Relevant Code / Invocations
_No response_
### Expected Behavior
I expect to be able to run Nuke targets via the plugin
### What actually happened?
The plugin crashes with the above exception.
### Stacktrace / Log
```code
java.lang.IllegalArgumentException: Collection contains more than one matching element.
at com.jetbrains.rider.plugins.nuke.runConfigurations.NukeRunConfigurationUtilKt.createAndAddConfiguration(NukeRunConfigurationUtil.kt:62)
at com.jetbrains.rider.plugins.nuke.runConfigurations.NukeRunConfigurationUtilKt.findOrCreateConfiguration(NukeRunConfigurationUtil.kt:21)
at com.jetbrains.rider.plugins.nuke.runConfigurations.NukeRunConfigurationManager$1.invoke(NukeRunConfigurationManager.kt:18)
at com.jetbrains.rider.plugins.nuke.runConfigurations.NukeRunConfigurationManager$1.invoke(NukeRunConfigurationManager.kt:14)
at com.jetbrains.rd.util.reactive.Signal.fire(Signal.kt:32)
at com.jetbrains.rd.framework.impl.RdSignal.onWireReceived(RdSignal.kt:42)
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:56)
at com.jetbrains.rd.framework.MessageBroker$invoke$2$2.invoke(MessageBroker.kt:11)
at com.jetbrains.rd.framework.impl.ProtocolContexts.readMessageContextAndInvoke(ProtocolContexts.kt:151)
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:55)
at com.jetbrains.rd.framework.MessageBroker$invoke$2.invoke(MessageBroker.kt:11)
at com.jetbrains.rdclient.protocol.RdDispatcher.pumpProtocolQueue(RdDispatcher.kt:72)
at com.jetbrains.rdclient.util.idea.ExtensionsKt.callSynchronously(Extensions.kt:147)
at com.jetbrains.rdclient.util.idea.ExtensionsKt.callSynchronously$default(Extensions.kt:130)
at com.jetbrains.rider.util.idea.ExtensionsKt.syncFromBackend(Extensions.kt:95)
at com.jetbrains.rider.intentions.altEnter.ReSharperPopupModel.executeItem(ReSharperPopupModel.kt:135)
at com.jetbrains.rider.intentions.altEnter.PopupModel$attachTo$2.invoke(PopupModel.kt:23)
at com.jetbrains.rider.intentions.altEnter.PopupModel$attachTo$2.invoke(PopupModel.kt:9)
at com.jetbrains.rd.util.reactive.Signal.fire(Signal.kt:32)
at com.jetbrains.rider.services.popups.nova.impl.DefaultPopupViewModel$initViewModel$3.invoke(DefaultPopupViewModel.kt:77)
at com.jetbrains.rider.services.popups.nova.impl.DefaultPopupViewModel$initViewModel$3.invoke(DefaultPopupViewModel.kt:16)
at com.jetbrains.rd.util.reactive.Signal.fire(Signal.kt:32)
at com.jetbrains.rider.services.popups.nova.impl.DefaultPopupListModel$executeSelectedItem$executeDelegate$1.invoke(DefaultPopupListModel.kt:137)
at com.jetbrains.rider.services.popups.nova.impl.DefaultPopupListModel$executeSelectedItem$executeDelegate$1.invoke(DefaultPopupListModel.kt:15)
at com.jetbrains.rider.services.popups.nova.ui.PopupListView$sam$java_lang_Runnable$0.run(PopupListView.kt)
at com.intellij.openapi.application.TransactionGuardImpl.performUserActivity(TransactionGuardImpl.java:94)
at com.intellij.ui.popup.AbstractPopup.lambda$dispose$18(AbstractPopup.java:1503)
at com.intellij.util.ui.EdtInvocationManager.invokeLaterIfNeeded(EdtInvocationManager.java:101)
at com.intellij.ide.IdeEventQueue.ifFocusEventsInTheQueue(IdeEventQueue.java:186)
at com.intellij.ide.IdeEventQueue.executeWhenAllFocusEventsLeftTheQueue(IdeEventQueue.java:140)
at com.intellij.openapi.wm.impl.FocusManagerImpl.doWhenFocusSettlesDown(FocusManagerImpl.java:175)
at com.intellij.openapi.wm.impl.IdeFocusManagerImpl.doWhenFocusSettlesDown(IdeFocusManagerImpl.java:36)
at com.intellij.ui.popup.AbstractPopup.dispose(AbstractPopup.java:1500)
at com.intellij.openapi.util.ObjectTree.runWithTrace(ObjectTree.java:136)
at com.intellij.openapi.util.ObjectTree.executeAll(ObjectTree.java:166)
at com.intellij.openapi.util.Disposer.dispose(Disposer.java:155)
at com.intellij.ui.popup.AbstractPopup.cancel(AbstractPopup.java:779)
at com.jetbrains.rider.services.popups.nova.ui.PopupViewImpl.cancel(PopupViewImpl.kt:85)
at com.intellij.ui.popup.AbstractPopup.cancel(AbstractPopup.java:727)
at com.intellij.ui.popup.AbstractPopup.dispose(AbstractPopup.java:1447)
[Truncated]
at java.desktop/java.awt.EventQueue.dispatchEvent(EventQueue.java:746)
at com.intellij.ide.IdeEventQueue.defaultDispatchEvent(IdeEventQueue.java:885)
at com.intellij.ide.IdeEventQueue._dispatchEvent(IdeEventQueue.java:754)
at com.intellij.ide.IdeEventQueue.lambda$dispatchEvent$6(IdeEventQueue.java:441)
at com.intellij.openapi.progress.impl.CoreProgressManager.computePrioritized(CoreProgressManager.java:825)
at com.intellij.ide.IdeEventQueue.lambda$dispatchEvent$7(IdeEventQueue.java:440)
at com.intellij.openapi.application.impl.ApplicationImpl.runIntendedWriteActionOnCurrentThread(ApplicationImpl.java:794)
at com.intellij.ide.IdeEventQueue.dispatchEvent(IdeEventQueue.java:486)
at java.desktop/java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:203)
at java.desktop/java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:124)
at java.desktop/java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:113)
at java.desktop/java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:109)
at java.desktop/java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:101)
at java.desktop/java.awt.EventDispatchThread.run(EventDispatchThread.java:90)
```
### Anything else we should know?
_No response_
Status: Issue closed
Answers:
username_2: Hey @username_1 , I've got the same error when I try to debug my builds. The configuration I'm using is the current Nuke Support Plugin, version 2021.3.0, together with Rider 2021.3.3. As there is no information on how the bug was resolved: Could you help me out here? Thank you very much!!
username_3: I got the same bug. The plugin never worked for me in different versions of Rider and Nuke. Now at Rider 2021.3.3 and NUKE Support 2021.3.0 and still getting the same error.
I have no clue what to tweak to get this working. |
pints-team/pints | 279363526 | Title: Add Particle MC
Question:
username_0: @username_2 is this different from #120 ?
Answers:
username_1: @username_2 @sanmitraghosh: I have been thinking of trying out particle filtering on an electrochemistry model (have experimental data where one of the parameters varies over time, we currently fit a polynomial to it for optimisation). Will the SMC implementation planned for pints help me here?
username_2: Yep, possibly. But you would need a modelled process for the parameter rather than just fitting it with a spline.
So, for example,
theta_t = rho * theta_{t-1} + e_t
where e_t ~ N(0, sigma).
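A few lines of Python make the suggested AR(1) parameter process concrete (the function name and defaults below are illustrative, not part of pints):

```python
import random

def simulate_ar1(n, rho, sigma, theta0=0.0, seed=1):
    """Simulate theta_t = rho * theta_{t-1} + e_t, with e_t ~ N(0, sigma)."""
    rng = random.Random(seed)
    theta = [theta0]
    for _ in range(n - 1):
        theta.append(rho * theta[-1] + rng.gauss(0.0, sigma))
    return theta

# A slowly drifting parameter path of length 100
path = simulate_ar1(100, rho=0.95, sigma=0.1)
print(len(path))  # 100
```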
username_0: Closing this as duplicate of #120
Status: Issue closed
|
ScarletStudy/DGS1-Android-Release | 430552025 | Title: Case 5: Scene causes game to crash
Question:
username_0: ### About your device
**Device Manufacturer / Model or Emulator / Version??**
Moto G5S Plus
**Android Version?**
8.1.0
**Are you using a Custom ROM?** (If you don't know what this means, then most likely no)
No
**Is your device rooted?** (If you don't know what this means, then most likely no)
No
### Your issue
**Please describe the issue you are experiencing:**
Hi all, a specific scene in the game seems to be causing my game to crash.
During Case 5 on the first day when you visit Hachi's shop and talk about the Automatic Crime Recording Device, right before Hachi shows the photograph taken by the ACDR the game immediately crashes and loads the Scarlet Study disclaimer screen and then the DGS title screen.
I've tried restarting my phone multiple times and ensuring that no other apps are running whilst playing DGS but nothing has worked. I've also tried saving the game right before the specific scene and kept trying to play through it in hopes that the game won't crash, but unfortunately the scene never seems to load correctly and the game continues to crash. No error message is generated and I haven't changed any game or phone settings recently either. It seems like at the moment I can't progress within the game.
(Apart from this I've had a relatively bug-free and enjoyable game experience - thanks for all you guys' hard work on this project!)
Answers:
username_1: Had the same exact problem. I'm not very tech savvy, so thanks username_0 for finding that workaround! I was also able to circumvent the crash by using BlueStacks and transferring the .dat file back to my phone.
Besides that, my experience has been great so far, and I haven't had any other major issues. As a fan of the Ace Attorney series, I've been having a blast with the game. Thanks guys for all your hard work!
For reference, here's my device info:
Model: Xiaomi Mi A1
Android Version: 9
Custom ROM: No
Rooted: No
username_2: Can you please share your save file?
I can't patch it with bluebox
username_3: I would love to have the save file as well, I can't get through this bug :(
Thanks a lot in advance!
username_4: I'm having the same issue as OP with the exact same phone model, but I'm having a hard time getting the game set up in BlueStacks in the first place. Could someone walk me through how to do this? I'm not sure what to transfer from my phone...
username_5: Facing the same issue. Will throw in my details, just in case.
Device Manufacturer / Model or Emulator / Version??
Xiaomi Mi A2 Lite
Android Version?
9
Are you using a Custom ROM? (If you don't know what this means, then most likely no)
No
Is your device rooted? (If you don't know what this means, then most likely no)
No
As for the workaround, using BlueStacks to get past the crashing point worked perfectly. If anyone still wants a save, here. https://files.catbox.moe/wh942r.rar (Second save slot is just after the crashing point.) |
koaning/calmcode-feedback | 912213986 | Title: Missing `grid` import
Question:
username_0: Hi, thanks again for the very interesting resources. The `grid` function doesn't seem to be imported [here](https://calmcode.io/memo/grid.html):

Thanks!
Answers:
username_0: Same issue in https://calmcode.io/memo/runner.html
username_1: Made the changes ready for the next PR.
Thanks for reporting!
Status: Issue closed
|
bshaffer/oauth2-server-php | 232403423 | Title: Redirect on Error with Hash/fragment when in implicit mode
Question:
username_0: Hi,
when I am using implicit flow and I run
```
$server->handleAuthorizeRequest($request, $response, $authorized, $subject);
$response->send();
```
with `authorized=FALSE`, I am redirected, but the error message etc. is appended to the URL as a query parameter. It should instead be appended via the fragment/hash. A positive answer correctly uses the fragment...
I think the problem is that `AccessToken->getAuthorizeResponse()` explicitly uses the fragment, but `response->setRedirect()` always uses the query part without noticing the implicit mode.
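To illustrate the difference (a standalone Python sketch, not the library's PHP code): in the implicit flow the response parameters, including errors, belong in the fragment, while the authorization-code flow uses the query string:

```python
from urllib.parse import urlencode, urlsplit, urlunsplit

def add_params(url, params, use_fragment):
    """Append response params as ?query (auth-code flow) or #fragment (implicit flow)."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    encoded = urlencode(params)
    if use_fragment:
        fragment = encoded
    else:
        query = query + "&" + encoded if query else encoded
    return urlunsplit((scheme, netloc, path, query, fragment))

# What the implicit flow should produce on error:
print(add_params("https://client.example/cb", {"error": "access_denied"}, True))
# https://client.example/cb#error=access_denied
```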
Is there any way to quickly fix this?
Thanks
Lukas |
frappe/frappe | 918302409 | Title: [v13]Email sent from wrong account.
Question:
username_0: Suppose I have 2 email accounts configured in my system: <EMAIL> and <EMAIL>.
Outgoing is enabled for both, and <EMAIL> is set as the default outgoing account. And I have given a user the _Inbox User_ role and added <EMAIL> in the child table User Email.
Now, when that user tries to send an email, he/she doesn't get to choose a From email (obvious, since they have only one option). To, CC, and BCC are all visible. The user populates the available fields and sends the email.
The email will be sent from the default outgoing account, i.e. from <EMAIL>, while it should have gone from <EMAIL>.
ERPNext: v13.4.1 (version-13)
Frappe Framework: v13.3.0 (version-13)
P.S. The last time I tried to fix an issue, it turned out to be a blunder, so before I try to send a pull request to fix this, it would be nice if anybody else could confirm this and comment or give me a thumbs up so that I can proceed. Thanks.
TutorTemple/tutor-temple | 349387715 | Title: SyntaxError: /home/deemak/tutor-temple/app/views/profiles/_work_experience_fields.html.slim:8: syntax error, unexpected '='
= link_to_remove_association t...
^
Question:
username_0: View details in Rollbar: [https://rollbar.com/TutorTemple/tutor-temple/items/42/](https://rollbar.com/TutorTemple/tutor-temple/items/42/)
```
SyntaxError: /home/deemak/tutor-temple/app/views/profiles/_work_experience_fields.html.slim:8: syntax error, unexpected '='
= link_to_remove_association t...
^
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/template.rb", line 309, in module_eval
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/template.rb", line 309, in compile
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/template.rb", line 259, in block (2 levels) in compile!
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/activesupport-5.2.0/lib/active_support/notifications.rb", line 170, in instrument
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/template.rb", line 350, in instrument
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/template.rb", line 258, in block in compile!
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/template.rb", line 246, in synchronize
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/template.rb", line 246, in compile!
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/template.rb", line 158, in block in render
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/activesupport-5.2.0/lib/active_support/notifications.rb", line 170, in instrument
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/template.rb", line 354, in instrument_render_template
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/template.rb", line 157, in render
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/renderer/partial_renderer.rb", line 344, in block in render_partial
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/renderer/abstract_renderer.rb", line 44, in block in instrument
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/activesupport-5.2.0/lib/active_support/notifications.rb", line 168, in block in instrument
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/activesupport-5.2.0/lib/active_support/notifications/instrumenter.rb", line 23, in instrument
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/activesupport-5.2.0/lib/active_support/notifications.rb", line 168, in instrument
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/renderer/abstract_renderer.rb", line 43, in instrument
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/renderer/partial_renderer.rb", line 333, in render_partial
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/renderer/partial_renderer.rb", line 312, in render
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/renderer/renderer.rb", line 49, in render_partial
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/helpers/rendering_helper.rb", line 37, in render
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/cocoon-1.2.11/lib/cocoon/view_helpers.rb", line 52, in block in render_association
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/helpers/capture_helper.rb", line 41, in block in capture
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/helpers/capture_helper.rb", line 205, in with_output_buffer
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/helpers/capture_helper.rb", line 41, in capture
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/helpers/form_helper.rb", line 2314, in block in fields_for_nested_model
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/helpers/capture_helper.rb", line 41, in block in capture
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/helpers/capture_helper.rb", line 205, in with_output_buffer
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/helpers/capture_helper.rb", line 41, in capture
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/helpers/form_helper.rb", line 1006, in fields_for
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/helpers/form_helper.rb", line 2313, in fields_for_nested_model
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/helpers/form_helper.rb", line 2299, in block in fields_for_with_nested_attributes
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/helpers/form_helper.rb", line 2293, in each
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/helpers/form_helper.rb", line 2293, in fields_for_with_nested_attributes
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/helpers/form_helper.rb", line 1938, in fields_for
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/simple_form-4.0.1/lib/simple_form/action_view_extensions/builder.rb", line 28, in simple_fields_for
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/cocoon-1.2.11/lib/cocoon/view_helpers.rb", line 50, in render_association
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/cocoon-1.2.11/lib/cocoon/view_helpers.rb", line 98, in link_to_add_association
File "/home/deemak/tutor-temple/app/views/profiles/_form.html.slim", line 19, in block in _app_views_profiles__form_html_slim__4112519249194911656_70200565839340
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/helpers/capture_helper.rb", line 41, in block in capture
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/helpers/capture_helper.rb", line 205, in with_output_buffer
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionview-5.2.0/lib/action_view/helpers/capture_helper.rb", line 41, in capture
File "/home/deemak/.rbenv/versions/2.5.1/lib/ruby/gems/2.5.0/gems/actionvi |
airbnb/lottie-android | 397231800 | Title: Unable to parse composition
Question:
username_0: Hi, guys. I have a crash in my release app. Crashes happen rarely, and I can't reproduce it myself. The Lottie version is v2.7.0, and it only happens on Android 8.0 devices.
Crashed: main
at com.airbnb.lottie.LottieAnimationView$2.onResult(LottieAnimationView.java:68)
at com.airbnb.lottie.LottieAnimationView$2.onResult(LottieAnimationView.java:66)
at com.airbnb.lottie.LottieTask.notifyFailureListeners(LottieTask.java:167)
at com.airbnb.lottie.LottieTask.access$000(LottieTask.java:26)
at com.airbnb.lottie.LottieTask$1.run(LottieTask.java:142)
at android.os.Handler.handleCallback(Handler.java:790)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:198)
at android.app.ActivityThread.main(ActivityThread.java:7015)
at java.lang.reflect.Method.invoke(Method.java)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:521)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:837)
--
Fatal Exception: java.lang.IllegalStateException: Unable to parse composition
at com.airbnb.lottie.LottieAnimationView$2.onResult(LottieAnimationView.java:68)
at com.airbnb.lottie.LottieAnimationView$2.onResult(LottieAnimationView.java:66)
at com.airbnb.lottie.LottieTask.notifyFailureListeners(LottieTask.java:167)
at com.airbnb.lottie.LottieTask.access$000(LottieTask.java:26)
at com.airbnb.lottie.LottieTask$1.run(LottieTask.java:142)
at android.os.Handler.handleCallback(Handler.java:790)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:198)
at android.app.ActivityThread.main(ActivityThread.java:7015)
at java.lang.reflect.Method.invoke(Method.java)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:521)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:837)
Caused by java.util.concurrent.ExecutionException: java.lang.AssertionError
at java.util.concurrent.FutureTask.report(FutureTask.java:123)
at java.util.concurrent.FutureTask.get(FutureTask.java:193)
at com.airbnb.lottie.LottieTask$2.run(LottieTask.java:189)
Caused by java.lang.AssertionError
at android.util.JsonReader.peek(JsonReader.java:363)
at android.util.JsonReader.expect(JsonReader.java:308)
at android.util.JsonReader.beginObject(JsonReader.java:293)
at com.airbnb.lottie.parser.LottieCompositionParser.parse(LottieCompositionParser.java:42)
at com.airbnb.lottie.LottieCompositionFactory.fromJsonReaderSync(LottieCompositionFactory.java:229)
at com.airbnb.lottie.LottieCompositionFactory.fromJsonInputStreamSync(LottieCompositionFactory.java:163)
at com.airbnb.lottie.LottieCompositionFactory.fromJsonInputStreamSync(LottieCompositionFactory.java:157)
at com.airbnb.lottie.LottieCompositionFactory.fromRawResSync(LottieCompositionFactory.java:129)
at com.airbnb.lottie.LottieCompositionFactory$2.call(LottieCompositionFactory.java:116)
at com.airbnb.lottie.LottieCompositionFactory$2.call(LottieCompositionFactory.java:114)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1162)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:636)
at java.lang.Thread.run(Thread.java:764)
Thanks a lot!
Answers:
username_1: @username_0 Please attach the animation
username_2: I have the same crash; this is my animation:
`{"v":"5.4.1","fr":30,"ip":0,"op":20,"w":60,"h":42,"nm":"1","ddd":0,"assets":[],"layers":[{"ddd":0,"ind":1,"ty":4,"nm":"不动腿","parent":5,"sr":1,"ks":{"o":{"a":0,"k":100,"ix":11},"r":{"a":0,"k":0,"ix":10},"p":{"a":0,"k":[44.125,468.375,0],"ix":2},"a":{"a":0,"k":[0,0,0],"ix":1},"s":{"a":0,"k":[100,100,100],"ix":6}},"ao":0,"shapes":[{"ty":"gr","it":[{"d":1,"ty":"el","s":{"a":0,"k":[10.242,10.242],"ix":2},"p":{"a":0,"k":[0,0],"ix":3},"nm":"椭圆路径 1","mn":"ADBE Vector Shape - Ellipse","hd":false},{"ty":"fl","c":{"a":0,"k":[1,0.76862745098,0.035294117647,1],"ix":4},"o":{"a":0,"k":100,"ix":5},"r":1,"nm":"填充 1","mn":"ADBE Vector Graphic - Fill","hd":false},{"ty":"tr","p":{"a":0,"k":[-40.32,-442.248],"ix":2},"a":{"a":0,"k":[0,0],"ix":1},"s":{"a":0,"k":[50,50],"ix":3},"r":{"a":0,"k":0,"ix":6},"o":{"a":0,"k":100,"ix":7},"sk":{"a":0,"k":0,"ix":4},"sa":{"a":0,"k":0,"ix":5},"nm":"变换"}],"nm":"椭圆 5","np":2,"cix":2,"ix":1,"mn":"ADBE Vector Group","hd":false},{"ty":"gr","it":[{"d":1,"ty":"el","s":{"a":0,"k":[10.242,10.242],"ix":2},"p":{"a":0,"k":[0,0],"ix":3},"nm":"椭圆路径 1","mn":"ADBE Vector Shape - Ellipse","hd":false},{"ty":"fl","c":{"a":0,"k":[1,0.76862745098,0.035294117647,1],"ix":4},"o":{"a":0,"k":100,"ix":5},"r":1,"nm":"填充 1","mn":"ADBE Vector Graphic - Fill","hd":false},{"ty":"tr","p":{"a":0,"k":[-49.536,-441.277],"ix":2},"a":{"a":0,"k":[0,0],"ix":1},"s":{"a":0,"k":[50,50],"ix":3},"r":{"a":0,"k":0,"ix":6},"o":{"a":0,"k":100,"ix":7},"sk":{"a":0,"k":0,"ix":4},"sa":{"a":0,"k":0,"ix":5},"nm":"变换"}],"nm":"椭圆 4","np":2,"cix":2,"ix":2,"mn":"ADBE Vector Group","hd":false},{"ty":"gr","it":[{"d":1,"ty":"el","s":{"a":0,"k":[10.242,10.242],"ix":2},"p":{"a":0,"k":[0,0],"ix":3},"nm":"椭圆路径 1","mn":"ADBE Vector Shape - Ellipse","hd":false},{"ty":"fl","c":{"a":0,"k":[1,0.76862745098,0.035294117647,1],"ix":4},"o":{"a":0,"k":100,"ix":5},"r":1,"nm":"填充 1","mn":"ADBE Vector Graphic - 
Fill","hd":false},{"ty":"tr","p":{"a":0,"k":[-58.276,-441.832],"ix":2},"a":{"a":0,"k":[0,0],"ix":1},"s":{"a":0,"k":[50,50],"ix":3},"r":{"a":0,"k":0,"ix":6},"o":{"a":0,"k":100,"ix":7},"sk":{"a":0,"k":0,"ix":4},"sa":{"a":0,"k":0,"ix":5},"nm":"变换"}],"nm":"椭圆 3","np":2,"cix":2,"ix":3,"mn":"ADBE Vector Group","hd":false},{"ty":"gr","it":[{"d":1,"ty":"el","s":{"a":0,"k":[10.242,10.242],"ix":2},"p":{"a":0,"k":[0,0],"ix":3},"nm":"椭圆路径 1","mn":"ADBE Vector Shape - Ellipse","hd":false},{"ty":"fl","c":{"a":0,"k":[1,0.76862745098,0.035294117647,1],"ix":4},"o":{"a":0,"k":100,"ix":5},"r":1,"nm":"填充 1","mn":"ADBE Vector Graphic - Fill","hd":false},{"ty":"tr","p":{"a":0,"k":[-30.331,-444.918],"ix":2},"a":{"a":0,"k":[0,0],"ix":1},"s":{"a":0,"k":[100,100],"ix":3},"r":{"a":0,"k":0,"ix":6},"o":{"a":0,"k":100,"ix":7},"sk":{"a":0,"k":0,"ix":4},"sa":{"a":0,"k":0,"ix":5},"nm":"变换"}],"nm":"椭圆 6","np":2,"cix":2,"ix":4,"mn":"ADBE Vector Group","hd":false},{"ty":"gr","it":[{"d":1,"ty":"el","s":{"a":0,"k":[10.242,10.242],"ix":2},"p":{"a":0,"k":[0,0],"ix":3},"nm":"椭圆路径 1","mn":"ADBE Vector Shape - Ellipse","hd":false},{"ty":"fl","c":{"a":0,"k":[1,0.76862745098,0.035294117647,1],"ix":4},"o":{"a":0,"k":100,"ix":5},"r":1,"nm":"填充 1","mn":"ADBE Vector Graphic - Fill","hd":false},{"ty":"tr","p":{"a":0,"k":[-68.666,-446.491],"ix":2},"a":{"a":0,"k":[0,0],"ix":1},"s":{"a":0,"k":[100,100],"ix":3},"r":{"a":0,"k":0,"ix":6},"o":{"a":0,"k":100,"ix":7},"sk":{"a":0,"k":0,"ix":4},"sa":{"a":0,"k":0,"ix":5},"nm":"变换"}],"nm":"椭圆 2","np":2,"cix":2,"ix":5,"mn":"ADBE Vector Group","hd":false}],"ip":0,"op":21,"st":0,"bm":0},{"ddd":0,"ind":2,"ty":4,"nm":"saihong","parent":5,"sr":1,"ks":{"o":{"a":0,"k":100,"ix":11},"r":{"a":0,"k":0,"ix":10},"p":{"a":0,"k":[12,-5.25,0],"ix":2},"a":{"a":0,"k":[0,0,0],"ix":1},"s":{"a":0,"k":[100,100,100],"ix":6}},"ao":0,"shapes":[{"ty":"gr","it":[{"d":1,"ty":"el","s":{"a":0,"k":[19.113,7.044],"ix":2},"p":{"a":0,"k":[0,0],"ix":3},"nm":"椭圆路径 1","mn":"ADBE Vector Shape - 
Ellipse","hd":false},{"ty":"fl","c":{"a":0,"k":[1,0.352941176471,0.352941176471,1],"ix":4},"o":{"a":0,"k":100,"ix":5},"r":1,"nm":"填充 1","mn":"ADBE Vector Graphic - Fill","hd":false},{"ty":"tr","p":{"a":0,"k":[-5.362,3.373],"ix":2},"a":{"a":0,"k":[0,0],"ix":1},"s":{"a":0,"k":[100,100],"ix":3},"r":{"a":0,"k":0,"ix":6},"o":{"a":0,"k":100,"ix":7},"sk":{"a":0,"k":0,"ix":4},"sa":{"a":0,"k":0,"ix":5},"nm":"变换"}],"nm":"椭圆 1","np":2,"cix":2,"ix":1,"mn":"ADBE Vector Group","hd":false}],"ip":0,"op":300,"st":0,"bm":0},{"ddd":0,"ind":3,"ty":4,"nm":"eye","parent":5,"sr":1,"ks":{"o":{"a":0,"k":100,"ix":11},"r":{"a":0,"k":0,"ix":10},"p":{"a":0,"k":[-10,-8.75,0],"ix":2},"a":{"a":0,"k":[-8.375,-475,0],"ix":1},"s":{"a":0,"k":[100,100,100],"ix":6}},"ao":0,"shapes":[{"ty":"gr","it":[{"ind":0,"ty":"sh","ix":1,"ks":{"a":1,"k":[{"i":{"x":0.833,"y":0.833},"o":{"x":0.167,"y":0.167},"n":"0p833_0p833_0p167_0p167","t":7,"s":[{"i":[[1.75,-0.018],[0.172,-0.828],[0,0],[0.031,-0.031],[-0.266,-0.344],[-0.25,-0.188],[-1.656,0.043],[-0.19,0.163],[0,0],[0,0],[0,0],[0.282,0.672]],"o":[[-2.141,0.022],[-0.156,0.75],[0,0],[0.062,0.141],[0,0],[0.234,0.094],[1.203,-0.031],[0.142,-0.141],[0,0],[0,0],[0,0],[-0.39,-0.688]],"v":[[-8.656,-482.029],[-11.75,-479.969],[-11.906,-477.969],[-11.828,-477.562],[-11.172,-476.469],[-10.516,-476.016],[-8.281,-475.641],[-6.298,-476.266],[-5.813,-476.797],[-5.516,-477.428],[-5.36,-478.375],[-5.657,-480.375]],"c":true}],"e":[{"i":[[2.875,-0.062],[-0.062,-0.812],[0,0],[-0.5,0],[0,0],[-0.062,0.688],[-2.062,0.062],[-0.062,-0.562],[0,0],[0,0],[0,0],[0.125,0.75]],"o":[[-3.249,0.071],[0.062,0.812],[0,0],[0.5,0],[0,0],[0.062,-0.688],[2.062,-0.062],[0.062,0.562],[0,0],[0,0],[0,0],[-0.125,-0.75]],"v":[[-8.625,-483.625],[-12.875,-480.062],[-12.812,-475.688],[-12,-475.438],[-11.125,-475.688],[-11.188,-479.312],[-8.625,-481.625],[-6.062,-479.375],[-5.875,-475.562],[-4.937,-475.35],[-4.188,-475.625],[-4.188,-479.875]],"c":true}]},{"t":17}],"ix":2},"nm":"路径 1","mn":"ADBE Vector Shape - 
Group","hd":false},{"ty":"fl","c":{"a":0,"k":[0.164705882353,0.117647058824,0.078431372549,1],"ix":4},"o":{"a":0,"k":100,"ix":5},"r":1,"nm":"填充 1","mn":"ADBE Vector Graphic - Fill","hd":false},{"ty":"tr","p":{"a":0,"k":[0,0],"ix":2},"a":{"a":0,"k":[0,0],"ix":1},"s":{"a":0,"k":[100,100],"ix":3},"r":{"a":0,"k":0,"ix":6},"o":{"a":0,"k":100,"ix":7},"sk":{"a":0,"k":0,"ix":4},"sa":{"a":0,"k":0,"ix":5},"nm":"变换"}],"nm":"形状 1","np":2,"cix":2,"ix":1,"mn":"ADBE Vector Group","hd":false}],"ip":0,"op":281,"st":-19,"bm":0},{"ddd":0,"ind":4,"ty":4,"nm":"zhuangshi","parent":5,"sr":1,"ks":{"o":{"a":0,"k":100,"ix":11},"r":{"a":0,"k":0,"ix":10},"p":{"a":0,"k":[0,0,0],"ix":2},"a":{"a":0,"k":[0,0,0],"ix":1},"s":{"a":0,"k":[100,100,100],"ix":6}},"ao":0,"shapes":[{"ty":"gr","it":[{"ind":0,"ty":"sh","ix":1,"ks":{"a":0,"k":{"i":[[1.844,0.062],[0,0],[0.002,-0.499],[1.834,-1.215],[4.477,-0.026],[0,0],[-6.375,0.125],[0,0],[0,0]],"o":[[-0.094,0],[0,0],[-0.12,1.052],[-10.541,1.41],[2.972,2.005],[0,0],[6.375,-0.125],[0,0],[0,0]],"v":[[9.062,-25.75],[5.625,-25.75],[5.623,9.624],[3.791,14.465],[-17.727,15.526],[-11.875,17.875],[5.5,17.75],[10.75,9.75],[10.75,-23.562]],"c":true},"ix":2},"nm":"路径 1","mn":"ADBE Vector Shape - Group","hd":false},{"ty":"fl","c":{"a":0,"k":[0.972549019608,0.666666666667,0.027450980392,1],"ix":4},"o":{"a":0,"k":100,"ix":5},"r":1,"nm":"填充 1","mn":"ADBE Vector Graphic - Fill","hd":false},{"ty":"tr","p":{"a":0,"k":[0,0],"ix":2},"a":{"a":0,"k":[0,0],"ix":1},"s":{"a":0,"k":[100,100],"ix":3},"r":{"a":0,"k":0,"ix":6},"o":{"a":0,"k":100,"ix":7},"sk":{"a":0,"k":0,"ix":4},"sa":{"a":0,"k":0,"ix":5},"nm":"变换"}],"nm":"形状 1","np":2,"cix":2,"ix":1,"mn":"ADBE Vector 
Group","hd":false}],"ip":0,"op":300,"st":0,"bm":0},{"ddd":0,"ind":5,"ty":4,"nm":"body","sr":1,"ks":{"o":{"a":0,"k":100,"ix":11},"r":{"a":0,"k":0,"ix":10},"p":{"a":0,"k":[29,40.125,0],"ix":2},"a":{"a":0,"k":[-5,27.5,0],"ix":1},"s":{"a":1,"k":[{"i":{"x":[0.833,0.833,0.833],"y":[0.833,0.833,0.833]},"o":{"x":[0.167,0.167,0.167],"y":[0.167,0.167,0.167]},"n":["0p833_0p833_0p167_0p167","0p833_0p833_0p167_0p167","0p833_0p833_0p167_0p167"],"t":0,"s":[0,0,100],"e":[50,50,100]},{"t":14}],"ix":6}},"ao":0,"shapes":[{"ty":"gr","it":[{"ind":0,"ty":"sh","ix":1,"ks":{"a":0,"k":{"i":[[2.438,0.062],[0,0],[-1.125,-0.875],[-1.625,-3.125],[0.125,-10.75],[0,0],[-6.375,0.125],[0,0],[0,0]],"o":[[-0.062,-0.062],[0,0],[1.125,0.875],[2.75,6.25],[0.75,13.25],[0,0],[6.375,-0.125],[0,0],[0,0]],"v":[[8.562,-25.75],[-33.125,-25.75],[-33,-24.375],[-24,-19],[-24.125,3.25],[-11.875,17.875],[5.5,17.75],[10.75,9.75],[10.75,-23.5]],"c":true},"ix":2},"nm":"路径 1","mn":"ADBE Vector Shape - Group","hd":false},{"ty":"fl","c":{"a":0,"k":[1,0.807843137255,0,1],"ix":4},"o":{"a":0,"k":100,"ix":5},"r":1,"nm":"填充 1","mn":"ADBE Vector Graphic - Fill","hd":false},{"ty":"tr","p":{"a":0,"k":[0,0],"ix":2},"a":{"a":0,"k":[0,0],"ix":1},"s":{"a":0,"k":[100,100],"ix":3},"r":{"a":0,"k":0,"ix":6},"o":{"a":0,"k":100,"ix":7},"sk":{"a":0,"k":0,"ix":4},"sa":{"a":0,"k":0,"ix":5},"nm":"变换"}],"nm":"形状 1","np":2,"cix":2,"ix":1,"mn":"ADBE Vector Group","hd":false}],"ip":0,"op":300,"st":0,"bm":0},{"ddd":0,"ind":6,"ty":4,"nm":"head","parent":5,"sr":1,"ks":{"o":{"a":0,"k":100,"ix":11},"r":{"a":0,"k":0,"ix":10},"p":{"a":0,"k":[12.75,-26.778,0],"ix":2},"a":{"a":0,"k":[14.5,-492,0],"ix":1},"s":{"a":0,"k":[100,100,100],"ix":6}},"ao":0,"shapes":[{"ty":"gr","it":[{"ty":"rc","d":1,"s":{"a":0,"k":[8,15],"ix":2},"p":{"a":0,"k":[0,0],"ix":3},"r":{"a":0,"k":2,"ix":4},"nm":"矩形路径 1","mn":"ADBE Vector Shape - Rect","hd":false},{"ty":"fl","c":{"a":0,"k":[1,0.292264093137,0.292264093137,1],"ix":4},"o":{"a":0,"k":100,"ix":5},"r":1,"nm":"填充 1","mn":"ADBE 
Vector Graphic - Fill","hd":false},{"ty":"tr","p":{"a":0,"k":[15.94,-493.71],"ix":2},"a":{"a":0,"k":[0.042,8.474],"ix":1},"s":{"a":0,"k":[100,100],"ix":3},"r":{"a":0,"k":45,"ix":6},"o":{"a":0,"k":100,"ix":7},"sk":{"a":0,"k":0,"ix":4},"sa":{"a":0,"k":0,"ix":5},"nm":"变换"}],"nm":"矩形 3","np":2,"cix":2,"ix":1,"mn":"ADBE Vector Group","hd":false},{"ty":"gr","it":[{"ty":"rc","d":1,"s":{"a":0,"k":[7,17],"ix":2},"p":{"a":0,"k":[0,0],"ix":3},"r":{"a":0,"k":2,"ix":4},"nm":"矩形路径 1","mn":"ADBE Vector Shape - Rect","hd":false},{"ty":"fl","c":{"a":0,"k":[1,0.292264093137,0.292264093137,1],"ix":4},"o":{"a":0,"k":100,"ix":5},"r":1,"nm":"填充 1","mn":"ADBE Vector Graphic - Fill","hd":false},{"ty":"tr","p":{"a":0,"k":[22.352,-495.252],"ix":2},"a":{"a":0,"k":[0,0],"ix":1},"s":{"a":0,"k":[100,100],"ix":3},"r":{"a":0,"k":90,"ix":6},"o":{"a":0,"k":100,"ix":7},"sk":{"a":0,"k":0,"ix":4},"sa":{"a":0,"k":0,"ix":5},"nm":"变换"}],"nm":"矩形 2","np":2,"cix":2,"ix":2,"mn":"ADBE Vector Group","hd":false},{"ty":"gr","it":[{"ty":"rc","d":1,"s":{"a":0,"k":[7,17],"ix":2},"p":{"a":0,"k":[0,0],"ix":3},"r":{"a":0,"k":2,"ix":4},"nm":"矩形路径 1","mn":"ADBE Vector Shape - Rect","hd":false},{"ty":"fl","c":{"a":0,"k":[1,0.292264093137,0.292264093137,1],"ix":4},"o":{"a":0,"k":100,"ix":5},"r":1,"nm":"填充 1","mn":"ADBE Vector Graphic - Fill","hd":false},{"ty":"tr","p":{"a":0,"k":[13.309,-500.115],"ix":2},"a":{"a":0,"k":[-4.025,0.121],"ix":1},"s":{"a":0,"k":[100,100],"ix":3},"r":{"a":0,"k":0,"ix":6},"o":{"a":0,"k":100,"ix":7},"sk":{"a":0,"k":0,"ix":4},"sa":{"a":0,"k":0,"ix":5},"nm":"变换"}],"nm":"矩形 1","np":2,"cix":2,"ix":3,"mn":"ADBE Vector Group","hd":false}],"ip":0,"op":281,"st":-19,"bm":0}],"markers":[]}`
username_3: I also have this crash, because I have two JSON files in the raw resources and I load them at the same time. This causes two threads to operate on one ArrayList at the same time. So the solution is to load them using LottieAnimationView.addLottieOnCompositionLoadedListener to make sure they are loaded in order!
username_1: @username_3 which ArrayList are you referring to?
username_1: Duplicate of #667
Status: Issue closed
username_4: I have the same usage as you: I have 3 LottieAnimationViews loaded at the same time, but I didn't find the reason why the crash happened. So can you give more details? Thanks a lot anyway.
username_5: I also want to know why it crashed.
username_6: Check if you have the file. It can occur when the JSON file doesn't exist. |
Coderockr/backstage | 595855896 | Title: Microservices for the Frontend - Single Spa
Question:
username_0: Link: https://single-spa.js.org/
<img width="1228" alt="Screen Shot 2020-04-07 at 10 09 31" src="https://user-images.githubusercontent.com/2267327/78673017-e84e1100-78b7-11ea-855f-29feca07e427.png"> |
philipperemy/keras-tcn | 517595278 | Title: Code not working after update to new TCN
Question:
username_0: Hi,
After using the TCN code update 22 days back, all my code is not working.
Please check the attached text files:
same_code.py is the same code used in old and new TCN
old_model_summary.txt is the model summary before the update
new_model_summary.txt is the model summary after the update
tcn283_old.py is the last tcn which was working
Would you please give any clue to what is going on.
Thank you
[new_model_summary.txt](https://github.com/username_1/keras-tcn/files/3807703/new_model_summary.txt)
[old_model_summary.txt](https://github.com/username_1/keras-tcn/files/3807704/old_model_summary.txt)
[same_code.txt](https://github.com/username_1/keras-tcn/files/3807707/same_code.txt)
[tcn283_old.txt](https://github.com/username_1/keras-tcn/files/3807708/tcn283_old.txt)
Answers:
username_1: @username_0 is it a compilation issue or convergence issue?
username_0: It is a convergence issue. The new TCN cannot get above 0.3 accuracy, while the old one can reach more than 0.97 accuracy.
Thank you for the quick response
username_1: @username_0 that's concerning. Did you try to run it several times?
username_0: Yes, I tried with different machines even.
The problem can be noticed easily from the model summary files: using the same code, the old version generates a different CNN model than the current one.
New model Total params: 524
Old model Total params: 14,810,956
Were you able to check the text files attached? They use the same code and generate totally different networks.
Thank you for following up
username_1: Oh yeah the new model seems pretty empty. It's pretty weird. Will have a look this weekend!
username_0: Thank you
username_1: @username_2 maybe linked to the recent changes.
username_2: Oh no... will try and take a look this afternoon
username_1: Thank you so much!
username_2: @username_0 Are you using tensorflow as the backend or something else?
username_0: Yes, I am using TensorFlow as the backend.
I have now put everything, including the data, in the following Google Drive shared folder so you can reproduce the issue:
https://drive.google.com/drive/folders/1JYp7G8d34IOJhHWUyfIe_bUqsoRGhogt?usp=sharing
thank you for the quick response
username_2: I cannot access google drive right now for proxy reasons, but I am able to build your model just fine and have the same number of parameters as your old model. What version of Keras and tensorflow are you using?
username_0: I just ran it on Colab leaving everything at defaults, and I still noticed the issue:
keras version: 2.2.5
tf version: 1.15.0
username_2: It definitely seems to just be an issue on Google Colab. Unfortunately, I am completely new to that service and the Jupyter notebook style of scripting, so it could take me a while to find the issue.
tensorflow 1.15 and 2.0
keras 2.3.1
So maybe try just updating keras. I think that should fix it for you.
username_0: Yes, after upgrading to Keras 2.3.1 it has the same number of parameters, but it still has the convergence issue. See below for a comparison of the old and new TCN val_accuracy:
Old TCN:
Total params: 14,810,956
Trainable params: 14,803,786
Non-trainable params: 7,170
Epoch 1/3000
100/100 [==============================] - 181s 2s/step - loss: 1.2263 - accuracy: 0.7991 - val_loss: 7.1074 - val_accuracy: 0.5445
Epoch 00001: val_loss improved from inf to 7.10735, saving model to /content/drive/My Drive/colab01/wlogs06/weights.h5
Epoch 2/3000
100/100 [==============================] - 154s 2s/step - loss: 0.3821 - accuracy: 0.8993 - val_loss: 1.4262 - val_accuracy: 0.8534
Epoch 00002: val_loss improved from 7.10735 to 1.42625, saving model to /content/drive/My Drive/colab01/wlogs06/weights.h5
Epoch 3/3000
100/100 [==============================] - 154s 2s/step - loss: 0.2848 - accuracy: 0.9201 - val_loss: 4.8484 - val_accuracy: 0.6977
Epoch 00003: val_loss did not improve from 1.42625
Epoch 4/3000
100/100 [==============================] - 154s 2s/step - loss: 0.2012 - accuracy: 0.9396 - val_loss: 0.5044 - val_accuracy: 0.9443
Epoch 00004: val_loss improved from 1.42625 to 0.50444, saving model to /content/drive/My Drive/colab01/wlogs06/weights.h5
Epoch 5/3000
100/100 [==============================] - 154s 2s/step - loss: 0.1407 - accuracy: 0.9539 - val_loss: 0.6171 - val_accuracy: 0.9385
Epoch 00005: val_loss did not improve from 0.50444
Epoch 6/3000
100/100 [==============================] - 154s 2s/step - loss: 0.1199 - accuracy: 0.9585 - val_loss: 0.0913 - val_accuracy: 0.9611
Epoch 00006: val_loss improved from 0.50444 to 0.09128, saving model to /content/drive/My Drive/colab01/wlogs06/weights.h5
Epoch 7/3000
100/100 [==============================] - 154s 2s/step - loss: 0.0989 - accuracy: 0.9648 - val_loss: 0.1525 - val_accuracy: 0.9581
Epoch 00007: val_loss did not improve from 0.09128
Epoch 8/3000
100/100 [==============================] - 154s 2s/step - loss: 0.0854 - accuracy: 0.9697 - val_loss: 0.1131 - val_accuracy: 0.9637
Epoch 00008: val_loss did not improve from 0.09128
Epoch 9/3000
100/100 [==============================] - 154s 2s/step - loss: 0.0768 - accuracy: 0.9716 - val_loss: 0.1820 - val_accuracy: 0.9677
Epoch 00009: val_loss did not improve from 0.09128
Epoch 10/3000
100/100 [==============================] - 154s 2s/step - loss: 0.0682 - accuracy: 0.9744 - val_loss: 0.1400 - val_accuracy: 0.9539
New TCN:
[Truncated]
Epoch 00005: val_loss did not improve from 13.77543
Epoch 6/3000
100/100 [==============================] - 154s 2s/step - loss: 0.0932 - accuracy: 0.9665 - val_loss: 14.2479 - val_accuracy: 0.1177
Epoch 00006: val_loss did not improve from 13.77543
Epoch 7/3000
100/100 [==============================] - 154s 2s/step - loss: 0.0792 - accuracy: 0.9709 - val_loss: 14.3323 - val_accuracy: 0.1167
Epoch 00007: val_loss did not improve from 13.77543
Epoch 8/3000
100/100 [==============================] - 154s 2s/step - loss: 0.0775 - accuracy: 0.9712 - val_loss: 13.2020 - val_accuracy: 0.1875
Epoch 00008: val_loss improved from 13.77543 to 13.20198, saving model to /content/drive/My Drive/colab01/wlogs06/weights.h5
Epoch 9/3000
100/100 [==============================] - 154s 2s/step - loss: 0.0699 - accuracy: 0.9742 - val_loss: 14.6189 - val_accuracy: 0.1015
Epoch 00009: val_loss did not improve from 13.20198
Epoch 10/3000
100/100 [==============================] - 154s 2s/step - loss: 0.0667 - accuracy: 0.9752 - val_loss: 14.4819 - val_accuracy: 0.0966
username_2: I will take a look this weekend when I can access your training data.
username_2: If I had to take a guess right now, the issue lies with the batch normalization because it is only the validation data that doesn’t converge.
username_0: Thank you for that, the dataset is at:
https://drive.google.com/drive/folders/1JYp7G8d34IOJhHWUyfIe_bUqsoRGhogt?usp=sharing
username_2: I'm really at a loss as to why the validation is having issues. I took a look this weekend and found that we should be passing the training keyword to the normalization layer, but that didn't seem to fix it. It is tedious to debug this on colab. @username_0 Do you have a smaller network that is having the same problem so I may debug locally?
username_0: To get a smaller network, we just change the parameters at the top of the file.
New Parameters:
n_filters=32
kernel_size=32
dilations=3
n_stacks=3
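For context on how big these settings make the model, the TCN's receptive field can be estimated with a small helper. This is only a sketch: it uses the commonly quoted estimate for a TCN with two dilated convolutions per residual block (not the library's own computation), and it assumes `dilations=3` here means three dilation levels `(1, 2, 4)`:

```python
def tcn_receptive_field(kernel_size, nb_stacks, dilations):
    """Rough receptive-field estimate for a TCN with two dilated convs
    per residual block (commonly quoted formula, not the library's own)."""
    return 1 + 2 * (kernel_size - 1) * nb_stacks * sum(dilations)

# The smaller-network settings above, reading dilations=3 as (1, 2, 4):
print(tcn_receptive_field(kernel_size=32, nb_stacks=3, dilations=(1, 2, 4)))  # → 1303
```

Even the smaller configuration would still cover the full 1000-step input shown in the model summaries, so the shrunken network remains a fair reproduction case.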
Also, this smaller network produces the same issue:
Old TCN:
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 1000, 1) 0
batch_normalization_1 (BatchNor (None, 1000, 1) 4 input_1[0][0]
conv1d_1 (Conv1D) (None, 1000, 32) 64 batch_normalization_1[0][0]
conv1d_2 (Conv1D) (None, 1000, 32) 32800 conv1d_1[0][0]
batch_normalization_2 (BatchNor (None, 1000, 32) 128 conv1d_2[0][0]
activation_1 (Activation) (None, 1000, 32) 0 batch_normalization_2[0][0]
spatial_dropout1d_1 (SpatialDro (None, 1000, 32) 0 activation_1[0][0]
conv1d_3 (Conv1D) (None, 1000, 32) 32800 spatial_dropout1d_1[0][0]
batch_normalization_3 (BatchNor (None, 1000, 32) 128 conv1d_3[0][0]
activation_2 (Activation) (None, 1000, 32) 0 batch_normalization_3[0][0]
spatial_dropout1d_2 (SpatialDro (None, 1000, 32) 0 activation_2[0][0]
conv1d_4 (Conv1D) (None, 1000, 32) 1056 conv1d_1[0][0]
add_1 (Add) (None, 1000, 32) 0 conv1d_4[0][0]
spatial_dropout1d_2[0][0]
activation_3 (Activation) (None, 1000, 32) 0 add_1[0][0]
conv1d_5 (Conv1D) (None, 1000, 32) 32800 activation_3[0][0]
batch_normalization_4 (BatchNor (None, 1000, 32) 128 conv1d_5[0][0]
activation_4 (Activation) (None, 1000, 32) 0 batch_normalization_4[0][0]
spatial_dropout1d_3 (SpatialDro (None, 1000, 32) 0 activation_4[0][0]
[Truncated]
Epoch 00005: val_loss did not improve from 12.08278
Epoch 6/3000
100/100 [==============================] - 29s 295ms/step - loss: 0.1889 - accuracy: 0.9342 - val_loss: 12.4054 - val_accuracy: 0.2432
Epoch 00006: val_loss did not improve from 12.08278
Epoch 7/3000
100/100 [==============================] - 30s 296ms/step - loss: 0.1700 - accuracy: 0.9400 - val_loss: 11.9768 - val_accuracy: 0.2491
Epoch 00007: val_loss improved from 12.08278 to 11.97675, saving model to /content/drive/My Drive/colab01/wlogs06/weights.h5
Epoch 8/3000
100/100 [==============================] - 30s 295ms/step - loss: 0.1534 - accuracy: 0.9459 - val_loss: 12.3316 - val_accuracy: 0.2452
Epoch 00008: val_loss did not improve from 11.97675
Epoch 9/3000
100/100 [==============================] - 30s 295ms/step - loss: 0.1392 - accuracy: 0.9503 - val_loss: 14.3819 - val_accuracy: 0.1122
Epoch 00009: val_loss did not improve from 11.97675
Epoch 10/3000
100/100 [==============================] - 30s 296ms/step - loss: 0.1357 - accuracy: 0.9521 - val_loss: 14.1681 - val_accuracy: 0.1122
username_3: I have the same problem; the val_accuracy cannot improve along with the accuracy. Is there any suggestion?
username_2: Unfortunately, I have not had much of a chance to look at this again lately, and probably won't until later this week. :(
@username_3 Can you post your TCN layer configuration? (i.e. the parameters you built the layer with)
username_3: I use TCN as a language model, and the config is as follows:
```python
def build_model(vocab_size):
    model = Sequential()
    model.add(Embedding(vocab_size, input_embedding_size))
    model.add(TCN(nb_stacks=4, kernel_size=3, dropout_rate=0.45, return_sequences=True,
                  dilations=(1, 2, 4, 8), use_batch_norm=True, activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(vocab_size, activation='softmax'))
    return model
```
username_3: I found where the problem is. It is caused by `use_batch_norm=True`; with `use_batch_norm=False`, the val_acc improves along with the acc. Maybe it's due to the trainable param of the Keras batch normalization layer. However, even setting `model.trainable=True` doesn't fix the problem, so I just set `use_batch_norm=False`.
username_2: this is what I was playing with on my forked copy. We should figure this out. The layer should be set as trainable, but even when I passed "training" to the call to put the layer in inference mode (note there is a significant difference between "trainable" and "training"), it still didn't work. From looking online, it seems like the batch_norm layer can be a pain. I just don't know why it worked before and not now. That's mostly what concerns me.
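The trainable/training distinction mentioned above can be illustrated without Keras at all. The toy stand-in below is not the Keras layer: `training=True` centers with batch statistics and updates the running mean, while `training=False` reuses the frozen running mean, which is exactly the behaviour that can diverge between train and validation if the flag is never forwarded to an inner normalization layer:

```python
class ToyBatchNorm:
    """Toy illustration of the `training` call flag (not the Keras layer)."""

    def __init__(self, momentum=0.5):
        self.momentum = momentum
        self.running_mean = 0.0  # "moving" statistic used at inference time

    def __call__(self, batch, training=False):
        batch_mean = sum(batch) / len(batch)
        if training:
            # Training mode: center with batch stats and update running stats.
            self.running_mean = (self.momentum * self.running_mean
                                 + (1 - self.momentum) * batch_mean)
            mean = batch_mean
        else:
            # Inference mode: running stats stay frozen and are used instead.
            mean = self.running_mean
        return [x - mean for x in batch]

bn = ToyBatchNorm()
bn([2.0, 4.0], training=True)   # updates running_mean to 1.5
print(bn([2.0, 4.0]))           # inference uses running_mean → [0.5, 2.5]
```

If an outer layer never passes `training` down, the inner normalizer behaves like the `training=False` branch with whatever statistics it happens to hold, which is one plausible way validation metrics can diverge while training metrics look fine.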
username_1: @username_2 @username_3 Hum, this batch norm is a tricky thing. Weight norm was released for TensorFlow, so maybe we should move to https://github.com/username_1/keras-tcn/issues/91
username_1: Btw I could reproduce the convergence issue on my GPU server:
With pip install keras-tcn==2.8.3
https://pastebin.com/P4A7PFj7
https://pastebin.com/w0AcW5zr
With master
https://pastebin.com/YaiqtZ7Q
https://pastebin.com/hiy99ZtY
=> Convergence problem identified.
Maybe it’s due to a structure change in the model or initialization of the weights?
username_4: I just checked with the mnist example with tf.__version__ = '2.0.0', current github version:
# no dropout, batchnorm=True after 10 epochs: 97s 2ms/sample - loss: 0.0495 - accuracy: 0.9842 - val_loss: 0.0470 - val_accuracy: 0.9842
# 0.05 dropout, batchnorm=True after 10 epochs: 100s 2ms/sample - loss: 0.0512 - accuracy: 0.9834 - val_loss: 0.0483 - val_accuracy: 0.9853
I see no problem.
Btw.: Nice package, great work!
username_0: Yes for me too now.
Nice package, great work!
username_1: Thank you everyone! We can close this issue now!
Status: Issue closed
|
Axosoft/nsfw | 921473488 | Title: build error 'CoreServices/CoreServices.h' file not found
Question:
username_0:
```
(node:43666) [DEP0150] DeprecationWarning: Setting process.config is deprecated. In the future the property will be read-only.
(Use `node --trace-deprecation ...` to show where the warning was created)
CXX(target) Release/obj.target/nsfw/src/NSFW.o
In file included from ../src/NSFW.cpp:1:
In file included from ../src/../includes/NSFW.h:12:
In file included from ../includes/./NativeInterface.h:8:
In file included from ../includes/../includes/osx/FSEventsService.h:4:
../includes/../includes/osx/RunLoop.h:7:10: fatal error: 'CoreServices/CoreServices.h' file not found
#include <CoreServices/CoreServices.h>
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../includes/../includes/osx/RunLoop.h:7:10: note: did not find header 'CoreServices.h' in framework 'CoreServices' (loaded from '/System/Library/Frameworks')
1 error generated.
make: *** [Release/obj.target/nsfw/src/NSFW.o] Error 1
gyp ERR! build error
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack at ChildProcess.onExit (/usr/local/lib/node_modules/npm/node_modules/npm-lifecycle/node_modules/node-gyp/lib/build.js:194:23)
gyp ERR! stack at ChildProcess.emit (node:events:394:28)
gyp ERR! stack at Process.ChildProcess._handle.onexit (node:internal/child_process:290:12)
gyp ERR! System Darwin 20.2.0
gyp ERR! command "/usr/local/lib/node_modules/node/lib/node_modules/node/lib/node_modules/node/lib/node_modules/node/bin/node" "/usr/local/lib/node_modules/npm/node_modules/npm-lifecycle/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /Users/maksimkozlov/apt/my-ide/node_modules/nsfw
gyp ERR! node -v v16.3.0
gyp ERR! node-gyp -v v8.1.0
gyp ERR! not ok
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] install: `node-gyp rebuild`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
```
Google didn't help me with stackoverflow either. You are my last hope =) Plz help
Answers:
username_1: Version of OSX and version of XCode installed?
username_0: macOS 11.1
Xcode is not installed, but I have the Command Line Tools with Apple clang version 12.0.5 (clang-1205.0.22.9)
username_1: https://github.com/nodejs/node-gyp#on-macos Try following this guide and rebuilding.
username_0: used the link above. reinstalled everything three times. nothing helped =( |
marariyan/dummy-repo | 483787516 | Title: HTTP2: Huffman decoding implementation is inefficient
Question:
username_0: We are currently implementing Huffman decoding by looping over a table with a list of encoded values for each length. (See Huffman.cs)
This doesn't seem particularly efficient.
Instead, we should implement Huffman decoding by doing table lookups, using keys of 8 bits or something along those lines.
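Sketching the proposed table lookup with a toy four-symbol code (made-up codes for illustration, not the HPACK static table that HTTP/2 actually uses): since every toy code fits in 8 bits, a single 256-entry table maps the next 8 bits of input directly to a `(symbol, bits_consumed)` pair.

```python
# Toy prefix code (NOT the real HPACK table): 'a'=0, 'b'=10, 'c'=110, 'd'=111.
CODES = {"a": (0b0, 1), "b": (0b10, 2), "c": (0b110, 3), "d": (0b111, 3)}

def build_table():
    # table[window] = (symbol, bits_consumed), where `window` is the next
    # 8 bits of input; valid because every code is at most 8 bits long.
    table = [None] * 256
    for sym, (code, nbits) in CODES.items():
        prefix = code << (8 - nbits)
        for suffix in range(1 << (8 - nbits)):   # fill all completions
            table[prefix | suffix] = (sym, nbits)
    return table

def decode(bits, nbits, table):
    # `bits` holds `nbits` encoded bits as an int, most-significant first.
    out = []
    while nbits > 0:
        if nbits >= 8:
            window = (bits >> (nbits - 8)) & 0xFF
        else:
            window = (bits << (8 - nbits)) & 0xFF  # zero-pad a short tail
        sym, used = table[window]
        if used > nbits:        # decoded into the zero padding: stop
            break
        out.append(sym)
        nbits -= used
        bits &= (1 << nbits) - 1                   # drop consumed high bits
    return "".join(out)
```

For the real HPACK codes of up to 30 bits, the same idea becomes a multi-level table (or a wider key), but the per-symbol cost stays O(1) instead of a scan over the per-length code lists.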
Note that Huffman encoding from servers seems to be pretty common, so this may have real-world perf impact. (Or maybe not... as always, measuring is the best way to know.)<issue_closed>
Status: Issue closed |
apache/trafficcontrol | 509384693 | Title: Server Capability get by name - returns blank response array when the requested item doesn't exist
Question:
username_0: ## I'm submitting a ...
- [X] bug report
- [ ] new feature / enhancement request
- [ ] improvement request (usability, performance, tech debt, etc.)
- [ ] other
## Traffic Control components affected ...
- [ ] CDN in a Box
- [ ] Documentation
- [ ] Grove
- [ ] Traffic Control Client
- [ ] Traffic Monitor
- [X] Traffic Ops
- [ ] Traffic Ops ORT
- [ ] Traffic Portal
- [ ] Traffic Router
- [ ] Traffic Stats
- [ ] Traffic Vault
- [ ] unknown
## Current behavior:
returns
```
{
"response": []
}
```
## Expected / new behavior:
should return alert
```
Text: "no server capability with that key found"
Level: "error"
```
## Minimal reproduction of the problem with instructions:
When GET https://{{TO_BASE_URL}}/api/{{api_version}}/server_capabilities?name=DOESNT_EXIST
where DOESNT_EXIST is a server capability that is not in the list.
[Truncated]
Answers:
username_1: This is a matter of opinion. The URI in question does exist, it's just that a list of all the server capabilities filtered by a non-existent name is an empty list.
If a change was made to consider identifying fields special cases of filters that would cause a 404 error to be returned, it would need to be made globally, as this is the behavior of many (all?) endpoints.
I think that'd be a fine change to make - so that it would match the behavior of other request methods - but I also don't think that what we're currently doing is extremely wrong.
username_2: I don't think this is a bug, it's a fundamental change to the API.
For a 404 vs an empty array for objects that don't exist, I don't think either is the "right" solution. A 404 is probably more expected. But to say "the API returns an array of objects, and if your filter didn't have any, you get a successful empty array" is not a violation of HTTP.
Moreover, the TO API has returned 200 [] for ages. It isn't documented behavior, but it's most certainly a specific behavior, that clients almost certainly rely on in places. Even if it doesn't violate the SemVer API Promise, we can't reasonably change this without breaking tons of people.
Likewise, IMO we should continue to make new endpoints follow the same pattern as everything else.
It wouldn't be a bad idea to do this in the next major version of the API (2.0), but we simply can't safely do it to 1.x.
username_3: For what it's worth, to me this makes sense:
`GET /foos/{id-does-not-exist}` returns a 404 as the URI does NOT exist
and
`GET /foos?id=id-does-not-exist` returns an empty array, as I see the query parameter as a filter of a perfectly valid URI, as @username_1 mentioned.
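In code, the two conventions look roughly like this (a sketch with toy data and hypothetical handler names, not actual Traffic Ops code):

```python
CAPABILITIES = [{"name": "RAM"}, {"name": "SSD"}]

class NotFound(Exception):
    """Stand-in for an HTTP 404 response."""

def list_capabilities(name=None):
    # GET /server_capabilities?name=... : the query string filters a valid
    # collection URI, so an unmatched name yields 200 with an empty array.
    return [c for c in CAPABILITIES if name is None or c["name"] == name]

def get_capability(name):
    # GET /server_capabilities/{name} : the identifier is part of the URI,
    # so a missing capability means the URI itself does not exist: 404.
    for c in CAPABILITIES:
        if c["name"] == name:
            return c
    raise NotFound("no server capability with that key found")
```

Under this reading, the empty array in the report above is the expected 200 response for a filtered collection, while the 404 alert would belong to an identifier-in-path route.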
username_3: IMO this should simply be closed as this is expected behavior.
username_1: It's not a bug but it can stay open as an enhancement request |
tsbonev/nharker | 363905135 | Title: articles: implement implicit linking on a partial basis
Question:
username_0: Currently, automatic linking works by searching through full link titles, but some names might only be mentioned as a first name; e.g. <NAME>'s article's link title is "john-doe", but if the entry contains only "John" then that link won't be made.
Possibility 1)
Search through the article full titles and find matches based on the content (tokenized to remove stop words)
Possibility 2) Add a global map of shorthands that can be appended to the list of link titles so that
"john" and "john-doe" can point to the article of <NAME>.
Answers:
username_0: A global map is more efficient.
Status: Issue closed
|
NCAR/ParallelIO | 195022333 | Title: Split C and Fortran unit tests into different directories
Question:
username_0: The C and Fortran unit tests are mixed together, which makes the build files needlessly complicated.
By putting each in their own directory we will get two simple build files instead of one complex one, and the C-only build will be simplified.<issue_closed>
Status: Issue closed |
dwyl/app | 523524712 | Title: Basic labelling/tagging
Question:
username_0: As a person who believes that structure is key to organisation and trust in an app
I'd like to be able to tag my inputs with tags I make up myself
So that I can later have a better understanding of what is required from that capture at a glance and filter by the topics I am most interested in.
Tagging and projects are some of the features that I expect to most evolve as part of the app's user experience as they are the most critical but here is a starting point.
To be clear, I don't think that this will be _useable_ as-is, but will certainly be a place to evolve from. We will have a much better UX for this within the next sprint but this is the starting point.
## Acceptance Criteria
+ [ ] When I edit an existing item in my list of captures, I see a text field where I can input tags
+ [ ] For stage 1, tags are input as a comma-separated list (a space required after each comma)
+ [ ] A submit button is clicked to save them
+ [ ] Each tag is checked against the saved tags (case-insensitive) and:
+ [ ] If a tag already exists, save the capture against that tag
+ [ ] If a tag does not exist, create a new tag and save the capture to it
+ [ ] Do not worry about leading and trailing spaces for now, this will not be the final UX MVP so we shouldn't waste time on fool-proofing for leading and trailing spaces yet - this version will only be used inside the dwyl team
+ [ ] Once tags are saved, take human back to the list of captures
Answers:
username_1: I think it shouldn't be too complicated to trim the spaces on tags. Another question, do we want tags to be case insensitive? Might be good to avoid creating duplicated tags.
Singular vs plural tags might also be a case where duplicates can be created (e.g. `idea` vs `ideas` tags)?
username_0: @username_1 All good questions!
On case sensitivity - definitely case-*in*sensitive for now.
I wouldn't worry about the plurals yet because we will very closely get to an auto-complete solution to this where a person will type in a couple of letters and get the various suggestions given to them (which should deal with the plurals).
username_2: @username_0 are you working on creating the UI for this story or am I?
It's been in Sprint 1 milestone https://github.com/dwyl/app/milestone/3 for _way_ too long ...
Can we either _remove_ it from the milestone or provide some sort of update?
username_2: The UX of this is pretty important. This is a mini-project in itself that is worth brainstorming _several_ UIs for to ensure that we have an API that can cater to all UI/UXs.
@username_0 do you have time to sit down and sketch ideas with me e.g: tomorrow? |
saltstack/salt | 314494709 | Title: Salt version 2018.3.0.1.el7 broken on RHEL 7
Question:
username_0: ### Description of Issue/Question
I keep getting this error when trying to connect a minion to the master. Key error. I have deleted the key on the client and restarted it. I also updated the salt master, and the minion's key seems to be accepted by salt.
=======
[ERROR ] Error while bringing up minion for multi-master. Is master at salt responding?
[DEBUG ] Connecting to master. Attempt 1 of 1
[DEBUG ] Master URI: tcp://10.100.32.67:4506
[DEBUG ] Re-using AsyncAuth for (u'/etc/salt/pki/minion', u'vuwunicoedwdbd2.ods.vuw.ac.nz', u'tcp://10.100.32.67:4506')
[DEBUG ] Generated random reconnect delay between '1000ms' and '11000ms' (3635)
[DEBUG ] Setting zmq_reconnect_ivl to '3635ms'
[DEBUG ] Setting zmq_reconnect_ivl_max to '11000ms'
[DEBUG ] Initializing new AsyncZeroMQReqChannel for (u'/etc/salt/pki/minion', u'vuwunicoedwdbd2.ods.vuw.ac.nz', u'tcp://10.100.32.67:4506', 'clear')
[DEBUG ] Connecting the Minion to the Master URI (for the return server): tcp://10.100.32.67:4506
[DEBUG ] Trying to connect to: tcp://10.100.32.67:4506
[DEBUG ] salt.crypt.get_rsa_pub_key: Loading public key
[DEBUG ] Decrypting the current master AES key
[DEBUG ] salt.crypt.get_rsa_key: Loading private key
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG ] salt.crypt.get_rsa_pub_key: Loading public key
[CRITICAL] The Salt Master server's public key did not authenticate!
The master may need to be updated if it is a version of Salt lower than 2018.3.0, or
If you are confident that you are connecting to a valid Salt Master, then remove the master public key and restart the Salt Minion.
The master public key can be found at:
/etc/salt/pki/minion/minion_master.pub
[ERROR ] Error while bringing up minion for multi-master. Is master at salt responding?
[DEBUG ] Connecting to master. Attempt 1 of 1
======
### Setup
(Please provide relevant configs and/or SLS files (Be sure to remove sensitive info).)
### Steps to Reproduce Issue
(Include debug logs if possible and relevant.)
### Versions Report
(Provided by running `salt --versions-report`. Please also mention any differences in master/minion versions.)
[root@vuwunicorhsat01 salt]# rpm -q salt-master
salt-master-2018.3.0-1.el7.noarch
[root@vuwunicorhsat01 salt]#
[root@vuwunicoedwdbd2 ~]# rpm -q salt-minion
salt-minion-2018.3.0-1.el7.noarch
[root@vuwunicoedwdbd2 ~]#
[root@vuwunicorhsat01 salt]# salt --versions-report
Salt Version:
Salt: 2018.3.0
Dependency Versions:
cffi: 1.6.0
cherrypy: Not Installed
dateutil: 1.5
docker-py: Not Installed
gitdb: Not Installed
gitpython: Not Installed
ioflo: Not Installed
Jinja2: 2.7.2
libgit2: Not Installed
[Truncated]
pygit2: Not Installed
Python: 2.7.5 (default, May 3 2017, 07:55:04)
python-gnupg: 0.3.7
PyYAML: 3.11
PyZMQ: 15.3.0
RAET: Not Installed
smmap: Not Installed
timelib: Not Installed
Tornado: 4.2.1
ZMQ: 4.1.4
System Versions:
dist: redhat 7.4 Maipo
locale: UTF-8
machine: x86_64
release: 3.10.0-693.17.1.el7.x86_64
system: Linux
version: Red Hat Enterprise Linux Server 7.4 Maipo
[root@vuwunicorhsat01 salt]#
Answers:
username_0: I am getting a padding check failed?
======
[root@vuwunicorhsat01 salt]# systemctl status salt-master
● salt-master.service - The Salt Master Server
Loaded: loaded (/usr/lib/systemd/system/salt-master.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2018-04-16 16:01:44 NZST; 8min ago
Docs: man:salt-master(1)
file:///usr/share/doc/salt/html/contents.html
https://docs.saltstack.com/en/latest/contents.html
Main PID: 17185 (salt-master)
CGroup: /system.slice/salt-master.service
├─17185 /usr/bin/python /usr/bin/salt-master
├─17190 /usr/bin/python /usr/bin/salt-master
├─17195 /usr/bin/python /usr/bin/salt-master
├─17197 /usr/bin/python /usr/bin/salt-master
├─17199 /usr/bin/python /usr/bin/salt-master
├─17200 /usr/bin/python /usr/bin/salt-master
├─17201 /usr/bin/python /usr/bin/salt-master
├─17202 /usr/bin/python /usr/bin/salt-master
├─17203 /usr/bin/python /usr/bin/salt-master
├─17204 /usr/bin/python /usr/bin/salt-master
├─17205 /usr/bin/python /usr/bin/salt-master
├─17206 /usr/bin/python /usr/bin/salt-master
├─17223 /usr/bin/python /usr/bin/salt-master
├─17224 /usr/bin/python /usr/bin/salt-master
├─17225 /usr/bin/python /usr/bin/salt-master
├─17226 /usr/bin/python /usr/bin/salt-master
├─17227 /usr/bin/python /usr/bin/salt-master
├─17228 /usr/bin/python /usr/bin/salt-master
├─17229 /usr/bin/python /usr/bin/salt-master
├─17230 /usr/bin/python /usr/bin/salt-master
├─17231 /usr/bin/python /usr/bin/salt-master
├─17233 /usr/bin/python /usr/bin/salt-master
└─17234 /usr/bin/python /usr/bin/salt-master
Apr 16 16:05:35 vuwunicorhsat01.ods.vuw.ac.nz salt-master[17185]: pub = salt.crypt.get_rsa_pub_key(pubfn)
Apr 16 16:05:35 vuwunicorhsat01.ods.vuw.ac.nz salt-master[17185]: File "/usr/lib/python2.7/site-packages/salt/crypt.py", line 210, in get_...ub_key
Apr 16 16:05:35 vuwunicorhsat01.ods.vuw.ac.nz salt-master[17185]: key = RSA.load_pub_key(path)
Apr 16 16:05:35 vuwunicorhsat01.ods.vuw.ac.nz salt-master[17185]: File "/usr/lib64/python2.7/site-packages/M2Crypto/RSA.py", line 406, in ...ub_key
Apr 16 16:05:35 vuwunicorhsat01.ods.vuw.ac.nz salt-master[17185]: return load_pub_key_bio(bio)
Apr 16 16:05:35 vuwunicorhsat01.ods.vuw.ac.nz salt-master[17185]: File "/usr/lib64/python2.7/site-packages/M2Crypto/RSA.py", line 422, in ...ey_bio
Apr 16 16:05:35 vuwunicorhsat01.ods.vuw.ac.nz salt-master[17185]: rsa_error()
Apr 16 16:05:35 vuwunicorhsat01.ods.vuw.ac.nz salt-master[17185]: File "/usr/lib64/python2.7/site-packages/M2Crypto/RSA.py", line 302, in rsa_error
Apr 16 16:05:35 vuwunicorhsat01.ods.vuw.ac.nz salt-master[17185]: raise RSAError, m2.err_reason_error_string(m2.err_get_error())
Apr 16 16:05:35 vuwunicorhsat01.ods.vuw.ac.nz salt-master[17185]: RSAError: padding check failed
Hint: Some lines were ellipsized, use -l to show in full.
[root@vuwunicorhsat01 salt]#
username_0: [root@vuwunicorhsat01 salt]# salt-key -L |grep edwdbd2
vuwunicoedwdbd2.ods.vuw.ac.nz
[root@vuwunicorhsat01 salt]#
So the 2 can talk, this problem suggests a bug.
username_1: We are getting the same error after updating the minion to 2018.3.0 on RedHat 7.4.
The master is also 2018.3.0 but on CentOS 7.4; CentOS 2018.3.0 minions are working fine.
username_2: This is a duplicate of #46868
Daniel
Status: Issue closed
|
flutter/flutter | 865632600 | Title: Scrollbar thumbs aren't selectable when semi-transparent.
Question:
username_0: When I tried to drag a scrollbar thumb on a desktop app (Linux), grabbing the thumb is pretty hard in the first place, but I was surprised that when I dragged the thumb, it seemed to go the opposite direction from what I expected (the thumb ran away from my mouse pointer), until I realized that the click was instead just going straight through the thumb and the click was being interpreted as a drag of the list contents instead.
Probably when we click on a thumb, it should grab the thumb regardless of whether it is transparent or not, so that it won't be interpreted as a drag of the list contents below it.
https://user-images.githubusercontent.com/8867023/115801150-ea82f080-a390-11eb-8da0-fe7989f2df96.mp4
Sample code from the video (not really specific to this code, however):
```dart
import 'package:flutter/material.dart';
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Scroll Bug',
theme: ThemeData(
primarySwatch: Colors.blue,
),
home: MyHomePage(title: 'Scroll Bug'),
);
}
}
class MyHomePage extends StatefulWidget {
MyHomePage({Key? key, required this.title}) : super(key: key);
final String title;
@override
_MyHomePageState createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text(widget.title),
),
body: Center(
child: ListView(
children: List<Widget>.generate(100, (int index) {
return Text('Item $index');
}),
),
),
);
}
}
```
cc @username_3
Answers:
username_1: May be a duplicate of https://github.com/flutter/flutter/issues/79235.
<details>
<summary>flutter doctor -v</summary>
```bash
[✓] Flutter (Channel master, 2.2.0-11.0.pre.245, on macOS 11.2.3 20D91 darwin-x64, locale en-AO)
• Flutter version 2.2.0-11.0.pre.245 at /Users/pedromassango/Code/flutter_master
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 4825c639b6 (15 minutes ago), 2021-04-22 23:26:15 -0700
• Engine revision 6fa0fb0059
• Dart version 2.14.0 (build 2.14.0-18.0.dev)
[✓] Android toolchain - develop for Android devices (Android SDK version 30.0.2)
• Android SDK at /Users/pedromassango/Library/Android/sdk
• Platform android-30, build-tools 30.0.2
• ANDROID_HOME = /Users/pedromassango/Library/Android/sdk
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b3-6915495)
• All Android licenses accepted.
[!] Xcode - develop for iOS and macOS
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 12.4, Build version 12D4e
! CocoaPods 1.9.3 out of date (1.10.0 is recommended).
CocoaPods is used to retrieve the iOS and macOS platform side's plugin code that responds to your plugin usage on the Dart side.
Without CocoaPods, plugins will not work on iOS or macOS.
For more info, see https://flutter.dev/platform-plugins
To upgrade see https://guides.cocoapods.org/using/getting-started.html#installation for instructions.
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 4.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b3-6915495)
[✓] IntelliJ IDEA Community Edition (version 2021.1)
• IntelliJ at /Applications/IntelliJ IDEA CE.app
• Flutter plugin version 55.1.5
• Dart plugin version 211.6693.108
[✓] VS Code (version 1.55.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.16.0
[✓] Connected device (2 available)
• macOS (desktop) • macos • darwin-x64 • macOS 11.2.3 20D91 darwin-x64
• Chrome (web) • chrome • web-javascript • Google Chrome 90.0.4430.85
! Doctor found issues in 1 category.
```
</details>
username_2: CC @username_3
username_3: In the attached video, it looks like the Scrollbar fade out animation has completed by the time the mouse tries to grab it and drag. When it's not visible it's not interactive currently.
https://github.com/flutter/flutter/blob/f55b2665e92aea59645b3b964bc9e2254bff7d1f/packages/flutter/lib/src/widgets/scrollbar.dart#L488
AFAIK users expect when the scrollbar is not visible that it should not interfere with scrolling the scroll view normally, but we could see about making this configurable.
If the scroll view has already scrolled, meaning the scrollbar has metrics it can lay out with, then we could add support for hit testing when it is transparent. I think it would be nice to support it coming back into view no matter what if the mouse hovers over the scrollbar area.
The trouble is that if the scrollbar is not visible, and has not laid out yet, it does not have any information to inform hit testing, or how/where to paint.
This is really similar to https://github.com/flutter/flutter/issues/80262, which aims to solve the fact that the scrollbar cannot paint without the scroll metrics.
username_0: If I missed it because it faded out, I feel like it's just REALLY hard to select, then. It very much felt like I grabbed it and then when I moved the mouse, it went in the entirely opposite direction from the way I expected (because I missed it and dragged the background).
It fades out so fast that I can't grab it with the pointer in time, and I feel like people who aren't as aware of the game they have to play to "catch" the thumb will just never get there in time (I'm thinking about my parents trying to use this interface, for instance). It should feel smooth and easy, not like some kind of video game challenge. :-)
Should it maybe not continue to fade out if the mouse is even close to it (like some kind of padding around it)?
username_3: I totally get that. Just watching that video looks like a frustrating experience. This is actually the default behavior for scrollbars on chrome/mac, and it can drive me nuts when I try to grab the thumb. 🤣
The scrollbar was originally padded like you suggested, but we removed it because it caused issues for nearby drag gestures (PR: https://github.com/flutter/flutter/pull/77755 Issue: https://github.com/flutter/flutter/issues/77354 ) I wonder if we can find a happy middle ground to fine tune this, to remove the padding when trying to drag, but add it when evaluating hover.
username_0: I think it would help a lot to just stop fading out when the pointer is over the scrollbar region, or event in fact to start fading _in_ when it's over the scrollbar. Although I suppose that might be annoying if you were trying to drag the background near the scrollbar. Maybe it only starts fading in again if it's not already transparent?
The fundamental problem, I think, is that these scrollbars were designed for touch screens, and meant to be indicators, not draggable controls. They're too small, they disappear far too easily/quickly, and aren't easily discoverable. How do I really feel? :smile:
username_3: Oh, this is the new scrollbar designed for desktop by Material Design. 😓
username_4: Probably the bar should always show on the desktop platform by default.
username_3: So I've got a fix almost ready for this. It will make the **_hover_** hit testing larger when using a mouse, but not when actually clicking on it, since that was previously reported and fixed in #77354. It will also make it possible for hover to be detected and the animation to bring the opacity back up to 1 even when it is transparent - the caveat being that it must have already painted once to know where at least the scrollbar track area is.
#80262 covers the case of when we don't have any data from the scrollable yet. I've taken a couple cracks at it trying to solve both of these at the same time, but have not found a solution for that one yet.
username_0: That sounds like it will definitely help. |
woocommerce/woocommerce | 1093129183 | Title: Packagist does not list v6.0.0
Question:
username_0: ### Prerequisites
- [ ] I have carried out troubleshooting steps and I believe I have found a bug.
- [ ] I have searched for similar bugs in both open and closed issues and cannot find a duplicate.
### Describe the bug
Hello!
Packagist says _Last update: 2022-01-04 08:24:25 UTC_ but there is no v6.0
Could you contact them?
### Expected behavior
list v6.0.
### Actual behavior
v6.0 is not listed
### Steps to reproduce
Got to https://packagist.org/packages/woocommerce/woocommerce
### WordPress Environment
n/a
### Isolating the problem
- [ ] I have deactivated other plugins and confirmed this bug occurs when only WooCommerce plugin is active.
- [ ] This bug happens with a default WordPress theme active, or [Storefront](https://woocommerce.com/storefront/).
- [X] I can reproduce this bug consistently using the steps above.
Answers:
username_0: https://github.com/woocommerce/woocommerce/issues/31475#issuecomment-1004345153
Status: Issue closed
|
watertribekhaleesi/prj-rev-bwfs-dasmoto | 259347698 | Title: classes vs. id's
Question:
username_0: https://github.com/watertribekhaleesi/prj-rev-bwfs-dasmoto/blob/master/DasmotoArts/index.html#L10-L18
It seems you assigned classes to your brush, frames and paint elements; however, since these are only used once and are thus individual, unique elements, they should be assigned IDs rather than classes. Classes are more suitable for when you want to group multiple elements and give them all the same styling. Here is a great resource on classes vs. IDs:
https://css-tricks.com/the-difference-between-id-and-class/
Also, it may be a good idea to give your id/class names more suitable names such as "frames", "paint", "brush" rather than "mediumspring" etc. |
YurySorokin/Civil3D_Templates_parts | 642542065 | Title: Contour lines per GOST
Question:
username_0: It would be nice to have contour labels that sit on the upper side of the line and have a tick mark indicating the direction of slope. The tick could be made part of the contour line's own style (though that is not great), but as I understand it the text placement cannot be changed? Although maybe I am wrong.
tlswg/tls13-spec | 31765097 | Title: Erratum: missing )
Question:
username_0: (TLSCompressed.length) is 61 bytes, and the MAC length is 20 bytes,
then the length before padding is 82 bytes (this does not include the
IV**)**. Thus, the padding length modulo 8 must be equal to 6 in order to
make the total length an even multiple of 8 bytes (the block length).
The padding length can be 6, 14, 22, and so on, through 254. If the
padding length were the minimum necessary, 6, the padding would be 6
bytes, each containing the value 6. Thus, the last 8 octets of the
GenericBlockCipher before block encryption would be xx 06 06 06 06 06
06 06, where xx is the last octet of the MAC.<issue_closed>
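The arithmetic in the corrected paragraph can be checked mechanically with a small sketch of the TLS 1.0/1.1 CBC padding rule (the `+ 1` below is the padding_length octet, which is how 61 + 20 gives 82 bytes before padding):

```python
def cbc_padding(fragment_len, mac_len, block_len=8):
    """Return (padding_length, padding bytes including the length octet)."""
    # the 1-byte padding_length field is part of the padded plaintext
    before_padding = fragment_len + mac_len + 1
    pad = (-before_padding) % block_len   # minimum padding length
    # each padding octet, and the final length octet, holds the value `pad`
    return pad, bytes([pad] * (pad + 1))

pad, octets = cbc_padding(61, 20)          # the example from the erratum
assert pad == 6 and octets == b"\x06" * 7  # ... xx 06 06 06 06 06 06 06
assert (61 + 20 + 1 + pad) % 8 == 0        # total is a multiple of 8
```

The final assertion is the erratum's example: 82 bytes plus 6 bytes of padding is 88, an even multiple of the 8-byte block length.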
Status: Issue closed |
facebook/react-native | 805512046 | Title: if <StatusBar hidden={true} /> then scrollview stops working
Question:
username_0: ## Description
I had to use full screen, so I used `<StatusBar hidden={true} />`; after that, scrolling stopped working where I had a ScrollView.
## React Native version: 0.63.4
## Steps To Reproduce
1. set hidden on StatusBar with true
## Expected Results
if hidden is set on StatusBar with true, the ScrollView keeps working
```
return (
<SafeAreaView
style={{
flex: 1,
backgroundColor: colors.primary,
}}>
<StatusBar hidden={true} />
<Provider store={store}>
<Routes
ref={setNavigator}
onNavigationStateChange={detectTransitionScreen}
/>
<SocketController />
<ModalWarning />
</Provider>
</SafeAreaView>
);
```
## Register
```
return (
<Container>
<ContainerLogo />
<Content>
<ContainerForm>
<ContainerAvatar>
<ContentAvatar
source={{
uri: user_avatar
? user_avatar.uri
: 'https://www.freeiconspng.com/thumbs/profile-icon-png/profile-icon-9.png',
}}
/>
<DotAvatar
onPress={() => dispatch(AppActions.set_option_images(true))}>
<Icon
size={metrics.normalize(23)}
name={'camera'}
color={colors.gray}
/>
</DotAvatar>
</ContainerAvatar>
<Form ref={formRef} onSubmit={handleNext}>
<Input
[Truncated]
}
}
export default {
base_size_logo: 170,
base_margin: 10,
base_padding: 15,
base_radius: 8,
SCALE_BIGER,
SCALE_SMALL,
window: {
width,
height,
},
widthPercentageToDP,
heightPercentageToDP,
normalize,
};
```
Answers:
username_0: For some reason, this was affecting the ScrollView.
I used:
```
export const Container = styled.KeyboardAvoidingView.attrs({
behavior: 'padding',
resetScrollToCoords: { x: 0, y: 0 },
scrollEnabled: true,
})`
flex: 1;
background: ${colors.primary};
`;
```
And it worked again :)
Status: Issue closed
|
openstreetmap/chef | 427343179 | Title: Enable TimedMediaHandler mediawiki extension (Videos)
Question:
username_0: Please enable [TimedMediaHandler](https://www.mediawiki.org/wiki/Extension:TimedMediaHandler) extension on OSM wiki. It allows videos from Commons to be played directly in the site, instead of showing as a link. Among other things, it will allow us to have instructional videos prominently shown with a snapshot image at the top. This extension should have negligible affect on the server performance (most of the work is done on the client).<issue_closed>
Status: Issue closed |
spring-cloud/spring-cloud-kubernetes | 575088894 | Title: spring-cloud-kubernetes compatibility with kubernetes version
Question:
username_0: should be list like https://github.com/fabric8io/kubernetes-client#compatibility-matrix
Answers:
username_1: We should link to the matrix since that is what we use as a client
username_0: The problem occurred when I ran the config map reload demo. The error is "on project spring-cloud-kubernetes-example-reload: Failed to create Deployment from kubernetes.yml. io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://192.168.99.100:8443/apis/extensions/v1beta1/namespaces/default/deployments. Message".
The demo runtime environment is as follows:
OS: Windows (Docker Toolbox)
Kubernetes version: minikube v1.7.3
Branch used: 1.0.x
I searched for the reason and looked at the compatibility matrix at https://github.com/fabric8io/kubernetes-client#compatibility-matrix,
and I changed the version of the kubernetes client,
but it still did not work, so I am suggesting this.
Can you give me some suggestions? I would appreciate it.
username_0: 
error detail
username_0: request url https://192.168.99.100:8443/apis/extensions/v1beta1/namespaces/default/deployments
response is
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "the server could not find the requested resource",
"reason": "NotFound",
"details": {
},
"code": 404
}
username_2: @username_0 are these two separate issues?
username_0: Today I replaced minikube v1.7.3 (the latest version) with v1.1.1 and the kubectl client with v1.1.1, and an error occurred: `kubectl version` printed a message that it cannot connect to localhost:8080. So I replaced kubectl with v1.4.13 and that problem was resolved, but `minikube dashboard` cannot start; the message is "X http://127.0.0.1:61097/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/ is not responding properly: Temporary Error: unexpected response code: 503".
The main problem is that the pod does not come up. When I use the maven plugin command
`mvn clean package fabric8:deploy -Pkubernetes`, the build succeeds,
but `kubectl get pods` shows it is not running; the reason is that the image cannot be pulled.
So I packaged it as a jar, built it into an image, pushed it to Docker Hub, pulled it via `minikube ssh`,
and tagged it, but it still does not work: its status is restarting. I checked the logs, but I
don't know what the next step should be to resolve this problem.
The best option, I guess, is to uninstall minikube and install the latest version v1.7.3, but then it cannot deploy to Kubernetes automatically, although everything else works better. Which version would you suggest I install, or which branch should I check out?
username_0: No. I guess minikube 1.7.3 removed the REST API, which causes the deploy error. So, to make it work better, yesterday I changed to minikube 1.1.1, but that caused even more problems. Which version of minikube works best with the current 1.0.x branch?
username_0: Oh, I get it. The reload configmap demo should run `minikube docker-env` in full; I had only run the last line, not the whole thing. But then the next problem appears:
`
[INFO] Using namespace: default
[INFO] Creating a Deployment from kubernetes.yml namespace default name spring-cloud-reload
[ERROR] Failed to create Deployment from kubernetes.yml. io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://192.168.99.100:8443/apis/extensions/v1beta1/namespaces/default/deployments. Message: the server could not find the requested resource. Received status: Status(apiVersion=v1, code=404, details=StatusDetails(causes=[], group=null, kind=null, name=null, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=the server could not find the requested resource, metadata=ListMeta(resourceVersion=null, selfLink=null, additionalProperties={}), reason=NotFound, status=Failure, additionalProperties={}).. Deployment(apiVersion=extensions/v1beta1, kind=Deployment, metadata=ObjectMeta(annotations={fabric8.io/git-commit=<PASSWORD>, fabric8.io/metrics-path=dashboard/file/kubernetes-pods.json/?var-project=spring-cloud-kubernetes-example-reload&var-version=1.0.6.BUILD-SNAPSHOT, fabric8.io/scm-con-url=scm:git:git://github.com/spring-cloud-incubator/spring-cloud-kubernetes.git/spring-cloud-kubernetes-examples/spring-cloud-kubernetes-example-reload, fabric8.io/scm-url=https://github.com/spring-cloud-incubator/spring-cloud-kubernetes/spring-cloud-kubernetes-examples/spring-cloud-kubernetes-example-reload, fabric8.io/iconUrl=img/icons/spring-boot.svg, fabric8.io/git-branch=1.0.x, fabric8.io/scm-devcon-url=scm:git:ssh://[email protected]/spring-cloud-incubator/spring-cloud-kubernetes.git/spring-cloud-kubernetes-examples/spring-cloud-kubernetes-example-reload, fabric8.io/scm-tag=HEAD, fabric8.io/docs-url=scp://static.springframework.org/var/www/domains/springframework.org/static/htdocs/spring-cloud/docs/spring-cloud-kubernetes-example-reload/1.0.6.BUILD-SNAPSHOT/spring-cloud-kubernetes/spring-cloud-kubernetes-examples/spring-cloud-kubernetes-example-reload}, clusterName=null, creationTimestamp=null, deletionGracePeriodSeconds=null, deletionTimestamp=null, finalizers=[], generateName=null, generation=null, 
initializers=null, labels={app=spring-cloud-kubernetes-example-reload, provider=fabric8, version=1.0.6.BUILD-SNAPSHOT, group=org.springframework.cloud}, name=spring-cloud-reload, namespace=null, ownerReferences=[], resourceVersion=null, selfLink=null, uid=null, additionalProperties={}), spec=DeploymentSpec(minReadySeconds=null, paused=null, progressDeadlineSeconds=null, replicas=1, revisionHistoryLimit=2, rollbackTo=null, selector=LabelSelector(matchExpressions=[], matchLabels={app=spring-cloud-kubernetes-example-reload, provider=fabric8, group=org.springframework.cloud}, additionalProperties={}), strategy=null, template=PodTemplateSpec(metadata=ObjectMeta(annotations={fabric8.io/git-commit=faebe99cbcf16c33af75a0012328e364cbaab3d8, fabric8.io/metrics-path=dashboard/file/kubernetes-pods.json/?var-project=spring-cloud-kubernetes-example-reload&var-version=1.0.6.BUILD-SNAPSHOT, fabric8.io/scm-con-url=scm:git:git://github.com/spring-cloud-incubator/spring-cloud-kubernetes.git/spring-cloud-kubernetes-examples/spring-cloud-kubernetes-example-reload, fabric8.io/scm-url=https://github.com/spring-cloud-incubator/spring-cloud-kubernetes/spring-cloud-kubernetes-examples/spring-cloud-kubernetes-example-reload, fabric8.io/iconUrl=img/icons/spring-boot.svg, fabric8.io/git-branch=1.0.x, fabric8.io/scm-devcon-url=scm:git:ssh://[email protected]/spring-cloud-incubator/spring-cloud-kubernetes.git/spring-cloud-kubernetes-examples/spring-cloud-kubernetes-example-reload, fabric8.io/scm-tag=HEAD, fabric8.io/docs-url=scp://static.springframework.org/var/www/domains/springframework.org/static/htdocs/spring-cloud/docs/spring-cloud-kubernetes-example-reload/1.0.6.BUILD-SNAPSHOT/spring-cloud-kubernetes/spring-cloud-kubernetes-examples/spring-cloud-kubernetes-example-reload}, clusterName=null, creationTimestamp=null, deletionGracePeriodSeconds=null, deletionTimestamp=null, finalizers=[], generateName=null, generation=null, initializers=null, labels={app=spring-cloud-kubernetes-example-reload, 
provider=fabric8, version=1.0.6.BUILD-SNAPSHOT, group=org.springframework.cloud}, name=null, namespace=null, ownerReferences=[], resourceVersion=null, selfLink=null, uid=null, additionalProperties={}), spec=PodSpec(activeDeadlineSeconds=null, affinity=null, automountServiceAccountToken=null, containers=[Container(args=[], command=[], env=[EnvVar(name=KUBERNETES_NAMESPACE, value=null, valueFrom=EnvVarSource(configMapKeyRef=null, fieldRef=ObjectFieldSelector(apiVersion=null, fieldPath=metadata.namespace, additionalProperties={}), resourceFieldRef=null, secretKeyRef=null, additionalProperties={}), additionalProperties={})], envFrom=[], image=cloud/spring-cloud-kubernetes-example-reload:snapshot-200307-090313-0454, imagePullPolicy=IfNotPresent, lifecycle=null, livenessProbe=Probe(exec=null, failureThreshold=null, httpGet=HTTPGetAction(host=null, httpHeaders=[], path=/health, port=IntOrString(IntVal=8080, Kind=null, StrVal=null, additionalProperties={}), scheme=HTTP, additionalProperties={}), initialDelaySeconds=180, periodSeconds=null, successThreshold=null, tcpSocket=null, timeoutSeconds=null, additionalProperties={}), name=spring-boot, ports=[ContainerPort(containerPort=8080, hostIP=null, hostPort=null, name=http, protocol=TCP, additionalProperties={}), ContainerPort(containerPort=9779, hostIP=null, hostPort=null, name=prometheus, protocol=TCP, additionalProperties={}), ContainerPort(containerPort=8778, hostIP=null, hostPort=null, name=jolokia, protocol=TCP, additionalProperties={})], readinessProbe=Probe(exec=null, failureThreshold=null, httpGet=HTTPGetAction(host=null, httpHeaders=[], path=/health, port=IntOrString(IntVal=8080, Kind=null, StrVal=null, additionalProperties={}), scheme=HTTP, additionalProperties={}), initialDelaySeconds=10, periodSeconds=null, successThreshold=null, tcpSocket=null, timeoutSeconds=null, additionalProperties={}), resources=null, securityContext=SecurityContext(capabilities=null, privileged=false, readOnlyRootFilesystem=null, 
runAsNonRoot=null, runAsUser=null, seLinuxOptions=null, additionalProperties={}), stdin=null, stdinOnce=null, terminationMessagePath=null, terminationMessagePolicy=null, tty=null, volumeMounts=[], workingDir=null, additionalProperties={})], dnsPolicy=null, hostAliases=[], hostIPC=null, hostNetwork=null, hostPID=null, hostname=null, imagePullSecrets=[], initContainers=[], nodeName=null, nodeSelector=null, restartPolicy=null, schedulerName=null, securityContext=null, serviceAccount=null, serviceAccountName=null, subdomain=null, terminationGracePeriodSeconds=null, tolerations=[], volumes=[], additionalProperties={}), additionalProperties={}), additionalProperties={}), status=null, additionalProperties={})
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://192.168.99.100:8443/apis/extensions/v1beta1/namespaces/default/deployments. Message: the server could not find the requested resource. Received status: Status(apiVersion=v1, code=404, details=StatusDetails(causes=[], group=null, kind=null, name=null, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=the server could not find the requested resource, metadata=ListMeta(resourceVersion=null, selfLink=null, additionalProperties={}), reason=NotFound, status=Failure, additionalProperties={}).
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure (OperationSupport.java:470)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode (OperationSupport.java:409)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse (OperationSupport.java:379)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse (OperationSupport.java:343)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate (OperationSupport.java:226)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate (BaseOperation.java:773)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create (BaseOperation.java:356)
at io.fabric8.kubernetes.api.Controller.doCreateResource (Controller.java:1082)
at io.fabric8.kubernetes.api.Controller.applyResource (Controller.java:1071)
at io.fabric8.kubernetes.api.Controller.applyEntity (Controller.java:282)
at io.fabric8.kubernetes.api.Controller.apply (Controller.java:227)
at io.fabric8.maven.plugin.mojo.build.ApplyMojo.applyEntities (ApplyMojo.java:414)
at io.fabric8.maven.plugin.mojo.build.ApplyMojo.executeInternal (ApplyMojo.java:390)
at io.fabric8.maven.plugin.mojo.AbstractFabric8Mojo.execute (AbstractFabric8Mojo.java:74)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:957)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:289)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:193)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:497)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406)
at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:13 min
[INFO] Finished at: 2020-03-07T09:03:21+08:00
[INFO] ------------------------------------------------------------------------
`
username_0: It works when the namespace `default` is specified, but it cannot get the configmap (it is ignored). So I repackaged the source with the stack trace printed; the message is "User "system:serviceaccount:default:default" cannot get resource "configmaps" in API group "" in the namespace "default"."
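For reference, the usual fix for that RBAC error is to grant the service account read access to config maps. A minimal, hypothetical manifest (names are illustrative; the repository's docs/src/main/asciidoc/security-service-accounts.adoc is the authoritative reference) could look like:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: configmap-reader
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: default-configmap-reader
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: configmap-reader
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
```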
Status: Issue closed
username_0: The problem is now completely resolved; only automatic deployment of the app to minikube still does not work. The service account problem was solved by following docs/src/main/asciidoc/security-service-accounts.adoc, so I closed this question. Thanks a lot.
Half-automatic is enough; I will just work on the develop version. |
neutrinolabs/xrdp | 33516716 | Title: update http://www.xrdp.org/ to point to this repo
Question:
username_0: The website is confusing as it still points to SourceForge.
Answers:
username_1: Hello,
I agree; as a user I had a hard time finding this repository and could easily have missed the xrdp project. (In fact I missed the project for many months.)
In my opinion, there is also a missing piece in the documentation, in particular for the xorgxrdp Xorg modules, whose build is much shorter than building X11rdp, for example.
username_2: I linked download on the website to github so it can be easy to find
Status: Issue closed
|
NuGet/NuGetGallery | 203123679 | Title: Problem with https
Question:
username_0: I also get the attached IIS Express notification: The specified port is in use. Port 443 is reserved by URL ':https://+:sra_{ba....}/'
Any help is appreciated.

/Regards
Answers:
username_1: Hey @username_0, have you tried executing the `tools\Enable-LocalTestMe.ps1` scripts from an admin PowerShell window? This actually generates and trusts a self-signed certificate rather than using the localtest.me certificate.
I would recommend following the steps in the [README.md](https://github.com/NuGet/NuGetGallery/blob/master/README.md) at the root of the repository.
username_0: YES, it worked!
Thank you very much.
Status: Issue closed
|
legendary-acp/Basic_programs | 369820374 | Title: Categories Program according to languages
Question:
username_0: ---
name: Feature request
about: Suggest an idea for this project
---
**Is your feature request related to a problem? Please describe.**
The beauty of the project is lost when we see so many programs lying unsorted in the main project folder.
**Describe the solution you'd like**
A simple solution to this problem is to categorize the programs by language.
**Additional context**
It would be great if someone could categorize these programs by language
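As a starting point, the sorting could be scripted. Here is a minimal, hypothetical sketch; the extension-to-language map is made up and would need to cover whatever languages the repo actually contains:

```python
import os

# Hypothetical mapping; extend it for the languages present in the repo.
EXT_TO_LANG = {
    ".py": "Python",
    ".c": "C",
    ".cpp": "C++",
    ".java": "Java",
    ".js": "JavaScript",
}

def target_folder(filename):
    """Return the per-language folder a program file should be moved into."""
    _, ext = os.path.splitext(filename)
    return EXT_TO_LANG.get(ext.lower(), "Other")
```

Files whose extension is not in the map would land in a catch-all "Other" folder.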
Answers:
username_0: Final fix for the issue committed
Status: Issue closed
|
raiden-network/raiden-services | 501586880 | Title: Add as many iterations into PFS imbalance fee calculation as we use in the Raiden client
Question:
username_0: Currently, the Raiden client iterates the imbalance fee calculation one time whereas the corresponding calculation in the pfs does not iterate.
We need to adjust the pfs fee calculation to the client calculation.
See here for reference https://docs.google.com/spreadsheets/d/1nHVCOhMcrM-L8brJj_C_Qb0Paws1qwozcNHU3VFcWEY/edit#gid=1257400215
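To illustrate the mismatch, here is a toy sketch; the fee function and numbers are made up and are not Raiden's real fee schedule. With `iterations=0` the fee is computed once on the amount (the non-iterating PFS behaviour described above), while `iterations=1` re-evaluates it on amount plus fee, as the client does:

```python
def iterated_fee(amount, fee_func, iterations):
    """Compute the fee, then re-evaluate it on (amount + fee) n more times."""
    fee = fee_func(amount)
    for _ in range(iterations):
        fee = fee_func(amount + fee)
    return fee

def one_percent(amount):
    # Toy flat 1% fee, purely for illustration.
    return amount // 100
```

For example, `iterated_fee(10000, one_percent, 0)` gives 100, while one extra iteration gives `one_percent(10100) == 101`.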
Answers:
username_1: There is some code for this in https://github.com/raiden-network/raiden/pull/5024, this needs to be copied to the PFS or be used from the raiden codebase.
Status: Issue closed
|
jsonmaster/jsonmaster.github.io | 930335171 | Title: Optionals in Swift Codeable
Question:
username_0: **Is your feature request related to a problem? Please describe.**
There is a way in Swift to declare optional variables in Codable types. This is helpful when the JSON-to-object or object-to-JSON conversion has some missing fields.
**Describe the solution you'd like**
I've made some changes by forking the repo, but I'm not sure how to handle different optional types; in my code I've defaulted to String. Please see my diff image below. I would make a pull request, but as this is incomplete I can't do it yet.
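To make the idea concrete, a generator could emit every property as an optional. This is a rough Python sketch of such a generator; the type mapping is simplified and falls back to String, mirroring the default-to-string approach mentioned above:

```python
def swift_type(value):
    """Very rough JSON-value to Swift-type mapping (illustration only)."""
    if isinstance(value, bool):   # check bool before int: bool subclasses int
        return "Bool"
    if isinstance(value, int):
        return "Int"
    if isinstance(value, float):
        return "Double"
    return "String"  # fallback, mirroring the default-to-string approach

def optional_properties(sample):
    """Emit each property of a sample JSON object as a Swift optional."""
    return ["let {}: {}?".format(key, swift_type(value))
            for key, value in sample.items()]
```

For a sample like `{"name": "x", "age": 3}` this emits `let name: String?` and `let age: Int?`.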
**Describe alternatives you've considered**
Don't know any
**Additional context**
<img width="1638" alt="Screenshot 2021-06-25 at 10 05 26 PM" src="https://user-images.githubusercontent.com/1269220/123457790-eaf26e80-d601-11eb-8469-1f1fc637fd4d.png"> |
apollographql/react-apollo | 229152380 | Title: React Router 4 Link Not causing Apollo to fetch
Question:
username_0: <!--
Thanks for filing an issue on react-apollo!
Please look at the following checklist to ensure that your PR
can be accepted quickly:
-->
## Steps to Reproduce
Describe how to reproduce this issue.
1. Load your page (server side rendered)
2. Instantly click on a link `<Link to="/some/graphql/page">Click Me</Link>` which should render a page that requires GraphQL to fetch the data before it renders the component
### Buggy Behavior
The url properly updates, however no rendering occurs because this error appears:
`TypeError: undefined is not an object (evaluating 'data.query.value')`
* The component for this page is a react component surrounded by a gql, then by a graphql HOC.
* Also no HTTP request is sent to my graphql API when the `Link` is clicked
### Expected Behavior
The data is fetched for the component by firing the graphql query, and the page properly renders with all the data it requires.
### Version
- [email protected]
- [email protected]
---
This seems like an issue that would be very common, so I am assuming it's something wrong in my project, but in case it isn't, I raised an issue.
More code samples for this issue can be provided.
Answers:
username_0: If I wrap a component in the react-router helper `withRouter` and then use my own modified version of the `Link` component (which just calls `this.props.history.push(url)`), then it works properly.
So this could be an issue of how `Link` and `react-apollo` work with each other
username_0: Okay, so the `Link` component appears to work if you are linking to a page that exists outside the sub-route you are at.
However, if you are changing the top-level route, then the `Link` will fail to make `react-apollo` properly fetch data.
E.g.
* `/some/top`
* `/sub/:route`
* `another/top/route`
* `sub/route/:page`
Clicking a `Link` from `/some/top` to `/sub/:route` will work, and `react-apollo` **will properly request the graphql data**
Clicking a `Link` from `sub/route/:page` to `/sub/:route` **will not work since it does not share the same top level route**
So I believe the issue is with React Apollo and React Router 4 nested routes
username_0: Possible package to help?
https://github.com/ReactTraining/react-router/tree/master/packages/react-router-config
username_0: **Idea**
Have a static route config (as RR 2.x / 3.x required) on top of your standard route components (in RR4).
Then, on the client side, pass this route config into the `ApolloProvider`, which can analyze on a route change what data the new route requires and fetch it.
Have a custom `ApolloRR4Link` component that can intercept any link so that it halts the transition until the data has arrived.
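The static-route-config idea can be sketched language-agnostically. Here is a toy Python version (paths and query names are made up) that collects the data requirements for a target path, including parent routes:

```python
# Hypothetical static route config; each entry names the query its route needs.
ROUTE_CONFIG = [
    {"path": "/sub", "query": "subRouteQuery"},
    {"path": "/sub/route", "query": "subPageQuery"},
    {"path": "/some/top", "query": "topQuery"},
]

def queries_for(path):
    """Collect the queries of every route whose path prefixes the target."""
    return [
        route["query"]
        for route in ROUTE_CONFIG
        if path == route["path"] or path.startswith(route["path"] + "/")
    ]
```

A custom link component could then call something like `queries_for("/sub/route/7")`, prefetch the returned queries, and only then complete the transition.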
Status: Issue closed
|
algorithm005-class01/algorithm005-class01 | 568789574 | Title: 【0308 Week 01】Week 1 Study Summary (Supplement)
Question:
username_0: My earlier study summary was written in note.md and synced to the repository along with the homework.
Study notes
### Week 1
#### Current status
Zero background in algorithms; for languages, only Python basics, plus some HTML & CSS.
#### Progress
Studied the Java language.
Studied the linear-list part of data structures: arrays, linked lists, stacks, and queues, and their implementations in Java and Python.
Studied complexity analysis: time complexity, space complexity, and a comparison of the complexities of several data structures.
Studied the core ideas of adding dimensions and trading space for time.
#### Goals for next week
Consolidate this week's material
Analyze the source code of PriorityQueue
Study and review LeetCode problems (the "five-pass" method)
Learn the tree data structure
timmonsryan/EmployeeActivityClub | 45680345 | Title: User Profiles
Question:
username_0: Need to add user profiles for signed-in members. These profiles will show, at minimum, what events the user is signed up for and what events they have been to.
Also needs the option to edit/view their account information.
Answers:
username_0: This has been added. Also added a preview that shows the next event and prompts the user to sign-up for it.
Status: Issue closed
|
BNewing/geese-games | 356195716 | Title: Remove alert from quiz page
Question:
username_0: At the moment, the quiz page displays the correct answer using an alert. I'd like to remove this, but need to do a refactor. I think I probably need to get Redux involved to allow for the kind of component breakdown I'd like.
Answers:
username_0: Done!
Status: Issue closed
|