| repo_name (string, length 4–136) | issue_id (string, length 5–10) | text (string, length 37–4.84M) |
---|---|---|
cta-wave/common-media-client-data | 578569258 | Title: Overlap with Client Hints esp. client tracking considerations
Question:
username_0: With Client Hints, clients will only send the request headers
when they're actually going to be used, and the privacy concerns of passive
fingerprinting are addressed by requiring explicit opt-in and disclosure of
required headers by the server through the use of the Accept-CH response header.
Answers:
username_1: See also https://github.com/whatwg/fetch/issues/1006
username_2: We did think through a Client-Hint type of opt-in approach initially. The idea was that instead of sending CMCD automatically on all requests, the media player would first wait for the CDN to signal via a response header (could be using Client Hints) that it understood CMCD headers, and then start sending data on subsequent requests. This avoids sending data to CDNs that are not going to process it. However, this approach has a number of problems:
1. The very first request would carry no CMCD data. This is a problem, as in many media playback scenarios, the first request is the playlist/manifest request, which is a critical component of the session to log and track.
2. The client would need to track state across multiple CDNs, and also across the multiple server IPs that it may be talking to within a single CDN. This client complexity is a barrier to adoption and leads to further data gaps on each new connection per [1].
On the whole we consider the burden of sending CMCD data to non-compliant CDNs to be light. CDNs already deal with many custom headers and arbitrary query args millions of times per second. Unknown headers and query args are robustly ignored. Additionally, the payload size is small relative to the media traffic being delivered. (The same would not be true if we were proposing this spec for all objects in a web page, for example.)
@Lucas - we did think through the privacy and fingerprinting concerns behind sid, cid and did. Firstly, these are all optional fields; if the client does not want to send them, it should not. Session ID is randomly generated and ties together the media object requests in a playback session. It does not identify the client, as it is a GUID and is never repeated outside of that session. Device ID is intended to signal the player version and device type, as many instances of delivery problems are tied to specific device problems. The same is true for Content ID, which is a hash of the content being played. Both Device-ID and Content-ID could be used for fingerprinting, as they are invariant when the user plays the same content from the same device. To this end I have raised a new issue to add language to the document outlining this risk. See https://github.com/cta-wave/common-media-client-data/issues/45
@Mark - re your comments on whatwg/fetch#1006 - "_This doesn't seem like a good way to go; not only is the query string not intended for colonisation by standards documents, it's also going to make caching and other generic functions more difficult to interpose, and less efficient. Linking can become problematic too._". Initially CMCD purely used headers for data transmission. However, as we looked into MSE usage, which is a key target application space, the overhead of browser-based clients having to double their request rate due to the CORS pre-flight request was untenable, especially for low latency streaming applications such as LL-HLS in which multiple requests are made per second. CMCD is not a core protocol standard, unlike much of the IETF work. Rather, it is an application convention between adaptive segmented media players and the CDNs that deliver them data. To that extent, we felt that allowing the usage of a predefined query arg was the next best alternative to enable widespread adoption among media players. All modern CDNs have the ability to ignore certain query args in their cache keys, and hence we feel the caching issue is not a significant obstacle. If we do not allow query-arg transmission, what would be a better solution for clients for which sending custom headers is expensive?
username_2: Valid questions being asked here. Question for the group - do we get an OPTIONS request with every single object request from a CORS-restricted client with a custom header, or just with the first request to that host on that connection? We need to resolve this before discussing further.
username_2: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Max-Age
Max-Age can be used to set the TTL of OPTIONS responses. This may be a workaround if we enforce a header-only approach.
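For illustration, a minimal sketch of the server side of that workaround (Flask; the route and the `CMCD-Request` header name are assumptions made for this example, while the CORS header names are the standard ones):
```python
from flask import Flask, make_response, request

app = Flask(__name__)

@app.route("/media/<name>", methods=["GET", "OPTIONS"])
def media(name):
    if request.method == "OPTIONS":
        # Preflight: allow the custom header and ask the browser to cache
        # this answer so subsequent requests can skip the preflight.
        resp = make_response("", 204)
        resp.headers["Access-Control-Allow-Origin"] = "*"
        resp.headers["Access-Control-Allow-Methods"] = "GET"
        resp.headers["Access-Control-Allow-Headers"] = "CMCD-Request"
        resp.headers["Access-Control-Max-Age"] = "86400"  # TTL in seconds
        return resp
    resp = make_response(b"...segment bytes...")
    resp.headers["Access-Control-Allow-Origin"] = "*"
    return resp
```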
username_2: I built a little test MSE player which added a custom header with each request. This header was advertised by the server in its Access-Control-Allow-Headers response header, so it replicates the use case we are considering here for CMCD. The player made an OPTIONS request with every single segment request, even though they were all against the same domain. Segment requests were 2s apart.

The server was not responding with an Access-Control-Max-Age header, although Chrome is meant to apply a default age of 5s and the requests were 2s apart.
username_2: I tested with MAX-AGE headers varying from 24hrs to 5min. All were ignored by Chrome, Firefox and Safari. The reason is that "_In order to reduce the number of preflight requests, CORS has the concept of a preflight cache. However the preflight information is cached for an origin/url pair. That means each unique url has its own preflight cache. An API with the same preflight response across all urls will still receive a preflight request for each unique request_.". **So preflight responses are cached per URL and not per HOST. Since each media object request URL is unique, each segment request is preceded by an options request.**
Also beware that the latest Chrome network panel HIDES all OPTIONS requests and does not show them. This leads to the false belief that they are not being made. https://httptoolkit.tech/blog/chrome-79-doesnt-show-cors-preflight
username_1: @username_3 fyi
username_3: I think you already pointed out the most relevant issue. I don't think there's opposition to having an origin-wide or connection-wide bypass-CORS-preflights flag, but getting it defined is another matter.
username_2: Agreed that OPTIONS should have a response mode from the server authorizing a hostname to be valid for subsequent requests over the TTL defined by the MAX-AGE response.
We will initiate a long term action to request an origin-wide or connection-wide bypass-CORS-preflights flag.
In the meantime, we will persist with our query-arg options as the least-worst alternative to continual preflight requests. We will also add language to the spec indicating that the preferred mode of transmission for HTTP requests is to use custom headers. We will also consider placement of the query arg payload in the path, as is done with https://tools.ietf.org/html/draft-ietf-cdni-uri-signing-19.
Status: Issue closed
username_2: Improved the definition of query arg transmission to include
`CMCD=<URL_encoded_concatenation_of_key-value_pairs><reserved_character>`
The reserved character is defined by [RFC3986]. This reserved character is optional at the end of the URL.
We will initiate a long term action to request an origin-wide or connection-wide bypass-CORS-preflights flag.
In the meantime, we will persist with our query-arg options as the least-worst alternative to continual preflight requests.
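For illustration, the kind of URL this definition produces (the keys `br` and `sid` are from the CMCD draft; the values here are made up):
```python
from urllib.parse import quote

payload = "br=3200,sid=6e2fb550-c457-11e9-bb97-0800200c9a66"
url = "https://cdn.example.com/video/seg42.m4s?CMCD=" + quote(payload)
print(url)
# https://cdn.example.com/video/seg42.m4s?CMCD=br%3D3200%2Csid%3D6e2fb550-...
```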
username_3: If by query-arg you mean the query part of the URL, why would that not generate a lot of CORS preflights?
username_2: @username_3 - by query-arg I mean to reference the query-arg mode of transmission within CMCD, which does not use custom-request headers. Adding a query arg to a GET/HEAD/POST request that is only using CORS-safelisted-request-headers to a CORS access controlled host will not result in pre-flight requests.
username_3: Ah, that's right. |
thx/iconfont-plus | 905013304 | Title: Allow renaming icons that have already been uploaded
Question:
username_0: When an icon name (in Chinese) turns out to be wrong after upload, it needs to be editable. Currently, if the name is wrong, the only option is to delete the icon and re-upload it, which is quite cumbersome.
Answers:
username_1: +10010;
Status: Issue closed
username_2: It can be edited under 「我上传的icon」 ("My uploaded icons"). http://localhost:4000/manage/index?spm=a313x.7781069.1998910419.10&manage_type=myicons&icontype=uploads&keyword=&page=1&icon_page_type=all
username_1: Could an entry point for this be added?
username_2: The entry point is at:
 |
operator-framework/kubectl-operator | 673689325 | Title: Doc: Install steps?
Question:
username_0: It'd be great to have some pointers for installing this in the Readme. Perhaps there is other documentation that could simply be linked?
Answers:
username_1: It will hopefully be installable via `krew` in the near future. See https://github.com/kubernetes-sigs/krew-index/pull/737
I've been waiting on that to merge before writing the install steps. For now, clone the repo and run `make install`. That will copy it into `$GOPATH/bin`, so if that's on your path, `kubectl` will find it and you can run `kubectl operator`
Status: Issue closed
|
ashishgkwd/ngx-mat-daterange-picker | 923462952 | Title: Error: Can't resolve all parameters for PickerOverlayComponent: (?, ?, ?)
Question:
username_0: core.js:6210 ERROR Error: Uncaught (in promise): Error: Can't resolve all parameters for PickerOverlayComponent: (?, ?, ?).
Error: Can't resolve all parameters for PickerOverlayComponent: (?, ?, ?).
at syntaxError (compiler.js:2966)
at CompileMetadataResolver._getDependenciesMetadata (compiler.js:23409)
at CompileMetadataResolver._getTypeMetadata (compiler.js:23304)
at CompileMetadataResolver.getNonNormalizedDirectiveMetadata (compiler.js:22910)
at CompileMetadataResolver._getEntryComponentMetadata (compiler.js:23504)
at compiler.js:23160
at Array.map (<anonymous>)
at CompileMetadataResolver.getNgModuleMetadata (compiler.js:23160)
at CompileMetadataResolver.getNgModuleSummary (compiler.js:22979)
at compiler.js:23061
at resolvePromise (zone.js:832)
at resolvePromise (zone.js:784)
at zone.js:894
at ZoneDelegate.invokeTask (zone.js:421)
at Object.onInvokeTask (core.js:28567)
at ZoneDelegate.invokeTask (zone.js:420)
at Zone.runTask (zone.js:188)
at drainMicroTaskQueue (zone.js:601)
Answers:
username_1: Can anyone please let me know the solution for this issue?
excalidraw/excalidraw | 594536881 | Title: Add a shortcut to change background, stroke, fill colors
Question:
username_0: It would be nice to be able to open the color pickers for
- [ ] Canvas Background
- [ ] Stroke color
- [ ] Background color
It partially overlaps with #1104, but I think this could be done more easily, and we can do it in steps.
Answers:
username_1: Great idea, I am missing that feature a lot.
I would suggest `alt+1`, `alt+2` and `alt+3` from top to bottom (canvas, stroke, background) with the alternatives `alt+c`, `alt+s` and `alt+b`. There is already a shortcut with the `alt`-key (toggle zen mode).
username_2: I would additionally appreciate a hotkey to clear the style and reset to default (black stroke, transparent color).
username_3: I'm gonna close this in favor of https://github.com/excalidraw/excalidraw/issues/631. The canvas color shortcut isn't tracked there, but it is in https://github.com/excalidraw/excalidraw/issues/2649 (but ultimately I think canvas color is not used that often to necessitate a shortcut).
Status: Issue closed
|
graphql-java/graphql-java | 275082034 | Title: DataLoader batching doesn't work properly with lists
Question:
username_0: Hello,
There still seem to be some batching issues with DataLoader and DataLoaderDispatcherInstrumentation, even after #764 was merged.
I created a simple example in https://gist.github.com/username_0/183d40ab7e8eb2507917fe8c46730189 that demonstrates the issue.
In the example, there is a simple tree of 7 items, like so:
```
   1
  / \
 2   3
/ \ / \
4 5 6 7
```
Using following query `query Q { root { id childNodes { id childNodes { id childNodes { id }}}}}` (where 'root' is node 1) results in output:
```
BatchLoader called for [1] -> got [[2, 3]]
BatchLoader called for [2, 3] -> got [[4, 5], [6, 7]]
BatchLoader called for [4, 5] -> got [[], []]
BatchLoader called for [6, 7] -> got [[], []]
```
If I've understood correctly the idea behind DataLoader, there should only be three calls, one for each level in tree, and calls to [4, 5] and [6, 7] should have been merged into a single call.
This example doesn't actually do any true async execution, but replacing `CompletableFuture.completedFuture` with an async call doesn't affect the results.
Answers:
username_1: I have looked further into this, and we have structural problems in the ExecutionStrategy that mean we don't descend in perfect sets of layers.
So given your tree
```
   1
  / \
 2   3
/ \ / \
4 5 6 7
```
The code does this per field
```
CompletableFuture<ExecutionResult> result = fetchField(executionContext, parameters)
    .thenCompose((fetchedValue) ->
        completeField(executionContext, parameters, fetchedValue));
```
That is, fetch the field (which can involve the DataLoader) and complete the field, which can cause the engine to descend and restart itself recursively.
Therefore the order can be as you observed. As it gets to node `2` it descends its list because it calls `dispatch` for that inner recursive call.
I think we would need to rewrite the ExecutionStrategy so that it fetched all fields in a level first and then called resolve. But even then it would need to know about `lists` and `object types`, since they cause the engine to descend for those fields.
Hmmmm
username_2: Even if you rewrite the ExecutionStrategy that way, it will only work for the first layer. After a recursive call you won't be able to know whether you are inside a recursive call and don't need to dispatch, or whether it is the last recursive branch of a layer and you need to run dispatch now.
I had an idea to rewrite `DataLoaderDispatcherInstrumentation` to do something like reference counting of future recursive calls, and run dispatch only once at the end of `CompleteFields`. But I gave this idea up, because Instrumentation does not have all the data that I need (`ExecutionContext`), and because this is a very error-prone strategy and needs a lot of testing.
username_0: Are there any plans for improving DataLoader batching regarding this issue? We are currently using BatchedExecutionStrategy because it handles this case properly, but would like to switch to async DataLoaders.
I wonder if BatchedExecutionStrategy itself contains something that could be reused for this?
username_3: I have the same problem.
Currently I am using `BatchedExecutionStrategy` and I wanted to switch to `AsyncExecutionStrategy`, but it is much slower in my case, and for larger data (where `BatchedExecutionStrategy` finishes its job in 30 seconds) `AsyncExecutionStrategy` fails after a few minutes with `OutOfMemoryError`.
I described my use case here https://github.com/graphql-java/graphql-java/issues/760#issuecomment-372826395 since I overlooked this issue.
username_1: We have greatly improved the data loader performance by "tracking" the levels encountered and dispatching the DL when we know we have "dispatched" all the outstanding field fetchers, including ones for lists.
We have tests showing that it reduces the number of batch calls greatly.
I am going to close this issue as the 9.0 release is imminent
Status: Issue closed
username_1: Whoops - jumped the gun - this didn't make it into 9.0. Sorry.
username_1: See https://github.com/graphql-java/graphql-java/pull/990
username_0: I take it this made it to 9.0 after all?
username_4: @username_0 yes, we actually managed to ship it in 9.0.
Status: Issue closed
username_4: closing it again, because it is fixed in 9.0. |
colin-kiegel/rust-derive-builder | 169788059 | Title: first release?
Question:
username_0: @username_1: ready to tag "v0.1.0" and publish, if you are. :-)
Answers:
username_1: Before you publish this:
- Do you have some edge cases you want to add tests for?
- Also, do you want to expand the readme with an example (if you haven't
already)?
- last but not least, don't forget to add ALL the meta tags to `Cargo.toml`
before you `cargo publish` 😄
Then we are good to go!
username_0: 1) Meta tags should be complete: I already added description, repository, documentation, license and
```
keywords = ["custom", "derive", "macro", "builder", "setter", "struct"]
readme = "Readme.md"
```
So even the readme should be _indexed_ by crates.io, too.
2) Readme is updated with a simplified example. Do we want to keep the disclaimer until v1.0, or are we confident? Right now I just kept it: _This is a work in progress. Use it at your own risk._
3) We could add a testcase/example for chaining a 'consuming' terminating method, but IMO this would not block a release right now.
You can still comment - I will not publish before the late evening.
username_1: Thanks!
1) I'd add "custom derive" as a keyword; don't know if people will search
for "custom", "struct", or "setter". IIRC you can set at most 6 keywords.
2) I'll have a look at the readme but assume it's fine :)
3) I wouldn't block on this either. It's 0.1, not 1.0 ;)
Status: Issue closed
username_0: done. :tada: :package:
https://crates.io/crates/derive_builder
But only 5 keywords are allowed, and no whitespace. I removed "custom", because it should always come in a pair with another keyword.
jlippold/tweakCompatible | 523859527 | Title: `SwipeForMore` working on iOS 13.2
Question:
username_0: ```
{
"packageId": "org.thebigboss.swipeformore",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "org.thebigboss.swipeformore",
"deviceId": "iPhone8,1",
"url": "http://cydia.saurik.com/package/org.thebigboss.swipeformore/",
"iOSVersion": "13.2",
"packageVersionIndexed": true,
"packageName": "SwipeForMore",
"category": "Tweaks",
"repository": "PoomSmart's Repo",
"name": "SwipeForMore",
"installed": "1.1.10",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "org.thebigboss.swipeformore",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.5",
"shortDescription": "Manage Cydia packages via swipe.",
"latest": "1.1.10",
"author": "PoomSmart",
"packageStatus": "Unknown"
},
"base64": "<KEY>",
"chosenStatus": "working",
"notes": ""
}
```
<issue_closed>
Status: Issue closed |
PaddlePaddle/Paddle | 191183889 | Title: data_provider fails to load the dictionary
Question:
username_0: Running on machine: yq01-hpc-wutai01-w00054.yq01.baidu.com
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
F1123 11:12:59.977150 10936 PyDataProvider2.cpp:222] Check failed: (module) != nullptr Python Error: <type 'exceptions.IOError'> : [Errno 2] No such file or directory: './sparse_fea_entropy.20161112.dict'
Python Callstack:
/home/disk1/normandy/maybach/17395/workspace/thirdparty/thirdparty/ei_fea_provider_gender_sparse.py : 26
Cannot imort module ei_fea_provider_gender_sparse
Job launch script:
```
rm -rf thirdparty
mkdir thirdparty
cp ei_fea_provider_gender_sparse.py ./thirdparty/
cp ./weiyi01/sparse_fea_entropy.20161112.dict ./thirdparty/
SCRIPT_PATH=$PWD
export PYTHONPATH=$PYTHONPATH:$PWD:../../python/
#--where=yq01-ecom-triger-box_slurm_cluster \
paddle cluster_train \
--config=$SCRIPT_PATH/trainer_config_gender_sparse.cluster.py \
--use_gpu=cpu \
--time_limit=100:00:00 \
--submitter=weiyi01 \
--num_nodes=1 \
--job_priority=normal \
--config_args=is_local=False \
--trainer_count=4 \
--num_passes=1 \
--log_period=10 \
--dot_period=100 \
--saving_period=1 \
--test_all_data_in_one_period=1 \
--distribute_test=1 \
--job_name=weiyi01_dt_gender_train_sparse \
--where=yq01-hpc-wutai01-dmop-cpu-10G_cluster \
--thirdparty=$SCRIPT_PATH/thirdparty
```
The dictionary was uploaded via --thirdparty, and it is also present under the workspace/thirdparty/thirdparty directory. What is the cause, and how do I load the dictionary correctly?
Status: Issue closed
Answers:
username_1: This question is not suitable for this issue tracker; closing the issue.
google/clspv | 592216518 | Title: Instcombine can produce illegally sized vectors
Question:
username_0: A recent LLVM change ( https://github.com/llvm/llvm-project/commit/464b9aeafe29104a1a8391f43f91835eeca473b3) added a new instcombine rule that replaces a truncate of an extract with an extract of a bitcast.
```
// Example (little endian):
// trunc (extractelement <4 x i64> %X, 0) to i32
// --->
// extractelement <8 x i32> (bitcast <4 x i64> %X to <8 x i32>), i32 0
```
This is causing some failures to generate correct code for the OpenCL CTS (e.g. basic_image_r8):
```
__kernel void test_r_uint8(read_only image2d_t srcimg, __global unsigned char *dst, sampler_t sampler)
{
int tid_x = get_global_id(0);
int tid_y = get_global_id(1);
int indx = tid_y * get_image_width(srcimg) + tid_x;
uint4 color;
color = read_imageui(srcimg, sampler, (int2)(tid_x, tid_y));
dst[indx] = (unsigned char)(color.x);
}
```
Before instcombine:
```
%3 = call <4 x i32> @_Z12read_imageui14ocl_image2d_ro11ocl_samplerDv2_f(%opencl.image2d_ro_t addrspace(1)* %srcimg, %opencl.sampler_t addrspace(2)* %sampler, <2 x float> %2)
%4 = extractelement <4 x i32> %3, i32 0
%conv = trunc i32 %4 to i8
%arrayidx = getelementptr inbounds i8, i8 addrspace(1)* %dst, i32 %add
store i8 %conv, i8 addrspace(1)* %arrayidx, align 1
```
After instcombine:
```
%3 = call <4 x i32> @_Z12read_imageui14ocl_image2d_ro11ocl_samplerDv2_f(%opencl.image2d_ro_t addrspace(1)* %srcimg, %opencl.sampler_t addrspace(2)* %sampler, <2 x float> %2) #3
%4 = bitcast <4 x i32> %3 to <16 x i8>
%conv = extractelement <16 x i8> %4, i32 0
%arrayidx = getelementptr inbounds i8, i8 addrspace(1)* %dst, i32 %add
store i8 %conv, i8 addrspace(1)* %arrayidx, align 1
```
Will need some changes to undo this transformation.
Answers:
username_0: Another CTS test (profiling write_image_char) gets hit by a second instcombine that obfuscates the first:
From:
```
%4 = call <4 x i32> @_Z11read_imagei14ocl_image2d_ro11ocl_samplerDv2_f(%opencl.image2d_ro_t addrspace(1)* %srcimg, %opencl.sampler_t addrspace(2)* %2, <2 x float> %3) #4
%5 = bitcast <4 x i32> %4 to <16 x i8>
%conv = extractelement <16 x i8> %5, i32 0
%6 = insertelement <4 x i8> undef, i8 %conv, i32 0
%conv5 = extractelement <16 x i8> %5, i32 4
%7 = insertelement <4 x i8> %6, i8 %conv5, i32 1
%conv6 = extractelement <16 x i8> %5, i32 8
%8 = insertelement <4 x i8> %7, i8 %conv6, i32 2
%conv7 = extractelement <16 x i8> %5, i32 12
%9 = insertelement <4 x i8> %8, i8 %conv7, i32 3
%arrayidx = getelementptr inbounds <4 x i8>, <4 x i8> addrspace(1)* %dst, i32 %add
store <4 x i8> %9, <4 x i8> addrspace(1)* %arrayidx, align 4
```
To:
```
%4 = call <4 x i32> @_Z11read_imagei14ocl_image2d_ro11ocl_samplerDv2_f(%opencl.image2d_ro_t addrspace(1)* %srcimg, %opencl.sampler_t addrspace(2)* %2, <2 x float> %3) #4
%5 = bitcast <4 x i32> %4 to <16 x i8>
%6 = shufflevector <16 x i8> %5, <16 x i8> undef, <4 x i32> <i32 0, i32 4, i32 8, i32 12>
%arrayidx = getelementptr inbounds <4 x i8>, <4 x i8> addrspace(1)* %dst, i32 %add
store <4 x i8> %6, <4 x i8> addrspace(1)* %arrayidx, align 4
```
So overall the whole sequence is reduced significantly. This is nice for code size, but sucks for clspv.
Status: Issue closed
|
filecoin-project/rust-fil-proofs | 382810971 | Title: Distribute generated parameters to go-filecoin nodes.
Question:
username_0: We should have a way of distributing canonical parameters (a large binary) to Filecoin nodes.
Ideally, there would be a command (e.g. `go-filecoin fetch-parameters`) which handles downloading via some protocol (IPFS? HTTP?) and validating the hash of the official parameters.
Where should the parameters be hosted?
There will be more than one file — at the very least one for PoRep and one for PoSt.
This should dovetail with existing parameter caching, eventually replacing all parameters generated by 'paramcache' with fetched/delivered parameter files instead.
[See #323 for more on parameter generation.]
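To make the shape of this concrete, here is an illustrative sketch of the fetch-and-verify step (the file names, URL, and digests are placeholders, and the real command would presumably live in go-filecoin itself):
```python
import hashlib
import urllib.request

# Placeholder names and digests -- the official values would be published
# alongside the parameters themselves.
EXPECTED = {
    "porep.params": "<sha256 of official PoRep params>",
    "post.params": "<sha256 of official PoSt params>",
}

def fetch_parameters(base_url, dest_dir):
    for name, digest in EXPECTED.items():
        data = urllib.request.urlopen(f"{base_url}/{name}").read()
        if hashlib.sha256(data).hexdigest() != digest:
            raise ValueError(f"checksum mismatch for {name}")
        with open(f"{dest_dir}/{name}", "wb") as out:
            out.write(data)
```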
### Acceptance Criteria
1. There exists a process by which a rust-proofs developer publishes proof-parameters to IPFS
1. There exists a process by which go-filecoin is configured to use a specific version of the published proof-parameters
1. There is a command to go-filecoin which downloads the proofs-parameters to a location where the libfilecoin_proofs can find them. [EDIT: this is not strictly necessary if we come up with a different way of accomplishing the delivery. However, there should be a way to get a checksum of actual parameters in use for comparison against an official published checksum.]<issue_closed>
Status: Issue closed |
status-im/nim-stint | 345185118 | Title: Add a toUint proc
Question:
username_0: see https://github.com/status-im/nimbus/pull/86
Answers:
username_1: An alternate approach which I've verified gets everything working, and in spirit (the code's not guaranteed to be especially pretty, e.g., hardcoding ``1 shl 31``) the right thing to do with signed int returns anyway:
```
diff --git a/stint/io.nim b/stint/io.nim
index ddec855..83ab210 100644
--- a/stint/io.nim
+++ b/stint/io.nim
@@ -75,7 +75,7 @@ func to*(x: SomeUnsignedInt, T: typedesc[StUint]): T =
func toInt*(num: Stint or StUint): int {.inline.}=
# Returns as int. Result is modulo 2^(sizeof(int)
- num.data.least_significant_word.int
+ num.data.least_significant_word.int and ((1 shl 31) - 1)
func readHexChar(c: char): int8 {.inline.}=
## Converts an hex char to an int
```
This, combined with https://github.com/status-im/nimbus/pull/94, gets every single test either passing or skipped (the skipped ones being some of the long benchmark tests which hinder testing/iteration). The exact mechanism's not so important, but the basic point is that toInt running on the LSW or similar of a multiprecision integer should never return a bit pattern that, in signed interpretations, is negative. For example, max-clamping would probably work too, but this is cleaner to reason about and probably more efficient.
The tradeoff vs https://github.com/status-im/nim-stint/issues/58 is that this works on every existing ``foo.toInt`` reference in Nimbus, aside from being just conceptually more coherent than ``foo.toInt`` on a nonnegative stint returning a negative number, though both may be worthwhile.
username_0: The proper way would be something similar to this to compute the `and mask` so that it works on 32 and 64-bit platforms:
https://github.com/status-im/nim-stint/blob/406f1aa317b0dac6ce1c8fe1f3aa1cb6e3603f10/stint/private/uint_div.nim#L131-L138
or
https://github.com/status-im/nim-stint/blob/406f1aa317b0dac6ce1c8fe1f3aa1cb6e3603f10/stint/private/uint_mul.nim#L19-L24
But `num.data.least_significant_word.int and ((1 shl 31) - 1)` strips the sign bit.
The following should stay correct:
```Nim
import stint
let x = -1.stint(Uint256)
let y = x.toInt
doAssert y == -1
```
If we need a non-standard nonnegative signed int, we can use `abs` or implement it as a helper in Nimbus.
username_1: $\mathbb{N}. ... and the set of all non-negative integers smaller than $2^{256}$ is named $\mathbb{N}_{256}$.
Negative `.toInt` values are, in this context, illusory artifacts of the Nim/general two's-complement-based integer type system.
This is, obviously, not necessarily true with signed stints, but even there, it's not clear to me why the least significant word/limb's signedness has anything to do with the stint's signedness, and therefore why it's particularly useful to preserve the sign bit in `toInt` here. The only case where it sort of accidentally produces meaningfully more correct semantics that I can figure out is where `.toInt` returns the entire bit pattern of a signed stint, and therefore the sign bit in the lower limb/word exactly aligns with the sign bit of the stint as a whole. For example:
```
else: # We try to store an int64 in 2 x uint32 or 4 x uint16
# For now we only support assignation from 64 to 2x32 bit
```
That is, for other cases, the underlying type's unsigned anyway.
Still, I will, instead, add a helper function to Nimbus, as you suggest.
Status: Issue closed
|
DQinYuan/chinese_province_city_area_mapper | 487204919 | Title: Incorrect recognition for addresses whose place names contain a province name
Question:
username_0: See the attached image for details

Answers:
username_0: For addresses where a road name contains a province keyword, recognition goes wrong. For example, with 上海市**山西路**号 (Shanxi Road, Shanghai), the province is incorrectly identified as 山西省 (Shanxi Province).
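A minimal reproduction sketch (this assumes the package's `transform` API, which takes a list of address strings and returns a DataFrame):
```python
# Sketch of the reported behavior; assumes the cpca.transform API.
import cpca

df = cpca.transform(["上海市山西路号"])
print(df)
# Expected 省 (province) = 上海市, but it resolves to 山西省 instead
```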
username_1:
```
省 市 区 地址 adcode
0 北京市 市辖区 石景山区 老山东里 110107
```
Status: Issue closed
|
florisboard/florisboard | 870297792 | Title: Florisboard hangs a lot!
Question:
username_0: ~~~ 1618577913834.stacktrace ~~~
java.lang.OutOfMemoryError: Failed to allocate a 65552 byte allocation with 32120 free bytes and 31KB until OOM, target footprint 201326592, growth limit 201326592
at com.android.internal.util.FastXmlSerializer.<init>(FastXmlSerializer.java:86)
at com.android.internal.util.FastXmlSerializer.<init>(FastXmlSerializer.java:75)
at com.android.internal.util.XmlUtils.writeMapXml(XmlUtils.java:197)
at android.app.SharedPreferencesImpl.writeToFile(SharedPreferencesImpl.java:778)
at android.app.SharedPreferencesImpl.access$900(SharedPreferencesImpl.java:55)
at android.app.SharedPreferencesImpl$2.run(SharedPreferencesImpl.java:647)
at android.app.QueuedWork.processPendingWork(QueuedWork.java:264)
at android.app.QueuedWork.access$000(QueuedWork.java:50)
at android.app.QueuedWork$QueuedWorkHandler.handleMessage(QueuedWork.java:284)
at android.os.Handler.dispatchMessage(Handler.java:107)
at android.os.Looper.loop(Looper.java:214)
at android.os.HandlerThread.run(HandlerThread.java:67)
~~~ 1618577864337.stacktrace ~~~
java.lang.OutOfMemoryError: Failed to allocate a 32 byte allocation with 16 free bytes and 16B until OOM, target footprint 201326592, growth limit 201326592
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:491)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:940)
~~~ 1618577861845.stacktrace ~~~
java.lang.OutOfMemoryError: Failed to allocate a 16 byte allocation with 8 free bytes and 8B until OOM, target footprint 201326592, growth limit 201326592
at android.os.ThreadLocalWorkSource.setUid(ThreadLocalWorkSource.java:68)
at android.os.Binder.execTransact(Binder.java:992)
~~~ 1618577845423.stacktrace ~~~
java.lang.OutOfMemoryError: Failed to allocate a 104 byte allocation with 88 free bytes and 88B until OOM, target footprint 201326592, growth limit 201326592
at java.lang.StringFactory.newStringFromChars(StringFactory.java:260)
at java.lang.StringBuilder.toString(StringBuilder.java:413)
at java.lang.Daemons$FinalizerWatchdogDaemon.finalizerTimedOut(Daemons.java:446)
at java.lang.Daemons$FinalizerWatchdogDaemon.runInternal(Daemons.java:325)
at java.lang.Daemons$Daemon.run(Daemons.java:137)
at java.lang.Thread.run(Thread.java:919)
~~~ 1619638404811.stacktrace ~~~
java.lang.NullPointerException
at dev.patrickgold.florisboard.ime.core.FlorisBoard$Companion.getInstance(FlorisBoard.kt:182)
at dev.patrickgold.florisboard.ime.text.gestures.GlideTypingManager.setLayout(GlideTypingManager.kt:65)
at dev.patrickgold.florisboard.ime.text.keyboard.KeyboardView$initGestureClassifier$1.run(KeyboardView.kt:147)
at android.os.Handler.handleCallback(Handler.java:883)
at android.os.Handler.dispatchMessage(Handler.java:100)
at android.os.Looper.loop(Looper.java:214)
at android.app.ActivityThread.main(ActivityThread.java:7356)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:491)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:940)
Answers:
username_1: Thank you for your bug-report!
This issue has been reported a couple of times and is summarized in #677. Patrick (the creator of Florisboard) is working on it.
Let's close this report to keep the list of issues free of duplicates!
Status: Issue closed
|
numbas/Numbas | 695887849 | Title: Uploading Themes
Question:
username_0: Hi Christian,
It looks like email isn't an option at the moment; I think both universities have blocked email traffic due to the current cyber attacks.
I have another problem that I was trying to resolve by creating a local installation, but without being able to add extensions I'm a bit lost.
My old theme doesn't work with the latest update, and I can't upload a new theme via the Numbas main website; I get the message
“Server Error
Something's gone wrong. Sorry about that.
If you've got the time, please email <EMAIL> with a description of what you were trying to do.”
I also can't email an ncl.ac.uk address until the security issues are resolved.
Could you please help me resolve the theme uploading issue? The current themes have very wide margins and are messing up the display of some questions.
Answers:
username_1: See https://github.com/numbas/editor/issues/535 for some comments on this, and a suggestion how this could be fixed in a locally installed instance of the Numbas editor. (At least, the fix appears to work for me.)
username_2: This is now fixed. Thanks for your patience.
Status: Issue closed
|
Phlow/feeling-responsive | 564888431 | Title: jekyll-redirect-from doesn't work
Question:
username_0: I tried installing jekyll-redirect-from on my site derived from feeling-responsive. It doesn't work; it tries to redirect to an url whose value is the empty string ("").
I downloaded the unmodified template, added jekyll-redirect-from, and ran it locally. Same result.
Answers:
username_1: Hello @username_0 – You don't need a plugin. There is a layout for that, which does the same:
https://github.com/username_1/feeling-responsive/blob/gh-pages/_layouts/redirect.html
Best practice is to use a .htaccess file instead. For example:
~~~
redirect 301 /file.html http://domain.de/new-file.html
~~~
Status: Issue closed
|
electerm/electerm | 413349559 | Title: Bookmarks menu empty
Question:
username_0: Electerm version: electerm v0.26.54
Ubuntu 16.04 64 bit
The bookmarks menu on the left is empty after the latest upgrade, instead of showing the bookmarks list.
Status: Issue closed
Answers:
username_1: @username_0 Released new version, thank you. |
blakeblackshear/frigate | 971969679 | Title: First hour of recordings missing each day
Question:
username_0: **Describe the bug**
I've been testing out Frigate in my setup and as such only have one camera for now. I was just browsing in Home Assistant and found that there are no recordings for 00:00 - 01:00; the day before records from 23:00 - 00:00 and the next day from 01:00 - 02:00, but when I go into the folder for 00:00 - 01:00 there is no camera listed (in other folders I can go in to the camera folder).
**Version of frigate**
0.8.4-5043040 (Specifically the Nvidia container)
**Config file**
Include your full config file wrapped in triple back ticks.
```
detectors:
  cpu:
    type: cpu
    num_threads: 6
ffmpeg:
  input_args:
    - -c:v
    - h264_cuvid
record:
  enabled: true
  retain_days: 7
cameras:
  doorbell_camera:
    ffmpeg:
      inputs:
        - path: <blanked>
          roles:
            - detect
            - rtmp
            - clips
            - record
    width: 1920
    height: 1080
    fps: 25
    motion:
      mask:
        - 0,1080,759,1080,766,1053,764,1024,733,965,657,942,692,877,879,932,955,983,1010,1080,1920,1080,1920,0,0,0
```
**Frigate container logs**
```
No obviously relevant logs; there are some errors like this, however:
ffmpeg.doorbell_camera.detect ERROR : [segment @ 0x55cba120c140] Non-monotonous DTS in output stream 0:0; previous: 633082860, current: 633082680; changing to 633082861. This may result in incorrect timestamps in the output file.
```
**Frigate stats**
```
{
"detection_fps": 0.0,
"detectors": {
"cpu": {
"detection_start": 0.0,
"inference_speed": 39.66,
"pid": 76
}
},
"doorbell_camera": {
"camera_fps": 24.9,
[Truncated]
"mount_type": "ext4",
"total": 7937383.5,
"used": 631283.0
},
"/tmp/cache": {
"free": 3990.8,
"mount_type": "tmpfs",
"total": 4000.0,
"used": 9.2
}
},
"uptime": 172999,
"version": "0.8.4-5043040"
}
}
```
**Screenshots**
It's kind of hard to grab a screenshot since it would require quite a bit of context (HA's media browser doesn't have breadcrumbs to show you where you are in the hierarchy, which would help a lot here)
Answers:
username_0: I was trying to find more info for you, so this is the response from HA when browsing the 00:00 - 01:00 directory; as you can see, there are no children contained in it.
```
{
"id": 52,
"type": "result",
"success": true,
"result": {
"title": "August 12",
"media_class": "directory",
"media_content_type": "video",
"media_content_id": "media-source://frigate/frigate/recordings/2021-08/12/00//",
"can_play": false,
"can_expand": true,
"children_media_class": "directory",
"thumbnail": null,
"children": []
}
}
```
This is the response from HA when browsing the same day for 01:00 - 02:00; as you can see, the child inside it is the one camera I have recording, and in that directory I can see the recordings.
```
{
"id": 45,
"type": "result",
"success": true,
"result": {
"title": "01:00:00",
"media_class": "directory",
"media_content_type": "video",
"media_content_id": "media-source://frigate/frigate/recordings/2021-08/12/01//",
"can_play": false,
"can_expand": true,
"children_media_class": "directory",
"thumbnail": null,
"children": [
{
"title": "Doorbell Camera",
"media_class": "directory",
"media_content_type": "video",
"media_content_id": "media-source://frigate/frigate/recordings/2021-08/12/01/doorbell_camera/",
"can_play": false,
"can_expand": true,
"children_media_class": "directory",
"thumbnail": null
}
]
}
}
```
username_0: Sorry for spamming, but it's quite weird - the first response's heading is also different from all the other folders; it has the month instead of the time I selected...
username_1: I think there was a bug fix for this in the integration. Are you running the latest version?
username_0: I'm on v1.1.1 |
openebs/openebs | 372187249 | Title: Refactor NewReplica(context *api.ApiContext, state replica.State, info replica.Info, rep *replica.Replica)
Question:
username_0: I've selected [**NewReplica(context *api.ApiContext, state replica.State, info replica.Info, rep *replica.Replica)**](https://github.com/openebs/jiva/blob/a3c24b8889f9ddd8b75fa1b3665e36b124d38d58/replica/rest/model.go#L117-L206) for refactoring, which is a unit of **84** lines of code and **8** branch points. Addressing this will make our codebase more maintainable and improve [Better Code Hub](https://bettercodehub.com)'s **Write Simple Units of Code** guideline rating! 👍
Here's the gist of this guideline:
- **Definition** 📖
Limit the number of branch points (if, for, while, etc.) per unit to 4.
- **Why**❓
Keeping the number of branch points low makes units easier to modify and test.
- **How** 🔧
Split complex units with a high number of branch points into smaller and simpler ones.
You can find more info about this guideline in [Building Maintainable Software](http://shop.oreilly.com/product/0636920049159.do). 📖
----
ℹ️ To know how many _other_ refactoring candidates need addressing to get a guideline compliant, select some by clicking on the 🔲 next to them. The risk profile below the candidates signals (✅) when it's enough! 🏁
----
Good luck and happy coding! :shipit: :sparkles: :100: |
kmorenov/hillel-mvc_project_1 | 320367349 | Title: In principle this shouldn't be here; it's strange that the second model works without this line.
Question:
username_0: https://github.com/username_1/hillel-mvc_project_1/blob/9f058c506e84b4ba7eeb1dce09f02993933038c8/models/NewsModel.php#L13
Answers:
username_1: Removed this unneeded line. **require_once** allowed the interpreter to ignore this line. **require** would have triggered the Notice: Constant HOST already defined in /home/kostya/projects/php/www/hillel/hillel-mvc_project_1/config.php on line 2 |
hapijs/yar | 32013464 | Title: Unable to get Session Value in other handler
Question:
username_0: I am making a login page where I am handling the session with Yar.
request.session.set('userid', { key: '1' });
Seems to set the value, but I can't access it in another handler. I am setting this value on the login page, and when I am redirected to the home page I receive a null value in response to request.session.get('userid');
Please guide.
Thanks. |
devcnairobi/devc-nairobi-bot | 221920217 | Title: feat: events listing
Question:
username_0: The bot should be able to:
- list upcoming events
- give details about next event
- list past events
**Approach:** On receiving the message "events", the bot should bring up a menu with the above items: `Next Event`, `Upcoming`, `Past`
Use the [Graph API: Events](https://developers.facebook.com/docs/graph-api/reference/event) to retrieve event details. |
lygaret/rack-params | 292061816 | Title: #validations - first pass at useful validations
Question:
username_0: - [x] `:required`
- [ ] `:not_blank => true` (nil or whitespace)
- [ ] `:format => /regex/`
- [ ] `:in => %w(opt1 opt2)` (Array, Range, anything with an `#include?` method)
- aliased: `:within`, `:range`, `:include`
- [ ] `:min => 0` `param <= value`
- [ ] `:max => 10` `param >= value`
- [ ] `:length => { :min => 0, :max => 10, in: 0..10 }`
- implies recursive validation!
- [ ] `:proc => ->(p) { ExternalValidator.validate! p }` |
SAP/fundamental-react | 405338630 | Title: Calendar component: currentDateDisplayed should be updated on date selection
Question:
username_0: ### Description
currentDateDisplayed is only updated on click on the Month overlay; it should be updated on click on a day as well, for continuity.
Unit tests should be updated as well.
### Versions
**fundamental-react:** 0.3.0-rc.4
**fiori-fundamentals:**
---
_**NOTE:** Where applicable, please include uncropped screen captures._
_**DISCLAIMER:**
After triaging an issue, the fundamental-react team will see if it can be reproduced or confirmed. If more information is needed, the fundamental-react team will contact the author. Any issues awaiting responses from the author for more than 7 days will be closed. The author can re-open the issue at a later time if they can present the requested information._<issue_closed>
Status: Issue closed |
rlabbe/filterpy | 603547522 | Title: KF with missing measurements
Question:
username_0: Hi, thanks for making this great package available. I have a quick question.
I tried looking through the issues but had a hard time finding how to deal with missing measurements. Just wondering if anyone could point me towards a way to use filterpy KF on a dataset with missing measurements. Thanks!
Answers:
username_1: Hi Kevin,
What is exactly the issue? What do you mean when you say you have missing measurements? What kind of measurements do you have? How are they sampled / how frequently do you read them? Are all missing or just some missing (at times)?
Some more information can help me.
Cheers
username_0: Hi, thanks for responding. I have a series of x,y coordinates of a moving object and at certain times, I have missing measurements. I found in the documentation that we can insert None for missing measurements, but not sure how I need to structure the data now.
here's my function for running Kalman Filter and smoothing:
```python
import numpy as np
from filterpy.kalman import KalmanFilter

def kalman_filter_smooth(data, r_noise=5.):
    cols = data.shape[1]
    f1 = KalmanFilter(dim_x=cols*2, dim_z=cols)
    # initial state: interleaved [position, velocity] per column
    initData = []
    for i in range(data.shape[1]):
        initData.append(data[0, i])
        initData.append(data[1, i] - data[0, i])
    f1.x = np.array(initData)
    # measurement matrix H: observe every other state (the positions)
    Hmat = np.zeros((cols, cols*2))
    for r in range(Hmat.shape[0]):
        c = r*2
        Hmat[r, c] = 1.
    # transition matrix F: constant-velocity model
    Fmat = np.eye(cols*2)
    for r in range(Fmat.shape[1]):
        if r % 2 == 0:
            Fmat[r, r+1] = 1
    f1.F = Fmat
    f1.H = Hmat
    f1.P *= 50.
    f1.R *= r_noise
    xs0, ps0, _, _ = f1.batch_filter(data)
    xs1, ps1, ks, _ = f1.rts_smoother(xs0, ps0)
    return xs1, ps1
```
I feed in data with shape `(n, 2)`. If I have missing measurements at certain times, do I need to insert a row with `[None, None]`?
username_0: I figured it out. Just realized you have to put in `None` instead of `[None, None]`.
Status: Issue closed
username_2: Is there a way to have just some of the measurements missing per update?
Say I am tracking something where I get a position from one set of instruments and a velocity from another
set of instruments, with different update frequencies. So my measurement timeseries might look like
```
| t | z_pos | z_vel |
| ---- | -------- | -------- |
| 0 | 1 | 1 |
| 1 | 2 | None |
| 2 | 4 | None |
| 3 | 6 | 1.5 |
```
All of the KF matrices would be standard for a 1-d kinematic problem
Is there something that automatically takes care of the Nones or do I have to
make something that makes an H matrix for each situation?
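(For reference, one pattern that appears to work is passing per-call `R` and `H` overrides to `update()`, per its `update(z, R=None, H=None)` signature; the matrix values below are illustrative.)
```python
import numpy as np
from filterpy.kalman import KalmanFilter

f = KalmanFilter(dim_x=2, dim_z=2)
f.x = np.array([0., 1.])
f.F = np.array([[1., 1.], [0., 1.]])
f.P *= 10.

H_both = np.array([[1., 0.],
                   [0., 1.]])        # position + velocity instruments
H_pos = np.array([[1., 0.]])         # position instrument only

for z_pos, z_vel in [(1., 1.), (2., None), (4., None), (6., 1.5)]:
    f.predict()
    if z_vel is None:
        # velocity instrument silent this step: update with position only
        f.update(np.array([z_pos]), H=H_pos, R=np.array([[5.]]))
    else:
        f.update(np.array([z_pos, z_vel]), H=H_both, R=np.eye(2) * 5.)
```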
Thanks |
StarLegacy/Issue-Tracker | 743335140 | Title: [BUG]
Question:
username_0: <!-- IF YOU ERASE THIS FORM OR DON'T FILL OUT ALL THE INFO YOUR ISSUE WILL BE IGNORED AND CLOSED WITH NO FURTHER WARNING OR NOTICE -->
**Describe the bug**
<!-- REQUIRED, DO NOT PUT N/A -->
<!-- A clear and concise description of what the bug is. -->
You can starve in minigames lobby.
**To Reproduce**
<!-- REQUIRED, DO NOT PUT N/A -->
<!-- Steps to reproduce the behavior: -->
Run around in minigames lobby till you starve.
**Expected behavior**
<!-- REQUIRED, DO NOT PUT N/A -->
<!-- A clear and concise description of what you expected to happen. -->
Don't starve
**Screenshots**
<!-- If applicable (as in there's any visible thing you can share, and if so this is REQUIRED), add screenshots or videos to help explain your problem. -->
Self-explanatory.
**System (please complete the following information):**
<!-- DO NOT PUT N/A -->
- OS: Arch Linux
- RAM: 6gb useable
- Minecraft Version: 1.16.4
**Additional context**
<!-- Add any other context about the problem here. --><issue_closed>
Status: Issue closed |
typora/typora-issues | 330847792 | Title: flowchart.js parallel object appears to not work
Question:
username_0: Typora canot render the example on flowchart.js until the 'parallel' object is removed.
Using flowchart.js' example renders a flowchart stacked upon itself and unreadable.
to recreate in Typora, paste the following:
```flow
st=>start: Start:>http://www.google.com[blank]
e=>end:>http://www.google.com
op1=>operation: My Operation
sub1=>subroutine: My Subroutine
cond=>condition: Yes
or No?:>http://www.google.com
io=>inputoutput: catch something...
para=>parallel: parallel tasks
st->op1->cond
cond(yes)->io->e
cond(no)->para
para(path1, bottom)->sub1(right)->op1
para(path2, top)->op1
```
Answers:
username_1: It can render in the version 0.9.9.16.2 on Mac
username_2: I have the same problem on ubuntu 16.04 LTS
username_3: Also doesn't work on Windows version 0.9.58
username_4: I have the same issue. 'parallel' object doesn't work on Windows version 0.9.58
username_5: This is also broken on Version 0.9.9.18.1 (1088) for OSX.
username_6: me too
username_7: I'm having this issue on Windows 10. Syntax highlighting does not work on parallel object either.
username_8: I have the same issue on Windows version 0.9.65. T.T
username_8: 
username_9: fixed in new release
Status: Issue closed
username_10: not fixed yet in the newest version |
nthdeveloper/RichTextStripper | 394676215 | Title: Question about possible performance improvement
Question:
username_0: Hi there. In updating the code for issue #1 / #2 I noticed something that might could be improved, but I'm not sure and didn't want to make any changes without fully understanding how the code should work in all cases (not just the cases that I am aware of).
On line **119** the `_outputTextList` variable is created as `List<string>()`:
https://github.com/username_1/RichTextStripper/blob/d8a4508570f23ea6456b89075aa75c35a7e38424/RichTextStripper.cs#L119
Items are added to this collection as each piece is parsed, and finally all of those items are combined into a single string on line **250**:
https://github.com/username_1/RichTextStripper/blob/d8a4508570f23ea6456b89075aa75c35a7e38424/RichTextStripper.cs#L250
My question is: why is this not a `StringBuilder`? A `StringBuilder` would be more efficient, and it can handle appending `char`, `string`, etc.
Is there a technical reason that this is a `List<string>()`? If not, I (or someone else) can change it to be a `StringBuilder`.
Answers:
username_0: @thamathar Just FYI...
username_1: @username_0 there is no special reason for using List<string>. Probably it was a list of strings in the original code and I did not change it. It seems it can be safely replaced with StringBuilder.
username_1: Now uses StringBuilder.
Status: Issue closed
|
gatsbyjs/gatsby | 438308052 | Title: [docs] (meta) Incrementally improve top 25 learning workflows with baseline evaluations
Question:
username_0: A meta issue to track incremental improvements to documentation captured in baseline evaluations of the [top 25 workflows](https://docs.google.com/spreadsheets/d/175iZyC8khLy1JQncvgyvAD_vXpfsHxswQcn6N7GDeRA/edit).
You can participate in this initiative too! See below for information about contributing.
# Summary
Thanks to @username_3 and @username_6 and their [enumeration of the top 25 workflows](https://docs.google.com/spreadsheets/d/175iZyC8khLy1JQncvgyvAD_vXpfsHxswQcn6N7GDeRA/edit) that Gatsby developers seek out in documentation, we have a robust understanding of the most common needs of those using Gatsby on a day-to-day basis.
This meta issue is intended to provide a list of constituent issues targeting those top 25 workflows and the improvements based on those evaluations.
Contributions welcome! Please keep reading for evaluation criteria that we are applying to all of these learning workflows as well as an enumeration of the top 25 workflows and corresponding existing issues.
# Relevant information
## Evaluation criteria
There are six areas by which we are evaluating the top 25 learning workflows:
| Criterion | 😞 | 😐 | 😄 |
| --- | --- | --- | --- |
| **Searchability** | 5th page of results or nonexistent | 2nd–4th page of Google results | 1st page of Google results |
| **Discoverability** | Within 6+ clicks on .org (or trapped in GH issue) | Within 4-5 clicks on .org | Within 2-3 clicks on .org |
| **Completeness** | Entire procedures missing (6+ clicks required) | Some steps missing (4-5 clicks required) | Docs mostly or fully complete (no more than 2-3 clicks) |
| **Linkedness** | No links to other useful docs pages | Some links to other useful docs pages | Many links to other useful docs pages |
| **Tone and accessibility** (for tutorials and guides) | Negative or overbearing tone | Neutral tone | Friendly and helpful tone |
| **Tone and accessibility** (for docs and API pages) | Uninformative, list of prerequisites absent | Somewhat informative, prerequisites are somewhat incomplete | Informative, prerequisites are clear |
| **Style** | Many style issues (needs proofread) | Some style issues (e.g. capitalization) | Adheres fully to [style guide](https://www.gatsbyjs.org/contributing/gatsby-style-guide/) |
NB: All recommendations are prefixed with **[rec]** and collated later in each workflow evaluation.
## List of workflows
1. Setting up a blog that pulls content from Markdown
2. Setting up Gatsby Preview with Contentful [gatsbyjs.com]
3. Linking to Gatsby and non-Gatsby content
4. Using starters
5. Working with images and videos
6. Finding a source plugin
7. Installing and using WordPress
8. Adding and organizing CSS (including Sass)
9. Using Gatsby themes
10. Embedding components in Markdown with MDX
11. Using the GraphQL explorer to understand your source data
12. Troubleshooting error messages from queries not working
13. Installing and using Contentful
14. Deploying to Netlify
15. Working with fonts and typography
16. Deploying to other hosting services
17. Adding an RSS feed
18. Making reusable components
19. Implementing search with Algolia
## Contributing
To contribute, create a new issue titled "[docs] [workflows] Name of workflow being evaluated" and copy the template below for the text. See (issue link forthcoming) for a reference example.
<blockquote>
**User story:** As a new Gatsby user, I want to [describe workflow as completely as possible].
| Search | Discover | Complete | Linked | Tone | Style | Overall |
| --- | --- | --- | --- | --- | --- | --- |
| 😐 | 😐 | 😐 | 😐 | 😐 | 😐 | 😐 |
**Steps taken to implement:**
[List out steps taken to implement the workflow, evaluating against each of the criteria in the process.]
</blockquote>
## Acknowledgments
Thank you to @username_3, @username_1, and @marisamorby for their feedback during this process and to @username_3 and @username_6 for the foundational work of identifying these learning workflows.
Answers:
username_0: Added `help wanted` label as this is a great way for new contributors to begin contributing to Gatsby documentation.
username_0: Added a link to #13804 in the meta — thanks for your help, @aravindballa!
username_1: This looks excellent! I think part of the steps that will be crucial to
document are how people searched for docs related to each workflow.
username_2: `7. Installing and using WordPress` and `13. Installing and using Contentful` are probably ambiguous. Whats the scope of those workflows? Reading the title it seems unrelated to Gatsby but I think its about how to pull content from those CMS into Gatsby and create a blog/website?
username_3: @username_2 everything on this list is in relation to Gatsby, so using headless CMS installs to pull in blog content. It is sort-of assumed because we're talking about Gatsby workflows
username_2: @username_3 thanks for clarifying
Added #13876: Installing and using WordPress, looking forward to feedback.
username_4: I'll be adding an issue for **13. Installing and using Contentful** soon
username_0: #13712 and #13715 are both now complete as of merged PR #14036!
username_5: Nice! Note that #17 was addressed (completed?) with https://github.com/gatsbyjs/gatsby/pull/11941/ and #5 with https://github.com/gatsbyjs/gatsby/pull/13170
username_6: @username_5 thanks! I edited the original issue to reflect that.
username_0: @username_6 I don't think #13170 covers video handling — should that be separated out into another workflow?
username_6: @username_0 great question — I think @username_3 already did a bunch of work on the video workflows, though. Maybe she can add more info here?
username_3: Hey there. #13170 did add more docs for working with video, but it doesn't quite go all the way for HTML5 video. The page has a callout for requesting more details and there's an open issue:
- https://www.gatsbyjs.org/docs/working-with-video/#hosting-your-own-html5-video-files
- https://github.com/gatsbyjs/gatsby/issues/3346#issuecomment-483814732
username_6: Thanks @username_3! I updated the issue to reflect that and unchecked the video workflow todo.
username_7: Just began work on:
10. Embedding components in Markdown with MDX
over in #14258.
username_6: @username_7 thanks! Feel free to edit this issue to add the reference to the checklist above!
username_3: I'm taking on the image workflow! Initial evaluation here: https://github.com/gatsbyjs/gatsby/issues/14529
username_8: Hey, I started to work on Using Gatsby Themes and came across #14107, where work on the same is going on. I was wondering if I should go ahead with my work or wait for the changes to take place. Thank you.
username_8: I would like to take 19. Implementing search with Algolia.
username_9: Taking Google AMP implementation https://github.com/gatsbyjs/gatsby/issues/17645
username_10: Go for it, can't wait to see it, been tracking the amp integration wirh Gatsby.
username_11: Is there anyone taking care of using Gatsby theme item? I would like to help with this one.
username_3: @username_11 yes -- that one is being discussed right now internally. If anything changes though, I'll get in touch with you. Thanks for offering!
username_12: Before I do any more work, is "Working with fonts and typography" being worked on? I'm almost done with the evaluation criteria!
username_11: @username_3 can you provide an updated list of the topics that are not being discussed? Thanks!
username_12: If no one is taking "Working with fonts and typography", I've created an issue for it and would like to work on it! #17703
username_3: @username_13 this is the most up-to-date list: I've updated it with some of the assignments and work that started recently.
@username_12 sounds great! We've recently added some recipes for working with fonts, but I think there could be a whole [reference guide](https://www.gatsbyjs.org/contributing/docs-templates/#reference-guides) on it, in addition to the [Typography.js library doc](https://www.gatsbyjs.org/docs/typography-js/).
username_13: @username_3 I think you meant @username_11 🙂
username_14: @username_15 Please don't unpin the pinned issues. Thanks!
username_15: @username_14 oh snap, I'm sorry! I thought that was just on my end, not everyone's 🤦‍♀️ Won't happen again, my mistake.
username_16: Hi @username_11, I'm assigned to the "Using Themes" workflow! It's under issue #18242. I'll be working on it until at least 10/13. If you have any ideas regarding it, or want to make a PR related to the workflow, feel free to comment on the issue and we can tackle it together 😊
username_7: I'm going to begin working on #12 Building Apps with Gatsby soon, and will have an issue with a workflow assessment that links back to this issue when it's ready.
username_17: Hi, I was thinking of taking on 20. Making reusable components. Is it alright if I make an issue for it?
username_18: I created issue #19768 for
13. Building for E-commerce
username_19: I created issue #20691 for 19. Deploying to other hosting services 😊🖖
username_20: Hey y'all!! Just created an issue #24641 for 22. Implementing Search with Algolia. Waiting for some feedback. Thanks.
PS: First contribution 😄
username_21: Hi 👋 I just created the issue #25659 for npm vs. yarn.
Thank you in advance for any feedback!
Status: Issue closed
username_22: @meganesu and @username_14 - I'm closing this one - it's not received meaningful interaction since July 2020, and our docs have evolved significantly since then. |
square/moshi | 589483516 | Title: Generated code formatting causes error with field names containing space
Question:
username_0: Currently, there seems to be some auto-formatting of generated code with respect to maximum line length.
However, this can cause strings to be wrapped, which causes compilation errors.
**Example:**
```kotlin
@JsonClass(generateAdapter = true)
class Example(
@Json(name = "Some Thing") val a: String,
@Json(name = "Some Thing Else") val b: String,
@Json(name = "Foo Bar") val c: String
)
```
generates
```
private val options: JsonReader.Options = JsonReader.Options.of("Some Thing", "Some Thing
Else", "Foo Bar")
```
**Solution:**
Avoid wrapping this line
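For illustration, one way a code generator can keep such a call from wrapping, sketched with KotlinPoet (an assumption about the emitter, not Moshi's actual code; `·` is KotlinPoet's non-wrapping space and `%S` emits a quoted string literal):
```kotlin
import com.squareup.kotlinpoet.CodeBlock

// Emit `JsonReader.Options.of("Some Thing", "Some Thing Else", ...)` so that
// KotlinPoet never breaks the line between the arguments: "·" is a space that
// will never wrap, and %S emits each name as a quoted string literal.
fun optionsInitializer(jsonNames: List<String>): CodeBlock {
    val builder = CodeBlock.builder().add("JsonReader.Options.of(")
    jsonNames.forEachIndexed { index, name ->
        if (index > 0) builder.add(",·")
        builder.add("%S", name)
    }
    return builder.add(")").build()
}
```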
Answers:
username_1: This looks like it's probably a KotlinPoet issue.
username_1: I've got a test case in the works for this, but can't reproduce it. Even going to extremes, I can't get the code generator to break within a string. Here's what I see generated:
```kotlin
private val options: JsonReader.Options =
JsonReader.Options.of("a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a",
"b b b b b b b b b b b b b b b b b b b b b b b b b b b b b b b b",
"Some Other Else or some other thing")
```
Would you mind seeing if you can reproduce this in the current master branch, and providing a reproducible case, if so?
username_0: @username_1 sorry for the slow response - I'll see if I can put something together
username_2: Got same issue:
Test class:
```
import com.squareup.moshi.Json
import com.squareup.moshi.JsonClass
@JsonClass(generateAdapter = true)
data class TestJsonClass(
@Json(name = "Long Field Name With Spaces 1") val field1: Int,
@Json(name = "Long Field Name With Spaces 2") val field2: Int,
@Json(name = "Long Field Name With Spaces 3") val field3: Int,
@Json(name = "Long Field Name With Spaces 4") val field4: Int,
@Json(name = "Long Field Name With Spaces 5") val field5: Int
)
```
username_2: Any news?
Status: Issue closed
username_4: This has been fixed since 1.9.3 via #1053 |
MicrosoftDocs/microsoft-365-docs | 535098641 | Title: Typo on line 37
Question:
username_0: There is a line 37 which says Sing-in rather than Sign-in
Answers:
username_1: @username_0 - May we know which Microsoft document link you used in reference to this issue?
Please share it with us so we can investigate further. Thank you.
Status: Issue closed
username_1: @username_0 - We appreciate any feedback that improves the content in the Microsoft docs but we haven't had any response from you so we are going to close this issue.
If there is a specific area of the docs that we can improve on, please feel free to open a new issue by clicking on the Feedback section on the specific Microsoft doc which is found at the bottom of the page. Thank you. |
JabRef/jabref | 208553087 | Title: Cannot import from Medline
Question:
username_0: JabRef version 3.3 (via homebrew)
I wanted to import an article from Medline via its ID (i.e. 11456302).
The error `Leerstellen erforderlich zwischen publicId und systemId.` (spaces required between publicId and systemId) keeps coming up. Java 8.
```
org.xml.sax.SAXParseException: Leerstellen erforderlich zwischen publicId und systemId.
at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:203) ~[?:1.8.0_121]
at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:177) ~[?:1.8.0_121]
at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:400) ~[?:1.8.0_121]
at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:327) ~[?:1.8.0_121]
at com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(XMLScanner.java:1472) ~[?:1.8.0_121]
at com.sun.org.apache.xerces.internal.impl.XMLScanner.scanExternalID(XMLScanner.java:1072) ~[?:1.8.0_121]
at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.scanDoctypeDecl(XMLDocumentScannerImpl.java:642) ~[?:1.8.0_121]
at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(XMLDocumentScannerImpl.java:924) ~[?:1.8.0_121]
at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:602) ~[?:1.8.0_121]
at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:112) ~[?:1.8.0_121]
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:505) ~[?:1.8.0_121]
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:841) ~[?:1.8.0_121]
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:770) ~[?:1.8.0_121]
at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141) ~[?:1.8.0_121]
at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213) ~[?:1.8.0_121]
at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:643) ~[?:1.8.0_121]
at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl.parse(SAXParserImpl.java:327) ~[?:1.8.0_121]
at javax.xml.parsers.SAXParser.parse(SAXParser.java:195) ~[?:1.8.0_121]
at net.sf.jabref.importer.fileformat.MedlineImporter.importEntries(MedlineImporter.java:122) ~[JabRef-3.3.jar:?]
at net.sf.jabref.importer.fileformat.MedlineImporter.fetchMedline(MedlineImporter.java:93) ~[JabRef-3.3.jar:?]
at net.sf.jabref.importer.fetcher.MedlineFetcher.processQuery(MedlineFetcher.java:153) ~[JabRef-3.3.jar:?]
at net.sf.jabref.importer.fetcher.GeneralFetcher.lambda$actionPerformed$5(GeneralFetcher.java:245) ~[JabRef-3.3.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
```
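For context, Xerces raises that message when a DOCTYPE declaration lacks whitespace between its public and system identifiers; an illustrative (hypothetical) pair:
```xml
<!-- Rejected: no whitespace between the public and system identifiers -->
<!DOCTYPE PubmedArticleSet PUBLIC "-//NLM//DTD PubMedArticle//EN""pubmed.dtd">

<!-- Accepted -->
<!DOCTYPE PubmedArticleSet PUBLIC "-//NLM//DTD PubMedArticle//EN" "pubmed.dtd">
```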
Answers:
username_0: Updated to 3.8.2 and it works flawlessly.
Status: Issue closed
|
HISKP-LQCD/sLapH-projection-NG | 484931365 | Title: Redundancies from particles exchange
Question:
username_0: So far I have created a correlator matrix and only showed the parts that Markus also had. This provided a nice deduplication. Going forward I want to produce a non-redundant correlator matrix without external input. As one can see in this example (others are similar), the correlator matrix is redundant:

Resolving the particle momenta to `P - q` and `q` gives us these two cases:
| q | p₁ | p₂ |
| --- | --- | --- |
| 0 1 1 | 0 -1 0 | 0 1 1 |
| 1 0 0 | -1 0 1 | 1 0 0 |
These might not seem to be the same thing, but remember that these are each summed over the whole little group stabilizing `P = (0, 0, 1)`. So a rotation by 180° will transform one into the exchanged-particle case of the other.
If the particles were not identical, this would not be a problem. Therefore I need to tweak the detection of equal momentum orbits just that it also sorts the momenta of the individual particles as they are all interchangeable for the current case of interest. This should become a configurable option in the longer term to support something like ππK where some but not all particles are interchangeable.
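As a language-agnostic sketch of that tweak (illustrative only, not the actual projection code), the canonicalization amounts to sorting the pair of individual momenta within each orbit element before comparing orbits:
```python
# Illustrative sketch: for identical particles, the two individual momenta
# are interchangeable, so sort them within each orbit element before
# comparing whole orbits.
def canonical_orbit(orbit):
    """orbit: iterable of (p1, p2) pairs, each momentum a 3-tuple of ints."""
    return frozenset(tuple(sorted((p1, p2))) for (p1, p2) in orbit)

def same_orbit(orbit_a, orbit_b):
    return canonical_orbit(orbit_a) == canonical_orbit(orbit_b)

# Two single-element orbits differing only by particle exchange coincide:
assert same_orbit([((0, -1, 0), (0, 1, 1))], [((0, 1, 1), (0, -1, 0))])
```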
Answers:
username_0: This sorting does the trick, it is a minimal change to the code. It now only supports identical particles, but I have no plans to do anything else soon anyway. The Lüscher formalism likely isn't even developed for that yet.
Sorting the individual momenta in the orbit makes them differ a bit in the sorting:

But now the orbits of the two cases above are really the same:

I will do the analytic projection for the rho again and see whether this cures all the problems.
Right now the problem is comically severe, take a look at the following correlator matrices to have a laugh:

username_0: The P² = 1, B₁, P = 001 is now fixed. Before it had 2×2 entries, now it is 1×1 as one would want:

But that did not cure all the issues. For instance the P² = 2, B₁, P = 011 still shows apparent redundancies:

Computing the individual particle momenta is a bit hard because the q are given in the reference frame where the total momentum is P = (1, 1, 0).
q | p₁ | p₂
--- | --- | ---
0 0 -1 | 1 1 1 | 0 0 -1
0 0 1 | 1 1 -1 | 0 0 1
0 1 -1 | 1 0 1 | 0 1 -1
0 1 1 | 1 0 -1 | 0 1 1
The first two should be within the same orbit, actually. For the last two I would think that the stabilizer subgroup for (1, 1, 0) could yield the same orbit under particle exchange.
And indeed the first two give the same orbit even without particle exchange, so the old function is just fine.

The last two give the same results for both q, so they should drop out as well:

Then likely I have screwed up something with the reference rotation. But the filtering does work, it removes all the ones that we do not want!

Perhaps I am working with some stale files or something?
username_0: It was stale data for this part 🤦♂️:

username_0: But re-running did not cure all of them. There is still this one here:

We can let Mathematica figure out the individual momenta here:

Each row are the two individual momenta in the actual frame, not in this reference frame.
We can take a look at the momentum orbits (exploiting particle exchange already). Each row is one orbit. We can see that all of them are unique, therefore it is no surprise that they were not filtered out.

But if we go into the reference frame such that the total momentum is P = (1, 1, 0) and keep the same relative momenta, we see that this momentum structure actually has only two unique ones.

So I would except that something is wrong in the part where the reference rotation is applied and therefore the global orientation makes a difference, which it definitely should not. It is okay if a global rotation gives a different signal because the gauge is not rotationally invariant, but the dimension of the correlator matrix must be the same for fixed P² and irrep.
username_0: I wrote that one has to add these `q` to `P_ref` and not `P` in this issue, right? I should also do that in the code 🤨. With that the filtering works just as in the reference frame:

username_0: I was adding the relative momenta `q` to the `P_ref` all along. With the latest fix I just effectively removed the reference rotation which means that the generated individual momenta are in the reference frame. As one can see in the above Mathematica screenshot the total momentum is `P = (1, 1, 0)`, which is the reference momentum for P² = 2. This means that the orbit calculation works exactly the same way, which is fine for our purposes. But I just added another `MomentumRef` application and then in the end nothing except the reference total momenta coupled to anything any more.
So the issue is more subtle apparently, but at least the duplicates have gone now. The following contains everything up to this point: [Correlator_Matrix_Visualization.pdf](https://github.com/HISKP-LQCD/sLapH-projection-NG/files/3538442/Correlator_Matrix_Visualization.pdf)
We are left with peculiar non-square correlator matrices like these ones:

In the prescription there are only these three elements, so the asymmetry is already present there. It seems rather peculiar because the analytic projection code should use the same set of momenta for source and sink.
username_0: The momentum combinations are there in the pure spin part, so this is not the problem.
```mathematica
<|{-2, 0, 0} -> <|"A1" -> <|"1" -> <|"1" -> <|
"000" -> {"000", "001", "011"},
"001" -> {"000", "001", "011"},
"011" -> {"000", "001", "011"}|>|>|>|>|>
```
The cutoffs are not the problem either, we have 002 & 000, then 001 & 001, and 0-11 & 011, these have sum norms of 4, 2 and 4, none are above the cutoff of 4 we have for P² = 2. The max norms are 4, 1 and 2; these are not above the global cutoff of 4 as well. And the cutoff filtering is done before the momenta are put into the group theory, so that cannot be it.
The resulting analytic prescription has this non-square form. So the error is within the analytic projection code, the numeric one just does what it is supposed to do.
```js
{
"-200":{
"A1":{
"1":{
"1":{
"000":{
"000":[
14 summands
]
},
"001":{
"000":[
2 summands
]
},
"011":{
"000":[
8 summands
]
}
}
}
}
}
}
```
We can take a look at an intermediate state where we have a linear combination of strings. And there, for some reason, it is zero in these other momentum combinations:

I have a hard time believing that this zero really is physical, I would rather think that something is wrong here.
username_0: Looking at Markus's rho data I find that he only has these following “gevp indices”:
```
id element
0 p: 4, g: \gamma_{50i}
1 p: 4, g: \gamma_{i}
2 p: 4, q: (0.0, 0.0, 1.0), g: \gamma_{5}, \gamma_{5}
```
With his momentum parametrization this is 002 & 000 for the individual particles, exactly the one diagonal element that works. He does not have any of the other elements, although I would think that they would be allowed from the cutoffs.
@username_1: Is it a coincidence that you only have 1 two-pion operator for P² = 4 in the A₁ irrep, while at the same time that is the only operator that produces a non-zero diagonal element for me?
username_1: No, that's not a coincidence.
(0,0,0) \times (0,0,2) is obviously A1.
(0,1,1) \times (0,-1,1) subduces into the E irrep. Thus I would expect the projection into A1 to be 0.
Regarding (0,0,1) \times (0,0,1) that is a bit more complicated. The momenta are equal, therefore the pions are indistinguishable (again). I investigated that a year ago and found that there is a cancellation in the isospin projection. I am not sure whether the pions are in S-wave but you should find that charge conjugation would be violated.
Also this is documented in https://github.com/HISKP-LQCD/sLapH-projection/blob/master/two-meson-operators/selection-of-q.md if you can understand what I wrote there xD
username_0: @username_1: The peculiar thing then really is that the two elements other than -200 & 000 are non-zero, right?
username_1: I am confused. Which elements are non-zero? Other than (0,0,2) \times (0,0,0) the projection is zero, isn't it?
username_0: @username_1: Look here: https://github.com/HISKP-LQCD/sLapH-projection-NG/issues/23#issuecomment-524650539
username_1: They are 1e-13 all the way through. Is that really significant?
username_0: @username_1: Thanks for pointing out the obvious, I hadn't noticed the numerical scale 🤦‍♂️. I was caught up in the fact that analytically my code says there is a signal in that particular channel. But perhaps it just does not take some particular symmetry or simplification into account, and then it comes out as numerically zero.
This means that I need to filter correlator matrix elements that are very small and see whether they turn out to be smaller and square after that. Thank you!
Status: Issue closed
username_0: Just filtering out correlators which have |C(0)| < 1.0e-8 makes all of these A1 irreps well behaved and now all correlator matrices look sensible.
[Correlator_Matrix_Visualization.pdf](https://github.com/HISKP-LQCD/sLapH-projection-NG/files/3558587/Correlator_Matrix_Visualization.pdf)
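For illustration, the filtering step amounts to something like this sketch (the threshold is the one quoted above; the data layout is an assumption):
```python
# Drop correlator matrix elements that are numerically zero at t = 0.
# `matrix` maps (source_op, sink_op) -> sequence of C(t) values.
def drop_numerically_zero(matrix, threshold=1.0e-8):
    return {ops: corr for ops, corr in matrix.items()
            if abs(corr[0]) >= threshold}
```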
username_2: looks very good now! |
silverstripe/silverstripe-userforms | 862169102 | Title: Custom required message not showing when field set to required via Recipient section
Question:
username_0: The custom required message is not shown when the field is used in the Recipient section dropdown list.
It shows the default `This field is required` message instead.
Steps to reproduce:
- Create a user form with an email field.
- Open the `Form Fields` tab
- Edit the email field and open the `Validation` tab
- Add a Custom error message and make sure the `Is this field Required?` is unchecked.
- Save and close
- Open the `Recipients` tab
- Click the `Add Email Recipient` button
- Add initial data to the fields
- Modify one of the fields and select the `email` field on one of the dropdown options. This will make the email field be `required` on the front end.
- `Save` or `Publish` and test the Front end form.
Expected result:
- custom required message should be shown to the user.




Answers:
username_0: This scenario shows the custom error, so I am assuming it should be the same for the scenario I reported above.
**Steps to reproduce:**
Create a user form with an email field.
Open the Form Fields tab
Edit the email field and open the Validation tab
Add a Custom error message and make sure the Is this field Required? is **checked**.
Save or Publish and test the Front end form.
**Expected result:**
The custom required message is shown to the user.
username_1: @username_0 Could you please confirm the version of framework and userforms that you're using?
Could you also confirm if you're using CWP or not, and also if you're using either the starter or watea themes?
username_0: @username_1 -> non CWP, Custom theme
```
"silverstripe/recipe-cms": "4.x-dev",
"silverstripe/userforms": "^5.8"
``` |
eduardsui/tlse | 651999521 | Title: Cannot compile with TLS_ACCEPT_SECURE_RENEGOTIATION
Question:
username_0: It complains about 'buf' not being defined.
Answers:
username_1: Secure renegotiation was dropped.
As of 2020, TLS renegotiation is no more because it was insecure.
I will remove all renegotiation-related code.
Status: Issue closed
username_1: Updated, but not supported anymore. Do not use secure renegotiation; it may be broken.
eosio-enterprise/chappe | 581328090 | Title: Update GraphQL query string to use contract name from configuration
Question:
username_0: The config.yaml file lists the contract that messages are published to. It is Eosio.PublishAccount.
This is obtained with viper.GetString("Eosio.PublishAccount")
However, the GraphQL template used to query is having issues with dynamic injection of strings.
I'm sure it's simple, but I tried a few options and they didn't work. Until this is fixed, changing the messaging contract for GraphQL queries is not supported.
https://github.com/eosio-enterprise/chappe/blob/master/pkg/dfuse-graphql.go#L68 |
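For reference, one straightforward way to interpolate the configured account into the query string in Go; the query shape below (a dfuse-style `searchTransactionsForward` with a `receiver:` term) is an assumption for illustration, not necessarily the repository's actual template:
```go
package main

import (
	"fmt"

	"github.com/spf13/viper"
)

// buildQuery injects the configured publish account into the GraphQL query.
// The query body here is illustrative only.
func buildQuery() string {
	account := viper.GetString("Eosio.PublishAccount")
	return fmt.Sprintf(`subscription {
  searchTransactionsForward(query: %q) {
    trace { id }
  }
}`, "receiver:"+account)
}

func main() {
	fmt.Println(buildQuery())
}
```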
ebean-orm/ebean-types | 531941264 | Title: Cdir type should be named Cidr instead
Question:
username_0: The class name for the `Cdir` type is wrong. Since it is to be used for storing values of `cidr` type columns, it should be named `Cidr` instead.
(FYI: CIDR means [Classless Inter-Domain Routing](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing)).
Answers:
username_0: This also applies to some of the other Ebean repositories by the way.
Do you want me to create separate issues for those?
username_1: I don't understand this part. Repositories such as ?
Status: Issue closed
username_1: Ok, the Cidr was also stored as `INET` rather than `CIDR` so also fixing that with #1880
I'll close this. Update with what "other Ebean repositories" means - thanks.
username_0: https://github.com/ebean-orm/ebean, but I see you already fixed it :-)
Oh, the README.md still contains the typo. Sorry...
username_1: Cool - fixed readme and summary. Should be all good now. Thanks!!
username_0: A colleague of mine found out that the src/main/java/io/ebeaninternal/server/type/ScalarTypeCdir.java (and its javadoc) still has the wrong name... Sorry...
username_1: Thanks and great work spotting that. It's good to get it right right !! Sorted now.
Cheers, Rob.
username_1: Adding reference to: https://github.com/ebean-orm/ebean/issues/1880 |
JimmyLv/reading | 243996969 | Title: How to Choose a Job - Zhihu Column
Question:
username_0: ## How to Choose a Job - Zhihu Column<br>
陈天, 2 months ago. This is an old article with quite a bit of new content added, for your reference. In "Hackers & Painters", <NAME> already gave the answer to this question: choose work that has measurability and leverage. Let's go into the details. Note that the wording below is similar to that of the Agile Manifesto: when you have the right and the ability to choose, prefer the former over the latter. But that does not mean the latter is bad. …<br><br>
July 19, 2017 at 06:38PM<br>
via Instapaper http://ift.tt/2qWvxcb<issue_closed>
Status: Issue closed |
maurice-daly/DriverAutomationTool | 648173902 | Title: Admin Rights + Internet Access Requirements - Not Best Practice
Question:
username_0: The application must be run with admin rights. The application typically needs to connect to the internet. This combination elevates risk and may be unnecessary.
One reason for requiring admin rights, is that user modifiable data exists in Program Files, which goes against best practice. Better locations would be %AppData% or %ProgramData%.
Request that the requirement for the application to *run* with admin rights is removed.
Status: Issue closed
Answers:
username_1: The tool can be run remotely from the site server, as long as sufficient rights to Configuration Manager are provisioned to the running account. The reason for installing into Program Files is to adhere to defaults within app-control mechanisms like AppLocker, and, if the app is installed on a server, so that the settings can be persisted across multiple users (yes, they could be written to the all-users profile too).
dougmoscrop/serverless-http | 566046797 | Title: Express View Engine Support
Question:
username_0: Does this support Express' ability to render view template files?
````javascript
const express = require('express');
const path = require('path');
const serverless = require('serverless-http');
const app = express();
const router = express.Router();
router.get('/', (req, res) => {
res.render('index', { name: 'John' });
});
app.engine('jsx', require('express-react-views').createEngine());
app.set('views', path.join(__dirname, "views"));
app.set('view engine', 'jsx');
app.use('/.netlify/functions/server', router); // path must route to lambda
app.use('/', (req, res) => res.sendFile(path.join(__dirname, '../index.html')));
module.exports = app;
module.exports.handler = serverless(app);
````
When I run the express app locally, it works fine as expected.
When I run it using the serverless-wrapped version without attempting to `res.render()` or setting Express to use the view engine, it also works fine as expected.
However, the moment I attempt to use the view engine (and others), I start having path and directory issues, and the app can't seem to find my views folder.
Answers:
username_1: I have not tested this, is it possible something else during packaging is causing the views folder to be unavailable?
username_0: I feel like that is likely but I'm just a bit out of my depth. Can you point me in the right direction?
username_1: I guess the first step would be to run `serverless package` and then `cd .serverless` and `unzip {xyz}.zip` where xyz is the name of your service - I would look in the unzipped contents to see if they match.
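If the unzipped contents turn out to be missing the views folder, one common remedy is to include it explicitly in `serverless.yml` (a sketch; assumes the default packaging excluded it):
```yaml
# serverless.yml (sketch): ensure the views folder ships with the function.
package:
  include:
    - views/**
```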
username_0: Thanks!
username_1: How did you make out? Is this issue ok to close? |
rook/rook | 551118016 | Title: Device path filter creates misleading log messages
Question:
username_0: **Is this a bug report or feature request?**
* Bug Report
**Deviation from expected behavior:**
The following log message prints out when scanning devices
```
2020-01-16 23:01:37.564669 I | cephosd: device "sdc" (aliases: "/dev/disk/by-id/ata-SSDSC2KB960G8R_BTYF930201Z2960CGN /dev/disk/by-id/wwn-0x55cd2e41514802d9 /dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:4:0") matches device path filter "/dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0": false
2020-01-16 23:01:37.564682 I | cephosd: skipping device "sdc" that does not match the device filter/list ([{/dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0 1 0 false true}]). <nil>
```
This is misleading because it looks like the device path filter successfully matches the device, when it actually doesn't. In this example it looks like `/dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:4:0` is matched by the filter `/dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0 1`. The log message should only print if the device path filter actually matches.
https://github.com/rook/rook/blob/a42d8222761544bc103f9bdcbff54592ca98dddf/pkg/daemon/ceph/osd/daemon.go#L339
**Expected behavior:**
**How to reproduce it (minimal and precise):**
Setup a devicePathFilter (search on https://rook.io/docs/rook/v1.2/ceph-cluster-crd.html)
Tail the logs of the node prepare job (e.g. `rook-ceph-osd-prepare-kubeminionrook0016`)
**Environment**:
* Rook version (use `rook version` inside of a Rook Pod):
rook: v1.2.1
Answers:
username_0: In #4712 I moved the log message to only print if the filter matches. If it's desired that these log messages print out every time, then I would suggest rewording the log message to say something like 'device ... matches/does not match device path filter ...'.
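A runnable sketch of that reworded alternative (illustrative names, not Rook's actual identifiers):
```go
package main

import "log"

// logFilterMatch prints an unambiguous message either way, instead of the
// current form that appends a bare boolean to a sentence reading like a match.
func logFilterMatch(device, filter string, matched bool) {
	if matched {
		log.Printf("device %q matches device path filter %q", device, filter)
	} else {
		log.Printf("device %q does not match device path filter %q", device, filter)
	}
}

func main() {
	logFilterMatch("sdc", "/dev/disk/by-path/pci-0000:18:00.0-scsi-0:0:2:0", false)
}
```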
Status: Issue closed
|
google/guava | 47426257 | Title: None
Question:
username_0: /cc @kevinb9n re. nullability annotations
Answers:
username_0: (We probably won't do anything until Kevin's nullability annotation project is launched, so closing this.)
Status: Issue closed
username_1: @username_0 Off-topic: is Kevin's nullability project public knowledge yet? I remember reading somewhere that he did a talk about it at a conference, but I wasn't able to find a public recording of it. :)
username_2: He did [talk about it](https://openjdk.java.net/projects/mlvm/jvmlangsummit/agenda.html), but I don't think there's a public video (or slides, or anything else just yet).
username_1: @username_2 Cool, good to know. I'll patiently wait for further news then. 👍 |
MyCryptoHQ/MyCrypto | 775699596 | Title: Small typo in migrate app
Question:
username_0: <img width="620" alt="Screen Shot 2020-12-29 at 12 02 13 AM" src="https://user-images.githubusercontent.com/14945613/103259894-a449f180-499b-11eb-92de-e6d9471b61eb.png">
It says REP but it should be ANT
Answers:
username_1: Thanks for reporting Griff! Will commit a fix for this which will be included in the next release 👍
Status: Issue closed
|
connectivedx/fuzzy-chainsaw | 290546706 | Title: Add ability to turn off toast notifications
Question:
username_0:
## Problem
Build servers experience problems with toast notifications. When they pop up on the build server, they persist and can either slow down the build process or cause it to hang entirely.
## Current Behavior

## Effect on Build Server

## Possible Solution
Maybe add a `--silent` flag or something for build servers? From a back-end side, we're typically using the `production` task if that helps<issue_closed>
Status: Issue closed |
wso2-extensions/esb-connector-file | 848075614 | Title: Update file connector sftpIdentities parameter and Passphrase parameter
Question:
username_0: **Description:**
As per the current WSO2 file connector implementation, only one sftpIdentities parameter and one sftpIdentityPassphrase are supported, specifying the location of the private key and its passphrase for either the source or the destination.
The file connector should be improved so that the private key location and passphrase can be specified for both the source server and the destination server.
**Related Issues:**
https://github.com/wso2-extensions/esb-connector-file/issues/123 |
AY2021S2-CS2103T-T13-4/tp | 823683547 | Title: Update testutil classes to simplify testing
Question:
username_0: - Create helper static fields in `CommandTestUtil` for easier testing of property-related commands
- Create `PropertyBuilder` class in testutil to help with building `Property` objects for testing
- Create `TypicalProperties` class in testutil that contains a list of `Property` objects to be used in tests<issue_closed>
Status: Issue closed |
amyamyx/Slack-c | 331652990 | Title: [MVP] Direct messages
Question:
username_0: - [ ] Backend: DB, model, controller, views
- [ ] Redux Loop: ajax, actions, reducer
- [ ] Presentational Components
- [ ] Adequate styling
- [ ] Smooth, bug-free navigation
- [ ] Seeded with info to demonstrate this feature |
ElementsProject/elementsproject.github.io | 503923491 | Title: signblock fails with dynamic blocks missing witnessScript error
Question:
username_0: The line...
```
SIGN1=$(e1-cli signblock $HEX)
```
fails with error 8 - "Signing dynamic blocks requires the witnessScript argument"
This is a change in 0.18 release relating to dynamic blocks.
Looking at fixing this now...
Answers:
username_0: Can be fixed with:
> @phil, replacing the setting of the SIGNBLOCKARG variable with the following line will get you past the error you had:
```
SIGNBLOCKARG="-signblockscript=$(echo $REDEEMSCRIPT) -con_max_block_sig_size=150 -con_dyna_deploy_start=0"
```
username_0: Fixed with #106
Status: Issue closed
|
hexdigest/gowrap | 376782398 | Title: Access to Documentation Comments
Question:
username_0: We would like to be able to make interface and method ‘documentation’ comments available to templates.
For example, we would like to use a method comment to provide hints to a logging decorator about what and how to log specific params and results (avoiding monolithic log output, hiding sensitive data etc)
The documentation comments are readily available in the AST and so could be copied to the template input.
I’m happy to prototype this and raise a pull request, but would welcome your opinion on whether this would be a useful feature.
Answers:
username_1: @username_0 I was thinking about it and it totally makes sense. Feel free to raise a PR.
username_2: Was this ever completed?
username_1: @username_0 @username_2
Please take a look at https://github.com/username_1/gowrap/pull/36
Status: Issue closed
|
zynthian/zynthian-issue-tracking | 830159797 | Title: Show updates available in UI and Webconf
Question:
username_0: **Is your feature request related to a problem? Please describe.**
It is not evident when updates are available, requiring users to manually perform an update to find out if there is anything new. It would be advantageous to indicate to the user that updates are available.
**Describe the solution you'd like**
An indication in the UI that updates are available. This could be a change of text / colour of the "Update Software" menu item in Admin menu. It could also be advantageous to change the Admin menu item in the main menu so a user sees an indication without having to go looking.
Similarly, an indication in the webconf next to each repo could show that an update is available.
**Describe alternatives you've considered**
Notifications via Discourse can inform of updates but not all users follow this and there may be updates to a branch that users are not using hence not relevant to them which can cause notification fatigue.
**Additional context**
`git remote update` will update a local working copy to know what updates are available from the remote repo.
`git status` includes a line like, "Your branch is behind 'origin/master' by 4 commits, and can be fast-forwarded." which may be parsed to check if updates are available.
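Instead of parsing the human-readable `git status` text, `git rev-list --count` exposes the same information machine-readably; a minimal sketch (assumes the current branch tracks an upstream):
```sh
#!/bin/sh
# Refresh remote refs, then count how far behind upstream we are.
git remote update >/dev/null 2>&1
behind=$(git rev-list --count HEAD..@{u} 2>/dev/null)
if [ "${behind:-0}" -gt 0 ]; then
    echo "updates available: $behind commit(s) behind"
fi
```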
Answers:
username_0: I have added an indication of available updates to the webconf dashboard, which is updated when the dashboard is refreshed. This is in the _repository_mod_ branch.
I have added the ability to check for updates in the UI, and Update Software is only offered if updates are available. This is in the _check_for_updates_ branch.
username_1: It's merged on testing. |
atom/tree-view | 156536121 | Title: Can't delete any file in tree-view.
Question:
username_0: Any attempt to right-click and remove results in the following error:

Answers:
username_0: Atom : 1.7.3
Electron: 0.36.8
Chrome : 47.0.2526.110
Node : 5.1.1
username_1: Is `gvfs-trash` installed?
username_0: username_0@solace:~$ /usr/bin/gvfs-trash --version
gvfs 1.28.1
username_0: Fixed. It was a systems problem, I just had to flush the existing trash directory.
Status: Issue closed
username_2: I have the same problem. @ghost, how did you solve it? Any help will be appreciated!
I'm using atom v1.19.6 and node v6.9.1 |
arisgk/open-cite | 594605441 | Title: Apply Airbnb code style with ESLint
Question:
username_0: The current ESLint config doesn't throw warnings for undefined variables etc., so applying the Airbnb code style will result in a better experience.
https://github.com/airbnb/javascript
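For reference, a minimal setup sketch (assumes `eslint-config-airbnb-base`, the non-React variant):
```js
// .eslintrc.js (install the peer dependencies first, e.g.:
//   npx install-peerdeps --dev eslint-config-airbnb-base)
module.exports = {
  extends: ['airbnb-base'],
  env: { browser: true, node: true },
  rules: {
    // project-specific overrides go here
  },
};
```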
Answers:
username_0: Fixed by https://github.com/username_0/open-cite/commit/aed7e6aaa24199e87b8561ad9b7a2613f216f7bc
Status: Issue closed
|
department-of-veterans-affairs/va.gov-team | 1000112134 | Title: Build Radio Button web component
Question:
username_0: - [ ] Build a Radio Button web component
- [ ] Identify existing Radio Button REACT components that need to be migrated
- [ ] Announce the new Radio Button web component to all teams
Answers:
username_1: It appears this work has already been completed https://design.va.gov/storybook/?path=/docs/components-va-radio--default
Status: Issue closed
|
pouchdb/pouchdb | 93152746 | Title: indexedDB has stores to persist rows; how does PouchDB?
Question:
username_0: I have different classes:
Student, Course, Teacher.
How do I save rows to different stores in PouchDB?
The guide book is not clear about this.
Answers:
username_1: A simple approach would be to store your data in one instance of PouchDB, with a type property e.g.:
```js
{
type: 'student'
}
```
Then you can filter against that.
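For example, with the `pouchdb-find` plugin (a sketch; assumes the plugin is installed):
```js
const PouchDB = require('pouchdb');
PouchDB.plugin(require('pouchdb-find'));

const db = new PouchDB('school');

// Index the type field once, then query by it.
db.createIndex({ index: { fields: ['type'] } })
  .then(() => db.find({ selector: { type: 'student' } }))
  .then((result) => console.log(result.docs));
```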
Status: Issue closed
username_2: Yup, you can do
```
new PouchDB("Student");
new PouchDB("Course");
new PouchDB("Teacher");
```
However, you will not be able to write queries across those databases and will need to query them separately, or you can do as @username_1 suggested and put them all in the same database with a `type` field. If you have any more questions, feel free to ask!
hanami/hanami | 243727696 | Title: dotenv gem not installed as a dependency
Question:
username_0: While trying to follow along on the tutorial in Getting Started, encountered this error at the point of creating the new hanami project:
~~~bash
08:56:20:hanami >> hanami new bookshelf --database=postgresql
create .hanamirc
create .env.development
create .env.test
create Gemfile
create config.ru
create config/boot.rb
create config/environment.rb
create lib/bookshelf.rb
create public/.gitkeep
create config/initializers/.gitkeep
create lib/bookshelf/entities/.gitkeep
create lib/bookshelf/repositories/.gitkeep
create lib/bookshelf/mailers/.gitkeep
create lib/bookshelf/mailers/templates/.gitkeep
create spec/bookshelf/entities/.gitkeep
create spec/bookshelf/repositories/.gitkeep
create spec/bookshelf/mailers/.gitkeep
create spec/support/.gitkeep
create db/migrations/.gitkeep
create Rakefile
create spec/spec_helper.rb
create spec/features_helper.rb
create db/schema.sql
create .gitignore
run git init . from "."
/Users/username_0/.rvm/gems/ruby-2.3.3/gems/hanami-1.0.0/lib/hanami/env.rb:59:in `load!': uninitialized constant Dotenv::Parser (NameError)
from /Users/username_0/.rvm/gems/ruby-2.3.3/gems/hanami-1.0.0/lib/hanami/environment.rb:522:in `set_application_env_vars!'
from /Users/username_0/.rvm/gems/ruby-2.3.3/gems/hanami-1.0.0/lib/hanami/environment.rb:504:in `set_env_vars!'
from /Users/username_0/.rvm/gems/ruby-2.3.3/gems/hanami-1.0.0/lib/hanami/environment.rb:207:in `block in initialize'
from /Users/username_0/.rvm/gems/ruby-2.3.3/gems/hanami-1.0.0/lib/hanami/environment.rb:207:in `synchronize'
from /Users/username_0/.rvm/gems/ruby-2.3.3/gems/hanami-1.0.0/lib/hanami/environment.rb:207:in `initialize'
from /Users/username_0/.rvm/gems/ruby-2.3.3/gems/hanami-1.0.0/lib/hanami/commands/generate/app.rb:19:in `new'
from /Users/username_0/.rvm/gems/ruby-2.3.3/gems/hanami-1.0.0/lib/hanami/commands/generate/app.rb:19:in `initialize'
from /Users/username_0/.rvm/gems/ruby-2.3.3/gems/hanami-1.0.0/lib/hanami/commands/new/container.rb:97:in `new'
from /Users/username_0/.rvm/gems/ruby-2.3.3/gems/hanami-1.0.0/lib/hanami/commands/new/container.rb:97:in `generate_app'
from /Users/username_0/.rvm/gems/ruby-2.3.3/gems/hanami-1.0.0/lib/hanami/commands/new/container.rb:43:in `post_process_templates'
from /Users/username_0/.rvm/gems/ruby-2.3.3/gems/hanami-1.0.0/lib/hanami/generators/generatable.rb:39:in `process_templates'
from /Users/username_0/.rvm/gems/ruby-2.3.3/gems/hanami-1.0.0/lib/hanami/generators/generatable.rb:12:in `start'
from /Users/username_0/.rvm/gems/ruby-2.3.3/gems/hanami-1.0.0/lib/hanami/commands/new/abstract.rb:61:in `block in start'
from /Users/username_0/.rvm/gems/ruby-2.3.3/gems/hanami-1.0.0/lib/hanami/commands/new/abstract.rb:58:in `chdir'
from /Users/username_0/.rvm/gems/ruby-2.3.3/gems/hanami-1.0.0/lib/hanami/commands/new/abstract.rb:58:in `start'
from /Users/username_0/.rvm/gems/ruby-2.3.3/gems/hanami-1.0.0/lib/hanami/cli.rb:122:in `new'
from /Users/username_0/.rvm/gems/ruby-2.3.3/gems/thor-0.19.4/lib/thor/command.rb:27:in `run'
from /Users/username_0/.rvm/gems/ruby-2.3.3/gems/thor-0.19.4/lib/thor/invocation.rb:126:in `invoke_command'
from /Users/username_0/.rvm/gems/ruby-2.3.3/gems/thor-0.19.4/lib/thor.rb:369:in `dispatch'
from /Users/username_0/.rvm/gems/ruby-2.3.3/gems/thor-0.19.4/lib/thor/base.rb:444:in `start'
from /Users/username_0/.rvm/gems/ruby-2.3.3/gems/hanami-1.0.0/bin/hanami:5:in `<top (required)>'
from /Users/username_0/.rvm/gems/ruby-2.3.3/bin/hanami:22:in `load'
from /Users/username_0/.rvm/gems/ruby-2.3.3/bin/hanami:22:in `<main>'
from /Users/username_0/.rvm/gems/ruby-2.3.3/bin/ruby_executable_hooks:15:in `eval'
from /Users/username_0/.rvm/gems/ruby-2.3.3/bin/ruby_executable_hooks:15:in `<main>'
~~~
Turns out I did not have dotenv gem installed. After fixing with:
~~~
[Truncated]
identical spec/spec_helper.rb
identical spec/features_helper.rb
identical db/schema.sql
create apps/web/application.rb
create apps/web/config/routes.rb
create apps/web/views/application_layout.rb
create apps/web/templates/application.html.erb
create apps/web/assets/favicon.ico
create apps/web/controllers/.gitkeep
create apps/web/assets/images/.gitkeep
create apps/web/assets/javascripts/.gitkeep
create apps/web/assets/stylesheets/.gitkeep
create spec/web/features/.gitkeep
create spec/web/controllers/.gitkeep
create spec/web/views/.gitkeep
insert config/environment.rb
insert config/environment.rb
append .env.development
append .env.test
~~~
Answers:
username_1: @username_0 Thanks for reporting this. I'm working on the CLI, so I'll fix this too.
Enjoy Hanami! 🌸
username_2: @username_1 Can you let me know what this error is? I looked it and it looks like we have a guard clause to catch exactly this:
https://github.com/hanami/hanami/blob/master/lib/hanami/env.rb#L56-L59
```
return unless defined?(Dotenv)
contents = ::File.open(path, "rb:bom|utf-8", &:read)
parsed = Dotenv::Parser.call(contents)
```
I can't reproduce it, even on a ruby without dotenv installed.
So, the question is, how is `Dotenv` defined but not `Dotenv::Parser`, especially since `require 'dotenv'` [loads the parser as its very first line](https://github.com/bkeepers/dotenv/blob/master/lib/dotenv.rb#L1). `dotenv-rails` should load the parser too.
username_1: @username_2 Shall we make things more explicit?
1. Add a `require "dotenv/parser"`
2. `return unless defined?(Dotenv::Parser)`
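Roughly, a sketch of what that would look like (adapted from the shape of `lib/hanami/env.rb`; the method wrapper and the `ENV` merge here are illustrative):
```ruby
def load!(path)
  begin
    require "dotenv/parser"
  rescue LoadError
    return
  end

  contents = ::File.open(path, "rb:bom|utf-8", &:read)
  parsed   = Dotenv::Parser.call(contents)
  parsed.each { |key, value| ENV[key] ||= value }
end
```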
username_2: @username_1 I still don't see how the current solution failed, so I'm not sure if that'll fix it. Line 56 of `lib/hanami/env.rb` is `return unless defined?(Dotenv)`, and somehow it got past that without the `dotenv` gem being installed?
username_2: I just tried to reproduce this with ruby 2.4.1, without `dotenv` installed and I couldn't :/
username_1: @username_0 Did you create the new project from the Hanami git repository?
I mean, did you run the following commands?
```shell
git clone https://github.com/hanami/hanami.git
cd hanami
hanami new bookshelf --database=postgresql
```
I'm asking because your prompt reports `hanami` on the left:
```shell
08:56:20:hanami >> ...
```
username_0: @jbodah No, I gem-installed Hanami. However, it occurred to me to take a look at my gem environment, and I suspect I now know what happened:
```shell
11:13:22:bookshelf >> gem list dotenv

*** LOCAL GEMS ***

dotenv (2.2.1, 0.10.0)
```
As you can see, I apparently already had version 0.10.0 of dotenv installed, so this was likely enough to get past the `defined?(Dotenv)` guard, but not actually work with Hanami's expectations.
dotenv's not a gem I had used directly before, so I didn't even realize it was on my system at all. I simply did a `gem install dotenv` without looking at the time.
username_2: @username_0 Thanks! I was able to reproduce this error with `dotenv` `0.10.0`. Will open a PR with a fix now :)
Status: Issue closed
|
awsccp/awsccp.github.io | 641681880 | Title: [Errata]
Question:
username_0: **Please describe the error you're reporting**
Exercise 3.1 - the search performed in the exercise does not match its description. The exercise is described as finding information in the Knowledge Center, in the S3 services section, on how to copy information from one S3 bucket to another S3 bucket. However, the exercise actually searches for and finds how to copy an AMI from one region to another, which is not in the S3 service section but in the EC2 service section of the Knowledge Center.
**Please indicate the chapter or appendix, section, and page number where the error appears (if applicable)**
Chapter 3
Page 39
Exercise 3.1
**Please include any other comments or questions**
Answers:
username_1: Yup. That one's a blooper, alright. We'll have to fix it.
Thanks! |
imagemin/imagemin-webp | 450992171 | Title: Make webp optional dependency
Question:
username_0:
```
⚠ spawn /builds/backend/felix/front/node_modules/gifsicle/vendor/gifsicle ENOENT
⚠ gifsicle pre-build test failed
ℹ compiling from source
✖ Error: Command failed: /bin/sh -c autoreconf -ivf
/bin/sh: autoreconf: not found
at Promise.all.then.arr (/builds/backend/felix/front/node_modules/bin-build/node_modules/execa/index.js:231:11)
at processTicksAndRejections (internal/process/task_queues.js:86:5)
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected] (node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] postinstall: `node lib/install.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] postinstall script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2019-05-31T20_04_56_307Z-debug.log
ERROR: Job failed: exit code 1
```
<issue_closed>
Status: Issue closed |
JacobDB/pwa-install-prompt | 590207372 | Title: Does not work in Ionic/Vue
Question:
username_0: Hi, thanks for sharing this resource.
I tried your library in my project and it doesn't work.
I execute the code in mounted() in Vue.
```html
<ion-content>
  <div class="pwa-install-prompt__container">
    .......
  </div>
</ion-content>
```
```js
// <script> section of the component
import pwaInstallPrompt from "pwa-install-prompt";
.....
....
mounted() {
  new pwaInstallPrompt(".pwa-install-prompt__container", {
    active_class: "is-active",
    closer: ".pwa-install-prompt__overlay",
    condition: null,
    expires: 180,
    show_after: 90,
    on: {
      beforeOpen: function () {
        console.log("before open!");
      },
      afterOpen: function () {
        console.log("after open!");
      },
      beforeClose: function () {
        console.log("before close!");
      },
      afterClose: function () {
        console.log("after close!");
      },
    }
  });
}
```
thanks!

Answers:
username_1: Seems like you may just need to import the CSS.
Status: Issue closed
|
DePaul-Medix/SmartCAD | 124518110 | Title: UI: Layout: Create sublayout: debug mode.
Answers:
username_1: https://docs.google.com/drawings/d/1voTgIqkX6In0y54Znilu0J5GdMYm1TFZ9gsStEY1XX0/edit
See link (debug mode query section under search bar)
username_1: Debug mode has been completed. Debug mode allows admins to view the time at which errors occurred, as well as the text representation and the SQL query performed. Additional information, such as the server response (i.e. 400, 500) and the user's method of input when performing a search, is also available. The DataTables jQuery plugin is used to provide search, sort, pagination, and filtering capabilities for the debug/error log.
Status: Issue closed
|
Rangi42/polishedcrystal | 820876560 | Title: Disable can hit during turn 1 of Dig
Question:
username_0: I used Dig against Beauty Cassandra's Wigglytuff. It used Disable and it successfully hit, disabling Dig and forcing my Pokemon to come up. I don't have No Guard (I have Damp) and I'm pretty sure Lock-On wasn't used prior.<issue_closed>
Status: Issue closed |
micronaut-projects/micronaut-cache | 836217831 | Title: Most cache metrics remain zero
Question:
username_0: When using micronaut-cache-caffeine, and exposing metrics via prometheus, most metrics show 0 forever.
Micronaut 1.x shows the same behaviour, so this might be a old bug, but it's still the same with micronaut 2.4.1.
### Steps to Reproduce
1. Create application that uses a cache and exposes cache metrics (see example application)
2. Start it
3. Open http://localhost:8080/prometheus
### Expected Behaviour
The cache metrics should become > 0.0 over time.
### Actual Behaviour
Only the `cache.size` is reported != 0, everything else stays 0.0 forever. In the example application, at least `cache.gets` and `cache.puts` should report values > 0, since the cache contains a value.
I am not sure about the `cache.evictions` and `cache.eviction.weight` counters mean or should display, but these stay 0 as well, like below.
```
# HELP cache_puts_total The number of entries added to the cache
# TYPE cache_puts_total counter
cache_puts_total{cache="mycache",} 0.0
# HELP cache_eviction_weight_total The sum of weights of evicted entries. This total does not include manual invalidations.
# TYPE cache_eviction_weight_total counter
cache_eviction_weight_total{cache="mycache",} 0.0
# HELP cache_evictions_total cache evictions
# TYPE cache_evictions_total counter
cache_evictions_total{cache="mycache",} 0.0
# HELP cache_size The number of entries in this cache. This may be an approximation, depending on the type of cache.
# TYPE cache_size gauge
cache_size{cache="mycache",} 1.0
# HELP cache_gets_total the number of times cache lookup methods have returned an uncached (newly loaded) value, or null
# TYPE cache_gets_total counter
cache_gets_total{cache="mycache",result="hit",} 0.0
cache_gets_total{cache="mycache",result="miss",} 0.0
```
### Environment Information
- **Operating System**: MacOS 10.15
- **Micronaut Version:** 2.4.1
- **JDK Version:** OpenJDK 11
### Example Application
https://github.com/username_0/micronaut-experiments/tree/cache-metrics
Check out the branch `cache-metrics`, and run `./mvnw test`. There is one test failing that expects the cache counters to be non-zero.
Or build it with `./mvnw package -DskipTests`, start it, and open http://localhost:8080/prometheus
While I deactivated all other metrics for this example, activating them does not "heal" the cache metrics.
Answers:
username_1: Statistics have to be enabled for Caffeine to record them, and there appears to be a flag in Micronaut. Your configuration doesn't seem to be enabling it. That's a guess from a quick glance at the integration.
https://github.com/micronaut-projects/micronaut-cache/blob/67d0c94596c5f500f5be8748b88c63bde9602448/cache-caffeine/src/main/java/io/micronaut/cache/caffeine/DefaultSyncCache.java#L202-L204
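For background, Caffeine itself only collects hit/miss counters when statistics are enabled on the builder; a standalone sketch outside Micronaut:
```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

class StatsDemo {
    public static void main(String[] args) {
        // Without recordStats(), the counters silently stay at zero.
        Cache<String, String> cache = Caffeine.newBuilder()
                .maximumSize(1_000)
                .recordStats()
                .build();

        cache.put("k", "v");
        cache.getIfPresent("k"); // counted as a hit
        System.out.println(cache.stats()); // CacheStats{hitCount=1, ...}
    }
}
```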
username_0: That was too easy, the setting solved it -- at least almost. With this setting (`caches.mycache.record-stats: true`), the metrics increase (so the previously failing test is now green), except the `cache.puts` counter, which remains 0.
username_1: hmm, caffeine doesn’t instrument puts (we strongly advise loads instead). Maybe micronaut doesn’t instrument it separately.
username_1: Oh, this is micrometer metrics. That is known to have a wrong mapping and uses `loadCount`, copy-pasted from the Guava metrics which had this mistake. See https://github.com/micrometer-metrics/micrometer/issues/2215
Status: Issue closed
username_0: Thanks for explaining this! Since this seems a different issue than my initial "no metrics at all" problem, I think it would be fine to close this issue, since it was a misconfiguration issue on my side.
The `cache.puts` being there, but zero, is something else, and I think deserve a more specific issue.
username_0: TDLR for others falling for this:
If you need stats from caffeine, they need to be activated in config, similar to the following:
```
caches:
mycache:
record-stats: true
``` |
serilog/serilog | 387835175 | Title: Serilog write to file operation is taking more time.
Question:
username_0: The Serilog write-to-file operation is taking a long time: on average it takes 515 milliseconds to
log one entry to the text file.
We tried the following, but no luck:
- Implemented an async mechanism to write data to the file.
- Used scoped instead of transient dependency injection in the .NET Core startup.
- Used the buffered: true flag, which should help increase performance.
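For reference, the combination described above (async wrapper plus buffered file sink) is typically configured like this (a sketch; assumes the Serilog.Sinks.Async package):
```csharp
using Serilog;

// Sketch: asynchronous, buffered file logging.
Log.Logger = new LoggerConfiguration()
    .WriteTo.Async(a => a.File("log.txt", buffered: true))
    .CreateLogger();

Log.Information("hello");
Log.CloseAndFlush(); // flush buffered/async writes on shutdown
```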
Code snippet

Log times:

Target framework
Serilog.Sinks.File version 4.0.0
Serilog version 2.5.0
operating system
Windows server 2008 R2
- [ ] netCore 2.1
Answers:
username_1: Hi @username_0,
Thanks for creating the issue, however this would better live over at the [File sink issues](https://github.com/serilog/serilog-sinks-file/issues) list.
If you could provide a full sample that illustrates the problem, that would be great.
Thanks!
username_1: Seems that #1253 and 1252 are duplicates.
username_2: Hi @username_0 - if you're still experiencing an issue here, would it be possible to reopen the issue with some more information/performance traces/etc.? It's hard for us to narrow this down without profiling the app on your infrastructure. Thanks!
Status: Issue closed
username_3: Hi @username_2. I am observing the same issue. Actually, when I set buffered=true in the config, writing logs into the file(s) takes exactly the same amount of time as when buffered is set to false.
Since it's a simple test project, I can share it with you. Please let me know. |
bazelbuild/rules_nodejs | 489278713 | Title: Manifest path logic gets wrong localWorkspacePath, breaks require
Question:
username_0: When we resolve from a RUNFILES_MANIFEST_FILE (generally Windows only), then we load the runfiles manifest and do some path logic on it to power later resolutions.
A manifest typically looks like
```
bazel_tools/tools/bash/runfiles/runfiles.bash C:/users/username_0/_bazel_username_0/7yoikqcr/external/bazel_tools/tools/bash/runfiles/runfiles.bash
build_bazel_rules_nodejs/packages/terser/test/inline_sourcemap/out.min.js C:/users/username_0/_bazel_username_0/7yoikqcr/execroot/build_bazel_rules_nodejs/bazel-out/x64_windows-fastbuild/bin/packages/terser/test/inline_sourcemap/out.min.js
build_bazel_rules_nodejs/packages/terser/test/inline_sourcemap/out.min.js.map C:/users/username_0/_bazel_username_0/7yoikqcr/execroot/build_bazel_rules_nodejs/bazel-out/x64_windows-fastbuild/bin/packages/terser/test/inline_sourcemap/out.min.js.map
build_bazel_rules_nodejs/packages/terser/test/inline_sourcemap/spec.js C:/users/username_0/documents/github/rules_nodejs/packages/terser/test/inline_sourcemap/spec.js
```
We iterate through these entries and skip to the last one, thanks to this logic:
https://github.com/bazelbuild/rules_nodejs/blob/df37fca0113f9e7e7536185cb907e2e6ca9e33f2/internal/node/node_loader.js#L126-L129
so we print
`[test_loader.js] using localWorkspacePath C:/users/username_0/_bazel_username_0/7yoikqcr/external/build_bazel_rules_nodejs/` and this works.
However, if the manifest has a new entry that appears higher, and is under the output_base but not under the bin_dir (like the linker.js I was adding in #1079) then the manifest starts like this
```
bazel_tools/tools/bash/runfiles/runfiles.bash C:/users/username_0/_bazel_username_0/7yoikqcr/external/bazel_tools/tools/bash/runfiles/runfiles.bash
build_bazel_rules_nodejs/internal/node/node_loader.js C:/users/username_0/_bazel_username_0/7yoikqcr/external/build_bazel_rules_nodejs/internal/node/node_loader.js
```
The second entry isn't matched by the short-circuit logic, so it gets used. Then we print
`[test_loader.js] using localWorkspacePath C:/users/username_0/_bazel_username_0/7yoikqcr/external/build_bazel_rules_nodejs/`
and later module resolutions fail.<issue_closed>
Status: Issue closed |
ICESAT-2HackWeek/CloudMask | 734988426 | Title: Missing data in some ATL06 h5 files
Question:
username_0: The function `read_atl06_fromfile` in `utils_atl06` accesses an h5 file and returns the data frames. However, some of the h5 files do not include all the features we request, leading to the following problem:
<img width="801" alt="Screen Shot 2020-11-03 at 12 25 43 AM" src="https://user-images.githubusercontent.com/39526081/97947238-706f9880-1d6b-11eb-80ea-1729581ba18f.png">
I am still not sure if this is a problem with the data itself, with icepyx, or with the way I retrieve ATL06 data. This may also be related to this other [issue](https://github.com/ICESAT-2HackWeek/CloudMask/issues/10), since I am not specifying the names of the granules I want to access.
For now, I will add a `try` block in `read_atl06_fromfile` to avoid this problem, but this is probably not the best solution.
idaholab/raven | 220263807 | Title: Possible bug in MAAP interface.
Question:
username_0: MAAP5Interface.py has the code:
```
if timer in lines[line]: found[cont]=True
```
Is it possible for this to give a false positive? (timer == "10", with the line "Time 100") (Not sure what the values look like so maybe this is impossible.)
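Substring containment can indeed match inside a longer token. A stricter check could use word boundaries; a sketch (assumes `timer` should appear as a whole token on the line):
```python
import re

def line_mentions(line, timer):
    # \b word boundaries stop "10" from matching inside "100".
    return re.search(rf"\b{re.escape(timer)}\b", line) is not None

assert line_mentions("TIMER SET 10", "10")
assert not line_mentions("Time 100", "10")
```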
----------------
For Change Control Board: Issue Review
----------------
This review should occur before any development is performed as a response to this issue.
- [ ] 1. Is it tagged with a type: defect or improvement?
- [ ] 2. Is it tagged with a priority: critical, normal or minor?
- [ ] 3. If it will impact requirements or requirements tests, is it tagged with requirements?
- [ ] 4. If it is a defect, can it cause wrong results for users? If so an email needs to be sent to the users.
- [ ] 5. Is a rationale provided? (Such as explaining why the improvement is needed or why current code is wrong.)
-------
For Change Control Board: Issue Closure
-------
This review should occur when the issue is imminently going to be closed.
- [ ] 1. If the issue is a defect, is the defect fixed?
- [ ] 2. If the issue is a defect, is the defect tested for in the regression test system? (If not explain why not.)
- [ ] 3. If the issue can impact users, has an email to the users group been written (the email should specify if the defect impacts stable or master)?
- [ ] 4. If the issue is a defect, does it impact the latest stable branch? If yes, is there any issue tagged with stable (create if needed)?
- [ ] 5. If the issue is being closed without a merge request, has an explanation of why it is being closed been provided?
Status: Issue closed
Answers:
username_1: Closing since no longer valid |
filipows/angular-animations | 510440163 | Title: How to achieve the following with your library?
Question:
username_0: ```
// html
<span [@fadeAnimTrigger]="wordIndex">{{words[wordIndex]}}</span>
// ts
animations: [
trigger('fadeAnimTrigger', [
transition(':decrement', [
style({ opacity: 0 }), animate('500ms', style({ opacity: 1 }))
]),
transition(':increment', [
style({ opacity: 0 }), animate('500ms', style({ opacity: 1 }))
]),
])
```
I'm trying to replace `style({ opacity: 0 }), animate('500ms', style({ opacity: 1 }))` in the above code with your `fadeIn`
Answers:
username_1: Hi @username_0,
It's an interesting use case. Currently the library doesn't support `:increment` and `:decrement` selectors. I'll have a look how we could incorporate similar use cases to the library.
In the example you posted, the animations for `:increment` and `:decrement` are the same, so maybe you can do something like in [this Stackblitz example](https://stackblitz.com/edit/angular-animations-lib-demo-inc-dec?file=src%2Fapp%2Fdemo-main%2Fdemo-main.component.html) |
TorqueGameEngines/Torque2D | 639337630 | Title: OS freezes when launching Torque2D.exe
Question:
username_0: **OS:** Windows 10, 64-bit (version: 1903)
**GPU:** RX480 4GB (version: 26.20)
**Issue:**
- When launching Torque2D.exe, it goes fullscreen then freezes my entire OS. Can't `ctrl+alt+delete`, press `esc` or `alt-f4`. Have to shut down my computer manually
Answers:
username_0: console.log might help:
```
//-------------------------- 6/15/2020 -- 20:12:27 -----
Console trace is off.
--------------------------------------------------------------------------------
Video Initialization:
--------------------------------------------------------------------------------
Video initialization:
Accelerated OpenGL display device detected.
Activating the OpenGL display device...
Activating the OpenGL display device...
Setting screen mode to 1024x768x32 (w)...
Creating a new window...
Acquiring a new device context...
Pixel format set:
32 color bits, 24 depth bits, 8 stencil bits
Creating a new rendering context...
Making the new rendering context current...
OpenGL driver information:
Vendor: ATI Technologies Inc.
Renderer: Radeon (TM) RX 480 Graphics
Version: 4.6.13587 Compatibility Profile Context 20.4.2 26.20.15029.27016
OpenGL Init: Enabled Extensions
ARB_multitexture (Max Texture Units: 8)
EXT_blend_color
EXT_blend_minmax
EXT_compiled_vertex_array
EXT_texture_env_combine
EXT_packed_pixels
EXT_fog_coord
ARB_texture_compression
EXT_texture_compression_s3tc
(ARB|EXT)_texture_env_add
EXT_texture_filter_anisotropic (Max anisotropy: 16)
WGL_EXT_swap_control
OpenGL Init: Disabled Extensions
EXT_paletted_texture
NV_vertex_array_range
3DFX_texture_compression_FXT1
Max Texture Size reported as: 16384
OpenAL Driver Init
OpenAL Driver Init Success
Shutting down the OpenGL display device...
Making the GL rendering context not current...
Deleting the GL rendering context...
Releasing the device context...
```
username_1: You must be using the prebuilt copy. If you have the technical expertise to build the engine yourself, we've recently included a hot fix in the engine code for this bug. https://github.com/GarageGames/Torque2D/commit/ad141fc7b5f28b75f727ef29dcc93aafbb3f5aa3
username_0: Thank you @username_1 !!
username_1: No problem. I also put together a new release so this doesn't happen again. Happy coding!
Status: Issue closed
|
kubevirt/common-templates | 357118150 | Title: e1000e for Windows 2012+
Question:
username_0: https://github.com/kubevirt/kubevirt/issues/1354#issuecomment-407354007
Windows 2012+ should be using an e1000e NIC model.
Answers:
username_1: I believe we do have that in the templates https://github.com/kubevirt/common-templates/blob/master/templates/win2k12r2.tpl.yaml#L87
Status: Issue closed
|
Kotlin/kotlinx-benchmark | 964506767 | Title: WorkerExecutor.submit is deprecated
Question:
username_0: The method `WorkerExecutor.submit` used in `JmhBytecodeGeneratorTask` is deprecated in Gradle and will be removed in Gradle 8. We need to migrate to one of its work queues, for example `noIsolation()`.
`JmhBytecodeGeneratorWorker` also has to be migrated to implement the `WorkAction<>` interface.
More details here: https://docs.gradle.org/current/userguide/upgrading_version_5.html#method_workerexecutor_submit_is_deprecated |
googleapis/google-cloudevents-nodejs | 735501967 | Title: Run postgen on firebase folder
Question:
username_0: There's a bug where the postgen step is not run on the firebase folder:
https://github.com/googleapis/google-cloudevents-nodejs/blob/master/tools/postgen.ts#L22
To fix this, we should recursively check all the folders with the name found in the source repo/gen script:
```
$(dirname $PWD)/google-cloudevents/jsonschema
```
<issue_closed>
Status: Issue closed |
SCIInstitute/SCIRun | 234942353 | Title: simplified InterfaceWithMatlab
Question:
username_0: ### Description
The idea is a simplified version of the Interface with Matlab module. This version would run matlab code through the python interface without users having to learn much of the matlab python interpreter. I envision the module allowing inputs similar to the IWP module, but only allowing one function call with matlab syntax. For instance, the function call:
`[output1, output2,...] = function(input1,input2,input3,...,options,....)`
would convert to the appropriate series of functions. There are some important considerations including:
- making sure the python matlab library is valid
- starting the matlab engine in a useful way
- converting input and output data to the proper format
- path to the function
- matlab path
- documenting formats to make sure that users know what kind of matlab code they need to write
- different versions of matlab may behave differently; it may be worth supporting only one or two versions
Answers:
username_1: Discussed with @username_0 and @shayestehfard today at IBBM. The initial steps will be:
- [ ] Add special syntax to the text editor in IWP module to enclose "native" Matlab code, akin to embedding assembly code in C++ (@username_1)
- [ ] Add hook in IWP module to call the python conversion/engine script on the embedded Matlab code (@username_1)
- [ ] Test with simplest-possible Matlab code and current conversion script (@shayestehfard)
- [ ] Document heavily (@username_0)
Then iterate.
username_0: [pythonscript_new.py.zip](https://github.com/SCIInstitute/SCIRun/files/2488626/pythonscript_new.py.zip)
here is the code that Kimia and I have been working on
username_1: Starting this today!
username_1: The function `stringtotext` seems to be limited to inputs of the form `matlabFunctionOfFields(field1,field2,...)`. Is that the main use case right now? If so we'll need to be explicit about it with the "matlab block" button in the editor.
username_0: I think we will want it more general than that. We can implement it like this as a prototype and generalize the converter.
username_0: For the matlab demarcator, I think `%%` would work the best. An example could be:
```
%%
A = magic(3);
[evec, eval] = eig(A);
%%
```
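As a sketch of what the conversion could drive under the hood (this assumes MathWorks' MATLAB Engine API for Python; `run_matlab_block` is a hypothetical helper, not existing SCIRun code):
```python
import re
import matlab.engine

def run_matlab_block(script_text):
    # Assumption: exactly one %% ... %% block per script, as in the prototype.
    match = re.search(r'%%\s*\n(.*?)\n\s*%%', script_text, re.DOTALL)
    if match is None:
        return None
    eng = matlab.engine.start_matlab()
    try:
        # Run the embedded Matlab code in the engine's workspace.
        eng.eval(match.group(1), nargout=0)
        # Variables it leaves behind (e.g. evec) can be pulled back by name.
        return eng.workspace['evec']
    finally:
        eng.quit()
```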
username_1: Cool. Next week let's set up a pair-programming session since I need a matlab-enabled machine to test.
username_1: today!
username_1: Got a proof-of-concept working with @username_0. Next step is to figure out where/how to call the essential `convertMatlabBlock` function, and where to store the helper python functions (which need matlab engine running). Also, being able to force-reexecute will be nice since one is usually editing matlab code and re-running SCIRun without changing any of the latter state, so that should be exposed as a button.
username_1: Future steps:
- [ ] Support more matlab datatypes
- [ ] Support more matlab syntax
- [ ] Support more lines of code (blocks?)
- [ ] Extracting general matlab engine management code to a higher level
- [ ] Code maintenance of python helper functions
Status: Issue closed
|
nojb/ocaml-imap | 87901387 | Title: Decode error in certain html email
Question:
username_0: d</em>, <em>starred</em> or <em>important</em> messages at the top of your =
inbox or try <a href=3D"https://support.google.com/mail/answer/186531?hl=3D=
en&utm_source=3Dgmailwelcomeemail&utm_medium=3Demail&utm_campai=
gn=3Dgmailwelcome" style=3D"text-decoration:none; color:#15C">Priority Inbo=
x</a>, which combines all of these. Go to the <a href=3D"https://mail.googl=
e.com/mail/#settings/inbox" style=3D"text-decoration:none; color:#15C">Inbo=
x tab in Gmail settings to make changes</a>.</span></td></tr></table></div>=
<div style=3D"float:left; clear:both; padding:0px 5px 10px 10px;"><img src=
=3D"https://ssl.gstatic.com/accounts/services/mail/msa/welcome_hangouts_2.p=
ng" alt=3D"Video meetings" style=3D"display:block;"width=3D"129"height=3D"1=
29"/></div><div style=3D"float:left; vertical-align:middle; padding:10px; m=
ax-width:398px; float:left;"><table style=3D"vertical-align:middle;"><tr><t=
d style=3D"font-family:'Open sans','Arial',sans-serif;"><span style=3D"font=
-size:20px;">Send messages and hold video meetings from your inbox</span><b=
r/><br/><span style=3D"font-size
2015-06-12 21:34:35.374248-04:00 Info <<< :small; line-height:1.4em">Chat with contac=
ts and start video meetings with up to 15 people in <a href=3D"https://www.=
google.com/enterprise/apps/business/products.html?hl=3Den&utm_source=3D=
gmailwelcomeemail&utm_medium=3Demail&utm_campaign=3Dgmailwelcome#ha=
ngouts" style=3D"text-decoration:none; color:#15C">Google+ Hangouts</a>.</s=
pan></td></tr></table></div><div style=3D"float:left; clear:both; padding:0=
px 5px 10px 10px;"><img src=3D"https://services.google.com/fh/files/emails/=
importcontacts.png" alt=3D"Contacts" style=3D"display:block;"width=3D"129"h=
eight=3D"129"/></div><div style=3D"float:left; vertical-align:middle; paddi=
ng:10px; max-width:398px; float:left;"><table style=3D"vertical-align:middl=
e;"><tr><td style=3D"font-family:'Open sans','Arial',sans-serif;"><span sty=
le=3D"font-size:20px;">Bring your contacts into Gmail</span><br/><br/><span=
style=3D"font-size:small; line-height:1.4em">You can <a href=3D"https://su=
pport.google.com/a/answer/14024?hl=3Den&topic=3D3056079&utm_source=
=3Dgmailwelcomeemail&utm_medium=3Demail&utm_campaign=3Dgmailwelcome=
" style=3D"text-decoration:none; color:#15C">import your contacts</a> from =
other webmail to make the transition to Gmail easier. <a href=3D"https://su=
pport.google.com/a/answer/14024?hl=3Den&topic=3D3056079&utm_source=
=3Dgmailwelcomeemail&utm_medium=3Demail&utm_campaign=3Dgmailwelcome=
" style=3D"text-decoration:none; color:#15C">Learn how</a></span></td></tr>=
</table></div><br/><br/>
<div style=3D"clear:both; padding-left:13px; height:6.8em;"><table style=3D=
"width:100%; border-collapse:collapse; border:0"><tr><td style=3D"width:68p=
x"><img alt=3D'Gmail icon' width=3D"49" height=3D"37" src=3D"https://ssl.gs=
tatic.com/ui/v1/icons/mail/images/gmail_logo_large.png" style=3D"display:bl=
ock;"/></td><td style=3D"align:left; font-family:'Open sans','Arial',sans-s=
erif; vertical-align:bottom"><span style=3D"font-size:small">Happy emailing=
,<br/></span><span style=3D"font-size:x-large; line-height:1">The Gmail Tea=
m</span></td></tr></table></div>
</td></tr></table></div>
<div style=3D"direction:ltr;color:#777; font-size:0.8em; border-radius:1em;=
padding:1em; margin:0 auto 4% auto; font-family:'Arial','Helvetica',sans-s=
erif; text-align:center;">=C2=A9 2015 Google Inc. 1600 Amphitheatre Parkway=
, Mountain View, CA 94043<br/></div></div></body></html>
--bcaec51ba295ff9082051787cbf1--
((pid 78436) (thread_id 0)
((human_readable 2015-06-12T21:34:35-0400)
(int63_ns_since_epoch 1434159275375828000))
"unhandled exception in Async scheduler"
("unhandled exception"
((lib/monitor.ml.Error_
((exn
(email/async_imap.ml.Imap_error
"Decode error: Unexpected character 'a'"))
(backtrace
("Called from file \"lib/raw_deferred.ml\", line 55, characters 65-68"
"Called from file \"lib/job_queue.ml\", line 164, characters 6-47" ""))
(monitor
(((name main) (here ()) (id 1) (has_seen_error true)
(is_detached false) (kill_index 0))))))
(Pid 78436))))
```
Note that the `Unexpected character` is different all the time :/
The code snippet that throws it is:
```ocaml
| `Error e ->
Imap.pp_error Format.str_formatter e;
imap_ex (Format.flush_str_formatter ())
```
Answers:
username_1: Does it happen only with a particular message ? Or with all messages ? (I just tried to fetch a message and it seems to work ...)
username_0: Just this particular message. It's one of the initial messages you get when you sign up for gmail.
username_1: Could you try pinning the branch `err_context` and try again ? It will hopefully let us pinpoint exactly where the error is triggered...
username_1: I just opened a new gmail account. I fetched all three emails' `RFC822.TEXT` using `imap_shell` without any problem. Could you try fetching the problematic message using `imap_shell`?
username_1: (also note that for some reason none of the messages in my newly opened account actually coincides with your problematic message - probably owing to our different locations)
username_1: Ok, I finally could reproduce the error - it fails when fetching all three initial messages, but not if I fetch only one or any two. Investigating further.
username_0: Yup. I'm playing around with it in the imap_shell now and I can fetch the messages individually.
username_0: Using your branch this is the message that I get btw:
```
"Decode error: Unexpected character '<' at 1138 in \"bf1--<tabl\""
```
username_1: The reason the unexpected character changes each time is because the library is trying to read beyond the end of input (which is either uninitialised or has data from previous reads)...
username_1: I think I found the bug (an off-by-1 calculation in certain cases), let me try a fix.
username_1: Can you try pinning the branch `fix-literal-read-off-by-one` and try again ? Thanks!
username_0: It works :100:
Status: Issue closed
username_1: Great. Merging to master. Thanks very much for reporting! |
spring-cloud/spring-cloud-app-broker | 501627648 | Title: Sensitive configuration should not be logged by default
Question:
username_0: When creating a service broker for Spring Cloud Config Server, we found that when sensitive configuration information caused an error during create/update, it was logged:
```
2019-10-02 16:37:09.110 ERROR 8 --- o.s.c.a.s.WorkflowServiceInstanceService : Error creating service instance ServiceBrokerRequest{platformInstan
ceId='null', apiInfoLocation='api.example.io/v2/info', originatingIdentity=null}AsyncServiceBrokerRequest{asyncAccepted=true}AsyncParameterizedService
InstanceRequest{parameters={git={label=master, uri=<EMAIL>:acceptance/spring-cloud-config.git, cloneOnStart=true, privateKey=-----BEGIN RSA PRI
VATE KEY-----
MIIEpAIBAAKCAQEAoqyz6YaYMTr7L8GLPSQpAQXaM04gRx4CCsGK2kfLQdw4BlqI...
```
For many customers this could be a problem for their security policy validation rules. App Broker should have a way to ensure configuration with sensitive data is not logged or put anywhere that it might be accessible during or after processing the request.
A first step could be to remove any offending log statements. A second step could be to provide a way for users of Spring Cloud App Broker to turn this logging on or off, or to enable redacting of specific pieces of configuration, although that could be error-prone as the configuration evolves.
<issue_closed>
Status: Issue closed |
sshnet/SSH.NET | 172983487 | Title: SshOperationTimeoutException and SshConnectionException continue to appear randomly.
Question:
username_0: Hello, I am currently working on a project that must download, edit and then upload a file from an sftp server. To achieve this, I am using this library. As well as this, the file on the sftp is renamed before it is edited and uploaded, then renamed back to its original name once completed.
My issue is that 50% of the time, an exception will appear. This exception can be either an SshOperationTimeoutException (timeout set to 10 seconds) or an SshConnectionException, but it will always occur in the 'SftpClient.Connect()' function. The other 50% of the time, the program works as expected. What could be causing these issues, and what can I do to minimize or solve them? I will put the code I use to upload, rename and download below.
**Download File**
```csharp
public bool downloadFile(String ftpPath, String localPath)
{
    retrieveFile:
    try
    {
        using (SftpClient request = CreateFtpWebRequest())
        {
            request.Connect();
            using (var file = File.OpenWrite(localPath))
            {
                request.DownloadFile(ftpPath, file);
            }
            request.Disconnect();
        }
        return true;
    }
    catch (Exception e)
    {
        //Exception rules
        return false;
    }
}
```
**Upload File**
```csharp
try
{
    using (SftpClient request = CreateFtpWebRequest())
    {
        request.Connect();
        FileInfo f = new FileInfo(localPath);
        String uploadedfile = f.FullName;
        var fileStream = new FileStream(uploadedfile, FileMode.Open);
        request.UploadFile(fileStream, newPath, null);
        request.Disconnect();
        fileStream.Dispose();
    }
    this.alterations.Clear();
}
catch (Exception e)
{
    //Exception rules
}
```
**Rename File**
```csharp
using (SftpClient ftp = CreateFtpWebRequest())
{
    ftp.Connect();
    if (ftp.Exists(newFilePath))
        ftp.DeleteFile(newFilePath);
    ftp.RenameFile(ftpPath, newFilePath);
    ftp.Disconnect();
}
```
Answers:
username_1: What version of SSH.NET are you using, and using which CLR (.NET 4.0, .NET Core, ...)?
What SSH server, and what version?
username_1: @username_0: Ping!
username_0: Sorry it's taken so long to reply. The issue was with the sftp server itself and not with the library. One quick call to the host and everything sorted itself out.
Status: Issue closed
|
navossoc/KeePass-Yet-Another-Favicon-Downloader | 301671685 | Title: Not working at some sites
Question:
username_0: Most of the time this plugin works. However, it fails to get the favicon from some websites. For example, it hangs on www.bestbuy.com for a long time before reporting "Error". During the hang, the cancel button does not interrupt the wait. www.hertz.com does not work either, with the same symptom. And if I load the page with Firefox, I can clearly see the `<rel type="icon">` or `<rel type="short icon">` nodes.
Some other sites fail with their login subdomain, but succeed with the "www" domain. For example, reg.usps.com fails, but www.usps.com is OK. Further, https://www.amazon.com/ap/signin fails, but https://www.amazon.com/ is OK.
Answers:
username_1: Yes, to be honest some of these things are on my TODO list...
Including this one:
https://github.com/username_1/KeePass-Yet-Another-Favicon-Downloader/issues/2
I should have finished this already, but unfortunately those last few days are very busy.
Some of these errors are not just "can't find the favicon"; maybe a failed TLS handshake with the server or something like that. .NET enforces some kind of security rules.
That is why the log will be useful, and it should have been done already.
Besides that, the cancel action works between downloads; it is not async within each individual download, per se.
What I mean is if you have 10 downloads total and 4 of them are running, you can cancel the other 6 immediately.
This is a limitation of how I wrote the code, but it can be improved later.
I was targeting a lower .NET version (C# 2.0) so it can be mono/linux compatible, but [mono has other bugs](https://github.com/username_1/KeePass-Yet-Another-Favicon-Downloader/issues/9) that prevent its operation in some cases anyway.
So I'm not sure if that was a good decision at the time.
Now that I have a small user base (3k downloads, yay!), including some linux users, it may be easier now to develop and test all that.
username_2: Glad more people are noticing this plugin and that you feel your efforts are appreciated. You do good work on this ^^
username_1: @jokupudip Sure!
Can you please create an issue with more details about your environment?
Things like: Windows/Linux? KeePass version, URL you are using, etc...
username_3: I'm having the same problems on Linux with Keepass 2.36. Some entries work flawlessly but most of them are resulting in errors.
Is there anything I can do to help debugging?
PS: Thank you for this _awesome plugin_. I love it!
username_4: I haven't had any luck downloading any icons. Using Keepass 2.39.1 portable on Windows 10. It simply says error, but not sure where to check error log. Perhaps the issue has something to do with security software, windows firewall, anti-malware?
username_1: @username_4 can you try to install keepass? just to check if the problem is the "portable mode" or not.
Status: Issue closed
username_1: @username_4 I did some additional tests and it works properly in `portable mode`; maybe it is the Windows firewall blocking it.
---
I did a few improvements related to issues with TLS handshakes. Next version should solve most of them. |
aws/aws-sdk-js-v3 | 949942397 | Title: How to provide custom credentials to the client
Question:
username_0: We are using Vercel, and the names for AWS keys are reserved there, hence they can't be inferred by the app.
How do I provide custom credentials? Can't find anything in the docs!
```
import { S3 } from '@aws-sdk/client-s3';
const s3 = new S3({
accessKeyId: AWS_SHEPHERD_ACCESS_KEY, // error
secretAccessKey: AWS_SHEPHERD_SECRET_KEY,
region: 'us-west-1',
});
```
Answers:
username_1: Hi @username_0, thank you for reaching out. Here is some documentation that you can use to configure credentials ([link](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/setting-credentials-node.html)). If you still find this is not helpful, please follow this [template](https://github.com/aws/aws-sdk-js-v3/issues/new?assignees=&labels=documentation&template=--documentation.md&title=) so we can help you better.
Status: Issue closed
|
mabcodes/prj-rev-bwfs-dasmoto | 367310994 | Title: Feedback Score
Question:
username_0: ## Rubric Score
### Criteria 1: HTML and CSS Linking
* _Score Level:_ 4 - Exceeds Expectations
### Criteria 2: Implementation of Design Specification and Content
* _Score Level:_ 3 - Meets Expectations
* _Comment(s):_ Because you didn't use sections or divs to contain/group together different parts of your code, that threw off your styling when it comes to spacing. Check the code feedback for more info.
### Criteria 3: HTML Elements and Content
* _Score Level:_ 2 - Approaches Expectations
* _Comment(s):_ This is because of my earlier comment about not using divs or sections to group elements together. The code feedback will have more information.
### Criteria 4: CSS Selectors and Syntax
* _Score Level:_ 3 - Meets Expectations
* _Comment(s):_ Great job here. Here's a cheat sheet that can help you learn more ways to better [target CSS elements](https://code.tutsplus.com/tutorials/the-30-css-selectors-you-must-memorize--net-16048)
### Overall Score: 12/16
Good job, I can tell you have a good basic understanding of how HTML and CSS works, the only thing is learning to use divs/sections as containers for elements. Also keep in mind, the title element is the only display element in your head tag. In the head tag, do not use p, divs, or any other display element that you would normally find in the body.
Answers:
username_1: Thanks, I appreciate your comprehensive feedback. I'll try to work on the problems you mentioned on my next project. |
dimsemenov/PhotoSwipe | 307242212 | Title: ADA Empty button Flag
Question:
username_0: Hello,
I'm working on making sure my site passes ADA compliance and I'm getting an Empty Button error on some of the code generated by your plugin.
```html
<button class="pswp__button pswp__button--close" title="Close (Esc)"></button>
<button class="pswp__button pswp__button--share" title="Share"></button>
<button class="pswp__button pswp__button--fs" title="Toggle fullscreen"></button>
<button class="pswp__button pswp__button--zoom" title="Zoom in/out"></button>
```
These lines of code are causing an issue with our web accessibility evaluation tool. We can go in and add the appropriate aria-labels to these buttons, but I'm concerned that any future updates will overwrite our changes. Is there anything else we can do to help resolve this issue?
Thanks. |
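For reference, the kind of change we would apply locally is simply mirroring each `title` in an `aria-label` (shown for one button only; the other three would get the same treatment):
```html
<button class="pswp__button pswp__button--close" title="Close (Esc)" aria-label="Close (Esc)"></button>
```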
mapbox/supermercado | 842518689 | Title: _feature_extrema() in burntiles.py does not handle multigeometry types
Question:
username_0: Hi,
I don't actually know much about this library but I had [a problem when using robosat](https://github.com/mapbox/robosat/issues/217) where multipolygons were causing a breakdown due to unreferenced variables.
I eventually tracked it down to [this function](https://github.com/mapbox/supermercado/blob/ecc9f5a36c1c8d13872df5bdcbd01621e5f3ab48/supermercado/burntiles.py#L29) in burntiles.py.
Like I said I don't know much about this library so I don't know if it is possible to make this handle multigeometry types. If not then maybe it should throw a more descriptive error.
Cheers
Answers:
username_1: I encountered this same issue.
```
File "/Users/username_1/code/titiler/venv/lib/python3.8/site-packages/supermercado/burntiles.py", line 78, in burn
bounds = find_extrema(polys)
File "/Users/username_1/code/titiler/venv/lib/python3.8/site-packages/supermercado/burntiles.py", line 44, in find_extrema
*[_feature_extrema(f["geometry"]) for f in features]
File "/Users/username_1/code/titiler/venv/lib/python3.8/site-packages/supermercado/burntiles.py", line 44, in <listcomp>
*[_feature_extrema(f["geometry"]) for f in features]
File "/Users/username_1/code/titiler/venv/lib/python3.8/site-packages/supermercado/burntiles.py", line 38, in _feature_extrema
return min(x), min(y), max(x), max(y)
UnboundLocalError: local variable 'x' referenced before assignment
```
even if MultiPolygon can't be supported here, it would be nice to have an `else` that raises a clear exception, rather than this incidental one.
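Something along these lines would cover MultiPolygon and fail loudly otherwise (a rough sketch only; `_geometry_extrema` is hypothetical and assumes the GeoJSON-style geometry dicts that `_feature_extrema` receives):
```python
def _geometry_extrema(geometry):
    # Collect the exterior rings for the supported geometry types.
    if geometry['type'] == 'Polygon':
        rings = [geometry['coordinates'][0]]
    elif geometry['type'] == 'MultiPolygon':
        rings = [polygon[0] for polygon in geometry['coordinates']]
    else:
        raise ValueError('Unsupported geometry type: %s' % geometry['type'])
    xs = [point[0] for ring in rings for point in ring]
    ys = [point[1] for ring in rings for point in ring]
    return min(xs), min(ys), max(xs), max(ys)
```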
username_2: Hi folks,
I tried to solve this problem with pull request #50 (supporting MultiPolygon), and it works for me.
You may try that commit too. |
samszo/generateur | 106978716 | Title: first questions...
Question:
username_0: Here are my first questions (if it's better for me to open one topic per question, let me know!)
- Is it possible to program a repetition within a sentence?
Example: for a sentence like "J'ai acheté un [m_chapeau], [6|m_chapeau] à moi", I would like both occurrences to use the same value of "chapeau"
("j'ai acheté une casquette, cette casquette est à moi", i.e. "I bought a cap, this cap is mine")
- I assume the generator doesn't handle compound tenses? (so the auxiliary + past-participle adjective have to be handled separately?)
One thing at a time; I have plenty of other questions, but they can wait.
Thanks!
Gaspard |
kubernetes-sigs/apiserver-network-proxy | 1096474868 | Title: Are multiple DIAL_REQs valid?
Question:
username_0: I was reading through the code, trying to understand the protocol in use. I think we're lacking a bit of documentation on the semantics.
First, I think that there are two different protocols, sharing the same messages. One is for client->server, one is for agent->server.
Also, I think for the client -> server communication, a single DIAL_REQ message must be the first message from the client to the server, and then is not allowed to happen again. (i.e. there's no multiplexing)
For the agent -> server communication, it looks like multiple DIAL_REQ messages are allowed.
Is that right? If so, I'd be happy to send a contribution to start adding some docs to the .proto files.
Answers:
username_1: Mostly correct, there are segments for the tunnels, which can each have different protocols.
Agent -> Server communication is currently always grpc and allows multiple DIAL_REQ messages (multiplexing).
Client (KAS) -> Server communication is as follows.
- http-connect: uses the http connect verb/standard protocol which only supports a single DIAL_REQ.
- grpc: Theoretically supports multiple DIAL_REQ messages at the protocol level. However for historical reasons the implementation "CreateSingleUseGrpcTunnel" only allows a single DIAL_REQ. This should really be fixed. |
chrysn/aiocoap | 1032151831 | Title: Randomly "Incoming error 111 from <UDP6EndpointAddress" received and failed
Question:
username_0: Python version: 3.8.10 (default, Jun 2 2021, 10:49:15)
[GCC 9.4.0]
aiocoap version: 0.4.1
Modules missing for subsystems:
dtls: missing DTLSSocket
oscore: missing cbor2, ge25519
linkheader: everything there
prettyprint: missing cbor2
Python platform: linux
Default server transports: tcpserver:tcpclient:tlsserver:tlsclient:udp6
Selected server transports: tcpserver:tcpclient:tlsserver:tlsclient:udp6
Default client transports: tcpclient:tlsclient:udp6
Selected client transports: tcpclient:tlsclient:udp6
SO_REUSEPORT available (default, selected): True, True
We have one scenario where we randomly get an "Incoming error 111 from <UDP6EndpointAddress" error when we do multiple GET requests. Debug logs are as follows:
DEBUG Using selector: EpollSelector
DEBUG Sending request - Token: <PASSWORD>, Remote: <UDP6EndpointAddress 192.168.1.110>
DEBUG Sending message <aiocoap.Message at 0x7fd783059c40: Type.CON GET (MID 26022, token 59e9) remote <UDP6EndpointAddress 192.168.1.110>, 1 option(s)>
DEBUG Exchange added, message ID: 26022.
DEBUG Socket error recevied, details: SockExtendedErr(ee_errno=111, ee_origin=2, ee_type=3, ee_code=3, ee_pad=0, ee_info=0, ee_data=0)
DEBUG Incoming error 111 from <UDP6EndpointAddress 192.168.1.110 (locally 192.168.1.51%enp4s0)>
**192.168.1.51 is aiocoap client and 192.168.1.110 is CoAP server.**
We captured traffic in Wireshark and saw an ICMP Destination port unreachable. I have attached the pcapng file.
[coapcl.zip](https://github.com/username_1/aiocoap/files/7386880/coapcl.zip)
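For reference, a minimal version of the request we run per GET (a sketch; the URI path is a placeholder, and the broad `except` is only there to surface the error):
```python
import asyncio
from aiocoap import Context, Message, GET

async def fetch(uri):
    protocol = await Context.create_client_context()
    request = Message(code=GET, uri=uri)
    try:
        response = await protocol.request(request).response
        print(response.code, response.payload)
    except Exception as exc:
        # The errno 111 (connection refused) case surfaces here.
        print('Request failed:', exc)

asyncio.get_event_loop().run_until_complete(
    fetch('coap://192.168.1.110/some/path'))
```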
Answers:
username_1: Closing as the logs indicate that the server is just not running; if you can tell more (see above), please follow up / reopen.
Status: Issue closed
|
moment/luxon | 966042341 | Title: Inconsistent offset after setZone()
Question:
username_0: **Describe the bug**
Inconsistent timezone being printed when using `setZone()`. In the following example, I have a list of timestamps in UTC, I use `setZone()` to set each one to "America/New_York". I then print each one out using `toString()`. The output has various offsets like "-04:00" and "-16:00".
**To Reproduce**
```
const { DateTime } = require('luxon');
const oris = [
"2017-05-15T00:10:23Z",
"2017-05-15T01:10:23Z",
"2017-05-15T02:10:23Z",
"2017-05-15T03:10:23Z",
"2017-05-15T04:10:23Z",
"2017-05-15T05:10:23Z",
"2017-05-15T06:10:23Z",
"2017-05-15T07:10:23Z",
"2017-05-15T08:10:23Z",
"2017-05-15T09:10:23Z",
"2017-05-15T10:10:23Z",
"2017-05-15T11:10:23Z",
"2017-05-15T12:10:23Z",
"2017-05-15T13:10:23Z",
"2017-05-15T14:10:23Z",
"2017-05-15T15:10:23Z",
"2017-05-15T16:10:23Z",
"2017-05-15T17:10:23Z",
"2017-05-15T18:10:23Z",
"2017-05-15T19:10:23Z",
"2017-05-15T20:10:23Z",
"2017-05-15T21:10:23Z",
"2017-05-15T22:10:23Z"
];
for (const ori of oris) {
const oriUTCString = DateTime.fromISO(ori, { zone: 'UTC' }).toString();
const dtUTC = DateTime.fromISO(oriUTCString, { zone: 'UTC' });
const dtRezoned = dtUTC.setZone('America/New_York');
console.log(dtRezoned.toString());
}
```
**Actual vs Expected behavior**
I expect each timestamp to have the same offset, i.e. -04:00. Actual behavior is a mix of "-16:00", "+08:00", and "-04:00".
```
2017-05-14T08:10:23.000-16:00
2017-05-14T09:10:23.000-16:00
2017-05-14T10:10:23.000-16:00
2017-05-14T11:10:23.000-16:00
2017-05-15T12:10:23.000+08:00
2017-05-15T01:10:23.000-04:00
2017-05-15T02:10:23.000-04:00
2017-05-15T03:10:23.000-04:00
2017-05-15T04:10:23.000-04:00
2017-05-15T05:10:23.000-04:00
2017-05-15T06:10:23.000-04:00
2017-05-15T07:10:23.000-04:00
2017-05-15T08:10:23.000-04:00
2017-05-15T09:10:23.000-04:00
2017-05-15T10:10:23.000-04:00
2017-05-15T11:10:23.000-04:00
2017-05-15T12:10:23.000-04:00
2017-05-15T01:10:23.000-16:00
2017-05-15T02:10:23.000-16:00
2017-05-15T03:10:23.000-16:00
2017-05-15T04:10:23.000-16:00
2017-05-15T05:10:23.000-16:00
2017-05-15T06:10:23.000-16:00
```
**Desktop (please complete the following information):**
- OS: MacOS 11.4
- Luxon version 2.0.2
- Your timezone America/Los_Angeles
Answers:
username_1: This version gives consistent results:
```js
for (const ori of oris) {
const dtRezoned = DateTime.fromISO(ori, { zone: 'America/New_York' });
console.log(dtRezoned.toString());
}
```
username_2: I get consistent results on the original too...
username_3: I copied and ran the script trying both loops, and I got the same mix of offsets ("-16:00", "+08:00", and "-04:00"). I'm on Windows using WebStorm.
username_3: Actually, OP and I figured it out: we are both using an old version of Node (10.16.0) for a CS class. Switching to the most recent version gave consistent results. Thanks!
deeplearning4j/deeplearning4j | 431796848 | Title: ND4J CUDA: Can't create UTF8 DeviceLocalNDArrays
Question:
username_0: This comes up in inference on TF import models on CUDA, though it could happen in other contexts too.
```
@Test
public void testDeviceLocalStringArray(){
INDArray arr = Nd4j.create(Arrays.asList("first", "second"), 2);
assertEquals(DataType.UTF8, arr.dataType());
assertArrayEquals(new long[]{2}, arr.shape());
DeviceLocalNDArray dl = new DeviceLocalNDArray(arr); //Exception here
INDArray arr2 = dl.get();
assertEquals(arr, arr2);
}
```
```
java.lang.ClassCastException: org.nd4j.linalg.api.buffer.Utf8Buffer cannot be cast to org.nd4j.linalg.jcublas.buffer.BaseCudaDataBuffer
at org.nd4j.jita.handler.impl.CudaZeroHandler.getDevicePointer(CudaZeroHandler.java:775)
at org.nd4j.jita.allocator.impl.AtomicAllocator.getPointer(AtomicAllocator.java:320)
at org.nd4j.jita.concurrency.CudaAffinityManager.replicateToDevice(CudaAffinityManager.java:274)
at org.nd4j.linalg.util.DeviceLocalNDArray.broadcast(DeviceLocalNDArray.java:58)
at org.nd4j.linalg.util.DeviceLocalNDArray.<init>(DeviceLocalNDArray.java:38)
at org.nd4j.Temp.testDeviceLocalStringArray(Temp.java:42)
```
We don't have a CudaUtf8DataBuffer so I'm not sure if the solution is to add that, or handle Utf8 differently here...
https://github.com/deeplearning4j/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-backend-impls/nd4j-cuda/src/main/java/org/nd4j/linalg/jcublas/buffer
Status: Issue closed
Answers:
username_1: Fixed/implemented |
jlippold/tweakCompatible | 627592716 | Title: `VolumeBrightness` working on iOS 13.4.1
Question:
username_0: ```
{
"packageId": "com.gilshahar7.volumebrightness",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.gilshahar7.volumebrightness",
"deviceId": "iPhone9,3",
"url": "http://cydia.saurik.com/package/com.gilshahar7.volumebrightness/",
"iOSVersion": "13.4.1",
"packageVersionIndexed": false,
"packageName": "VolumeBrightness",
"category": "Tweaks",
"repository": "Packix",
"name": "VolumeBrightness",
"installed": "1.0.0",
"packageIndexed": false,
"packageStatusExplaination": "This tweak has not been reviewed. Please submit a review if you choose to install.",
"id": "com.gilshahar7.volumebrightness",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.5",
"shortDescription": "Toggle between changing the volume and brightness by pressing both volume buttons.",
"latest": "1.0.0",
"author": "gilshahar7",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": ""
}
```
<issue_closed>
Status: Issue closed |
esbenp/bruno | 313286828 | Title: Is there a way to get pagination metadata?
Question:
username_0: Hi there, first I'd like to say thanks and congrats for this great package.
I'm starting to use this package on my new project and I want to know if there is a way to get the pagination metadata.
Let's say I've requested `/books?page=1&limit=10`: how do I know how many pages I can go through, what the next page is, or whether a next page exists, etc.?
Sorry if I'm asking a dumb question, I just need some guidance.
Answers:
username_1: I usually do something like the following:
```
$query = User::query();
$this->applyResourceOptions($query, $this->parseResourceOptions());
$queryCount = clone $query;
$total = $queryCount->offset(0)->limit(PHP_INT_MAX)->count();
$users = $query->get();
return response()->json([
'users' =>$users,
'total' => $total,
], 200);
```
This way, I include in the response the users array and the total items matching query params, so the frontend can build a pagination component.
Note that two queries are required to get the items and the total, and we need to "reset" offset and limit in the cloned query object. |
splunk/splunk-connect-for-kubernetes | 477983397 | Title: RBAC and Project based log routing to Splunk through HEC.
Question:
username_0: Hi All,
We have a centralised OpenShift cluster hosting deployments from many project teams. How can we configure things to ensure that only our project-specific logs are pushed to the Splunk index that we mapped, and not logs from all projects? We are not authorised to view the logs from other projects.
Answers:
username_0: We do not want the logs from all the projects deployed on a node; we want them only from our project (namespace). I am not sure if OpenShift has centralized node/cluster-level logging and whether the logs can be segregated by project namespace.
username_1: If you only want it from a specific namespace you can customize the fluentd source config:
https://github.com/splunk/splunk-connect-for-kubernetes/blob/e613504297371c844965c35942129411487aa91e/manifests/splunk-kubernetes-logging/configMap.yaml#L39-L55
The tail input allows you to use regex to define the path to only the files you want:
https://docs.fluentd.org/input/tail
For example, for the namespace `foo` you might set it like this:
```
path /var/log/containers/*_foo_*.log
```
Kubernetes embeds the pod namespace and container id in the path. This should allow you to achieve what you want.
username_2: Closing as resolved. Please reopen/create a new one if you need further support.
Status: Issue closed
|
Ocramius/Instantiator | 75275583 | Title: Update Packagist to mark package as abandoned in favor of doctrine/instantiator
Question:
username_0: When packages are marked as abandoned within Packagist, a replacement package can be specified. This allows users to easily see the package is no longer suggested for new development.
To implement this, the maintainer of the package should be able to navigate to https://packagist.org/packages/ocramius/instantiator/abandon and follow the instructions.
### Mockup:

Status: Issue closed
Answers:
username_1: The package is not really abandoned, but whatever.
Done :) |
FreeUKGen/MyopicVicar | 444603331 | Title: Deploy SEO Model Updates
Question:
username_0: The SEO landing pages require a script to be run occasionally to update them. Once the multi-modal code is deployed on production, I will write a cron job that runs this rake task.
(I'll also test our privacy settings against `test2` with `wget` while I'm at it.)
Answers:
username_1: @username_0 has this already been completed?
username_2: @username_0 SEO and nofollow is in production
username_0: The production run of the SEO landing page creation task raised two issues:
1) After an apparently successful completion, nothing appeared under the `/open` landing page URL. I need to research and diagnose this.
2) The task ran (albeit in `nice` mode) for about six hours. We need to strategize on the best time/location for a long-running task to run, with input from @username_3
username_1: @username_3 it looks like Ben may require your help here (unless you've already done so). Can you take a look when you can?
username_3: Each of the UKGEN projects is allocated a week in every four for their database rebuilds, since that's how the "one" projects operated. I expect FreeREG2 hasn't needed this, but it's still scheduled. There's a [calendar which tracks which project's week it is](https://www.freebmd.org.uk/update.html).
During that week you're free to do computationally expensive stuff, customarily on the master server. We can disable searches on that server during your work.
Is that schedule sufficient?
If you need it more frequently, please can you characterise the load? Is it a single thread? Does it need lots of memory? If you'd like me to measure it please let me know how to run it safely.
username_1: @username_0 please see Lemon's comments above in relation to getting this story updated and completed.
username_0: That schedule sounds fine. Once the PR is merged and deployed, I'll set up the cron jobs.
username_1: Closing this as now done (cron job aside - which is in the other story).
Status: Issue closed
|
getsentry/sentry-python | 364233937 | Title: Attribute error
Question:
username_0: AttributeError
'Identifier' object has no attribute 'postponed_alerts_counts'
ID: 8099fd1f8815496eaa1f815657918b9d
Sept. 26, 2018, 7:37:17 p.m. UTC
Exception
AttributeError: 'Identifier' object has no attribute 'postponed_alerts_counts'
File "interfaces.py", line 43, in start
self.startInternal()
File "identifier.py", line 63, in startInternal
send_time=send_time)
File "identifier.py", line 128, in match_batch
accepted = self.correct_by_past_history(match.person_guid, confident_score)
File "identifier.py", line 91, in correct_by_past_history
self.postponed_alerts_counts[guid] += 1
Answers:
username_1: Hi, what does this have to do with Sentry's Python SDK? A little bit more context about your problem would be useful.
username_0: My bad. Wrong data. Wrong problem. Sorry @username_1
Status: Issue closed
|
SeasideSt/Seaside | 931869275 | Title: Cycle dependency between Seaside-Core and Seaside-Canvas
Question:
username_0: If you see in BaselineOf, Seaside-Core depends on no one (https://github.com/SeasideSt/Seaside/blob/master/repository/BaselineOfSeaside3.package/BaselineOfSeaside3.class/instance/baselinecommon..st#L34) and Seaside-Canvas depends on Seaside-Core (https://github.com/SeasideSt/Seaside/blob/master/repository/BaselineOfSeaside3.package/BaselineOfSeaside3.class/instance/baselinecommon..st#L30)
However... Seaside-Core DOES NEED Seaside-Canvas... as it references the class WAHtmlCanvas which is contained in Seaside-Canvas: https://github.com/instantiations/Seaside/blob/tonel-dev/repository/Seaside-Core/WASessionCookieProtectionFilter.class.st#L87
Thoughts? @username_1
Answers:
username_1: That's indeed a dependency that was recently (and inadvertently) introduced. The `WASessionCookieProtectionFilter` has some rendering built-in to display a page when no cookies are allowed. Perhaps that should be an extension to the filter in the Seaside-Canvas package, indeed.
In Pharo and GemStone, this does not cause an issue. The code is loaded and the reference to the undeclared class is automatically resolved when the Seaside-Canvas package is loaded. Do these kinds of undeclared classes cause issues when loading in VAST?
username_0: Hi Johan,
Yes, I imagine this wouldn't be an issue on Pharo/GemStone because otherwise the CI would be screaming, hahaha. It was not a problem in VAST per se, but in the Tonel importer strategy we were using to import Seaside packages. We were trying to use a "smart" tool that would detect dependencies automatically so that we didn't have to declare dependencies explicitly (we don't have Metacello). This automatic dependency processor is what detected the cycle.
However...when we found this problem we realized that maybe it was better to set explicit dependencies based on Seaside's BaselineOf (requires: and friends). So now we switched to this explicit approach and now the problem is solved for us.
So... up to you Johan what you want to do. From the VAST perspective we are OK; we took another approach and moved forward.
Regardless of the result, thanks for taking a look!
username_0: Hi @username_1
I keep thinking about this one, and while I solved my original problem, I still agree with your earlier point: "Perhaps that should be an extension to the filter in the Seaside-Canvas package". So I will try to make that change soon.
username_2: The only way I see to solve this properly is to introduce a third package that will depend on both.
My initial approach to this was to create a `Seaside-ProtectionFilter` package that will depend on `Seaside-Canvas` (and transitively on `Seaside-Core`). And then modify the `BaselineOfSeaside3` to load them in the `Core` and `common`.
But as I was doing this, I noticed that having a separate package only for these two classes might be overkill or at least short-sighted, so then I thought about having a separate `Seaside-RequestFilter` package to accommodate all `WARequestFilter` related things (aka "middleware"). I analyzed the change and dependencies and it could be isolated without issues, and it could be a good place to integrate future filters.
So @username_1 the next PR will include this change. If you disagree with that, please let me know and I'll modify it to have only the `Seaside-ProtectionFilter` package.
username_1: @username_2 my idea was to remove rendering in the base implementation of the filter (just return http code) and move the response that renders html to a subclass.
username_2: Well... that's a much simpler solution.
To what package should the canvas dependent subclass be moved?
username_0: I guess to Seaside-Canvas, right?
username_1: @username_2 I was going to take care of it but you can beat me to it as I will only be able to get back to Seaside by the end of the week. 😉
Maybe Seaside-Components ?
username_2: BTW, I didn't move the canvas-dependent filter to `Seaside-Canvas` or to `Seaside-Component`; both packages are pretty well defined in scope, and I didn't want to pollute them. So I moved it to `Seaside-Environment`, which seemed more like a mix.
Status: Issue closed
|
w3c/ttml1 | 115106309 | Title: Clarify syncbase in SMPTE continuous mode
Question:
username_0: In smpte + continuous mode, the spec should clarify what the implicit syncbase is; as it stands, this mode is no different from media mode.
It has been interpreted that in this mode the syncbase should be the external time interval for all elements, rather than the immediate parent/sibling so that nesting is not significant. Also should non 12M syntax be disallowed in this mode?
(raised by <NAME> on 2012-02-15)
From tracker issue http://www.w3.org/AudioVideo/TT/tracker/issues/151
<issue_closed>
Status: Issue closed |
quasarframework/quasar | 607181952 | Title: Electron Node Integration Build Fails
Question:
username_0: **Describe the bug**
Using Quasar Electron Mode: with Node Integration False fails on production build using electron-builder.
```
// quasar.conf.js
nodeIntegration: false,
```
```
// electron-main.js
nodeIntegration: QUASAR_NODE_INTEGRATION,
nodeIntegrationInWorker: QUASAR_NODE_INTEGRATION,
preload: path.resolve(__dirname, 'electron-preload.js'),
```
The dev version works and loads fine, but on a production build, I get the following error:
Uncaught ReferenceError: __dirname is not defined
at Object.<anonymous> (vendor.js:formatted:60347)
Diving Deeper:
The error originates at the APP_URL line; I suspect this is a webpack config error, since for the same project and same code, the SPA build (both prod and dev) works just fine.
Also, the main process and renderer process both do not throw any error, the problem just lies in the vendors.js chunk emitted for the app itself.
Also, another project created some time ago (on earlier versions) used to work fine too.
```
e.exports = f(Object({
NODE_ENV: "production",
CLIENT: !0,
SERVER: !1,
DEV: !1,
PROD: !0,
MODE: "electron",
APP_URL: "file://" + __dirname + "/index.html",  // <-- the offending line
VUE_ROUTER_MODE: "hash",
VUE_ROUTER_BASE: ""
}).NODE_NDEBUG)
```
**What does not work**
Setting up same env variable in quasar.conf.js with blank JSON string to overwrite the default value.
```
env: ctx.dev
? { }
: { APP_URL: JSON.stringify('') },
```
**Expected behavior**
Should probably extend SPA config without modification on Node Integration = False.
**Platform (please complete the following information):**
OS: Win 10.0.17134
Yarn: 1.22.4
Browsers: Chrome 81.0.4044.122
Electron: 8.2.3
Electron-Builder: 22.5.1
Answers:
username_0: Temporary Hack:
node_modules\@quasar\app\lib\quasar.conf.js
1. Comment out:
`cfg.build.APP_URL = "file://" + __dirname + "/index.html"`
2. Change the loadURL param in electron-main.js in your src-electron/main-process:
`` mainWindow.loadURL(process.env.PROD ? `file://${__dirname}/index.html` : process.env.APP_URL); ``
username_1: Cannot reproduce. But regardless, your changes don't actually make sense. They are essentially the same thing. Can you offer a reproduction repo pls?
username_2: Came across this bug. It appears that the quasar-conf.js does not have access to node, so the URL needs to be set in electron.
username_0: @username_1, please find the reproduction repo here:
https://github.com/username_0/quasar-issue-6893
I did some more digging; the issue does not pop up until I import the dependency 'request', which then seems to break the quasar webpack config, generating the above error.
Not sure if this should be happening but it is.
username_0: Also, by the way: using the latest Icon Genie version on Windows with the default electron-builder config settings in quasar.conf.js, the build process fails.
On Windows, the "linux-512x512" image is not added by default to the src-electron/icons/ folder, yet it seems to be required by electron-builder's default config.
So anyone who has cleared src-electron/icons and rebuilt the icons will have to copy the 512x512 image and rename it to "linux-512x512" for the build process to actually work with the default config.
username_3: I have come across the same error when building Electron with nodeIntegration: false
username_4: @username_0 Is this still an issue on QApp 2.0?
username_5: I was just trying to upgrade from QApp 1.x to QApp 2.1 and got stuck with a similar error, don't know if it is related.
I'm building an electron app and after following the upgrade guide, I get the following error when running `quasar dev -m electron`

```
// quasar.conf.js
electron: {
nodeIntegration: true,
// ...
}
```
```
// electron-main.js
function createWindow() {
mainWindow = new BrowserWindow({
width: 1000,
height: 600,
useContentSize: true,
webPreferences: {
// Change from /quasar.conf.js > electron > nodeIntegration;
// More info: https://quasar.dev/quasar-cli/developing-electron-apps/node-integration
nodeIntegration: QUASAR_NODE_INTEGRATION
// More info: /quasar-cli/developing-electron-apps/electron-preload-script
// preload: path.resolve(__dirname, 'electron-preload.js')
}
});
// ...
}
```
If I just set a value (true or false) instead of using `QUASAR_NODE_INTEGRATION` it works fine.
Was this variable removed on the upgrade or something?
username_5: I created an electron project from scratch and saw that now this `QUASAR_NODE_INTEGRATION` is set to `process.env.QUASAR_NODE_INTEGRATION`.
username_0: @username_4, I can confirm this error too. Once this is resolved, we will know more about the initial error.
username_4: @username_0 Check @username_6 commits above.
Use: `process.env.QUASAR_NODE_INTEGRATION` instead of `QUASAR_NODE_INTEGRATION`
username_0: @username_4, you are right thanks. @username_6's commits would solve this issue completely.
username_4: @username_0 Just to clarify, his PR correctly documents *how* it should be done. It still needs changing in your source code.
username_0: @username_4 and @username_6, there is one new error appearing though. With `nodeIntegration: false`, the following script tag should not be added to index.html by the default webpack config.
Only happens in production env. Should I open a new issue for that?
```
<script>window.__statics = __dirname</script>
```
username_6: @username_0 yes, please open another issue
username_4: Docs PR merged and now live.
Status: Issue closed
|
ant-design/ant-design | 234088405 | Title: Uncaught TypeError: (0 , _domAlign2.default) is not a function
Question:
username_0: ### Version
2.10.4
### Environment
Development
### Reproduction link
[https://codepen.io/username_0/pen/pwJKBK?editors=001](https://codepen.io/username_0/pen/pwJKBK?editors=001)
### Steps to reproduce
Add antd select to the https://github.com/nicksp/redux-webpack-es6-boilerplate boilerplate and open the options of the select.
### What is expected?
It should open without any errors
### What is actually happening?
Align.js?f454:77 Uncaught TypeError: (0 , _domAlign2.default) is not a function
at Align._this.forceAlign (eval at <anonymous> (application.js:6702), <anonymous>:77:57)
at Align.componentDidMount (eval at <anonymous> (application.js:6702), <anonymous>:85:10)
Answers:
username_1: Could you please provide a reproducible repository?
username_2: @username_0
username_0: Hi,
The issue was resolved; I was using an older version of babel and needed to update it.
Status: Issue closed
username_3: @username_0 Which version of babel did you upgrade to? Do you mind sharing your package.json for the rest of us that are experiencing the same issue?
username_0: @Hipster
This is the stripped-down version of my package.json:
"devDependencies": {
"babel-cli": "6.24.1",
"babel-core": "6.24.0",
"babel-eslint": "7.2.3",
"babel-loader": "7.0.0",
"babel-plugin-transform-decorators-legacy": "1.3.4",
"babel-plugin-transform-flow-strip-types": "6.22.0",
"babel-plugin-transform-react-constant-elements": "6.23.0",
"babel-plugin-transform-react-remove-prop-types": "0.4.5",
"babel-plugin-transform-runtime": "6.23.0",
"babel-preset-es2015": "6.24.1",
"babel-preset-react": "6.24.1",
"babel-preset-stage-0": "6.24.1",
"copy-webpack-plugin": "4.0.1",
"cross-env": "2.0.1",
"eslint": "3.19.0",
"eslint-plugin-flowtype": "2.34.0",
"eslint-plugin-react": "7.0.1",
"extract-text-webpack-plugin": "2.1.0",
"rimraf": "2.6.1",
"webpack": "2.6.1",
"webpack-bundle-analyzer": "2.8.3",
"webpack-dashboard": "0.4.0",
"webpack-dev-middleware": "1.10.2",
"webpack-hot-middleware": "2.18.0",
"webpack-merge": "4.1.0"
},
"dependencies": {
"antd": "^2.10.4",
"autoprefixer": "6.4.1",
"babel-plugin-import": "^1.2.1",
"babel-polyfill": "^6.23.0",
"babel-runtime": "6.23.0",
}
Hope it helps :) |
APY/APYDataGridBundle | 32743406 | Title: custom select datetime filter
Question:
username_0: Hi, I have a Datetime column with the filter format (Y-m-d)
I would like to filter the column result by Month with the format '%F'.
For example: the column shows me a list of dates (2014-01-12, ...) but I want my filter to show me the months (April, January, May, ...).
Could you give me some help please?
Thanks
<issue_closed>
Status: Issue closed |
kmcgaire/MLSBBot | 305268355 | Title: <NAME>
Question:
username_0: I'm young and I'm restless. And I've only got one life to live, so I've got to follow my guiding light and search for tomorrow. Chuck Norris compresses his files by doing a flying round house kick to the hard drive. Chuck Norris can solve the Towers of Hanoi in one move. I’ve got an idea–an idea so smart that my head would explode if I even began to know what I’m talking about. "It works on my machine" always holds true for Chuck Norris. Never trust anything that can think for itself if you can't see where it keeps its brain.
<issue_closed>
Status: Issue closed |
zlainsama/SkinPort | 631069766 | Title: Modification conflict.
Question:
username_0: Hello, as I understand it, several modifications conflict with 'SkinPort', such as 'Twilight Forest' and 'Headcrumbs'.
Twilight Forest - Giant Miner.

Headcrumbs - Heads.

Headcrumbs - Human.

Answers:
username_1: Having a lot of trouble with it too.
username_1: Regarding TwilightForest (and probably Headcrumbs) the issue is being caused by the mods itself
Here is an extract from the render class for Giant
```
public RenderTFGiant() {
super(new ModelBiped(), 0.625F);
this.textureLoc = new ResourceLocation("textures/entity/steve.png");
}
```
As you see it uses vanilla's model to render which grabs the unsupported texture.
But I actually found out that you can fix it using `RenderLivingEvent.Pre`
Here is the Gist, not the cleaneast solution and could potentially break some other mods.
https://gist.github.com/username_1/89f8835034a3cf84aea94dc82edc8237
username_1: The soution for skulls might be very close to this one
username_2: Sorry, I don't really want to patch this mod for every incompatibilities...
This mod is licensed under MIT, please feel free to fork it, and make your own patches.
username_3: These will be noted for a fix in Et Futurum Requiem's Alex model. |
menatwork/MultiColumnWizard | 35274634 | Title: fileTree and pageTree widgets don't work in Contao 3.3
Question:
username_0: MultiColumnWizard field with the following configuration
``` php
'inputType' => 'multiColumnWizard',
'eval' => array(
'columnFields' => array(
'my_file' => array(
'inputType' => 'fileTree',
'eval' => array(
'multiple' => true,
'fieldType' => 'checkbox',
),
),
'my_page' => array(
'inputType' => 'pageTree',
'eval' => array(
'multiple' => true,
'fieldType' => 'checkbox',
),
),
),
),
'sql' => 'blob NULL',
```
doesn't work correctly:
1. In the file manager no file or folder can be selected
2. In the page tree only single pages are selectable
It might be related to one of these commits:
- https://github.com/contao/core/commit/b783205a6dd9e792b13b6ca026f8023823eb1b82
- https://github.com/contao/core/commit/3227b43e1b1cc8ee058e6b25790a5bdd66a31cc4
- https://github.com/contao/core/commit/39275e2b82b4ecd9f6ef5ccc6c808e461f15a60e
Answers:
username_1: Checked with MCW 3.3.12:
* Filepicker works
* Pagepicker works, but only with single selection via radio
```
'mcw_pagetree' => array
(
'label' => &$GLOBALS['TL_LANG'][$strTblName]['mcw_pagetree'],
'exclude' => true,
'inputType' => 'pageTree',
'eval' => array
(
'fieldType' => 'checkbox',
'multiple' => true
)
),
'mcw_filetree' => array
(
'label' => &$GLOBALS['TL_LANG'][$strTblName]['mcw_filetree'],
'exclude' => true,
'inputType' => 'fileTree',
'eval' => array
(
'fieldType' => 'checkbox', // 'checkbox'|'radio'
'files' => true,
'filesOnly' => true,
'multiple' => true, // true|false
'extensions' => $GLOBALS['TL_CONFIG']['uploadTypes']
)
),
```



username_1: pagePicker fixed with https://github.com/menatwork/MultiColumnWizard/pull/235
Status: Issue closed
|
stonegray/MacEasySaveSHSH | 196212557 | Title: Use latest tsschecker
Question:
username_0: There is a version 1.0.6 of tsschecker available.
Answers:
username_1: Heard that some issues reported against 1.0.6 were resolved by staying on 1.0.5, so the older version is used deliberately for stability.
You can change the 1.0.5 to 1.0.6 in the source; no other changes are needed to use the new version.
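For example, something like this should do it (the script filename below is an assumption; adjust it to the actual source file):
```sh
# BSD/macOS sed; replaces the pinned tsschecker version in the script.
sed -i '' 's/1\.0\.5/1.0.6/g' easysaveshsh.sh
```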
username_0: ah ok, no worries. I just noticed there was a newer version.
username_1: Ideally we'd update for the bugfixes in 1.0.6 but I don't have time to check it to see if it's stable across platforms.
If you want to take that on, I'll merge your change in. El Capitan seems to be the most problematic from what I've heard.
username_0: It's only a crash fix, so I think it's fine.
Status: Issue closed
|
NaturalHistoryMuseum/pyzbar | 313164472 | Title: error in rect from Decoded
Question:
username_0: After decoding a barcode from an image,
I added a further processing step in which PIL draws a rectangle around the decoded barcode.
It mostly works, except that sometimes the value of rect.width is zero, even though the decoded data is correct.
An example is attached.
```python
from PIL import Image, ImageDraw
from pyzbar.pyzbar import decode

image = Image.open(r'e:/test1.png')
result = decode(image)
num = len(result)
i = 0
while i < num:
    resulti = result[i]
    draw = ImageDraw.Draw(image)
    # rect is (left, top, width, height)
    x0i = resulti.rect[0]
    y0i = resulti.rect[1]
    x1i = x0i + resulti.rect[2]
    y1i = y0i + resulti.rect[3]
    drawrec = (x0i, y0i, x1i, y1i)
    drawInrec = (x0i + 1, y0i + 1, x1i - 1, y1i - 1)
    # draw a doubled red rectangle around the decoded symbol
    draw.rectangle(drawrec, fill=None, outline='red')
    draw.rectangle(drawInrec, fill=None, outline='red')
    print(i, resulti.data, resulti.rect)
    i += 1
    if i == num:  # after the last symbol, save and display the annotated image
        image.save(r'e:/test1.png')
        image.show()
```


Answers:
username_1: Hi!
Could you make this image available? If it is very large, please post a Dropbox share or similar.
Thanks.
username_2: I am having a similar issue when decoding CODE128 barcodes.
I am on a raspberry pi 3 B+ using python 3.5.3
Installed zbar via `sudo apt install -y libzbar*`, and pyzbar via `pip install pyzbar` and `pip3 install pyzbar`.
It appears to happen primarily on CODE128 barcodes that are shorter (top to bottom) than typical. The only example I have of it is the multiple barcodes on the box for a cellphone. All of the CODE128 barcodes were decoded correctly, but the rect and polygon each had a width of 1 or 0.
username_1: I think this is an issue of the `zbar` library itself, not of the `pyzbar` wrapper. I need to have images to be able to reproduce the problem.
username_3: Here is a simple example with the source file attached. I am sorry if the info here is a bit redundant but I wanted to provide all the information someone might need. Note that the problem is not only with the rect and polygon attributes but with the data values received as well.
Using what I believe are the current stable versions of zbar (0.10) and pyzbar (0.1.8) and a very unambiguous image (below), I receive two Decoded objects (below).
One of them is correct; the other is very close, but the data is wrong and the rect and polygon attributes have seemingly incomplete values.
The images are fuzzy because they were originally scanned as jpegs and then opened and cropped using Pillow.
The rect and polygon attributes of the incorrect Decoded result have bogus or missing values. Is this a reliable clue I could use to weed out the bogus values?
(macOS Mojave, zbar installed with Homebrew)
**Correct:**
```
Decoded(
    data='10001330100070',
    type='I25',
    rect=Rect(left=35, top=79, width=69, height=674),
    polygon=[
        Point(x=35, y=79),
        Point(x=36, y=753),
        Point(x=104, y=753),
        Point(x=103, y=80),
        Point(x=99, y=79)]
)
```
**Incorrect:**
```
Decoded(
    data='10001330100062',
    type='I25',
    rect=Rect(left=33, top=145, width=0, height=0),
    polygon=[Point(x=33, y=145)]
)
```
unambiguous barcode
 |