| repo_name (string, 4–136 chars) | issue_id (string, 5–10 chars) | text (string, 37–4.84M chars) |
|---|---|---|
HexHive/printbf | 196097748 | Title: README < > inverted?
Question:
username_0: I'm likely to have missed something, but shouldn't it be:
* `> == dataptr++ (%1$.*1$d %2$hn)`
* `< == dataptr-- (%1$65535d%1$.*1$d%2$hn)`
?
Or was it put this way to go down the stack when `>` is run?
Answers:
username_1: You are correct, thanks for pointing this out. I've updated the README.
Status: Issue closed
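For readers unfamiliar with the notation being discussed: printbf compiles Brainfuck to printf format strings, and the `%n` family of specifiers is what makes the data-pointer arithmetic possible, since they store the number of characters printed so far. A minimal standalone illustration of that mechanism (this is not printbf's actual code):
```c
#include <stdio.h>

int main(void) {
    short n = 0;
    /* "%.*d" prints the value 0 with a precision of 5, i.e. 5 characters;
       "%hn" then stores the running character count (5) into n. */
    printf("%.*d%hn", 5, 0, &n);
    printf("\nn = %hd\n", n); /* prints: n = 5 */
    return 0;
}
```
Printing `value + 1` characters before a `%hn` is how `>` becomes `dataptr++`, and the extra `%1$65535d` padding in `<` appears to rely on 16-bit wraparound of `%hn` to achieve a decrement.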
|
uber/causalml | 485034044 | Title: PIP installation error on Windows
Question:
username_0: Below is the pip install error:
```
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.22.27905\bin\HostX86\x64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MT -IC:\ProgramData\Anaconda3\lib\site-packages\numpy\core\include -IC:\ProgramData\Anaconda3\lib\site-packages\numpy\core\include -IC:\ProgramData\Anaconda3\include -IC:\ProgramData\Anaconda3\include -IC:\ProgramData\Anaconda3\include -IC:\ProgramData\Anaconda3\include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.22.27905\include" /Tccausalml/inference/tree/causaltree.c /Fobuild\temp.win-amd64-3.7\Release\causalml/inference/tree/causaltree.obj -O3
cl : Command line warning D9002 : ignoring unknown option '-O3'
causaltree.c
C:\ProgramData\Anaconda3\include\pyconfig.h(59): fatal error C1083: Cannot open include file: 'io.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.22.27905\\bin\\HostX86\\x64\\cl.exe' failed with exit status 2
```
Answers:
username_0: As mentioned in that SO post, installing the Windows 10 SDK solved the problem for me.
More specifically, here is what I have installed: Visual Studio Build Tools 2019 with MSVC, the Windows 10 SDK, and CMake.
Status: Issue closed
username_1: Thanks for reporting the result back. Glad to hear that you were able to resolve it. Please try out causalml and let us know what you think. :) |
alptium/alptium.github.io | 296136544 | Title: Welcome Nevena!
Question:
username_0: Hi @nevenan ,
I created a page for you, 'How To Write a CV', within our main page in the 'Recruitment' section, for the task that you have. Here are the links to these pages:
1. https://github.com/alptium/alptium.github.io/blob/master/English/How-to-write-a-CV.md , for the English version
2. https://github.com/alptium/alptium.github.io/blob/master/Srpski/Биографија.md , for the Serbian version.
At the following link you will find a few tips for writing, but of course if you have any questions, please ask:
https://github.com/alptium/alptium.github.io/issues/9
Also, please follow the same workflow in the Writing project as you do in the Java project:
- when you start your task, please move the ticket from the 'To do' column to the 'In Progress' column, and when you finish your task, move it to the 'Review' or 'Done' column.
Kind regards,
Iva
Answers:
username_0: Hi @nevenan ,
one more link that will be useful for you:
https://github.com/alptium/alptium.github.io/blob/master/Srpski/Git-Hub-Desktop-%D1%83%D0%BF%D1%83%D1%82%D1%81%D1%82%D0%B2%D0%BE.md
Kind regards,
Iva
username_0: Closing ticket.
Status: Issue closed
|
JunioJsv/mtk-easy-su | 587440906 | Title: My Device Not Rooted
Question:
username_0: My device, a Lenovo Vibe K5 Note, is not getting root access...
My device is not rooted...
Answers:
username_1: Your device is not susceptible to the mtk-su security vulnerability; you should try another method.
Status: Issue closed
username_0: I have tried all the methods...
Please suggest the best way, bro...
Please
username_2: Temporary root by diplomatic@XDA
Home URL:
https://forum.xda-developers.com/android/development/amazing-temp-root-mediatek-armv8-t3922213
--------------------------------------------------
Failed critical init step 1
exit: 1 |
tulios/kafkajs | 821149513 | Title: message.key is null in eachMessage implementation
Question:
username_0: **Describe the bug**
The message provided to my `eachMessage` function sometimes has `.key = null`, even though the `KafkaMessage` type definition states `.key: Buffer` and not `.key: Buffer | null`.
**To Reproduce**
My implementation is part of a closed-source project and I cannot share details, but the implementation of `eachMessage` is just something like:
```js
eachMessage: async ({ message }) => {
  if (message.value) {
    const key = message.key.toString("utf-8");
    const value = message.value?.toString("utf-8");
    // ...
  }
}
```
**Expected behavior**
Since the `KafkaMessage` type definition states that `.key: Buffer`, I was expecting it to not be `null`.
**Observed behavior**
`.key` has been consistently `null` for a subset of the consumed messages. Here's the output from the KafkaJS logger when the line `message.key.toString("utf-8")` triggered the error:
```
2021-03-02T13:59:40.372Z f1c814a0-38e6-44ec-a54f-d23203552cd7 ERROR {"level":"ERROR","timestamp":"2021-03-02T13:59:40.372Z","logger":"kafkajs","message":"[Consumer] Crash: KafkaJSNumberOfRetriesExceeded: Cannot read property 'toString' of null","groupId":*****, "retryCount":0,"stack":"KafkaJSNonRetriableError\n Caused by: TypeError: Cannot read property 'toString' of null\n at Runner.eachMessage *****\n at Runner.processEachMessage (/var/task/node_modules/kafkajs/src/consumer/runner.js:187:20)\n at onBatch (/var/task/node_modules/kafkajs/src/consumer/runner.js:323:20)\n at /var/task/node_modules/kafkajs/src/consumer/runner.js:375:21\n at invoke (/var/task/node_modules/kafkajs/src/utils/concurrency.js:38:5)\n at push (/var/task/node_modules/kafkajs/src/utils/concurrency.js:51:7)\n at /var/task/node_modules/kafkajs/src/utils/concurrency.js:60:53\n at new Promise (<anonymous>)\n at /var/task/node_modules/kafkajs/src/utils/concurrency.js:60:20\n at /var/task/node_modules/kafkajs/src/consumer/runner.js:365:11"}
```
**Environment:**
- OS: AWS Lambda
- KafkaJS version: ^1.13.0
- NodeJS version: 12 something
**Additional context**
Add any other context about the problem here.
Answers:
username_1: Right, that must be a mistake in the typings. If the message is produced without a key, it'll be null when you consume it.
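In practice that means guarding the access before calling `toString`; a minimal null-safe sketch of such a handler (hypothetical, not from this thread):
```js
eachMessage: async ({ message }) => {
  // message.key is null when the producer sent the message without a key
  const key = message.key ? message.key.toString("utf-8") : null;
  const value = message.value ? message.value.toString("utf-8") : null;
  // ... handle (key, value) ...
}
```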
username_0: That makes sense! Thanks for your quick response and action :)
Status: Issue closed
username_1: This is fixed in `1.16.0-beta.10`. |
EasyNetQ/EasyNetQ | 449894209 | Title: Question: Disable automatic ack
Question:
username_0: I have a queue used by multiple clients, and I would like only a specific client to perform the ack on the messages directed to it. How can I disable the automatic ack and execute it manually?
```c#
bus.Subscribe<MessageModel>(string.Empty, message =>
{
// if a message sent by this machine
// if not a message directed to this machine
// then ignore it
if (message.MachineName == this.MachineName || message.MachineNameDest != this.MachineName)
return;
// if the same message already received but still not removed from the queue
// then ignore it
if (message.ID == LastMessageReceived?.ID)
return;
LastMessageReceived = message;
MessageEventArgs handler = new MessageEventArgs(message);
Received?.Invoke(this, handler);
RemoveMessage(message); // <--- how manual ack ???
}, cfg => cfg.WithAutoDelete(false)
.WithQueueName(queueName));
```
thanks.<issue_closed>
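The thread was closed without an answer. As a hedged sketch only (the advanced-API method names below are from memory and may differ between EasyNetQ versions), manual acknowledgement is typically achieved by dropping down from `Subscribe` to the advanced API and returning an ack decision from the handler:
```c#
// Hypothetical sketch, not verified against a specific EasyNetQ version:
var queue = bus.Advanced.QueueDeclare(queueName);
bus.Advanced.Consume(queue, (body, properties, info) =>
{
    var message = Deserialize(body);          // hypothetical helper
    if (!IsForThisMachine(message))           // hypothetical predicate
        return AckStrategies.NackWithRequeue; // leave it for another consumer
    Handle(message);                          // hypothetical processing
    return AckStrategies.Ack;                 // manual ack
});
```
Note that with a single shared queue, nack-with-requeue can make the same message bounce between consumers; a routing-key or queue-per-client design may be the more robust fix for the scenario described above.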
Status: Issue closed |
chakki-works/seqeval | 710967353 | Title: why classification_report cannot count the label "ORGANIZATION"
Question:
username_0: the result is
```
           precision  recall  f1-score  support
PRODUCT      0.884    0.840    0.862     1007
LOCATION     0.927    0.760    0.835       50
PERSON       0.000    0.000    0.000        2
micro avg    0.885    0.835    0.859     1059
macro avg    0.884    0.835    0.859     1059
```
but in the data set I have:
```
pred: ['ORGANIZATION', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']
true: ['ORGANIZATION', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']
```
Answers:
username_1: I believe the **pred** sequence should be changed to pred: ['B-ORGANIZATION', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O'],
and similarly the **true** sequence.
You can have a look at the example shown in https://github.com/chakki-works/seqeval/blob/master/seqeval/metrics/sequence_labeling.py#L288
In terms of the code flow:
`classification_report` calls `get_entities` to convert the sequence of tags for the tokens into entities.
In case of suffix=False,
https://github.com/chakki-works/seqeval/blob/master/seqeval/metrics/sequence_labeling.py#L43
```
tag = chunk[0]
type_ = chunk.split('-')[-1]
```
This gives tag="ORGANIZATION"
whereas `start_of_chunk` and `end_of_chunk` expect the tag to be one of ['B', 'I', 'O', 'E', 'S'].
Hence it fails to extract any entity from the input sequence you provided.
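For reference, a minimal runnable example with the corrected IOB2 prefixes (values chosen to mirror this issue):
```python
from seqeval.metrics import classification_report

# Each entity label needs a B-/I- prefix so seqeval can detect boundaries.
y_true = [['B-ORGANIZATION', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']]
y_pred = [['B-ORGANIZATION', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']]

print(classification_report(y_true, y_pred))
```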
username_0: Thanks for your help.
I adjusted a small part of the code in `get_entities` that you mentioned:
```python
tag = chunk[0]
if chunk == 'ORGANIZATION':
    tag = 'ORG'
type_ = chunk.split('-')[-1]
```
and it works.
Thanks again.
Status: Issue closed
|
electron/asar | 407461386 | Title: observation about the .readme file
Question:
username_0: Hello. For the installation section of the README, I suggest it should read:
`npm install asar` or `npm install asar -g`
since when done the current way, the package is only added locally to the project and you cannot use its CLI; but if you install it this way:
`npm install asar -g`, you can use the asar CLI.
It could read as follows:
`npm install asar` or `npm install -g asar`
to use it either locally or globally. This is because new developers who try
`npm install asar` and then `asar --help`
will find that it does not work.
emu-feedback/emu-mammals | 752207003 | Title: Monthly VertNet data use report for 2019-12, resource emu_mammals
Question:
username_0: Your monthly VertNet data use report is ready!
You can see the HTML rendered version of this report at:
http://tools-usagestats.vertnet-portal.appspot.com/reports/ff2e788b-963a-4be8-8f96-d432d397ecbd/201912/
Raw text and JSON-formatted versions of the report are also available for
download from this link.
A copy of the text version has also been uploaded to your GitHub
repository under the "reports" folder at:
https://github.com/emu-feedback/emu-mammals/tree/master/reports
A full list of all available reports can be accessed from:
http://tools-usagestats.vertnet-portal.appspot.com/reports/ff2e788b-963a-4be8-8f96-d432d397ecbd/
You can find more information on the reporting system, along with an
explanation of each metric, at:
http://www.vertnet.org/resources/usagereportingguide.html
Please post any comments or questions to:
http://www.vertnet.org/feedback/contact.html
Thank you for being a part of VertNet. |
devinim-istanbul/shopping-list | 320503065 | Title: Add Main Screens
Question:
username_0: We talked about 3 main screens:
HomeScreen, ButtonScreen, ListScreen
Answers:
username_0:  [Add Main Screens](https://trello.com/c/Bi1sgI4O/19-add-main-screens)
Status: Issue closed
|
rstudio/pagedown | 790811388 | Title: Extra blank pages
Question:
username_0: We are using the Paged HTML Document template for the [unhcr-report](https://github.com/username_0/unhcr-report) project, but we have a few issues with extra blank pages being added automatically.
1. There is always one extra blank page between the `frontmatter` and `chapter` sections. I haven't set any `break-before` or `break-after` for this in my CSS but can't get rid of it. Does it come built into the package?
2. We set `break-before: left;` for the `back-cover` section, but when it is supposed to sit on the left page following the final page with content, it inserts 2 blank pages instead. Is this behavior normal? The other scenario works as expected.
Thanks for your help
Answers:
username_1: Hi @username_0 !
Have you also tried with the development version of pagedown? #202 should have fixed that.
username_0: @username_1 indeed it removed the extra blank pages, but now there is no longer one page per list, e.g. 1 page for the table of contents, 1 for the list of figures, etc.
<img width="533" alt="all_lists" src="https://user-images.githubusercontent.com/1830612/105736454-1844ce80-5f35-11eb-95c0-465e385d1b11.png">
Same for the `<h1>` headings: they don't create a page break by default. I guess all of this is easily solved by adding a page break to the `<h1>` tag.
Another thing: with the update to the new dev version of pagedown, it no longer finds the back-cover image. See
[Reproducible Analysis_pagedown_dev_v.pdf](https://github.com/rstudio/pagedown/files/5868077/Reproducible.Analysis_pagedown_dev_v.pdf); the back-cover background disappeared compared to the previous version.
username_0: @username_1 while trying to fix this, I noticed that `break-before: recto` works, but when I change it to `break-before: page` it doesn't. Any idea why?
Also, since we upgraded the pagedown version, we are still getting the error of the back-cover image not showing anymore.
username_0: Actually, I think I know why the back cover doesn't show: it's related to issue [211](https://github.com/rstudio/pagedown/issues/211), as in the default CSS the back-cover image is set with `.pagedjs_page:last-of-type {background-image: var(--back-cover);}`
username_1: Yes, this is the same issue as #211
username_0: FYI, I couldn't make `break-before` work, but oddly `break-after: page` works like a charm. Really weird.
username_1: @username_0 With the development version of pagedown, this example works fine for me:
```rmd
---
title: "Page breaks test"
date: "`r Sys.Date()`"
output: pagedown::html_paged
knit: pagedown::chrome_print
---

# Break before test

::: page-break-before
this text is on a new page
:::

# Break after test

::: page-break-after
no more text after that
:::

this text is on a new page
```
I get this result:
[page-breaks.pdf](https://github.com/rstudio/pagedown/files/5906693/page-breaks.pdf)
username_1: @username_0 any follow-up? Is your problem solved?
Status: Issue closed
username_0: @username_1 yes it's now working. Thanks for your help |
DUNE-DAQ/ddpdemo | 696194731 | Title: Create the 'DataStore' interface class.
Question:
username_0: This task will involve
- deciding on the signature of the write() method
- creating the C++ class for this interface (the [Queue interface in the appfwk repo](https://github.com/DUNE-DAQ/appfwk/blob/develop/include/appfwk/Queue.hpp) may be a useful example)
- if possible, creating some unit tests that verify the correct operation of the class
Answers:
username_0: I haven't yet marked this Issue as 'Ready for work', but if someone has time to play with this before our next meeting, that would be fine. (Of course, it probably needs the results of Issues #4 and #5 as input.)
username_0: I added a preliminary version of this interface in include/ddpdemo/DataStore.hpp. This is on the _feature/Issue7_DataStore_ branch (this branch was created from the _feature/Issue5_DataBlock_ branch).
I haven't verified that this first version compiles or runs yet (since it's just an interface, it's hard to test). Although, maybe I'll make a FakeStorage concrete implementation that simply throws the data away.
username_0: I merged the latest code from the _develop_ branch into the branch for this Issue (_feature/Issue7_DataStore_) and verified that it still compiled.
I then created a simple little DataStore implementation called TrashCanDataStore, which, as the name implies, simply throws the data away (after printing a message to say so). This was a good exercise since it helped me find some typos in the DataStore.hpp file. The TrashCanDataStore isn't the prettiest code, and its write() method should probably move to a TrashCanDataStore.cpp file at some point. Also, the sample instantiation of the TrashCanDataStore instance in SimpleDiskWriter should be changed at some point to be a dynamic creation of the DataStore instance based on what is provided in the configuration.
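For orientation, a hedged sketch of the shape such an interface might take (the class layout and the KeyedDataBlock type are assumptions based on this thread, not the actual repo code):
```cpp
#include <iostream>

namespace ddpdemo {

struct KeyedDataBlock { /* key + data fields, presumably from Issue #5 */ };

// Abstract interface: concrete stores decide where the data goes.
class DataStore {
public:
  virtual ~DataStore() = default;
  virtual void write(const KeyedDataBlock& block) = 0;
};

// Trivial implementation that discards the data, useful for early testing.
class TrashCanDataStore : public DataStore {
public:
  void write(const KeyedDataBlock& /*block*/) override {
    std::cout << "Throwing the data block away\n";
  }
};

} // namespace ddpdemo
```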
All of this code has been committed and pushed to the central repo.
Status: Issue closed
|
gureum/gureum | 39877725 | Title: When typing quotation marks, curly quotes are not entered.
Question:
username_0: 1. When I type quotation marks, the straight quotes used in English (' ") are entered.
I would like an option to enter curly quotes (‘ ’ “ ”) instead.
2. I frequently do editing work in InDesign. When editing with the Gureum input method active, deleting a character with the 'delete' key sometimes deletes two characters in a row. It doesn't happen every time, but it happens often enough that I have to check my work carefully. Since this doesn't occur with other input methods, it doesn't seem to be a keyboard problem.<issue_closed>
Status: Issue closed |
flutter/flutter | 472712119 | Title: skip SSL error webview_flutter
Question:
username_0: Hi there..
I'm using the Flutter plugin 'webview_flutter' to load certain URLs and it works perfectly with http or https... but some https URLs with an invalid SSL certificate cause my Flutter app to show a blank page...
So my question is: is it possible to ignore those SSL errors and just proceed with the URL using the 'webview_flutter' plugin? Has anyone had luck with it, or can anyone point me in the right direction?
Thanks in advance
Status: Issue closed
Answers:
username_2: Hi there..
I'm using the Flutter plugin 'webview_flutter' to load certain URLs and it works perfectly with http or https... but some https URLs with an invalid SSL certificate cause my Flutter app to show a blank page...
So my question is: is it possible to ignore those SSL errors and just proceed with the URL using the 'webview_flutter' plugin? Has anyone had luck with it, or can anyone point me in the right direction?
Thanks in advance
username_2: The comment quoted above was about a PR that simply turned off all SSL handling; allowing an *option* to bypass checks is a valid request.
username_3: what about a callback for onReceivedSslError and/or using Flutter's SecurityContext, so one can add their own certs and manually validate certificates like in dart:io?
username_4: Any update on how to add an ignore-SSL-errors property to the WebView in Flutter?
username_5: So, I think it is not out of scope for "webview_flutter", as many applications created using "flutter_webview_plugin" will have to migrate to "webview_flutter" once "flutter_webview_plugin" is deprecated, in order to continue app development and maintain stability.
DataDog/dd-trace-java | 1180992550 | Title: Release 0.98.0 Missing Java Agent Jar Asset
Question:
username_0: Other assets are missing too. This may block the CI/CD processes of many systems that rely on building Docker images with the Java agent using `RUN wget -O dd-java-agent.jar 'https://dtdg.co/latest-java-tracer'` as described in the Datadog documentation here: https://docs.datadoghq.com/tracing/setup_overview/setup/java/?tab=containers
Answers:
username_0: Duplicate already created; closing this one.
Status: Issue closed
|
sclasen/akka-persistence-dynamodb | 77679487 | Title: Dozens of WARN DynamoDBJournal at=backoff request=BatchGetItemRequest sleep=8
Question:
username_0: Hi @sclasen,
I'm getting dozens of such warnings in the log, but the message they come with is completely incomprehensible to me.
Can you please give me a hint on how to get rid of these warnings? Or at least what they mean?
```
WARN a.p.journal.dynamodb.DynamoDBJournal - at=backoff request=BatchGetItemRequest sleep=2
```
There are plenty of them:
```plain
2015-05-18 16:29:48,376 [ClusterSystem-akka.persistence.dispatchers.default-replay-dispatcher-34] [ClusterSystem-akka.actor.default-dispatcher-23] WARN a.p.journal.dynamodb.DynamoDBJournal - at=unprocessed-reads, unprocessed=59
2015-05-18 16:29:48,425 [ClusterSystem-akka.persistence.dispatchers.default-replay-dispatcher-34] [ClusterSystem-akka.actor.default-dispatcher-14] WARN a.p.journal.dynamodb.DynamoDBJournal - at=backoff request=BatchGetItemRequest sleep=2
2015-05-18 16:29:48,428 [ClusterSystem-akka.persistence.dispatchers.default-replay-dispatcher-49] [ClusterSystem-akka.actor.default-dispatcher-22] WARN a.p.journal.dynamodb.DynamoDBJournal - at=backoff request=BatchGetItemRequest sleep=2
2015-05-18 16:29:48,429 [ClusterSystem-akka.persistence.dispatchers.default-replay-dispatcher-48] [ClusterSystem-akka.actor.default-dispatcher-14] WARN a.p.journal.dynamodb.DynamoDBJournal - at=backoff request=BatchGetItemRequest sleep=2
2015-05-18 16:29:48,437 [ClusterSystem-akka.persistence.dispatchers.default-replay-dispatcher-48] [ClusterSystem-akka.actor.default-dispatcher-42] WARN a.p.journal.dynamodb.DynamoDBJournal - at=backoff request=BatchGetItemRequest sleep=2
2015-05-18 16:29:48,440 [ClusterSystem-akka.persistence.dispatchers.default-replay-dispatcher-49] [ClusterSystem-akka.actor.default-dispatcher-42] WARN a.p.journal.dynamodb.DynamoDBJournal - at=backoff request=BatchGetItemRequest sleep=2
2015-05-18 16:29:48,445 [ClusterSystem-akka.persistence.dispatchers.default-replay-dispatcher-34] [ClusterSystem-akka.actor.default-dispatcher-43] WARN a.p.journal.dynamodb.DynamoDBJournal - at=backoff request=BatchGetItemRequest sleep=2
2015-05-18 16:29:48,445 [ClusterSystem-akka.persistence.dispatchers.default-replay-dispatcher-45] [ClusterSystem-akka.actor.default-dispatcher-43] WARN a.p.journal.dynamodb.DynamoDBJournal - at=backoff request=BatchGetItemRequest sleep=2
2015-05-18 16:29:48,446 [ClusterSystem-akka.persistence.dispatchers.default-replay-dispatcher-47] [ClusterSystem-akka.actor.default-dispatcher-43] WARN a.p.journal.dynamodb.DynamoDBJournal - at=backoff request=BatchGetItemRequest sleep=2
2015-05-18 16:29:48,447 [ClusterSystem-akka.persistence.dispatchers.default-replay-dispatcher-49] [ClusterSystem-akka.actor.default-dispatcher-37] WARN a.p.journal.dynamodb.DynamoDBJournal - at=backoff request=BatchGetItemRequest sleep=2
2015-05-18 16:29:48,506 [ClusterSystem-akka.persistence.dispatchers.default-replay-dispatcher-49] [ClusterSystem-akka.actor.default-dispatcher-31] WARN a.p.journal.dynamodb.DynamoDBJournal - at=backoff request=BatchGetItemRequest sleep=4
2015-05-18 16:29:48,509 [ClusterSystem-akka.persistence.dispatchers.default-replay-dispatcher-46] [ClusterSystem-akka.actor.default-dispatcher-31] WARN a.p.journal.dynamodb.DynamoDBJournal - at=backoff request=BatchGetItemRequest sleep=4
2015-05-18 16:29:48,517 [ClusterSystem-akka.persistence.dispatchers.default-replay-dispatcher-46] [ClusterSystem-akka.actor.default-dispatcher-28] WARN a.p.journal.dynamodb.DynamoDBJournal - at=backoff request=BatchGetItemRequest sleep=4
```
Answers:
username_0: Ok, I was playing with the DynamoDB settings in the AWS console, and it seems this message is generated when the provisioned throughput of the DynamoDB table is too low (in my case it was set to 1 read/write capacity unit). I changed it to 10 and the message is gone. That's good.
However, I will leave this issue open because, as I stated above, the message that comes with this warning is completely unintuitive. IMHO, it should rather be something like:
```
Cannot read/write state due to exhausted DynamoDB throughput, wait 4 seconds for next attempt
``` |
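For anyone hitting the same warnings, a hedged sketch of raising provisioned throughput from the AWS CLI instead of the console (the table name is a placeholder for your journal table):
```
aws dynamodb update-table \
  --table-name <journal-table> \
  --provisioned-throughput ReadCapacityUnits=10,WriteCapacityUnits=10
```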
NanQi/blog | 348592956 | Title: Thoughts on building a mini program again
Question:
username_0: Our mini program was reported for being in the wrong category and banned on March 21 of this year (2018), which caused our user numbers to plummet. Since then, we built a web version, a PC version, and an app, and spent nearly half a year not working on the mini program.
This time I built the mini program with Meituan's open-source mpvue, which counts as a bold experiment. When I first worked with mini programs, I avoided wepy precisely because I worried the problems introduced by all the conversion layers would be harder to solve; this time mpvue also had quite a few pitfalls, but it did bring some convenience.
Whether working on the Tongdaoquan (同道圈) product or the Second Space (第二空间) product, every time we build a new feature I have the feeling it is really well made, and the fantasy that it will take off once released; of course, I know taking off is not that easy.
The features I feel good about are the following:
1. Auto-playing videos (muted) in the Tongdaoquan feed
2. In Tongdaoquan's "My Circles", selecting a circle changes the second-row tab; I feel this counts as a new product-layout experiment
3. Tongdaoquan's features for linking appreciation codes and linking mini programs; I would call these its distinctive features, and I was excited while building them
4. Uploading animated images works on both Android and iOS, an approach borrowed from Maoka (猫卡)
5. A code-block feature in the creation center, using Prism for syntax highlighting; the highlighting work took about an hour to finish
kshitija/zen-test | 56199785 | Title: Order id - 74 from amazon.com 'Kenneth Cole New York Men's Long Sleeve Contrast Placket Shirt, Black Combo, Large'
Question:
username_0: 
#### [Kenneth Cole New York Men's Long Sleeve Contrast Placket Shirt, Black Combo, Large](http://www.bestbuy.com/site/apple-ipad-mini-with-wi-fi-16gb-space-gray-black/2874502.p?id=1219080300496&skuId=2874502)
referrerUrl : https://www.google.com/search?output=search&tbm=shop&q=ipad&oq=ip…Dipad&tuhCpeOeHV8t:4745,vw:g&tbm=shop&spd=1340092703385387124
detailViewUrl : https://www.google.com/shopping/product/16700433696658735690?q=ipad&bav…YpRRmw0OKJhyusLkfG2mT_tmQ&ei=oF7HVK-TEtWD8gXGmoJA&ved=0CIcBEKYrMAI
**attributes**
| keys | value |
| ------------- |:-------------:|
| color | red |
| size | S |
**price**
| keys | value |
| ------------- |:-------------:|
| subtotal| $111 |
| Est. tax | $12|
| Est. shipment | $4.34 |
| Order Total | **$127.34** |
**Address Information**
| keys | value |
| ------------- |:-------------:|
| Name | <NAME>|
| Address | Nrusinha Sadan Apartments, First Floor,1087, Sadashiv Peth, Bajirao Road|
| City | pune|
| State | maharashtra|
| Zip | 411030|
| Country | India|
| contact number | 404-324-7977|
| Email Id | <EMAIL>|
Seller:
Estimated delivery time :
___
Answers:
username_0: ```json
{
"items": [
{
"domain": "amazon.com",
"url": "http%3A%2F%2Fwww.bestbuy.com%2Fsite%2Fapple-ipad-mini-with-wi-fi-16gb-space-gray-black%2F2874502.p%3Fid%3D1219080300496%26skuId%3D2874502",
"title": "Kenneth Cole New York Men's Long Sleeve Contrast Placket Shirt, Black Combo, Large",
"image": "https:\/\/www.amazon.com\/gp\/buy\/spc\/handlers\/display.html?hasWorkingJavascript=1",
"listPrice": "123",
"currentPrice": "111",
"tax": "12",
"shipping": "4.34",
"category": "",
"attributes": {
"color": "red",
"size": "S",
"RAM": "",
"model": "",
"memory": ""
},
"card": {
"type": "2",
"token": "<KEY>"
},
"shippingAddressId": "1",
"contactDetailId": "1",
"billingAddressId": "1",
"currency": "USD",
"userStripeCCId": "1",
"discount": "",
"status": "",
"referrerUrl": "https%3A%2F%2Fwww.google.com%2Fsearch%3Foutput%3Dsearch%26tbm%3Dshop%26q%3Dipad%26oq%3Dip\u2026Dipad%26tuhCpeOeHV8t%3A4745%2Cvw%3Ag%26tbm%3Dshop%26spd%3D1340092703385387124",
"detailViewUrl": "https%3A%2F%2Fwww.google.com%2Fshopping%2Fproduct%2F16700433696658735690%3Fq%3Dipad%26bav\u2026YpRRmw0OKJhyusLkfG2mT_tmQ%26ei%3DoF7HVK-TEtWD8gXGmoJA%26ved%3D0CIcBEKYrMAI",
"description": "",
"upc": "",
"sku": "",
"isbn": "",
"ean": "",
"pid": "",
"productCondition": "",
"asin": "",
"quantity": "",
"giftWrap": "",
"stock": "",
"brand": "",
"shopName": ""
}
],
"cardId": "",
"type": "2",
"token": "<KEY>",
"domain": "amazon.com",
"tax": "12",
"shipping": "4.34",
"discount": "",
"userStripeCCId": "1",
"shippingAddressId": "1",
"billingAddressId": "1",
"status": "new",
"referrerUrl": "https:\/\/www.google.com\/search?output=search&tbm=shop&q=ipad&oq=ip\u2026Dipad&tuhCpeOeHV8t:4745,vw:g&tbm=shop&spd=1340092703385387124",
"currency": "USD",
"contactDetailId": "1",
"state": "new",
"isViaQuickCheckout": "1"
}
```
|
0xjb/MESRepo | 389687070 | Title: Background: analysis
Question:
username_0: Background. Material that you have learned and what we need to know to understand the report, presented from the point of view of your project, but independently of what you have done concretely. (So you might tell us about, e.g., SCRUM, in general terms and perhaps using concrete examples, but not getting into how it has been used in your project.) |
syndesisio/syndesis | 342685792 | Title: 'View Log in Openshift' link doesn't work
Question:
username_0: ## This is a...
<pre><code>
[ ] Feature request
[ ] Regression (a behavior that used to work and stopped working in a new release)
[x] Bug report
[ ] Documentation issue or request
</code></pre>
## The problem
After clicking on 'View Log in OpenShift' I get a blank Syndesis page instead of the OpenShift log.
It looks like it's trying to redirect to the wrong link:
Minishift url: https://192.168.64.12:8443/
Syndesis url: https://stability.192.168.64.12.nip.io/
'View Log in Openshift' url : https://stability.192.168.64.12.nip.io/project/stability/browse/pods/i-guessing-number-6-c2cdq?tab=logs
|
webpack-contrib/css-loader | 666781402 | Title: v4 tries to resolve the absolute urls in css files and gets an error.
Question:
username_0: I have these rules in my CSS file:
```
.swiper-button-prev {
background-image: url('/img/slider-arrow-left.png');
}
```
`/img/slider-arrow-left.png` is located at the site root (the public_html directory) and is not supposed to be resolved.
In versions before v4 this worked perfectly and the URL was not resolved because it is absolute, but after updating to v4 I get the error
`Error: Can't resolve '/img/slider-arrow-right.png' in 'C:\SERVER\projects\laminas\mt4\frontend\src\css'` because css-loader tries to resolve it relative to the frontend src directory.
My webpack config for parsing CSS:
```
{
test: /\.(css)$/,
use: [
'style-loader',
{
loader: 'css-loader',
options: {
sourceMap: true,
},
},
]
},
{
test: /\.(scss)$/,
use: [
{
loader: MiniCssExtractPlugin.loader
},
{
loader: 'css-loader',
options: {
sourceMap: true,
},
},
{
loader: 'sass-loader',
options: {
sassOptions: {
sourceMapRoot: '/'
}
},
},
]
},
```
Answers:
username_1: This is expected and it is a breaking change; `/img/slider-arrow-left.png` can mean:
- a server-relative URL
- an absolute URL
Try to open it in a browser and you will get the same problem. You need to refactor your code; if you can't (vendor code), you can use `resolve.alias` https://webpack.js.org/configuration/resolve/#resolvealias
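Another option worth noting (an assumption based on css-loader's documented `url` option, not something stated in this thread): css-loader v4 accepts a filter function for `url`, which can leave server-relative URLs untouched instead of trying to bundle them:
```js
{
  loader: 'css-loader',
  options: {
    sourceMap: true,
    // Skip root-relative URLs so they pass through to the browser as-is.
    // (In later css-loader versions this becomes `url: { filter: ... }`.)
    url: (url) => !url.startsWith('/'),
  },
},
```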
Status: Issue closed
username_1: Do not ignore the issue template, otherwise I will close without an answer
username_2: Same issue after updating to css-loader v4. Webpack is no longer able to resolve paths in my SCSS like '/fonts/font-awesome/fontawesome-webfont.eot?v=4.7.0'.
@username_1 Do you have an example of how to fix this using a resolve alias?
username_0: I found a way to fix it using a resolve alias, but it is not a convenient one. I want to store all my images (those used in the project's HTML templates and in the css/scss stylesheets) in one public path (`public_html/img`), but v4 doesn't allow this anymore, and I am forced to store my images in 2 different directories: one for direct access (public_html/img) and one in the frontend source dir for compiling the css stylesheets. Another inconvenience is that I can't replace these files in the production server's public dir without recompiling the whole project (by simply replacing images in the public dir).
I really don't understand what caused the need for this breaking change. Why couldn't you provide a config option to enable/disable root-relative URL resolution?
Now I have to use an inconvenient way of storing images. For now I've decided to downgrade to v3, because v4 is unusable with this "improvement".
@username_2 this workaround works (change the path for your setup) if you want to follow the new approach:
```
resolve: {
alias: {
'/fonts': path.resolve(__dirname, '../public_html/fonts/'),
}
}
```
After this change, webpack copies all resource files found at such URLs to dist/assets (or whatever you set in the config) on every recompilation. This is not a big deal in frontend-only applications, where all resources can live in the source dir and the compiler copies them to the dist folder every time. But my project (a big CMS with templates generated by a PHP backend) doesn't allow this.
username_1: I am ready to disappoint you: in the near future you will have to update, because `style-loader` and `mini-css-extract-plugin` will not work with css-loader@3.
For everyone who got this error: it is time to review your asset structure and fix it.
`/font/font.woff2` never was and never can be `./font/font.woff2`; please read the spec about URL resolution in CSS.
portapps/portapps | 499228304 | Title: Please help me to make an Ungoogled-chromium portable app for window 10 x64
Question:
username_0: * Name : Ungoogled-chromium
* Description : Browser
* Website : https://ungoogled-software.github.io/ungoogled-chromium-binaries/releases/windows/64bit/
* License (e.g. Freeware, OSS, GPL, MIT, etc) : BSD-3-clause
* Comment (anything else which might help) : Thank you very much.
Answers:
username_1: Please put this on top priority...
username_2: I have started to work on this. Keep you in touch.
username_2: Now available : https://portapps.io/app/ungoogled-chromium-portable/
Status: Issue closed
|
PrefectHQ/prefect | 723278294 | Title: Cannot restart flow run after mapped task
Question:
username_0: An example flow that maps and then fails:
```
from datetime import timedelta
from prefect import *
from prefect.engine.executors import DaskExecutor
from prefect.engine.results import GCSResult
from prefect.environments.storage import GCS
with Flow("TestFlow", result=GCSResult(bucket="model_bigquery_tmp"), storage=GCS(bucket="model_bigquery_tmp"), executor=DaskExecutor("brett-daskscheduler:8786")) as TestFlow:
@task(max_retries=1, retry_delay=timedelta(seconds=0.1))
def generate_random_list():
n = 10
return list(range(n))
@task(max_retries=1, retry_delay=timedelta(seconds=0.1))
def wait(n):
from time import sleep
sleep(n)
return n
@task(max_retries=1, retry_delay=timedelta(seconds=0.1))
def fail(values):
raise ValueError(f"n: {len(values)}")
values = wait.map(generate_random_list())
fail(values)
```
On restarting I get the following error:
```
brett_replicahq restarted this flow run
Submitted for execution: Job prefect-job-871b5d1e
Downloading testflow/2020-10-16t14-25-00-009370-00-00 from model_bigquery_tmp
Beginning Flow run for 'TestFlow'
Task 'wait': Starting task run...
Task 'wait': finished task run for task with final state: 'Mapped'
Unexpected error: TypeError("Cannot map over unsubscriptable object of type <class 'NoneType'>: None...")
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/prefect/engine/runner.py", line 48, in inner
new_state = method(self, state, *args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/prefect/engine/flow_runner.py", line 526, in get_flow_run_state
executors.prepare_upstream_states_for_mapping(
File "/usr/local/lib/python3.8/site-packages/prefect/utilities/executors.py", line 372, in prepare_upstream_states_for_mapping
raise TypeError(
TypeError: Cannot map over unsubscriptable object of type <class 'NoneType'>: None...
```
cc @cicdw seems related to #3322 ?
Answers:
username_1: Hey @username_0, I ran your example and was not able to encounter the error you posted. The first run failed as expected; then, when I restarted it, it successfully ran through the mapped tasks (via their results in GCS) and then failed as expected again. Is there any more information you could share about your environment / how you are running this?
username_0: Very strange...I was running in K8s but now I'm just running locally, here's everything else:
```
from prefect.environments import LocalEnvironment
TestFlow.environment = LocalEnvironment(
executor=prefect.engine.executors.DaskExecutor(address="localhost:8786")
)
TestFlow.register("Model")
TestFlow.run_agent(<token>)
# "Quick Run" in the UI
```
Versions:
```
prefect==0.13.12
dask==2.30.0
distributed==2.30.0
```
username_0: @username_1 update: I think I'm actually seeing it succeed/fail intermittently; maybe there's some kind of race condition? Let me try re-running a bunch and see what happens.
username_0: ^ actually @username_1, it's not random: when I run with `LocalEnvironment(DaskExecutor(...))` as above (with a local Dask scheduler + workers running in another tab), I reliably get `TypeError: Cannot map over unsubscriptable object of type <class 'NoneType'>: None...`. When I comment out the `environment` and run with the default executor, everything works as intended. Does that ring any bells for you?
username_1: @username_0 Thanks for the clarification! I am able to reproduce, looking into it now
username_1: Update: @username_0 we have identified the issue and are working on a fix, _should_ be in by the next release
Status: Issue closed
|
XiaoFaye/WooCommerce.NET | 431746684 | Title: product price
Question:
username_0: Hi
Can someone please make the Product.price property read-only? We cannot set the price using this property; we can only use the regular price or sale price.
Product.price should be a read-only field for getting the current price.
Thanks.
Answers:
username_1: What error did you find with price property?
username_0: Hi
No error, it's just that this property is never going to be used for writing; it's just an improvement.
Thanks
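For clarity, a hedged sketch of the usage pattern the request implies (method and property names follow WooCommerce.NET's typical API shape but are not verified against a specific version; IDs and prices are placeholders):
```c#
// Hypothetical sketch:
var product = await wc.Product.Get(productId);

// These are the writable price fields per the WooCommerce REST API:
product.regular_price = 19.99m;
product.sale_price = 14.99m;

// product.price is derived by WooCommerce from the fields above,
// so the client should treat it as read-only.
await wc.Product.Update(productId, product);
```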
Status: Issue closed
|
samoilovid/home-work-3 | 503003457 | Title: What is this import?
Question:
username_0: https://github.com/username_1/home-work-3/blob/5eb5e5918b9925b2580dd1874826dc202139b719/src/Main.java#L1
Status: Issue closed
Answers:
username_1: @username_0, hello. Most likely it got in when I copied the first 4 lines from a friend, so that I wouldn't have to write them again.
goaop/framework | 450723013 | Title: SelfValueVisitor throws exception on parsing class without namespace
Question:
username_0: ```
public function enterNode(Node $node)
{
if ($node instanceof Stmt\Namespace_) {
$this->namespace = $node->name->toString();
```
`$node->name` is null for a class without a namespace.
Answers:
username_1: Could you please report your framework version, PHP version, and the steps to reproduce this issue.
username_0: Yeah, the goaop/framework version is 2.3.1 and the PHP version is 7.1.23.
More details about the exception stack trace:
```
Stack trace:
#0 vendor/nikic/php-parser/lib/PhpParser/NodeTraverser.php(200): Go\Instrument\Transformer\SelfValueVisitor->enterNode(Object(PhpParser\Node\Stmt\Namespace_))
#1 vendor/nikic/php-parser/lib/PhpParser/NodeTraverser.php(91): PhpParser\NodeTraverser->traverseArray(Array)
#2 vendor/goaop/framework/src/Instrument/Transformer/SelfValueTransformer.php(32): PhpParser\NodeTraverser->traverse(Array)
#3 vendor/goaop/framework/src/Instrument/Transformer/CachingTransformer.php(121): Go\Instrument\Transformer\SelfValueTransformer->transform(Object(Go\Instrument\Transformer\StreamMetaData))
```
I simplify my project as below:
annotation/test.php
```
<?php
namespace AopTest;
use Doctrine\Common\Annotations\Annotation;
/**
* Pointcut annotation.
* @Annotation
* @Target("METHOD")
*/
class Test extends Annotation {
}
```
aspect/test.php
```
<?php
use Go\Aop\Intercept\MethodInvocation;
use Go\Lang\Annotation\Around;
/**
* Test Aspect
*/
class AspectTest implements \Go\Aop\Aspect {
/**
* @Around("@execution(AopTest\Test)")
*/
public function aroundTest(MethodInvocation $invocation) {
echo "Before around<br/>";
$res = $invocation->proceed();
echo "After around<br/>";
return $res;
}
}
```
service/profit/test.php
```
<?php
use AopTest\Test;
/**
* Example class to test aspects
*/
class ServiceProfitTest {
/**
* @Test
*/
public function test()
[Truncated]
$this->registry->set('service_' . str_replace('/', '_', $service), new $class($this->registry));
} else {
trigger_error('Error: Could not load service ' . $file . '!');
exit();
}
}
```
It would throw an exception if the class `ServiceProfitTest` did not have a namespace, and it would be OK if I added a namespace to `ServiceProfitTest`.
It is OK if I add a check in the SelfValueVisitor::enterNode method as below:
```
public function enterNode(Node $node)
{
    if ($node instanceof Stmt\Namespace_) {
        if (!empty($node->name)) {
            $this->namespace = $node->name->toString();
        }
    }
}
``` |
maslick/keycloak-android-native | 446882789 | Title: The access token provided doesn't work
Question:
username_0: Hi username_1,
I'm testing the app with my own Keycloak server. When I do the login in the browser and return to the app, it shows all the data (access token, refresh token, etc.). However, when I try to use the provided access token in a request using Postman, it is invalid.
Any suggestions? Thank you!
Answers:
username_1: Hi @username_0 !
Can you provide some more info (REST endpoint, HTTP status code, etc.)? And what Keycloak client type are you using? In a basic scenario you would have two Keycloak clients:
- public (obtain token on the front-end)
- bearer-only (secure your REST API with this client)
I also recommend you to parse the token and analyze it using ``jwt.io``.
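As a quick sanity check, a generic sketch of exercising a protected endpoint with the token from the app (the endpoint URL is a placeholder; only the header format matters here):
```
curl -H "Authorization: Bearer $ACCESS_TOKEN" \
     https://your-api.example.com/protected/resource
```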
username_1: P.S. For debugging I have created a CLI tool called [brauzie](https://github.com/username_1/brauzie) that can help you fetch and analyse your JWT tokens (scopes, roles, etc.). It could be used for both public and confidential clients.
Status: Issue closed
|
barryvdh/laravel-elfinder | 110653256 | Title: Directory empty
Question:
username_0: Hi,
I connected elfinder to a remote directory with the flysystem SFTP adapter. Upload and file/directory creation work fine, as I can see the files and directories in an FTP client, but directories always appear empty in elfinder, even after uploading a file or creating one.
I've searched but can't find any explanation for this; it's probably just a bad configuration, but I don't know what I'm missing.
Also, I don't really understand the difference between disks and roots.
In my elfinder config file I've added this to the roots section:
```php
array(
    'driver' => 'Flysystem',
    'URL' => 'https://www.remote-url.com/',
    'path' => 'uploads/',
    'alias' => 'Remote disk',
    'accessControl' => 'access',
    'filesystem' => new \League\Flysystem\Filesystem(new \League\Flysystem\Sftp\SftpAdapter(array(
        'host' => 'sftp-host-address',
        'port' => 22,
        'username' => 'sftp-username',
        'password' => '<PASSWORD>',
        'root' => 'vhosts/www.remote-url.com/htdocs/',
        'timeout' => 30,
    ))),
),
```
Answers:
username_0: Ok, I confused elfinder disks, roots, and filesystem disks :-/
In case someone else has the same confusion: after adding the flysystem SFTP package, the only thing you have to do is add an 'sftp' disk to the disks in config/filesystems.php; you don't have to create new disks or roots in config/elfinder.php
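For reference, a hedged sketch of what such a disk entry might look like in Laravel's config/filesystems.php (host, credentials, and root are placeholders, and league/flysystem-sftp must be installed):
```php
'disks' => [
    'sftp' => [
        'driver' => 'sftp',
        'host' => 'sftp-host-address',
        'port' => 22,
        'username' => 'sftp-username',
        'password' => 'secret',
        'root' => '/vhosts/www.remote-url.com/htdocs',
    ],
],
```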
way-of-elendil/3.3.5 | 595799588 | Title: Bug: the "marché de Lok'lira" aura breaks when turning in certain quests in the zone.
Question:
username_0: Brunnhildar zone, Sons of Hodir quest chain:
When turning in several quests in the zone, the "marché de lok'lira" aura (https://fr.wowhead.com/spell=72914/march%C3%A9-de-loklira), which makes the zone's NPCs friendly to us, is removed spontaneously.
In particular, the quests related to these NPCs:
- Brijana (https://fr.wowhead.com/npc=29592/brijana) and Croc de glace (https://fr.wowhead.com/npc=29598/croc-de-glace)
- Lok'lira la mégère (https://fr.wowhead.com/npc=29975/loklira-la-m%C3%A9g%C3%A8re)
Answers:
username_1: The fix has to be done core-side: https://github.com/TrinityCore/TrinityCore/issues/24408
username_0: OK, waiting on Trinity then... good luck to you.
username_1: Nothing is blocked quest-wise; you just have to leave and re-enter the zone to get the disguise back and continue.
username_2: Is this a new bug or has it existed for a while?
username_1: It has existed for years
username_3: Uh, no: there was a period when it was broken, but I believe it has been fixed for quite some time
username_0: Careful, I didn't say the quest chain was broken, just that now (and it wasn't like this before) the aura is removed when you turn in certain quests in the zone; you have to leave the zone and come back for the aura to be reapplied. And when you are in the middle of the village, you get all the mobs on your head after turning in a single quest.
username_2: I can't reproduce this bug on our side :(. With which quests is it reproducible?
username_3: Brisons la glace
|
cityofaustin/atd-data-tech | 637717267 | Title: Renew Knack Subscription
Question:
username_0: We're moving to a master agreement for Knack Enterprise.
Todo:
- Get sole-source letter from Knack
- Work with <NAME> to create MA
- collect signatures
Answers:
username_0: Steve/Knack is currently reviewing the master agreement. We're waiting for them to sign it.
username_0: OMG we have a purchase order. Moving to Review/QA pending payment.
Status: Issue closed
|
sul-dlss/exhibits | 798757075 | Title: Need italics to display for selected MD items in an exhibit
Question:
username_0: As an exhibit creator for the upcoming Martin Wong Catalogue Raisonné exhibit, I need italics to display in the metadata record for journal titles, book titles, and titles of exhibitions, on both the PURL page and on the exhibit item page. It will be hard for art historians to understand bibliographic citations without them.
Example (italics needed for Exhibition History and Related Publications):
https://purl.stanford.edu/fk360xf2847
Answers:
username_0: From: <NAME> <<EMAIL>>
Sent: Friday, February 26, 2021 2:50 PM
To: <NAME> <<EMAIL>>
Subject: RE: Questions re a Wong MD display request
Hello Cathy,
I can’t find anything that says HTML is not allowed in MARC values. My main concern would be for interoperability outside Stanford, but as we don’t currently share our MODS records externally in any systematic way I’m OK with letting that be a problem for future us. This may impact the modsulator as well in making sure that the output formats the HTML correctly.
Best,
Arcadia
*********
From: <NAME> <<EMAIL>>
Sent: Friday, February 26, 2021 12:56 PM
To: <NAME> <<EMAIL>>
Subject: Questions re a Wong MD display request
Hi Arcadia,
Today I met with Jack & Gary to discuss the following request from the Wong team:
https://github.com/sul-dlss/exhibits/issues/1981
Jack has a question and a comment:
- Question: Does MODs allow the inclusion of HTML? If so, and the following is true below, then we can display italics.
- Comment: Jack thinks this is the case, but would need to investigate -- to make sure the MODs display gem can handle display of HTML (I think I have this stated correctly).
If the above is sufficiently clear, please feel free to comment directly on the ticket. If you have follow-on questions, please email me or we can do a slack call.
Thanks,
Cathy
username_0: Response from Arcadia (edited as she mentions other items that don't apply here):
When I added the HTML tags the MODS did raise validation errors, so using standard HTML within the MODS is not feasible after all. I’ve attached a couple of sample records with two different workarounds – one encloses the HTML in an XML comment, and the other swaps out the HTML angle brackets for double curly brackets. Let me know if I can provide anything else.
username_1: Could we possibly use CDATA for this? e.g.
```xml
<mods xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://www.loc.gov/mods/v3"
version="3.7"
xsi:schemaLocation="http://www.loc.gov/mods/v3 http://www.loc.gov/standards/mods/v3/mods-3-7.xsd">
<titleInfo>
<title><![CDATA[<em>Firefly Evening</em>]]>, sketches</title>
</titleInfo>
<!-- snip -->
<note type="publications" displayLabel="Related publications"><NAME> and <NAME>, eds. <![CDATA[<em>Taiping Tianguo: A History of Possible Encounters: Ai Weiwei, Frog King Kwok, Tehching Hsieh, and Martin Wong in New York</em>]]>, (Berlin: Sternberg Press, 2015),137.</note>
</mods>
```
username_0: @username_2 writes:
The CDATA approach doesn’t seem to interfere with MODS validation, so it’s a possibility. My one concern would be to make sure it works with character encoding.
For example:
台湾 is entered in UTF-8 encoding in input data
In Argo those characters appear instead as numerical character references: 台湾
In SearchWorks, the display shows 台湾 (which also appears in the source code)
[Sidebar: apparently purl doesn’t display the vernacular form of the main title at all? That is not ideal.]
CDATA is parsed literally, so if the source metadata in Argo/DOR looks like this:
<title><![CDATA[<em>台湾</em>]]></title>
The display would show that exact string in italics:
台湾
So what we may need to do is enclose only the markup in CDATA so that the encoding of the actual value is not affected. Instead of the above, we could have:
<title><![CDATA[<em>]]>台湾<![CDATA[</em>]]></title>
Which should then display as:
台湾
username_2: In a code block so the syntax displays correctly:
```
The CDATA approach doesn’t seem to interfere with MODS validation, so it’s a possibility. My one concern would be to make sure it works with character encoding.
For example:
台湾 is entered in UTF-8 encoding in input data
In Argo those characters appear instead as numerical character references: &#x53F0;&#x6E7E;
In SearchWorks, the display shows 台湾 (which also appears in the source code)
[Sidebar: apparently purl doesn’t display the vernacular form of the main title at all? That is not ideal.]
CDATA is parsed literally, so if the source metadata in Argo/DOR looks like this:
<title><![CDATA[<em>台湾</em>]]></title>
The display would show that exact string in italics:
台湾
So what we may need to do is enclose only the markup in CDATA so that the encoding of the actual value is not affected. Instead of the above, we could have:
<title><![CDATA[<em>]]>台湾<![CDATA[</em>]]></title>
Which should then display as:
台湾
```
username_1: To determine:
* [ ] What do we need to constrain CDATA to contain?
* [ ] How is the infrastructure side going to work with this?
username_2: Some things to consider:
1. Representing the italics markup in spreadsheet input
2. Representing the italics markup in MODS XML
3. Representing the italics markup in Cocina JSON
4. Representing the italics markup in the Argo descMetadata datastream (until migration off Fedora)
5. Ensuring that MODS<>Cocina transformations preserve the markup
6. Ensuring that metadata delivered to access systems preserves the markup
7. Ensuring that access systems can interpret the markup appropriately wherever the data is displayed
Ideally the markup itself would not display if the target system was unable to interpret it.
One possible approach, if feasible:
1. In spreadsheet note field: `This is a <i>Title</i> for display.`
2. In MODS XML: `<note>This is a <![CDATA[<i>]]>Title<![CDATA[</i>]]> for display.</note>`
3. In Cocina JSON: `note: [{ value: 'This is a <i>Title</i> for display.'}]`
Display: This is a _Title_ for display.
Italics is the only such formatting use case I know of, so the allowed markup/CDATA content could be constrained to the `<i>` and `</i>` tags. Martin Wong use cases include italics in the note and abstract field. Stanford University Press digital monographs have similar use cases. Amos Gitai also has a use case for italics in the title.
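On the display side, a minimal sketch of how an access application might honor only the italics tag while stripping everything else (assuming a Rails view, where the `sanitize` helper takes an allowlist of tags; this is illustrative, not the actual exhibits code):
```ruby
# Render a note value, permitting only <i>…</i> markup through.
sanitize(note_value, tags: ['i'], attributes: [])
```
This way an `<i>` tag in an approved field displays as italics, while any other markup is removed rather than displayed literally.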
username_1: From today's meeting: As we understand it, items 1-6 in @username_2's first list in the previous comment fall the within the Infrastructure Team's portfolio, and item 7 is the Access Team's responsibility. @username_0 will follow up with Vivian to see what we need to do to move this work forward with the Infrastructure Team.
username_2: @username_1 Is `<em>` preferable to `<i>` as the italics tag?
username_1: @username_2 I probably miswrote above - `<i>` is preferable since we're trying to identify a set off title instead of indicating emphasis.
username_3: FWIW, based on the examples motivating this feature (that I'm aware of, anyway: `journal titles, book titles and titles of exhibitions`), I believe `<i>` is the more appropriate element to use, because we're usually trying to _set off_ the contained text, rather than **emphasize** it.
username_4: This is super helpful, @username_2, thank you. What all are the sources of these data that will contain the italics? I.e., will it all come from spreadsheets (flowing through modsulator)? I'm asking so I can get a handle on what all needs to be touched to accommodate this feature.
username_5: @username_2 in the spreadsheet ingest would you apply the italic style to the text, or would you be including literal html markup?
How should this work for non-html capable viewers of this data?
username_5: Wouldn't it have to interpret it to determine that it can't display it?
username_2: I was thinking of past cases where users put `<i>` tags in and they displayed in SearchWorks. But if SearchWorks, purl, and Spotlight are all capable of displaying italics when the source data includes an `<i>` tag in certain fields, and the source data is processed to strip out any `<i>` tags that aren't in those fields, it shouldn't be an issue.
username_5: I think we're moving toward making our data incompatible with any other system that uses MODS. To me the value in following standards is that our data is interoperable.
For example, let's say someone makes software that makes PDFs out of MODS. The PDF language doesn't know anything about HTML. It would not properly handle this data.
This proposal basically expands the standard such that only readers who are aware of *Stanford's* version of MODS can make sense of these records. I think this sort of issue should be brought before the maintainers of the MODS standard so that this use case can be incorporated into the standard. Have we looked into doing that?
username_4: I am wondering if we might want a more Cocina-oriented solution, essentially extending the model to allow users to specify the value and specify how it should be formatted as a sidecar assertion. And then systems could use the formatting extension(s) if supported or ignore them if not? We have a rich, structured metadata model here and it occurs to me that we could leverage (and add to) that richness rather than stuff potentially not-well-formed data into metadata representations that downstream systems would then need to know about and handle.
username_2: @username_4 I like the idea of making it more Cocina-oriented, and can imagine how that might be modeled. My questions then would be:
1. What would a user enter into a spreadsheet in order to generate this kind of structure in Cocina?
2. How would this information be delivered to access systems as XML?
3. Does this work with the Martin Wong project timeline?
username_4: Great questions! I don't know what the Martin Wong project timeline is, myself.
cc: @username_6 @username_0 @username_1
I am wondering if this request is complex enough, and touches enough systems and data structures, that we might want to schedule dedicated team time to analyze and work on it. It's feeling to me like more than we can do in a one-off maintenance week, and work that would benefit from cross-team analysis and testing rather than having it come in as side work to our current work cycle.
username_1: Based on my understanding of @username_0's discussions with the Wong project team, we need to have something implemented by April. Our window is definitely closing to get this resolved.
username_0: We need to have this implemented by 1 April for the Wong project -- confirmed.
username_5: @username_1 I don't think we can make this change in SDR by April. We are in the middle of a workcycle that moves us away from persisting MODS. This change would be a substantial effort that we're just hearing about. This would touch a whole bunch of SDR code bases and would require us to put aside our current work to pivot to this.
username_0: @username_5 - hearing that you can't do this by April, so it would be helpful to know by what date you could feasibly accomplish this. Thank you.
username_4: On one hand, citation formatting is not trivial; it is semantically significant to many of our users and potentially misleading to render w/o intended formatting.
On the other, the solutions we've discussed seem kludgy---in that we're proposing to mix values with formatting information, but then only conditionally apply said formatting---with potentially large and leaky side effects. And we'd be attempting to tackle this work at the same time as we're putting significant effort into moving away from storing MODS in Fedora.
I wonder whether it might be possible to negotiate with our stakeholders on the scheduling of this one requirement. That would give us time to get our heads together (@sul-dlss/access-team, @sul-dlss/infrastructure-team, @username_2, @username_7), do some planning, and figure out how best to support this.
username_0: It would be helpful to know by what date you all could feasibly accomplish this requested change. Thank you.
username_5: I think there's a good likelihood that we'd have to patch long-dead libraries like activefedora, om, and rubydora, and possibly Fedora 3 itself, because those do a lot of normalization and I know they haven't been tested to support CDATA tags.
username_5: It seems to me that given the potential scope of work we're looking at, we wouldn't be doing our due diligence without making an attempt to ask the MODS editors what the preferred way to cite a book chapter is. Is there a deficiency in MODS here? I don't think that I'm the most qualified/knowledgeable person to make this inquiry, but I will happily do so if there are no other volunteers.
username_6: I would encourage the team to collect the information and analyze the different approaches.
username_2: @username_5 I am on the MODS editorial committee and could bring this up for discussion at the next meeting. But I think that in terms of what gets delivered to our discovery systems, this is less an issue of representing-formatting-in-MODS than representing-formatting-in-XML, and it appears the CDATA approach is the accepted way to do that.
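For concreteness, an illustrative sketch of the CDATA approach being discussed (not text from the thread; the note type and citation content are placeholders):
```xml
<!-- Illustrative only: the type attribute and citation text are placeholders -->
<mods xmlns="http://www.loc.gov/mods/v3">
  <note type="preferred citation"><![CDATA[Chapter 3 in <i>An Example Monograph</i>, pp. 45-60.]]></note>
</mods>
```
Because the `<i>` tags sit inside a CDATA section, the XML itself stays well-formed even though the field value carries HTML.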
username_5: Unfortunately, that's not a great solution when we use XSLT to transform the XML. Then the transformed output can be invalid.
username_7: This has probably been considered, but what about encoding the fact that a title is a title in the metadata and then leaving it up to consuming applications to determine what to do with that information? Then the display code could apply italics to any "title" data found in a note or abstract.
Encoding the title-ness of a title, rather than encoding one specific way to format it, should make the data more interoperable with other systems. I could see someone wanting to extract all titles from a bibliography, or wanting to add hyperlinks to all titles, in addition to wanting to italicize the titles. Encoding just the italics would make other uses more difficult, as consuming applications would need to understand the specifics of how we encoded the italics and would not have any clear indication in the encoding of why the text was italicized.
I don't know whether marking up titles as titles would be compatible with MODS or what impact this would have on Cocina. But every solution to this problem will require marking up the specific blocks of text that need special handling (formatting, linking, entity extraction, etc.), so it seems like a spreadsheet or other interface change that would allow users to declare "italicize this text" could also be used to declare "this text is a title."
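As a purely hypothetical illustration of that title-markup proposal (this element usage is not asserted to be valid MODS; it only shows the semantic-encoding idea):
```xml
<!-- Hypothetical semantic markup; consuming apps could italicize, link, or extract the title -->
<abstract>Reprinted from <title>Journal of Examples</title>, vol. 2 (2020), pp. 10-25.</abstract>
```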
If there are other use cases for italics than titles, then this suggestion wouldn't cover that. But if we're going to add markup to our description, I think there would be value in marking up the titles as titles and then handling other cases as appropriate to those data types and needs. |
hasura/graphqurl | 595225169 | Title: [bug] subscription not working with Authorization header
Question:
username_0: Without Authorization header (and server auth disabled), it works as expected:
```
[15:41:23] vagrant: tmp $ graphqurl http://localhost:4000 -q 'subscription {ottRightCreated {right { id } } }'
@oclif/config reading core plugin /usr/local/lib/node_modules/graphqurl +0ms
@oclif/config loadJSON /usr/local/lib/node_modules/graphqurl/package.json +0ms
@oclif/config loadJSON /usr/local/lib/node_modules/graphqurl/oclif.manifest.json +1ms
@oclif/config loadJSON /usr/local/lib/node_modules/graphqurl/.oclif.manifest.json +1ms
@oclif/config reading user plugins pjson /home/vagrant/.local/share/graphqurl/package.json +0ms
@oclif/config loadJSON /home/vagrant/.local/share/graphqurl/package.json +1ms
@oclif/config config done +0ms
gq init version: @oclif/[email protected] argv: [ 'http://localhost:4000', '-q', 'subscription {ottRightCreated {right { id } } }' ] +0ms
Executing query... event received
{
"data": {
"ottRightCreated": {
"right": {
"id": "ck8on7hw606vj0712rw45zxj5"
}
}
}
}
Waiting... ⣷
^C
[15:42:04] vagrant: tmp $
```
Once server auth is enabled, adding the auth header works with queries
[15:44:09] vagrant: tmp $ graphqurl http://localhost:4000 -H 'Authorization: Bearer <KEY>' -q 'query {envAll {backendVersion}}'
@oclif/config reading core plugin /usr/local/lib/node_modules/graphqurl +0ms
@oclif/config loadJSON /usr/local/lib/node_modules/graphqurl/package.json +0ms
@oclif/config loadJSON /usr/local/lib/node_modules/graphqurl/oclif.manifest.json +2ms
@oclif/config loadJSON /usr/local/lib/node_modules/graphqurl/.oclif.manifest.json +0ms
@oclif/config reading user plugins pjson /home/vagrant/.local/share/graphqurl/package.json +0ms
@oclif/config loadJSON /home/vagrant/.local/share/graphqurl/package.json +1ms
@oclif/config config done +0ms
gq init version: @oclif/[email protected] argv: [ 'http://localhost:4000', '-H', 'Authorization: <KEY>', '-q', 'query {envAll {backendVersion}}' ] +0ms
Executing query... done
{
"data": {
"envAll": {
"backendVersion": "465",
"__typename": "EnvPayload"
}
}
}
[15:44:46] vagrant: tmp $
```
But it fails with subscriptions:
```
[15:44:46] vagrant: tmp $ graphqurl http://localhost:4000 -H 'Authorization: Bearer <KEY>' -q 'subscription {ottRightCreated {right { id } } }'
@oclif/config reading core plugin /usr/local/lib/node_modules/graphqurl +0ms
@oclif/config loadJSON /usr/local/lib/node_modules/graphqurl/package.json +0ms
@oclif/config loadJSON /usr/local/lib/node_modules/graphqurl/oclif.manifest.json +2ms
@oclif/config loadJSON /usr/local/lib/node_modules/graphqurl/.oclif.manifest.json +0ms
@oclif/config reading user plugins pjson /home/vagrant/.local/share/graphqurl/package.json +0ms
@oclif/config loadJSON /home/vagrant/.local/share/graphqurl/package.json +1ms
@oclif/config config done +0ms
gq init version: @oclif/[email protected] argv: [ 'http://localhost:4000', '-H', 'Authorization: <KEY>', '-q', 'subscription {ottRightCreated {right { id } } }' ] +0ms
Executing query... error
^C
[15:45:57] vagrant: tmp $
```
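Editorial note for later readers (hedged, not from the original report): GraphQL subscriptions typically run over a WebSocket, where per-request HTTP headers are not sent with each message; clients usually forward auth as `connectionParams`. A minimal sketch using subscriptions-transport-ws, assuming that is the transport in play — graphqurl's internals may differ:
```javascript
// Hedged sketch — assumes subscriptions-transport-ws semantics.
const { SubscriptionClient } = require("subscriptions-transport-ws");
const ws = require("ws");

const client = new SubscriptionClient(
  "ws://localhost:4000",
  {
    connectionParams: {
      // Servers such as Hasura read auth headers from connectionParams
      headers: { Authorization: "Bearer <token>" },
    },
  },
  ws
);
```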
grpc/grpc | 86761519 | Title: Mono installation sometimes fails on travis
Question:
username_0: seen in PR #1995
https://travis-ci.org/grpc/grpc/jobs/65982373
Failed to fetch http://download.mono-project.com/repo/debian/pool/main/m/mono/libmono-system-net-http-formatting4.0-cil_4.0.1.44-0xamarin1_all.deb Hash Sum mismatch
Status: Issue closed
Answers:
username_0: There's not much to do about this one, and our primary CI is now Jenkins anyway. |
NVIDIA/spark-rapids | 833273406 | Title: Add new rule to push down the foldable expressions through CaseWhen/If
Question:
username_0: There is a change in the BinaryExpression that might lead to changes on our side.
**Additional context**
Please look at Spark [commit](https://github.com/apache/spark/commit/06b1bbbbab) for more details
Answers:
username_1: This is all at the logical plan level, does not affect the RAPIDS Accelerator which operates on the physical plan.
Status: Issue closed
|
kesla/sort-json | 230215999 | Title: CLI version doesn't seem to sort the given file (since v1.4.1)
Question:
username_0: Thanks for this tool. I have been using it for a while, and got the same error as reported in #10. I then updated to 1.4.1, but now the JSON file I was trying to sort is not actually sorted anymore. There are no error messages and `sort-json` returns a 0 exit code, but the file doesn't appear to be touched.
Answers:
username_1: Please provide a sample file and we'll look into this issue with you. It would also help if you shared your environment details.
username_0: At the moment, I can't access the machine where I was having this problem, but I just installed the latest version on a different machine and it's working fine. I suspect this means the problem was something to do with my set-up; sorry.
username_2: please reopen if this is still an issue
Status: Issue closed
|
ThemsAllTook/libstem_gamepad | 195690249 | Title: make - cannot find -lglut, -lGLU -lGL
Question:
username_0: When I run make, it fails with the following errors:
```
/usr/bin/ld: cannot find -lglut
/usr/bin/ld: cannot find -lGLU
/usr/bin/ld: cannot find -lGL
```
although I have freeglut3-dev installed and can link to those libraries.
timotheeg/NESTrisStatsUI | 660460129 | Title: Stop Rendering board when game is over
Question:
username_0: It currently gives a disco effect (because the moving rocket is OCRed to the nearest color based on the level), but it's not particularly great or useful, and in fact it confuses viewers:
 |
monarch-initiative/mondo | 859162504 | Title: Orphanet:182067 move to Glioma
Question:
username_0: **Mondo term (ID and Label):**
MONDO:0015917 malignant glioma
**Xref that should be fixed (ID and label):**
Orphanet:182067 (MONDO:equivalentTo)
https://www.orpha.net/consor/cgi-bin/OC_Exp.php?Expert=182067 Orphanet maps this to the Glioma term (based on the MeSH and GARD IDs and CUI C0017638). Move the Orphanet ID to MONDO:0021042 "glioma".
**Your nano-attribution (ORCID)**
If you don't have an ORCID, you can sign up for one [here](https://orcid.org/)
**Other comments:**
Status: Issue closed |
ebimodeling/ghgvcR | 108432415 | Title: includeANTH
Question:
username_0: In [ghgvc.R:26](https://github.com/ebimodeling/ghgvcR/blob/master/R/ghgvc.R), we have:
```r
26: includeANTH = 1
...
120: if (includeANTH == 0)
121:   F <- c(ecosystem[['F_CO2']] + ecosystem[['F_anth']], ecosystem[['F_CH4']], ecosystem[['F_N2O']])
122: else if (includeANTH == 1)
123:   F <- c(ecosystem[['F_CO2']] + ecosystem[['F_anth']], ecosystem[['F_CH4']], ecosystem[['F_N2O']])
```
There is no difference between the two calculations in `121` and `123`, and it is hardcoded to 1, so I wonder if there is an error in `121` or `123`?
Answers:
username_1: Error in 121. Please remove '+ ecosystem[['F_anth']].
Thanks for catching this.
username_0: Fixed, thanks for your quick response. One more question on this. `includeANTH` is hard coded to 1 in line 28. Should it be taken from the options in some way instead?
username_1: Yes, it should be included in the options under "Settings"-- see issue #12
username_0: Thanks, I'll double check that it takes the value from settings before closing this issue.
username_0: Just tidying up. This now defaults to 1, but takes the value from the settings if it is not null.
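For readers following along, a hedged sketch of the corrected logic combined with a settings-driven flag (the `options$includeANTH` accessor is illustrative, not the actual ghgvcR structure):
```r
# Hedged sketch; the settings accessor name is an assumption.
includeANTH <- if (!is.null(options$includeANTH)) options$includeANTH else 1

if (includeANTH == 0) {
  # the fix to line 121: anthropogenic flux excluded
  F <- c(ecosystem[['F_CO2']], ecosystem[['F_CH4']], ecosystem[['F_N2O']])
} else {
  # line 123 unchanged: anthropogenic flux included
  F <- c(ecosystem[['F_CO2']] + ecosystem[['F_anth']], ecosystem[['F_CH4']], ecosystem[['F_N2O']])
}
```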
Status: Issue closed
|
kleros/escrow-react | 740941351 | Title: Generate MetaEvidence and Add to IPFS
Question:
username_0: Upon submission of the transaction MetaEvidence should be generated and included in the transaction. Can use same MetaEvidence as is used with escrow.
Evidence display interface can be found here: https://github.com/kleros/escrow-evidence-display
https://github.com/kleros/escrow-react/blob/4394ff4d32a68d5f9808d9c127334c16dfd611f7/src/index.tsx#L71 |
USAID-OHA-SI/TeamTracking | 407669163 | Title: KP Target Setting Worksheet for West Africa Region
Question:
username_0: Many of the F-Op teams (new to PEPFAR) were having trouble setting and informing KP testing and prevention targets. Using the ECT IV KP target setting guidance to build a rough worksheet for country teams to create targets.
Created two options to account for country teams that don't have much program data and will need to use their national estimates.
Will attach the excel sheet once completed.
Answers:
username_0: Google [link](https://docs.google.com/spreadsheets/d/1VGN_vLtFuMpc9IEoFH2Nqn1QwNVJnxjVfmfCRrXvchA/edit?usp=sharing) to the draft KP Target Setting worksheet. I haven't found anything on how to set targets for KP index testing and positivity. <NAME> reminded me that for the General Population we estimate 1.5 tests, with yield ranging between 20%-50% depending on ART coverage. I haven't found anything on the number of tests to estimate for FSW, MSM, TG, PWIDs, and Prisoners. I'll keep an eye out.
Status: Issue closed
|
projectdiscovery/nuclei | 611461701 | Title: Accep-Encoding decompression fails
Question:
username_0: By default Nuclei adds the header `Accept-Encoding: gzip`, and it works fine as long as the user doesn't add the `Accept-Encoding: gzip` header manually to the template.
To reproduce the bug, you only have to add that header to a template, and you will see that the matchers seem to stop working.
I can confirm that adding `Accept-Encoding: deflate` works fine: I can add this header and the response body is decompressed correctly.
And finally, I already expected that the tool doesn't support Brotli compression; that's why the header `Accept-Encoding: br` doesn't work on websites that have Brotli compression enabled.
Answers:
username_0: Hello,
Is there progress with this bug?
username_1: Watch out for https://github.com/projectdiscovery/nuclei/pull/76, this should be handled in the next major update.
username_2: Hi @username_0 - Good catch! I think this is caused by the standard behavior of golang net/http library. The transport layer has a flag called `DisableCompression` which is false by default:
```
// DisableCompression, if true, prevents the Transport from
// requesting compression with an "Accept-Encoding: gzip"
// request header when the Request contains no existing
// Accept-Encoding value. If the Transport requests gzip on
// its own and gets a gzipped response, it's transparently
// decoded in the Response.Body. However, if the user
// explicitly requested gzip it is not automatically
// uncompressed.
```
Seems like we need to add automatic decompression in case the user manually provides an encoding.
Brotli compression doesn't appear to be so widespread — did you encounter cases where such encoding would have been necessary?
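For illustration, a hedged Go sketch of that manual-decompression idea (not nuclei code; function and variable names are placeholders). When the caller sets `Accept-Encoding` itself, net/http skips transparent decoding, so the body must be unwrapped by hand:
```go
package main

import (
	"compress/flate"
	"compress/gzip"
	"fmt"
	"io"
	"net/http"
)

// bodyReader picks a decompressing reader based on Content-Encoding.
func bodyReader(resp *http.Response) (io.ReadCloser, error) {
	switch resp.Header.Get("Content-Encoding") {
	case "gzip":
		return gzip.NewReader(resp.Body)
	case "deflate":
		return flate.NewReader(resp.Body), nil
	default:
		return resp.Body, nil
	}
}

func main() {
	req, _ := http.NewRequest("GET", "https://example.com", nil)
	// user-supplied header: disables net/http's transparent gzip decoding
	req.Header.Set("Accept-Encoding", "gzip")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	r, err := bodyReader(resp)
	if err != nil {
		panic(err)
	}
	defer r.Close()
	body, _ := io.ReadAll(r)
	fmt.Println(len(body))
}
```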
username_0: @username_2 Honestly, I never saw any website forcing Brotli or a case that requires Brotli to be enabled. From my point of view, handling gzip and deflate correctly is all we need.
Glad to see that you located the issue :+1:
username_2: Implemented in #76 - Closing the issue
Status: Issue closed
|
Java-Bom/ReadingRecord | 558634525 | Title: [Item 13] Mixin interfaces
Question:
username_0: What does "mixin interface" mean, and why is the Cloneable interface a mixin interface? I looked it up but I still don't clearly understand ㅠㅠ
Answers:
username_1: A mixin is a type that a class can implement in addition to its primary functionality; it's used to declare that the class provides some optional capability. For example, a class that implements Comparable declares that its instances can be compared with other objects, and likewise a class that implements Cloneable signals that it can use the clone() method, right?
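A hedged illustration of that idea (the class and field names are made up for the example): the class's primary role is "song", and the two interfaces mix in optional capabilities on top of it.
```java
// Illustrative only: Comparable and Cloneable act as "mixin" types.
public class Song implements Comparable<Song>, Cloneable {
    private final String title;

    public Song(String title) {
        this.title = title;
    }

    @Override
    public int compareTo(Song other) { // "comparable to other Songs" capability
        return title.compareTo(other.title);
    }

    @Override
    public Song clone() { // "cloneable" capability, advertised by Cloneable
        try {
            return (Song) super.clone();
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e); // cannot happen: we implement Cloneable
        }
    }
}
```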
Status: Issue closed
|
InteriAR/master | 527797071 | Title: integrate Wayfair API
Question:
username_0: - [ ] test the GET routes of the wayfair models
- [ ] get the app to display specific models/categories
- [ ] tapping on a model will "select" it for use with AR
- [ ] download the model through the app and store it somewhere
- [ ] user can use AR functionality with the model they selected |
tree-sitter/tree-sitter | 374784437 | Title: Discovered seed with parser failures
Question:
username_0: ```
$ git rev-parse HEAD
8f526e6c981360e1d583cbea51b76eb0451c5d6f
$ ./script/test -s 1540750569
...
Random seed: 1540750569
Executed 124 tests.
Regenerating the javascript parser...
Executed 125 tests.
Regenerating the python parser...
Executed 11920 tests.
Regenerating the json parser...
Executed 12770 tests.
Regenerating the html parser...
Executed 14638 tests.
Regenerating the c parser...
Executed 21524 tests. 21523 succeeded. 1 failed.
Regenerating the cpp parser...
Executed 35343 tests. 35342 succeeded. 1 failed.
Regenerating the bash parser...
Executed 41906 tests. 41905 succeeded. 1 failed.
There were failures!
the c language parses function calls vs parenthesized declarators vs macro types: repairing an insertion of "[" at 11:
test/integration/real_grammars.cc:112: Expected: equal to (translation_unit (function_definition (primitive_type) (function_declarator (identifier) (parameter_list)) (compound_statement (comment) (expression_statement (call_expression (identifier) (argument_list (identifier)))) (comment) (declaration (type_identifier) (identifier)))))
Actual: (translation_unit (function_definition (primitive_type) (function_declarator (identifier) (parameter_list)) (compound_statement (comment) (macro_type_specifier (identifier) (type_descriptor (type_identifier))) (comment) (declaration (type_identifier) (identifier)))))
Test run complete. 41906 tests run. 41905 succeeded. 1 failed.
```
Answers:
username_0: The [failing C test](https://github.com/tree-sitter/tree-sitter-c/blob/62c9f7e1648feb1c071a7d226c910053c958372c/corpus/ambiguities.txt#L82-L111) is in `./corpus/ambiguities.txt`. I'm unsure about the `cpp` and `bash` failures.
username_1: Thanks for reporting this! I think that no bash or cpp tests actually failed, the "1 failure" was just being re-printed out after the "regenerating..." message.
Status: Issue closed
username_1: Closing this out because the seed won't work with the current test suite, and a bunch of stuff has changed. If we find the corresponding test failing again, we can add the new seed to https://github.com/tree-sitter/tree-sitter/issues/18. Thanks! |
deeplearning4j/deeplearning4j | 170456047 | Title: Asynchronous Stochastic Gradient Descent MultiLayerNetwork and ComputationGraph
Question:
username_0: Enable async gradient descent on MultiLayerNetwork and ComputationGraph.
See:
[paper 1](https://arxiv.org/abs/1505.04956)
[Hogwild](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=3&cad=rja&uact=8&ved=0ahUKEwjboLionbfOAhVJKGMKHeQhB0MQFggyMAI&url=https%3A%2F%2Fwww.eecs.berkeley.edu%2F~brecht%2Fpapers%2FhogwildTR.pdf&usg=AFQjCNE9XrK7aEEQxMC2XkWxXrfyC90y2A&sig2=E-tEQOSf3RoTNbGbfK2uYQ)
[paper 3](https://papers.nips.cc/paper/5751-asynchronous-parallel-stochastic-gradient-for-nonconvex-optimization.pdf)
Answers:
username_1: This is handled by the new parameter server. I am consolidating this topic under that.
Status: Issue closed
|
crowdbotics-apps/square-sun-26805 | 893502635 | Title: Comment Moderation
Question:
username_0: This feature allows the post author or other admin to prevent comments from appearing on the post without their express approval. This feature is useful in addressing and moderating comment spam. Each comment generally has a clickable settings option that is viewable only to the author of the post. This clickable view has further options such as blocking, reporting, or deleting the comment.
User Stories
As a post author, I would like to moderate each individual comment that is posted on my post. |
kubernetes/release | 202692094 | Title: support reading binary output from bazel-bin/
Question:
username_0: Currently we assume that release tars are under `$KUBE_ROOT/_output/release-tars`, binaries are under `$KUBE_ROOT/_output/release-stage`, and that we can locally stage things to `$KUBE_ROOT/_output/gcs-stage`.
With Bazel builds, the release tars are under `$KUBE_ROOT/bazel-bin/build/release-tars`, the binaries are under `$KUBE_ROOT/bazel-bin/...` (not one central location), and it's not clear where `gcs-stage` should go.
I'm not sure whether we should try to emulate the non-bazel directory structure for bazel-built binaries (copy/symlink everything into `_output/bazel` or equivalent), update `lib/releaselib.sh` to support bazel paths, or something else entirely.
It's also not clear how bazel crossbuilds might work, and where their output would end up.
Answers:
username_0: cc @spxtr @username_1
username_1: done?
username_0: nope, not yet. `push-build.sh` still won't handle bazel-built artifacts correctly.
Status: Issue closed
username_0: This is fixed. |
atomist/sdm-pack-checkstyle | 465885145 | Title: Code Inspection: npm audit on atomist-update-latest-20190709163756
Question:
username_0: ### marked:>=0.3.14 <0.6.2
- _(warn)_ [Regular Expression Denial of Service](https://npmjs.com/advisories/812) _Upgrade to version 0.6.2 or later._
- `marked:0.4.0`:
- `typedoc>marked`
[atomist:code-inspection:atomist-update-latest-20190709163756=@atomist/atomist-sdm]
Answers:
username_0: Issue closed because branch `atomist-update-latest-20190709163756` was deleted.
Status: Issue closed
|
ikedaosushi/tech-news | 713324121 | Title: TSE's press conference draws praise: "an executive who understands the technology", "clear and logical answers" — incidentally, CIO Yokoyama comes from a university rakugo club - Togetter
Question:
username_0: TSE's press conference draws a wave of praise: "an executive who understands the technology," "clear, logical answers." Incidentally, CIO Yokoyama comes from a university rakugo club. - Togetter

https://ift.tt/2EQLoS1
Apicurio/apicurio-studio | 1013726452 | Title: Issue on mocking with Microcks
Question:
username_0: After updating to version 0.2.50.Final, the Mock feature stopped working. An error message is shown on screen and the following message appears in the apicurio-studio-api logs:
```
2021-09-30 19:30:26,175 ERROR [io.undertow.request] (default task-93) UT005023: Exception handling request to /designs/33/mocks: org.jboss.resteasy.spi.UnhandledException: java.lang.RuntimeException: com.fasterxml.jackson.core.JsonParseException: Unexpected character ('-' (code 45)) in numeric value: expected digit (0-9) to follow minus sign, for valid numeric value
 at [Source: (String)"---
openapi: 3.0.2
info:
  title: fis-api-sample
  version: "1"
```
It seems to be trying to parse the YAML as if it were JSON.
Answers:
username_1: I am having the same issue :(
username_2: Can anyone provide more information on this, like a full stack trace for example? I am having trouble reproducing the problem locally.
username_2: Release is done. I'm in the process of upgrading https://studio.apicur.io/ Should be done today barring any unexpected problems.
username_0: Thank you @username_2 ! |
GoogleChrome/lighthouse | 568987250 | Title: DevTools Error: PROTOCOL_TIMEOUT
Question:
username_0: **Initial URL**: https://summarizer.legalmind.tech/
**Chrome Version**: 73.0.3683.103
**Error Message**: PROTOCOL_TIMEOUT
**Stack Trace**:
```
LHError: PROTOCOL_TIMEOUT
at eval (chrome-devtools://devtools/remote/serve_file/@e82a658d8159cabbd4938c1660f9bb00b4a82a23/audits2_worker/audits2_worker_module.js:1027:210)
``` |
chrum/ngx-autosize | 573245696 | Title: Line height / row size differs between browsers and is not perfect
Question:
username_0: @username_1 hey
did you try setting native textarea prop 'rows' to 1 ?
Answers:
username_1: Hi! Any update on this? In our app we have a lot of textareas with 1 row and the layout looks very weird.
Thanks!
username_0: @username_1 hey
did you try setting native textarea prop 'rows' to 1 ?
username_1: Thanks for your fast reply!
Yes, I already tried, but no luck. For instance, with 0.8rem font-size and 1rem line-height, the height in Chrome is 33px while in Firefox it is 39px.


I know it's a small difference, but we have some forms where fields on the same row are not aligned in Firefox because of this.
username_1: Ok I got it
The problem is that in FF if a textarea has only one row, then setting overflow:'auto' or overflow:'hidden' will result in two different heights (this doesn't happen in Chrome)
In the code there's a point (line 168) where you set the overflow of the textarea to 'hidden', but you already calculated the height using the clone that had overflow: 'auto'.
So, a simple fix is to set also overflow of clone to 'hidden' and then recalculate computed height of clone.
If it's something that you can do (it's only few lines), or I can open a PR, as you prefer!
Thanks
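For illustration, a hedged TypeScript sketch of that fix idea — the names are illustrative, not the actual ngx-autosize internals. The key point is measuring the clone with the overflow the real textarea will end up with ('hidden'), not 'auto':
```typescript
// Hedged sketch: measure a hidden clone whose overflow matches the final textarea.
function measureHeight(textarea: HTMLTextAreaElement): number {
  const clone = textarea.cloneNode() as HTMLTextAreaElement;
  clone.style.visibility = 'hidden';
  clone.style.position = 'absolute';
  clone.style.overflow = 'hidden'; // key change: match the final overflow, not 'auto'
  clone.style.height = 'auto';
  clone.value = textarea.value;
  textarea.parentNode!.appendChild(clone);
  const height = clone.scrollHeight; // read the height under the final overflow
  clone.remove();
  return height;
}
```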
username_0: hey @username_1
awesome, i will check your finding shortly :)
username_0: shiiiiit :D its dead obvious that clone should have overflow: hidden
username_0: I just noticed that 'useImportant' is broken... are you in a super hurry? I will deal with that one too and then make a build... what do you say?
username_0: ok check @next channel
`npm install ngx-autosize@next`
username_1: Just tried, works like a charm! Perfect!
We'll use the 'next' channel until 1.8.0 is released — thanks for your fast help!
Status: Issue closed
|
cloudant/python-cloudant | 174345412 | Title: add asynchronous mode
Question:
username_0: The python-cloudant client (version 0.5.10) was providing async mode for database operations. However, in the 2.x.x version async mode is not supported.
Last week, we experienced long delays for put/get operations (ranging from 72 seconds to 1 hour) using the Python client v0.5.10 without async mode. The support team recommended that we move to version 2.x.x. To prevent similar problems from happening, we ask for async mode to be added. For details see Case #72985.
PS: We also look forward to contributing to this repo once we have finished moving to the new client version.
Thanks
Answers:
username_1: Recent versions of the Requests library (which provides the HTTP for python-cloudant) do not support [non-blocking behaviour](http://docs.python-requests.org/en/latest/user/advanced/#blocking-or-non-blocking). So it looks like adding async support would require using one of the suggested projects that combine Requests with an async framework.
username_2: Is there any plan to implement this feature, for example with the excellent [requests-futures](https://github.com/ross/requests-futures)?
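For readers unfamiliar with it, a hedged sketch of the requests-futures style being proposed (the URL and credentials are placeholders; this is not python-cloudant API):
```python
from requests_futures.sessions import FuturesSession

session = FuturesSession()

# fire several reads concurrently against the Cloudant HTTP API
futures = [
    session.get(f"https://account.cloudant.com/mydb/doc{i}", auth=("user", "pass"))
    for i in range(3)
]

# .result() blocks only when each response is actually needed
for future in futures:
    print(future.result().status_code)
```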
username_3: Hi @username_2, we have no immediate plans to implement this enhancement.
username_2: Right, I just realized that this issue was put into the icebox :smiley: Thank you for letting me know.
username_1: We won't add this feature here.
Our new [cloudant-python-sdk](https://github.com/IBM/cloudant-python-sdk/)(beta) doesn't yet have async either. However, it is built on the https://github.com/IBM/python-sdk-core and we hope to get asynchronous operation via that eventually.
Status: Issue closed
|
jlippold/tweakCompatible | 619782217 | Title: `StickyNote` working on iOS 13.3
Question:
username_0: ```
{
"packageId": "com.twickd.gabriel-siu.stickynote",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.twickd.gabriel-siu.stickynote",
"deviceId": "iPhone8,2",
"url": "http://cydia.saurik.com/package/com.twickd.gabriel-siu.stickynote/",
"iOSVersion": "13.3",
"packageVersionIndexed": false,
"packageName": "StickyNote",
"category": "Tweaks",
"repository": "Twickd",
"name": "StickyNote",
"installed": "1.0.0-5+debug",
"packageIndexed": false,
"packageStatusExplaination": "This tweak has not been reviewed. Please submit a review if you choose to install.",
"id": "com.twickd.gabriel-siu.stickynote",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.5",
"shortDescription": "Add a handy sticky note to your device's lock screen",
"latest": "1.0.0-5+debug",
"author": "<NAME>",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": "Thanks 🙏 "
}
```
Status: Issue closed |
vimeo/psalm | 1091062999 | Title: Missing support for templates resolution in closures/callables
Question:
username_0: * See https://psalm.dev/r/0df3da05ef
* Additional context : https://github.com/vimeo/psalm/discussions/7212
The templates on closures and callables are not resolved into the actual provided types when a `FuncCall` is performed on it.
This results in the following issues:
Output:
```
INFO: Trace - 9:32 - $stages: pure-Closure(Closure(T:fn-a::pipe as mixed):T:fn-a::pipe as mixed...):Closure(T:fn-a::pipe as mixed):T:fn-a::pipe as mixed
ERROR: InvalidArgument - 10:21 - Argument 1 expects Closure(T:fn-a::pipe as mixed):T:fn-a::pipe as mixed, "hello" provided
INFO: Trace - 11:30 - $_res: Closure(T:fn-a::pipe as mixed):T:fn-a::pipe as mixed
```
Example code:
```php
<?php
/**
* @psalm-suppress ForbiddenCode
*/
function test(): void
{
$a = [A::class , 'pipe'];
$stages = Closure::fromCallable($a);
/** @psalm-trace $stages */;
$_res = $stages('hello');
/** @psalm-trace $_res */;
}
class A
{
/**
* @template T
*
* @param Closure(T): T ...$stages
*
* @return Closure(T): T
*
* @pure
*/
function pipe(...$stages): callable
{
return array_pop($stages);
}
}
test();
```
Answers:
username_1: I think my code example with callable was wrong, here's the fix: https://psalm.dev/r/75a4ef108e
But the conclusion is the same...
username_0: @username_1
This problem also applies to first-class callable syntax, which is a bit annoying, since you'd expect it to resolve the template types.
Is there an entry point in the code I can look at to check whether I can create a fix for this issue?
Or would it be quite complex to implement?
```php
<?php
/**
* @template T
* @param T $i
* @return T
*/
function debug(mixed $i): mixed {
return $i;
}
$x = debug('hello');
$y = debug(...)($x);
/** @psalm-trace $y */
```
username_1: I'm not sure :(
But I'd guess the first thing to do would be to make sure templates are kept through closures:
https://psalm.dev/r/01e2acbe39
This should not be `mixed` if we want this to work.
username_0: @username_2: would it (theoretically) be possible to apply the higher-order logic to first-class callables as well?
It kind of behaves like a higher-order function at the moment:
```php
list_filter(...) -> Closure(list<T>, callable (T): bool): list<T>
```
If we were able to dynamically pass the generics from the underlying type, it could fix template resolution in first-class callables.
Additional possible case that might need a fix:
As described in #7471, there is also a case for nesting variadic generic functions:
```php
pipe(
$numbers,
partial_left(custom_array_map(...), fn($i) => $i + 1)
)
```
where `pipe`, `partial_left`, and `custom_array_map` are all functions with variadic generic templates.
How can psalm, in this case, map the result of custom_array_map(...) to a dynamic function signature?
username_2: @username_0
Check examples for `partialRight` and `partialLeft`:
https://github.com/username_2/psalm/blob/partiall-application-example/partial-left-example.php
https://github.com/username_2/psalm/blob/partiall-application-example/partial-right-example.php
It works with first-class-callable syntax, but the plugin for partial uses a lot of internal API:
https://github.com/username_2/psalm/blob/partiall-application-example/tests/Config/Plugin/Hook/PartialFunctionStoragePlugin.php
In my case, I only care about the name of the function that will be partially applied.
For this reason the first-class-callable syntax works fine — I just catch the function name from `foo(...)`.
I would even start using this plugin, but relying on Psalm's internal API will not sit well with some people.
Any ideas on how to make the template-related API public?
Or do you have radically different ideas for the plugin implementation?
P.S. I would postpone difficult cases, like `custom_array_map`.
username_0: @username_2,
Quickly scrolled through the code.
Since I don't understand everything in the plugin yet, I will go into the details sometime next week to see if I can give a point of view on a public API.
It looks nice, but my main concern is this:
Right now, your plugin deals with first-class callables placed in the first argument.
Depending on the partial function, it might take multiple first-class callables.
E.g.:
```php
partialLeft(mapMany(...), map1(...), map2(...), map3(...))
```
So my general feeling here is that psalm should provide a system that deals with templates in first-class callable syntax by itself; otherwise you'll always have to take care of this in your plugin. I currently don't have a clue how this should work.
Would it be possible for psalm's internal analyzers to take a similar approach and dynamically fetch and pass the storage to the first-class-called function?
Alternatively, there could be a helper function for these kinds of actions in plugins, but that would most likely be a "hacky" solution.
github-vet/rangeloop-pointer-findings | 771463054 | Title: IBM/go-with-wakeup-profile: src/cmd/vet/testdata/rangeloop.go; 5 LoC
Question:
username_0: [Click here to see the code in its original context.](https://github.com/IBM/go-with-wakeup-profile/blob/148bfb5744d2aa76bcf0eb65c09f3c48e66cfe12/src/cmd/vet/testdata/rangeloop.go#L28-L32)
<details>
<summary>Click here to show the 5 line(s) of Go which triggered the analyzer.</summary>
```go
for _, v := range s {
go func() {
println(v) // ERROR "loop variable v captured by func literal"
}()
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 148bfb5744d2aa76bcf0eb65c09f3c48e66cfe12 |
DeployGate/deploygate-issues | 65643550 | Title: Allow setting a team when uploading an app to a group
Question:
username_0: After uploading a new app to a group, no team is associated with it yet, and it's painful that an administrator has to assign one manually from the web UI every time afterwards.
It's even more painful when there is a setup that automatically builds a separate app per branch.
I'd like to be able to assign a team at upload time.
* Separately: "we want an API to manage teams" and "we want an API to assign apps to teams" will be tracked in a different issue
JetBrains/ideolog | 296261114 | Title: What are Patterns for?
Question:
username_0: Thanks for the plugin.
Bit confused about the requirement for Patterns - the examples quote matching on severity e.g. info, etc.. Do I need these if I have a capture group for Severity?
Here's a sample of my log:
```bash
[SOME_USER] 16:45:52.222 INFO VehicleController - Vehicle search found 1 results
```
Here's my "Message pattern" for my Log Format:
```bash
^\[([A-Z_]+)\]\s([0-9:\.]+)\s([[:word:]]+)[[:space:]]+([[:word:]]+)(.*)$
```
Why is this stuff not on the wiki or should I be able to figure this out (self documenting?!).
Many thanks
Answers:
username_1: The "Patterns" part of configuration is for setting up highlighting. Probably we should rename them to "Highlighting patterns" or something.
The three bundled patterns highlight lines with severities of Error, Warning and Info. You can also add your own highlighting patterns here, for example to highlights all log messages from VehicleController.
These operate after log message was parsed using your Log Format. Patterns are applied to each field (specified by capture groups), and perform their corresponding action if there is a match.
I've create a [wiki page](https://github.com/JetBrains/ideolog/wiki/Highlighting-Patterns) documenting this part.
Hope it helps!
As a side note, I haven't seen [[:word:]] and [[:space:]] character classes in official Java documentation, and they don't seem to work on some random online regex validator. Consider replacing them with \w and \s respectively. Other than that, it looks just fine.
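With those substitutions applied, the pattern would read as follows (an illustrative, untested rewrite of the pattern above):
```bash
^\[([A-Z_]+)\]\s([0-9:.]+)\s(\w+)\s+(\w+)(.*)$
```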
Status: Issue closed
username_0: Thanks guys - got it working now. Good job on the quick response + wiki page. |
FGRibreau/mailchecker | 118971400 | Title: Some other domains are missing
Question:
username_0: Hello,
I found that some other email domains related to "@kmhow.com" are missing from the list:
- @pooae.com
- @foxja.com
- @kloap.com
You can find it on https://10minutemail.net/history.html
Cheers :beer:
Answers:
username_1: @username_0 will happily accept a PR for this :)
username_0: #44
@username_1 Thanks~ :)
Status: Issue closed
|
doctrine/DoctrineORMModule | 61052012 | Title: zf2, doctrine 2 redis cache
Question:
username_0: Hello,
We have a big project on ZF2 with Doctrine 2 integrated.
Now we have two problems:
1. We changed the cache adapter from filesystem to Redis and the project became slower — why? The Redis cache should be faster than the filesystem cache!
2. We cache only metadata and queries, and Doctrine uses the entire Redis memory limit. Our server's memory limit is 3 GB, which is far too much for metadata and query caches. I think something is not working properly. Can you help us?
Answers:
username_1: Hi @username_0, I'm closing this issue as it doesn't look like an issue with the repository, rather than a tech support request: this is where you usually hire someone to get support on your own codebase instead.
Status: Issue closed
|
fact-project/shifthelper | 191854377 | Title: 3 times: DriveInErrorDuringDataRun - ERROR
Question:
username_0: * 2016-11-26 19:09:30
* 2016-11-26 19:19:31
* 2016-11-26 21:35:20
traceback:
```
Exception while running check. Traceback:
Traceback (most recent call last):
  File "/opt/conda/lib/python3.5/site-packages/custos/checks/__init__.py", line 83, in wrapped_check
    self.check(*args, **kwargs)
  File "/opt/conda/lib/python3.5/site-packages/shifthelper/checks.py", line 41, in check
    if all([f() for f in self.checklist]):
  File "/opt/conda/lib/python3.5/site-packages/shifthelper/checks.py", line 41, in <listcomp>
    if all([f() for f in self.checklist]):
  File "/opt/conda/lib/python3.5/site-packages/wrapt/wrappers.py", line 522, in __call__
    args, kwargs)
  File "/opt/conda/lib/python3.5/site-packages/shifthelper/debug_log_wrapper.py", line 9, in log_call_and_result
    result = wrapped(*args, **kwargs)
  File "/opt/conda/lib/python3.5/site-packages/shifthelper/conditions.py", line 95, in is_data_run
    sfc.main_page().system_status
AttributeError: 'NoneType' object has no attribute 'groups'
```
Answers:
username_1: This is actually the same as #157 and i opened the issue here: https://github.com/fact-project/smart_fact_crawler/issues/18
username_0: closed as a dupe of #157
Status: Issue closed
|
atom/apm | 58052394 | Title: Cannot install a package on Windows 8.1
Question:
username_0: Hello,
I am having problems with installing anything on my windows 8.1 box with apm.
My setup: Python 2.6 (but also tried with 2.7), git, Visual Studio 2013. I tried installing with
a) setting switch --msvs_version=2013
b) setting variable to set VCTargetsPath=C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V120
Please, see the output:
```
C:\>apm install api-blueprint-preview
[email protected] install C:\Users\MM~1\AppData\Local\Temp\apm-install-dir-11511
8-12708-1q9i926\node_modules\api-blueprint-preview\node_modules\pathwatcher\node
_modules\runas
node-gyp rebuild
C:\Users\MM~1\AppData\Local\Temp\apm-install-dir-115118-12708-1q9i926\node_m
odules\api-blueprint-preview\node_modules\pathwatcher\node_modules\runas>node "C
:\Users\mm\AppData\Local\atom\app-0.179.0\resources\app\apm\node_modules\
npm\bin\node-gyp-bin\\..\..\node_modules\node-gyp\bin\node-gyp.js" rebuild
Building the projects in this solution one at a time. To enable parallel build,
please add the "/m" switch.
main.cc
runas_win.cc
C:\Users\mm\AppData\Local\Temp\apm-install-dir-115118-12708-1q9i926\node_
modules\api-blueprint-preview\node_modules\pathwatcher\node_modules\nan\nan.h(62
3): error C2039: 'ExternalAsciiStringResource' : is not a member of 'v8::String'
(..\src\main.cc) [C:\Users\mm\AppData\Local\Temp\apm-install-dir-115118-
12708-1q9i926\node_modules\api-blueprint-preview\node_modules\pathwatcher\node_m
odules\runas\build\runas.vcxproj]
C:\Users\mm\.atom\.node-gyp\.node-gyp\0.21.0\deps\v8\include\v8
.h(1809) : see declaration of 'v8::String'
C:\Users\mm\AppData\Local\Temp\apm-install-dir-115118-12708-1q9i926\node_
modules\api-blueprint-preview\node_modules\pathwatcher\node_modules\nan\nan.h(62
3): error C2065: 'ExternalAsciiStringResource' : undeclared identifier (..\src\m
ain.cc) [C:\Users\mm\AppData\Local\Temp\apm-install-dir-115118-12708-1q9i
926\node_modules\api-blueprint-preview\node_modules\pathwatcher\node_modules\run
as\build\runas.vcxproj]
C:\Users\mm\AppData\Local\Temp\apm-install-dir-115118-12708-1q9i926\node_
modules\api-blueprint-preview\node_modules\pathwatcher\node_modules\nan\nan.h(62
3): error C2065: 'resource' : undeclared identifier (..\src\main.cc) [C:\Users\m
makowski\AppData\Local\Temp\apm-install-dir-115118-12708-1q9i926\node_modules\ap
i-blueprint-preview\node_modules\pathwatcher\node_modules\runas\build\runas.vcxp
roj]
C:\Users\mm\AppData\Local\Temp\apm-install-dir-115118-12708-1q9i926\node_
modules\api-blueprint-preview\node_modules\pathwatcher\node_modules\nan\nan.h(62
3): error C2448: 'NanNew' : function-style initializer appears to be a function
definition (..\src\main.cc) [C:\Users\mm\AppData\Local\Temp\apm-install-d
ir-115118-12708-1q9i926\node_modules\api-blueprint-preview\node_modules\pathwatc
her\node_modules\runas\build\runas.vcxproj]
C:\Users\mm\AppData\Local\Temp\apm-install-dir-115118-12708-1q9i926\node_
modules\api-blueprint-preview\node_modules\pathwatcher\node_modules\nan\nan.h(67
2): warning C4244: 'return' : conversion from 'int64_t' to 'int', possible loss
of data (..\src\main.cc) [C:\Users\mm\AppData\Local\Temp\apm-install-dir-
115118-12708-1q9i926\node_modules\api-blueprint-preview\node_modules\pathwatcher
\node_modules\runas\build\runas.vcxproj]
C:\Users\mm\AppData\Local\Temp\apm-install-dir-115118-12708-1q9i926\node_
[Truncated]
.179.0\\resources\\app\\apm\\node_modules\\npm\\bin\\npm-cli.js" "--globalconfig
" "C:\\Users\\mm\\.atom\\.apm\\.apmrc" "--userconfig" "C:\\Users\\mmakows
ki\\.atom\\.apmrc" "install" "C:\\Users\\MM~1\\AppData\\Local\\Temp\\d-11511
8-12708-hu4qvd\\package.tgz" "--target=0.21.0" "--arch=ia32" "--msvs_version=201
3"
npm ERR! node v0.10.35
npm ERR! npm v2.3.0
npm ERR! code ELIFECYCLE
npm ERR! [email protected] install: `node-gyp rebuild`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] install script 'node-gyp rebuild'.
npm ERR! This is most likely a problem with the runas package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! node-gyp rebuild
npm ERR! You can get their info via:
npm ERR! npm owner ls runas
npm ERR! There is likely additional logging output above.
```
Answers:
username_1: This is an issue in that package, it needs to upgrade this line to be `^3.3.1` instead of `~2.0.7`.
https://github.com/danielgtaylor/atom-api-blueprint-preview/blob/9e3948f5a58fe571aa7fb0eddd99d5a0502e2126/package.json#L13
I would recommend opening an issue or pull request on that package's repo: https://github.com/danielgtaylor/atom-api-blueprint-preview/issues/new
Status: Issue closed
username_0: Please, reopen.
Upgrading to ^3.3.1 did not help
DavBfr/dart_pdf | 782501556 | Title: Migrate to sound null safety
Question:
username_0: ### **Description**
- Migrate to sound null safety
- Bump up package version
Answers:
username_1: That's planned, as soon as the dependent packages `image` and `archive` are sound null safe.
```
Showing dependencies that are currently not opted in to null-safety.
[✗] indicates versions without null safety support.
[✓] indicates versions opting in to null safety.
Package Name Current Upgradable Resolvable Latest
direct dependencies:
archive ✗2.0.13 ✗2.0.13 - ✗2.0.13
barcode ✗1.17.1 ✗1.17.1 - ✓2.0.0-nullsafety
crypto ✗2.1.5 ✗2.1.5 - ✓3.0.0-nullsafety.0
image ✗2.1.19 ✗2.1.19 - ✗2.1.19
meta ✗1.2.4 ✗1.2.4 - ✓1.3.0-nullsafety.6
path_parsing ✗0.1.4 ✗0.1.4 - ✓0.2.0-nullsafety.0
vector_math ✗2.0.8 ✗2.0.8 - ✓2.1.0-nullsafety.5
xml ✗4.5.1 ✗4.5.1 - ✓5.0.0-nullsafety.1
dev_dependencies:
pedantic ✗1.9.2 ✗1.9.2 - ✓1.10.0-nullsafety.3
test ✗1.15.7 ✗1.15.7 - ✓1.16.0-nullsafety.13
You are already using the newest resolvable versions listed in the 'Resolvable' column.
Newer versions, listed in 'Latest', may not be mutually compatible.
```
username_2: `image` and `archive` now have null-safety packages available:
https://pub.dev/packages/image/versions/3.0.0-nullsafety.0
https://pub.dev/packages/archive/versions/3.0.0-nullsafety.0
Would be great, if this package also supports sound null-safety
username_1: Excellent, I'll work on that.
username_1: Migrated to null-safety. Let me know if you encounter any issues.
username_2: That's great, thank you. We'll test right away.
username_3: How can I help test this? I'm getting this error: `Because archive >=3.0.0 depends on crypto ^3.0.0 and pdf >=1.3.14 <2.1.0 depends on crypto ^2.0.6, archive >=3.0.0 is incompatible with pdf >=1.3.14 <2.1.0.` and I'm just assuming it should be fixed with the migration?
username_3: Nevermind, misread the error where it says `<2.1.0` not `<=`
Using 2.1.0 fixed it. Thank you
Status: Issue closed
|
webpack/enhanced-resolve | 245007350 | Title: Error when using webpack in watch mode
Question:
username_0: I have updated to the latest release of this library (3.4.0), and that caused the `watch` script I've added to `package.json` (actually it is just `webpack -d -w`) to break. It breaks with this error message:
```
/Users/daniel.rotter/Development/massiveart/sulu-minimal/node_modules/enhanced-resolve/lib/CachedInputFileSystem.js:145
if(key.startsWith(what))
^
TypeError: Cannot read property 'startsWith' of undefined
at Storage.purge (/Users/daniel.rotter/Development/massiveart/sulu-minimal/node_modules/enhanced-resolve/lib/CachedInputFileSystem.js:145:10)
at Storage.purge (/Users/daniel.rotter/Development/massiveart/sulu-minimal/node_modules/enhanced-resolve/lib/CachedInputFileSystem.js:150:9)
at CachedInputFileSystem.purge (/Users/daniel.rotter/Development/massiveart/sulu-minimal/node_modules/enhanced-resolve/lib/CachedInputFileSystem.js:259:20)
at Watchpack.watcher.once (/Users/daniel.rotter/Development/massiveart/sulu-minimal/node_modules/webpack/lib/node/NodeWatchFileSystem.js:42:26)
at Object.onceWrapper (events.js:318:30)
at emitTwo (events.js:125:13)
at Watchpack.emit (events.js:213:7)
at Watchpack._onTimeout (/Users/daniel.rotter/Development/massiveart/sulu-minimal/node_modules/watchpack/lib/watchpack.js:142:7)
at ontimeout (timers.js:488:11)
at tryOnTimeout (timers.js:323:5)
at Timer.listOnTimeout (timers.js:283:5)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! @ watch: `webpack -d -w`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the @ watch script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
```
The initial build works, but this error occurs after I save a file being watched. Is there some compatability issue with this release?
If I downgrade to version 3.3.0 of this library the watch task is working again.
Answers:
username_1: Just wanted to report the same issue.
username_2: Also getting this issue.
username_3: Looks like we have an `undefined` key here, likely from commit 03ef8f2:
```javascript
for(var key of this.data.keys()) {
if(key.startsWith(what))
this.data.delete(key);
}
```
In a rush so I had to edit it like below as a temporary workaround:
```javascript
for(var key of this.data.keys()) {
if (typeof key !== "string") {
continue;
}
if(key.startsWith(what))
this.data.delete(key);
}
```
Hope this can be fixed soon.
username_4: Getting the same problem using version 3.4.0
username_5: +1
username_6: +1
username_7: +1
username_8: Same error here.
webpack: Compiling...
/Users/Gustavo/Desktop/canvas-test/node_modules/enhanced-resolve/lib/CachedInputFileSystem.js:145
if(key.startsWith(what))
^
TypeError: Cannot read property 'startsWith' of undefined
username_9: Same issue, had to downgrade again.
Seems to work in 3.2.0.
username_1: It even works in 3.3.0 @username_9
username_10: +1
username_11: +1
username_9: @username_1 yeah sorry, yarn was still installing 3.4.0.
username_12: Just facing the same error in 3.3.0
My script: `cross-env NODE_ENV=development webpack-dev-server -d --inline --hot`
When I edit my code and trigger the hot reload, the error shows up and breaks the dev server
username_13: +1
username_14: +1
username_15: +1
username_16: +1
username_17: +1
username_18: +1
username_19: I tried downgrading to v3.3.0 and v3.2.0 with no success. I had to revert my upgrade from webpack-dev-server v2.6.1 back to v2.5.1. Now it is working again.
username_5: @username_3's fix works for me right now
username_20: Downgrading worked for me. If you're using *Yarn*, think about editing your `yarn.lock` file when you install 3.3.0 manually :
```
enhanced-resolve@^3.3.0:
version "3.3.0"
resolved "https://registry.yarnpkg.com/enhanced-resolve/-/enhanced-resolve-3.3.0.tgz#950964ecc7f0332a42321b673b38dc8ff15535b3"
dependencies:
graceful-fs "^4.1.2"
memory-fs "^0.4.0"
object-assign "^4.0.1"
tapable "^0.2.5"
```
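Not part of the original thread, but a hedged alternative for later readers: Yarn's selective dependency resolutions (available from Yarn 1.0 onward, so possibly not yet at the time of this thread) can pin the transitive dependency from package.json without editing yarn.lock by hand:
```json
{
  "resolutions": {
    "enhanced-resolve": "3.3.0"
  }
}
```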
username_9: Isn't adding `"enhanced-resolve": "3.3.0"` to your package.json dependencies and running yarn enough? I can't try it out because I deleted the yarn.lock file earlier.
username_21: Fixed in https://github.com/webpack/enhanced-resolve/releases/tag/v3.4.1
username_20: @username_9 No — since webpack expects `"enhanced-resolve": "^3.3.0"`, Yarn will resolve two different packages because the patterns differ when you install it. You need to explicitly link them.
username_9: @username_20 that's weird, but ok. thank you. :)
username_0: Works for me, thanks for fixing!
Status: Issue closed
username_15: why do you close it?!
username_0: @username_15 Because of that comment: https://github.com/webpack/enhanced-resolve/issues/97#issuecomment-317410074
There is a new release which is fixing this issue. |
flickr-downloadr/flickr-downloadr-gtk | 53297357 | Title: Getting list of photos
Question:
username_0: The program is launched, I click to continue, and I always get the message "Getting list of photos" and nothing else appears. My OS: Ubuntu 14.04 x64 with French language settings.
I found a workaround: launch the program with LANG=C on the command line.
Answers:
username_1: Could you please run the app from the Terminal and see if that gives any error we can look at?
username_0: errors on terminal:
```
(flickr-downloadr:10577): GLib-CRITICAL **: Source ID 7 was not found when attempting to remove it
(flickr-downloadr:10577): GLib-CRITICAL **: Source ID 9 was not found when attempting to remove it
(flickr-downloadr:10577): GLib-CRITICAL **: Source ID 8 was not found when attempting to remove it
(flickr-downloadr:10577): GLib-CRITICAL **: Source ID 5 was not found when attempting to remove it
(flickr-downloadr:10577): GLib-CRITICAL **: Source ID 56 was not found when attempting to remove it
(flickr-downloadr:10577): GLib-CRITICAL **: Source ID 55 was not found when attempting to remove it
(flickr-downloadr:10577): GLib-CRITICAL **: Source ID 54 was not found when attempting to remove it
(flickr-downloadr:10577): GLib-CRITICAL **: Source ID 120 was not found when attempting to remove it
(flickr-downloadr:10577): GLib-CRITICAL **: Source ID 121 was not found when attempting to remove
```
username_0: and with LANG=C:
```
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 9 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 7 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 8 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 5 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 28 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 65 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 66 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 30 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 29 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 111 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 119 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 108 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 118 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 117 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 115 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 116 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 141 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 129 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 133 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 131 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 142 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 132 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 135 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 140 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 130 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 120 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 126 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 139 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 134 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 121 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 125 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 124 was not found when attempting to remove it
(flickr-downloadr:10597): GLib-CRITICAL **: Source ID 123 was not found when attempting to remove it
username_1: These messages does not affect the functionality of the application and are therefore okay.
Since the workaround of setting the locale to `en-US` seems to be working, I am guessing this is something to do with running the application on other locales.
If you could please follow these steps to help troubleshoot the issue, that would be great:
1. Run the application with the `LANG=C`option and change the `Log Level` in Preferences to `ALL`
2. Quit the application and re-run without the `LANG=C` option and please post the content of the log file
username_0: I have the logs, but they are a little too big to post, so I'll send them to you via email.
username_1: Thank you very much for opening this issue and helping to troubleshoot it.
Looks like there is some code that uses the current locale/culture on the system the app is running on, to interpret some numbers to strings. This was a mistake and it should not have been using the user's current locale as all the number-string and string-number conversions are just internal within the application logic.
Shall fix this and update here soon - I would appreciate if you would do one more round of testing after that to make sure it works without the `LANG=C` option
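A hedged sketch of the kind of fix described (illustrative C#, not the actual flickr-downloadr code): pinning internal number/string round-trips to the invariant culture so a locale such as fr-FR can't break parsing:
```csharp
using System;
using System.Globalization;

class LocaleSafe
{
    static void Main()
    {
        // fr-FR would format 3.14 as "3,14" and fail to parse "3.14";
        // the invariant culture keeps internal round-trips locale-proof.
        double value = double.Parse("3.14", CultureInfo.InvariantCulture);
        string text = value.ToString(CultureInfo.InvariantCulture);
        Console.WriteLine(text);
    }
}
```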
username_0: I tested your new version (1.1.0.2) and I get the same problem.
username_1: @username_0 - I think the latest version (`1.1.1.3`) will work fine on any locales - could you please check when you get time?
username_0: Perfect. I have a question: do you plan to add this feature — photo selection by albums?
Status: Issue closed
username_1: Thanks for the feedback.
Would you please open another issue to track the new feature of viewing/downloading photos from albums/sets?
Kotlin/kotlinx.coroutines | 367200369 | Title: If JavaFx dispatcher is present in the classpath, using any other dispatcher will start the JavaFxPlatform.
Question:
username_0: Hello,
Please consider the following code:
```kotlin
suspend fun main() {
withContext(Dispatchers.IO) {}
println("done")
}
```
This code has the following output:
```
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by kotlinx.coroutines.javafx.JavaFxDispatcherKt (file:~/.gradle/caches/modules-2/files-2.1/org.jetbrains.kotlinx/kotlinx-coroutines-javafx/0.30.1-eap13/6eec25d3a9961d45fd2c097b0e038d348b3cc243/kotlinx-coroutines-javafx-0.30.1-eap13.jar) to method com.sun.javafx.application.PlatformImpl.startup(java.lang.Runnable)
WARNING: Please consider reporting this to the maintainers of kotlinx.coroutines.javafx.JavaFxDispatcherKt
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
done
```
Here I'm not talking about the warning which is already addressed in #463.
No, here the concern is the fact that the JavaFx platform has been started. And since the JavaFx platform thread is a user thread, it prevents the program from terminating.
This behavior only happens if `kotlinx-coroutines-javafx` is in the classpath.
Answers:
username_1: We should instantiate `Dispatchers.Main` lazily
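A hedged sketch of that idea (`loadPlatformDispatcher` is a hypothetical stand-in, not kotlinx internals): constructing the platform dispatcher inside `by lazy` would defer the JavaFx startup until `Dispatchers.Main` is actually used:
```kotlin
import kotlinx.coroutines.CoroutineDispatcher
import kotlinx.coroutines.Dispatchers

// Hypothetical: stands in for whatever service-loads the JavaFx dispatcher today.
fun loadPlatformDispatcher(): CoroutineDispatcher = Dispatchers.Default // placeholder

// Deferred construction: the JavaFx Platform would only start on first access.
val mainDispatcher: CoroutineDispatcher by lazy { loadPlatformDispatcher() }
```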
gridsome/gridsome.org | 631787493 | Title: Adding canonicalLink to blog post does not seem to add a canonical link to the page.
Question:
username_0: I just created a new blog post for gridsome.org. I wanted to specify the canonical link for my site. After adding a canonicalLink tag to the .md file, I could not locate a canonical link on the rendered page.
[My PR](https://github.com/gridsome/gridsome.org/pull/439)
Answers:
username_1: It looks like that's a feature that was intended but never implemented. There are no other references to `canonicalLink` in the gridsome.org or gridsome repos.
You can set it manually like this:
```javascript
metaInfo() {
return {
link: [{ rel: "canonical", href: "http://yourcanonicalurl.com" }],
};
}
```
username_0: @username_1 Can I add that metaInfo function from within the .md file? Because that is what I am adding to the repo.
Status: Issue closed
|
open-mmlab/mmsegmentation | 856730602 | Title: AttributeError: 'ConcatDataset' object has no attribute 'evaluate'
Question:
username_0: Hi, when I evaluate after training for 4000 iterations, I get the error AttributeError: 'ConcatDataset' object has no attribute 'evaluate'.
How can I solve this problem?

Answers:
username_1: It looks like mmdetection had the [same problem](https://github.com/open-mmlab/mmdetection/issues/3220) and was solved by [PR 3522](https://github.com/open-mmlab/mmdetection/pull/3522). Can the solution be directly ported into mmsegmentation or does it need to change from what's in mmdetection? |
easylist/easylist | 625290662 | Title: Consider adding push4site.com as fingerprinting
Question:
username_0: Fingerprinting scripts from push4site.com are embedded on several websites. The live version of the script is available here: https://happywear.push4site.com/Static/Script/happywear.js?v=4
It was found on https://happywear.ru/
Canvas fingerprinting code from the script:
```
y = function(e) {
var t = [], a = document.createElement("canvas"), n;
return a.width = 2e3,
a.height = 200,
a.style.display = "inline",
n = a.getContext("2d"),
n.rect(0, 0, 10, 10),
n.rect(2, 2, 6, 6),
t.push("canvas winding:" + (!1 === n.isPointInPath(5, 5, "evenodd") ? "yes" : "no")),
n.textBaseline = "alphabetic",
n.fillStyle = "#f60",
n.fillRect(125, 1, 62, 20),
n.fillStyle = "#069",
n.font = e.dontUseFakeFontInCanvas ? "11pt Arial" : "11pt no-real-font-123",
n.fillText("Cwm fjordbank glyphs vext quiz, 😃", 2, 15),
n.fillStyle = "rgba(102, 204, 0, 0.2)",
n.font = "18pt Arial",
n.fillText("Cwm fjordbank glyphs vext quiz, 😃", 4, 45),
n.globalCompositeOperation = "multiply",
n.fillStyle = "rgb(255,0,255)",
n.beginPath(),
n.arc(50, 50, 50, 0, 2 * Math.PI, !0),
n.closePath(),
n.fill(),
n.fillStyle = "rgb(0,255,255)",
n.beginPath(),
n.arc(100, 50, 50, 0, 2 * Math.PI, !0),
n.closePath(),
n.fill(),
n.fillStyle = "rgb(255,255,0)",
n.beginPath(),
n.arc(75, 100, 50, 0, 2 * Math.PI, !0),
n.closePath(),
n.fill(),
n.fillStyle = "rgb(255,0,255)",
n.arc(75, 75, 75, 0, 2 * Math.PI, !0),
n.arc(75, 75, 25, 0, 2 * Math.PI, !0),
n.fill("evenodd"),
a.toDataURL && t.push("canvas fp:" + a.toDataURL()),
t
}
```
The script URL varies for each website.
For example: https://prophotos.push4site.com/Static/Script/prophotos.js?v=4 is embedded on https://prophotos.ru/
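Given that pattern, a single domain-anchored rule in standard Adblock Plus filter syntax should cover every per-site subdomain (where exactly it belongs in the lists is for the maintainers to decide):
```
||push4site.com^$third-party
```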
Some other websites on which it was embedded:
```
http://icomarks.com/
https://alpari.com/ru/
https://alpariforex.org/fa/
https://happywear.ru/
https://icobench.com/
https://klops.ru/
https://mastervision.su/
https://prophotos.ru/
https://skyway.capital/
https://www.kiabi.ru/
https://www.mdm-complect.ru/
https://www.onlime.ru/
https://www.tez-tour.com/
```
<issue_closed>
Status: Issue closed |
achillesrasquinha/pipupgrade | 512694944 | Title: Support upgrading only user-site packages
Question:
username_0: I would like to use pipupgrade to upgrade all user site packages (as listed by `pip list --user`).
Currently it does not seem possible according to the manual.
Answers:
username_1: `pipupgrade --user`
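For example, paired with pip's own flag for listing user-site packages:
```
$ pip list --user --outdated   # see which user-site packages are stale
$ pipupgrade --user            # upgrade only the user site-packages
```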
Status: Issue closed
|
Project-MONAI/MONAI | 1056643149 | Title: MONAI loading a NRRD file into wrong orientation
Question:
username_0: **Describe the bug**
When I load a NRRD file, the array is not in the right orientation according to the affine. Here is the text header of the NRRD file.
```
NRRD0004
# Complete NRRD file format specification at:
# http://teem.sourceforge.net/nrrd/format.html
type: short
dimension: 3
space: left-posterior-superior
sizes: 512 512 321
space directions: (0.767578125,0,0) (0,0.767578125,0) (0,0,1)
kinds: domain domain domain
endian: little
encoding: raw
space origin: (-197.1162109375,-371.1162109375,-403)
```
**To Reproduce**
```py
import monai
sub = {'img': 'image.nrrd'}
sub = monai.transforms.LoadImaged('img')(sub)
sub['img'].shape
# (321, 512, 512)
sub['img_meta_dict']['affine']
# array([[ 0.76757812, 0. , 0. , -197.11621094],
# [ 0. , 0.76757812, 0. , -371.11621094],
# [ 0. , 0. , 1. , -403. ],
# [ 0. , 0. , 0. , 1. ]])
```
**Expected behavior**
The array should have shape (512, 512, 321) to match the affine.
**Environment**
================================
Printing MONAI config...
================================
MONAI version: 0.4.0
Numpy version: 1.19.5
Pytorch version: 1.8.1
MONAI flags: HAS_EXT = False, USE_COMPILED = False
MONAI rev id: 0563a4467fa602feca92d91c7f47261868d171a1
Optional dependencies:
Pytorch Ignite version: 0.4.2
Nibabel version: 3.2.1
scikit-image version: 0.18.2
Pillow version: 8.2.0
Tensorboard version: 2.7.0
gdown version: 4.2.0
TorchVision version: 0.9.1
ITK version: 5.2.1
tqdm version: 4.61.0
lmdb version: NOT INSTALLED or UNKNOWN VERSION.
psutil version: 5.8.0
[Truncated]
Command: ['/usr/local/Cellar/[email protected]/3.7.10_3/Frameworks/Python.framework/Versions/3.7/Resources/Python.app/Contents/MacOS/Python', '-c', 'import monai; monai.config.print_debug_info()']
Open files: []
Num physical CPUs: 6
Num logical CPUs: 12
Num usable CPUs: UNKNOWN for given OS
CPU usage (%): [14.6, 8.2, 50.0, 6.2, 10.4, 8.2, 8.3, 8.2, 37.5, 8.2, 26.5, 6.2]
CPU freq. (MHz): 3200
Load avg. in last 1, 5, 15 mins (%): [19.0, 18.5, 17.4]
Disk usage (%): 38.2
Avg. sensor temp. (Celsius): UNKNOWN for given OS
Total physical memory (GB): 64.0
Available memory (GB): 42.7
Used memory (GB): 20.2
================================
Printing GPU config...
================================
Num GPUs: 0
Has CUDA: False
cuDNN enabled: False
Status: Issue closed
Answers:
username_0: Looks like I accidentally downgraded to an old version of MONAI, fixed with upgrade to 0.7.0. |
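For anyone hitting the same mismatch, a quick sanity check using standard MONAI utilities:
```py
import monai

print(monai.__version__)     # should report 0.7.0 (or newer) after the upgrade
monai.config.print_config()  # confirms which installation is actually imported
```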
adobe/xdm | 325097670 | Title: Campaign's address extension has wrong $id in meta:extends
Question:
username_0: ## What are the schemas that are affected by the issue
address.schema.json under extension/adobe/experience/campaign
## What are examples of products that are impacted by the issue
platform XDM and tooling
Answers:
username_0: PR merged.
Status: Issue closed
|
eurosecom/eurosecom | 219208369 | Title: Annual statistics: Payroll UNP 101
Question:
username_0: One row (68) was added.
Answers:
username_1: This is managed by Trexima, hence the option to load the .xml.
username_0: I was already told that it's Trexima-only when I pressed the statistics office for XSD definitions for the other statistical reports. Expecting that from Kali's ESA for that money would be asking too much.
username_0: I want to work on UNP101 today. If you have any changes, please push them to GitHub.
TonyZhangND/GoOvid | 511637186 | Title: Implement replica agent
Question:
username_0: Replica as in the "replica" in Paxos Made Moderately Complex
Answers:
username_0: Implemented some of the logic from Fig. 1 of Paxos Made Moderately Complex. Remaining tasks (a sketch of both follows the list):
1. Handle decision message
2. `perform()` procedure
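A compact sketch of those two pieces as they appear in the paper; all type and field names here are illustrative, not GoOvid's actual ones:
```go
package paxos

// Illustrative types only.
type Command string

type Replica struct {
	slotOut   int             // next slot to apply to the state machine
	requests  []Command       // commands waiting to be proposed
	proposals map[int]Command // our outstanding proposals, by slot
	decisions map[int]Command // decided commands, by slot
}

// perform applies a decided command and advances slotOut.
func (r *Replica) perform(c Command) {
	// apply c to the replicated state machine here
	r.slotOut++
}

// handleDecision mirrors the decision-handling loop in Fig. 1 of the paper.
func (r *Replica) handleDecision(slot int, cmd Command) {
	r.decisions[slot] = cmd
	for {
		c, ok := r.decisions[r.slotOut]
		if !ok {
			break
		}
		if p, ok := r.proposals[r.slotOut]; ok {
			delete(r.proposals, r.slotOut)
			if p != c {
				r.requests = append(r.requests, p) // our proposal lost; re-queue it
			}
		}
		r.perform(c)
	}
}
```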
username_0: Done. What remains is handling some controller commands
Status: Issue closed
|
alterfw/alter | 37483878 | Title: namespace for classes
Question:
username_0: Begin using the PSR-0 standard for Alter's classes.
http://www.php-fig.org/psr/psr-0/
This enhances compatibility and avoids class conflicts.
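For illustration, a minimal PSR-0 autoload declaration in composer.json could look like this (the namespace prefix and path are assumptions):
```json
{
    "autoload": {
        "psr-0": {
            "Alter\\": "src/"
        }
    }
}
```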
Answers:
username_1: Alter has turned into a theme base for his dependencies (see [Hero](https://github.com/alterfw/hero), [Ampersand](https://github.com/alterfw/ampersand) and [options-page](https://github.com/alterfw/ampersand)) and all of them follows PSR-4 when is possible. So, this issue doesn't make sense anymore.
Thank you anyway.
Status: Issue closed
|
yidongnan/grpc-spring-boot-starter | 522036913 | Title: How to do server-side rate limiting with gRPC?
Question:
username_0: For example, say the server's maximum processing capacity is 1000 qps, but at busy times it may see 15000 qps. The extra 5000 requests per second currently pile up in memory and eventually cause an OOM. How do you all handle this problem?
Answers:
username_1: This question should rather be asked in the [grpc-java](https://github.com/grpc/grpc-java/issues) repository. If you link the question to this issue and it gets answered, then I might be able to add a default implementation for it.
I found [this](https://github.com/danielbryantuk/ambassador-java-rate-limiter/blob/master/src/main/java/io/datawire/ambassador/ratelimiter/simpleimpl/RateLimitServer.java) example that demonstrates a rate limited service impl. If you move this rate limiting logic to an interceptor you could protect your entire server without extra code in your grpc service impls.
username_0: @username_1 Thanks for the reply. I've tried to apply the bucket4j rate limit in my server-side application as you linked above, but it doesn't stop the memory from exploding, even if I set the rate-limit threshold to 100 qps. My interceptor code:
```java
@GRpcGlobalInterceptor
public class RateLimitInterceptor implements ServerInterceptor {

    private static final Logger LOG = LoggerFactory.getLogger(RateLimitInterceptor.class);

    @Value("${rate-limit.threshold}")
    private long threshold;

    @Value("${rate-limit.window-ms}")
    private long windowMs;

    @Value("${rate-limit.check-interval-ms}")
    private long interval;

    private Bucket bucket;

    @Override
    public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(ServerCall<ReqT, RespT> call, Metadata headers, ServerCallHandler<ReqT, RespT> next) {
        LOG.info("Now token left: {}.", bucket.getAvailableTokens());
        while (!bucket.tryConsume(1)) {
            LOG.info("Throttling reached, please slow down the client side send rate or adjust the configuration.");
            try {
                TimeUnit.MILLISECONDS.sleep(interval);
            } catch (InterruptedException e) {
                LOG.error("Throttling interrupted.", e);
            }
        }
        return next.startCall(call, headers);
    }

    @PostConstruct
    private void initialBucket() {
        Bandwidth limit = Bandwidth.simple(threshold, Duration.ofMillis(windowMs));
        this.bucket = Bucket4j.builder().addLimit(limit).build();
    }
}
```
The messages were not throttled and consumed all the memory available to the JVM.

username_1: You either have a [typo](https://www.javadoc.io/doc/net.devh/grpc-server-spring-boot-autoconfigure/latest/net/devh/boot/grpc/server/interceptor/GrpcGlobalServerInterceptor.html) in your example or you are using a different library.
Instead of sleeping you have to close the call.
I'm not sure, but [`RESOURCE_EXHAUSTED`](https://grpc.github.io/grpc-java/javadoc/io/grpc/Status.Code.html#RESOURCE_EXHAUSTED) might be the correct response code.
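A minimal sketch of what the interceptor body could look like instead of the sleep loop (uses only standard `io.grpc` APIs such as `ServerCall.close(Status, Metadata)`):
```java
// Sketch of a non-blocking variant of interceptCall for the class above.
@Override
public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(
        ServerCall<ReqT, RespT> call, Metadata headers, ServerCallHandler<ReqT, RespT> next) {
    if (!bucket.tryConsume(1)) {
        // Reject immediately instead of sleeping on the handler thread.
        call.close(Status.RESOURCE_EXHAUSTED.withDescription("Rate limit exceeded"),
                new Metadata());
        return new ServerCall.Listener<ReqT>() {}; // no-op listener for the rejected call
    }
    return next.startCall(call, headers);
}
```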
Status: Issue closed
username_1: IMO this should be solved in the upstream issue. |
igvteam/igv | 232975163 | Title: HaplotypeCaller IGV plugin
Question:
username_0: I know in the past there has been talk about including a HaplotypeCaller plugin for IGV that could produce a reassembled view of reads at a specific location. This is useful for reviewing variant calls from HaplotypeCaller. From my admittedly vague memory of the subject, the proprietary license of HaplotypeCaller was the major factor preventing the development of such a plugin. Now that all of Gatk4 is BSD licensed, would there be interest in collaborating on a plugin for HaplotypeCaller / Mutect2? This would be a very valuable tool for a lot of GATK users.
Answers:
username_1: Yes, sounds really interesting, lets do it. I don't recall discussing
this before.
Status: Issue closed
|
dotnet/roslyn | 648403697 | Title: Do not use $Program and $Main for generated top-level code type/method names
Question:
username_0: `$Program` and `$Main` are valid identifiers in EE context, so there might be a potential confusion.
It would be better to use the pattern already established for other generated names in C# compiler:
http://sourceroslyn.io/#Microsoft.CodeAnalysis.CSharp/Symbols/Synthesized/GeneratedNames.cs,25
Answers:
username_0: @username_1
username_1: What particular in that pattern gives guarantee to avoid a conflict? Any particular character that EE never uses? Something else?
username_0: I do not have a specific scenario. I'm just saying that EE parses `$Xyz` as an identifier and am saying that we should keep a consistent pattern for generated names. I don't see why we wouldn't, if only to avoid confusion. E.g. as I pointed out [GeneratedNames.IsGeneratedMemberName](`http://sourceroslyn.io/#Microsoft.CodeAnalysis.CSharp/Symbols/Synthesized/GeneratedNames.cs,25`) will currently return `false` for these names, because they don't follow the pattern.
username_0: See [`GeneratedNames.TryParseGeneratedName`](http://sourceroslyn.io/#Microsoft.CodeAnalysis.CSharp/Symbols/Synthesized/GeneratedNames.cs,0b290335001208e9,references)
username_1: We don't necessarily want them to follow the pattern or be recognized by the Generatedname API(s). In fact, we want them to be available as constants from WellKnownMemberNames so that analyzers and other consumers can get to them. I am fine with changing what characters are used in the names. Would ```<$Program>``` and ```<$Main>``` work for you?
username_0: That would work - we could define `GeneratedName.Other` category for these (currently the category is determined by the character following the closing `>`. Since there is none `GeneratedNames.TryParseGeneratedName` will return false for these. But that can be easily changed to return `GeneratedName.Other`.
BTW, just noticed another new generated name in WellKnownMemberNames:
```
internal const string CloneMethodName = "<>Clone";
```
Technically it matches the generated pattern. `GeneratedNames.TryParseGeneratedName` will return true but with `GeneratedNameKind` equal to `C`, which is not in the `GeneratedNames` enum.
username_1: '''GeneratedName''' is an internal API and I don't want to spend any time on it, unless there is a real-world scenario that is going to be affected. So far, I am not aware of any.
username_0: Then I propose we change the names to
`<Program>$`, `<Main>$`, `<Clone>$`
or
`<>$Program`, `<>$Main`, `<>$Clone`
which would require no changes in GeneratedName APIs, other than adding `$` to the enum.
Status: Issue closed
|
aihara001/client_appli02 | 360794887 | Title: Use a symbol here
Question:
username_0: https://github.com/username_1/client_appli02/blob/b2b6b168f51db0e9d7c408a38610197beb065867/app/controllers/clients_controller.rb#L23
https://docs.ruby-lang.org/ja/latest/class/Symbol.html
Answers:
username_1: Fixed it:
'new' → :new
Status: Issue closed
|
planningalerts-scrapers/vincent | 244973503 | Title: No data received since - 2017-07-04
Question:
username_0: Looks like the page no longer exists
Ruby error:
```
/app/vendor/bundle/ruby/1.9.1/gems/mechanize-2.5.1/lib/mechanize/http/agent.rb:304:in `fetch': 404 => Net::HTTPNotFound for https://www.vincent.wa.gov.au/Your_Community/Whats_On/Community_Consultation/Planning_Applications -- unhandled response (Mechanize::ResponseCodeError)
from /app/vendor/bundle/ruby/1.9.1/gems/mechanize-2.5.1/lib/mechanize/http/agent.rb:949:in `response_redirect'
from /app/vendor/bundle/ruby/1.9.1/gems/mechanize-2.5.1/lib/mechanize/http/agent.rb:299:in `fetch'
from /app/vendor/bundle/ruby/1.9.1/gems/mechanize-2.5.1/lib/mechanize.rb:407:in `get'
from scraper.rb:7:in `<main>'
```
Answers:
username_0: New link, but it no longer provides `council_reference`:
https://www.vincent.wa.gov.au/consultations/
username_1: Resolved by commit 499b77e.
Status: Issue closed
|
fedarovich/qbittorrent-net-client | 658447967 | Title: Pausing/Stopping/Deleting?
Question:
username_0: I've been playing with the API and I can't find any functions for actually stopping or removing torrents from the client. Am I missing something or is this not implemented?
Answers:
username_1: Here you are:
[Delete/Remove](https://username_1.github.io/qbittorrent-net-client-docs/api/QBittorrent.Client.QBittorrentClient.html#QBittorrent_Client_QBittorrentClient_DeleteAsync_System_Boolean_System_Threading_CancellationToken_)
[Pause](https://username_1.github.io/qbittorrent-net-client-docs/api/QBittorrent.Client.QBittorrentClient.html#QBittorrent_Client_QBittorrentClient_PauseAsync_System_Threading_CancellationToken_)
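Put together, usage looks roughly like this (a sketch based on the linked overloads; the Web UI address and credentials are assumptions):
```csharp
using System;
using QBittorrent.Client;

var client = new QBittorrentClient(new Uri("http://localhost:8080/"));
await client.LoginAsync("admin", "adminadmin"); // assumed credentials

await client.PauseAsync();      // pause all torrents (linked overload)
await client.DeleteAsync(true); // delete all torrents and their downloaded data
```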
Status: Issue closed
|
codelibs/fess | 479317886 | Title: Any settings for synonyms with single character words?
Question:
username_0: Thanks
Answers:
username_1: Try `アメリカ,米=>アメリカ`.
username_0: Thank you for the reply.
Now the document is crawled and added without any errors.
However, アメリカ,米=>アメリカ made only a small difference. It doesn't seem to be working as synonyms or as a mapping, unfortunately.
- Query with 米 matches only on 米 itself, not アメリカ.
- Query with アメリカ matches only on アメリカ itself; it did not match on 米.
I confirmed this by replacing アメリカ to 米, and vice versa in the crawl target file.
I tried some other cases:
- アメリカ=>アメリカ,米 caused the same crawl error as in the previous post.
- With [アメリカ,米米],[アメリカ,米々], or [アメリカ,米=>アメリカ,米々] query(アメリカ) and query(米) both matched アメリカ; query(アメリカ) did not match on 米.
Could you give me some advice?
username_1: For replacing a single character, set `米=>アメリカ` in mapping.txt.
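For reference, `mapping.txt` takes one `source=>target` rule per line, so the file would contain a line like:
```
米=>アメリカ
```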
Status: Issue closed
username_0: Thank you.
mapping.txt worked fine for me!
mrkkrp/megaparsec | 248385963 | Title: Incorrect rendering of offending line in parseErrorPretty' in the presence of tabs
Question:
username_0: We apparently have forgotten that tabs may have different widths. This has led to rather unsatisfactory renderings where the line is displayed with the actual tab character, whose width may differ depending on the environment (terminal, editor, etc.), while the line containing the caret `^` uses plain spaces:
```
λ> parseTest' (char '\t' *> char 'a' :: Parser Char) "\t"
1:9:
|
1 | tttt
| ^
unexpected end of input
expecting 'a'
```
The `tttt` above shows where the actual tab character is printed.
I find the algorithm that prints the caret line correct. After all, we have the column position, which was calculated with respect to the actual tab width set during parsing. However, we should then replace each tab in the input stream with the correct number of spaces before outputting it, which means we must know the tab width.
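A minimal sketch of that expansion step (the function name is illustrative):
```haskell
-- Replace every tab with the configured number of spaces before rendering.
expandTabs :: Int -> String -> String
expandTabs w = concatMap (\c -> if c == '\t' then replicate w ' ' else [c])
```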
It's unfortunate that we can't change the signature of `parseErrorPretty'` at this point to add a tab width argument. Perhaps we could add yet another function to render parse errors and make `parseErrorPretty'` a special case (with default tab width) of a more general function.<issue_closed>
Status: Issue closed |
womenwhocoderecife/micropigmentacao-solidaria | 523837257 | Title: [MPS-01] Footer
Question:
username_0: Context:
Develop the ```<Footer/>``` layout according to the defined design.
---
Design:
<img width="927" alt="image" src="https://user-images.githubusercontent.com/7841344/68992953-d60fff80-0850-11ea-8d2e-8ac19b9de8ea.png">
<img width="280" alt="image" src="https://user-images.githubusercontent.com/7841344/68992956-e4f6b200-0850-11ea-9701-18f8c478bd1a.png">
[Figma link](https://www.figma.com/file/Oa68xOB3uoL7s6evDE3qLK/Wireframe-MPS?node-id=0%3A1)
---
Acceptance criteria:
- When a person accesses the site from their notebook, they can view the ```<Footer/>``` as defined in the desktop design
- When access is from a mobile phone, the content will adapt to that device according to the layout defined in the mobile design.
LightTable/LightTable | 72895748 | Title: LightTable 0.7.2 unresponsive in mac os X 10.3
Question:
username_0: 1) Keyboard bindings do not work.
2) Cannot quit application from Menu
Answers:
username_1: @username_0 is the OS X version in this issue title correct? I ask because I can't run Light Table on my MacBook running OS X 10.6. [Light Table requires OS X 10.7 or newer.](http://docs.lighttable.com/#other)
username_0: Sorry, typo on my part. It is 10.7.
username_1: @username_0 thanks. Is LT always unresponsive after opening? If you would detail the steps you take when you observe this issue I will try to reproduce it.
username_0: Start either from the command line or via the icon.
The startup window appears. No options appear with any permutation of control keys and other keys.
Menu quit does not quit the application; it can only be killed via its pid or the taskbar.
Status: Issue closed
username_1: @username_0 For the next release (whenever that is), Light Table will be using Atom/Electron instead of node-webkit/NW.js. Unfortunately, Electron requires version 10.8+ of Mac OS X. I'm going to close this as it doesn't seem feasible to resolve. |
FluxML/Zygote.jl | 609446560 | Title: Flux.train error when using reduce hcat in loss
Question:
username_0: ```
using Flux, Optim
function loss(x, y)
_pred = zeros((20, 56))
pred = reduce(hcat,[_pred[:,i] for i in 1:size(_pred,2)])
loss = sum(abs2, pred .- y)
loss
end
v0 = zeros(15)
sol_data = zeros((20, 56))
display(loss(v0, sol_data))
dataset = [(v0, sol_data)]
p = 0
Flux.train!(loss, p, dataset, ADAM())
```
The loss function runs properly when called on its own, but using it in `Flux.train!` gives a `Mutating arrays is not supported` error.
Answers:
username_1: Can you try if #501 fixes this?
username_2: You may also like [SliceMap](https://github.com/username_2/SliceMap.jl) for this, which besides a gradient for `reduce(hcat` should be more efficient than doing each `_pred[:,i]` independently.
username_0: @username_1 How do I test #501 locally?
username_0: @username_1 How do I try it out?
username_3: You might find this helpful https://stackoverflow.com/questions/27567846/how-can-i-check-out-a-github-pull-request-with-git
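In short, the recipe from that answer, adapted to this PR (the local branch name is arbitrary):
```
git fetch origin pull/501/head:pr-501
git checkout pr-501
```
Then, from a Julia REPL, you can point your environment at the local checkout (the path here is an assumption) and re-run the failing snippet:
```julia
using Pkg
Pkg.develop(path = "path/to/Zygote")  # use the checked-out PR instead of the registry version
```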
username_0: I have the pull request on my computer now, but I'm not sure how to test the code using it.
username_0: @username_1 I am still getting the same error when I try it with #501
username_0: This fixes it! |
Sylius/Sylius | 454209406 | Title: Admin: Removing products from taxons
Question:
username_0: **Sylius version affected**: 1.4, 1.5
**Description**
If the taxon code is an integer (technically it's a string, of course, but contains an integer value, e.g. "6"), products cannot be unassigned from such a taxon. They can, however, be added to it.
**Steps to reproduce**
- Create taxon with code "6"
- Edit any product and assign it to that taxon
- Edit product again and try to unassign it from that taxon
**Workaround**
Change the taxon's code so it does not contain digits only.
Answers:
username_1: [This](https://github.com/Sylius/Sylius/blob/master/src/Sylius/Bundle/AdminBundle/Resources/private/js/sylius-lazy-choice-tree.js#L108-L126) is the code causing the issue.
In the `onUnchecked` function `const value = checkboxElement.data('value');` will result in integer values for numbers because jQuery will always try to convert the attribute's string value to JS value: http://api.jquery.com/data/#data-html5
Variable `value` being integer instead of string will cause `checkedValues.indexOf(value)` to return -1 because it's searching in an array of strings, so the numeric codes can't be removed.
This can be easily fixed by replacing `.data('value')` with `.attr('data-value')`, or vanilla JS `.dataset.value`.
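In other words, a sketch of the fix (the surrounding array handling is only illustrative):
```javascript
// Read the raw attribute so numeric-looking codes stay strings:
const value = checkboxElement.attr('data-value');

const index = checkedValues.indexOf(value); // now found even for codes like "6"
if (index !== -1) {
  checkedValues.splice(index, 1);
}
```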
Status: Issue closed
|
IOSD/HackDTU | 244780884 | Title: The Venue
Question:
username_0: In the venue section, just below the timeline, a grey box appears where the map is supposed to be. Somehow it got messed up in the initial commits. Please fix; I have updated the coordinates to DTU's location in the JS file.
Answers:
username_1: @username_2 Look into it!
username_2: It is an API key error. To use Google Maps on a website, you have to obtain a key. I have added the placeholder for that. Just issue a key and add it.
https://developers.google.com/maps/documentation/javascript/get-api-key
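Concretely, the standard Maps JavaScript include looks like this (this assumes the page defines an `initMap` callback; replace `YOUR_API_KEY` with the issued key):
```html
<script async defer
        src="https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&callback=initMap">
</script>
```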
Status: Issue closed
|
cloud-hypervisor/cloud-hypervisor | 1032220519 | Title: AArch64 unit test got blocked frequently in CI
Question:
username_0: In recent weeks (not sure when it began), the AArch64 unit test job has frequently hung in a pending state. When this happens, all subsequent PR CI runs are blocked, and we have to log in to the CI server and kill the pending containers manually.
An example: https://cloud-hypervisor-jenkins.westus.cloudapp.azure.com/blue/organizations/jenkins/cloud-hypervisor/detail/PR-3236/4/pipeline/248
Any idea why this happens? Could it be a cargo issue?
Answers:
username_0: More observations:
- The issue is really random. Running the unit tests in a 100-iteration loop reproduced the problem on the 79th run (a sketch of the loop is below).
- It was only seen on the CI server. I ran the unit tests hundreds of times on my local server and never saw it.
I attached to the cargo process while it was hanging; it seemingly stopped in a WRITE syscall: https://code.woboq.org/userspace/glibc/sysdeps/unix/sysv/linux/write.c.html#26
See the gdb printings: [gdb.txt](https://github.com/cloud-hypervisor/cloud-hypervisor/files/7394287/gdb.txt)
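The reproduction loop was along these lines (the exact cargo invocation is an assumption):
```sh
for i in $(seq 1 100); do
    echo "run $i"
    cargo test || break   # a hang shows up as this iteration never completing
done
```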
username_1: Any update on this?
username_0: @username_1 Just now I installed kernel 5.16 on the arm64 CI server and rebooted. Could you help recover the Jenkins on it?
I will monitor whether the new kernel fixes the hanging issue over the following days.
Status: Issue closed
username_0: The problem was not seen again after upgrading the kernel. I think it's time to close this issue. |
orange-cloudfoundry/paas-templates | 409829369 | Title: end-user visibility to the dedicated database services backups
Question:
username_0: ### Expected behavior
As a platform user using a dedicated data service from the marketplace, I need visibility into the success of automated backups. Ideally, I want to be able to trigger a backup in self-service.
### Observed behavior
The Shield backup for coab dedicated databases has been automated since v37. However, end users have no access to it.
Answers:
username_1: With the operator profile, you can:
- view backups
- run a backup restore
You need to:
- change the password for <PASSWORD>
- create a user with the engineer profile
- attach the user to tenant <deployment name> with the operator role
username_1: PITR restore instructions: https://github.com/orange-cloudfoundry/cf-oss-service-providers-best-practices/blob/master/backup-restore-PITR-instructions-with-shield-v7-v8-eng.md
username_2: related and somewhat overlapping with #302
/CC @username_0
username_0: close as duplicate
Status: Issue closed
|
wunderio/elasticsearch_helper | 507760445 | Title: Deleting translation from Drupal does not delete the document
Question:
username_0: ### Bug
When using a multilingual index, deleting a translation does not delete the document from Elasticsearch. This is because deleting an entity translation does not invoke the `hook_entity_delete` hook, and therefore no delete query is executed.
### Proposed solution
Add a `hook_entity_translation_delete` implementation which calls the index processor plugin's `deleteEntity` method; a sketch is below.<issue_closed>
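A minimal sketch of the proposed hook; `getIndexProcessors()` stands in for however the module actually resolves its index processor plugins:
```php
<?php

use Drupal\Core\Entity\EntityInterface;

/**
 * Implements hook_entity_translation_delete().
 *
 * Sketch only: the helper below is illustrative, not the module's real API.
 */
function elasticsearch_helper_entity_translation_delete(EntityInterface $translation) {
  foreach (getIndexProcessors($translation) as $processor) {
    // Remove the translation's document from each index it belongs to.
    $processor->deleteEntity($translation);
  }
}
```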
Status: Issue closed |
google/flatbuffers | 839283419 | Title: SIGSEGV when generating Rust code (flatc 1.12.0)
Question:
username_0: Flatbuffer IDL:
```
namespace flatbuffer_ast;
enum Tag: uint8 {
PlusBK,
B,
BK,
C,
CL,
CP,
D,
F,
FL,
FR,
FT,
FQ,
FQA,
H,
ID,
IDE,
ILI,
ILI2,
IP,
IS1,
K,
LI1,
M,
MI,
MS1,
MT1,
MT2,
MT3,
NB,
P,
PC,
PI1,
Q1,
Q2,
Q3,
QS,
S1,
SP,
TOC1,
TOC2,
TOC3,
V,
WJ,
X,
XO,
XT,
}
table Uint16 {
value: uint16;
}
enum AttributeKey: uint8 {
[Truncated]
However, when I run `flatc -o generated/ --rust fbs/schema.fbs`, I get the following coredump:
```
Reading symbols from /usr/bin/flatc...
(No debugging symbols found in /usr/bin/flatc)
[New LWP 340653]
Core was generated by `flatc -o generated/ --rust fbs/schema.fbs'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x0000562f9ac5dcae in flatbuffers::rust::RustGenerator::GenTable(flatbuffers::StructDef const&) ()
(gdb) bt
#0 0x0000562f9ac5dcae in flatbuffers::rust::RustGenerator::GenTable(flatbuffers::StructDef const&) ()
#1 0x0000562f9ac5f4e8 in flatbuffers::rust::RustGenerator::generate() ()
#2 0x0000562f9ac4c65a in flatbuffers::GenerateRust(flatbuffers::Parser const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) ()
#3 0x0000562f9ac7a4ad in flatbuffers::FlatCompiler::Compile(int, char const**) ()
#4 0x0000562f9ab0693f in main ()
(gdb)
```
Is this possibly caused by the `Node => List: .children: => NodeContainer: .node => Node` circular definition?
Answers:
username_1: Circular definitions should be fine, and should definitely not crash.
@username_2
username_0: @username_2 It is flatc 1.12.0, as posted in the title of this issue, installed from Arch. After work I will check the exact installed version (as well as which Arch repo it came from). Does `flatc --version` list a commit?
username_2: I also tried compiling and testing with `-fsanitize=address` at HEAD and nothing came up
username_2: I'm gonna close this as not reproducible
Status: Issue closed
|
focallocal/fl-maps | 957329619 | Title: Saw a post about this project on reddit...
Question:
username_0: Hey, I remember seeing a reddit post about this project over a week ago; apparently you needed a React dev to help with the home page.
I'm a total newb at reddit so I can't find the post again haha. This last weekend was super hectic and I knew I wouldn't be able to help; however, I'll have some free time tomorrow and probably more over the next weekend, so I'd like to know if you guys still need any help with the homepage? I know reddit supports DMs, so if you want to reach me over there my nickname is the same as the one I use on GitHub (@username_0)
Also sorry if I shouldn't be opening issues about this; I just didn't know the best way to reach you. Hope you're all doing great, cheers!
Answers:
username_1: Hi @username_0, thanks for finding us here. I'd love to have you join in.
We've got two React based tasks remaining before launch.
Here's the 1st: https://publichappinessmovement.com/t/topic/2348/16
and here's the 2nd: https://publichappinessmovement.com/t/topic/2600
Kento has begun the homepage, although you are very welcome to chat to them and offer to build it together. Or you could take on the 2nd task solo. I'll add you to our Git, and the main place we communicate is on the platform the links above point to. There's a getting started guide in the Reactjs section.
Welcome :)
Status: Issue closed
username_0: Thanks! Also thanks a lot for the warm welcome!
I'll be using the forum for communication from now and onwards, so I think we can safely close this issue.
🙌 |
bbc/simorgh | 719276532 | Title: Release Ukrainian, UKChina & Tigrinya most-watched pages
Question:
username_0: **Is your feature request related to a problem? Please describe.**
We need to release the following services to live:
- Ukrainian
- Ukchina (simp & trad)
- Tigrinya
**Describe the solution you'd like**
Services are routed to:
https://www.bbc.com/ukchina/simp/media/video
https://www.bbc.com/ukchina/trad/media/video
https://www.bbc.com/ukrainian/media/video
https://www.bbc.com/tigrinya/media/video
- The services are enabled for toggles on Most Watched Pages and MAPs (Ukrainian is already enabled on MAPS)
- The services will be set for a max of 5 items
- [x] This feature is expected to need manual testing.<issue_closed>
Status: Issue closed |
openshift/origin | 187052963 | Title: e2e Flake: Probing container [It] should be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
Question:
username_0: https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin_conformance/8232/consoleFull
/data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:373
Answers:
username_1: ```
[k8s.io] EmptyDir volumes
/data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:793
should support (root,0777,default) [Conformance]
/data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:105
------------------------------
SSSSSSS
Summarizing 1 Failure:
[Fail] [k8s.io] Probing container [It] should be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
/data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:373
Ran 160 of 569 Specs in 1508.849 seconds
FAIL! -- 159 Passed | 1 Failed | 0 Pending | 409 Skipped
```
username_2: Adding to this, because it seems to be a part of the same test suite.
I also got this as a bonus:
```
[Fail] [k8s.io] Probing container [It] should *not* be restarted with a /healthz http liveness probe [Conformance]
/data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:373
[Fail] [k8s.io] Probing container [It] should be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
/data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:373
```
I don't think I need to open another issue for this, so I just wanted to update this one.
The full log is here:
https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin_conformance/8480/consoleFull
username_3: Closing as dup of: https://github.com/openshift/origin/issues/11016
Status: Issue closed
|
ballerina-platform/lsp4intellij | 481855947 | Title: Language server connection breaks if one of the watched (connected) files is moved/copied/renamed
Question:
username_0: **Description:**
Please note the $subject: the aforementioned events (move/copy/rename) are not implemented in `FileEventManager`.
<issue_closed>
Status: Issue closed |
laravel/nova-issues | 1084842157 | Title: MorphTo relation does not work with readonly and searchable on the edit screen.
Question:
username_0: - Laravel Version: 8.38
- Nova Version: 3.30
- PHP Version: 8.0
- Browser type and version: 96.0.4664.110 (google chrome)
### Description:
The MorphTo relation does not work correctly with readonly and searchable on the edit screen. It only shows the morph type, but not the related morph model, when I use ->readonly() together with ->searchable(). Other than that it works fine.

Code being used:
```php
MorphTo::make('Access Type', 'accessTypes')->types([
    Team::class,
    User::class,
])
    ->searchable()
    ->readonly(function ($request) {
        return $request->isUpdateOrUpdateAttachedRequest();
    }),
```
### Detailed steps to reproduce the issue on a fresh Nova installation:
1. Add morph relation with readonly and searchable.
2. Go to resource edit
3. Related morph model shows empty.<issue_closed>
Status: Issue closed |
canhnd58/duosnake | 902605328 | Title: Create simple server to connect two players
Question:
username_0: 1. Provide a link
2. When player 1 connects, his/her console should display "Player 1 connected"
3. When player 2 connects, the console of player 1 should display "Player 2 connected"
Answers:
username_0: https://www.quora.com/What-is-the-most-popular-language-in-game-server-coding
Go may be worth a try.
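For what it's worth, a minimal sketch of steps 2 and 3 in Go (plain TCP; the port and message format are assumptions):
```go
package main

import (
	"fmt"
	"net"
)

func main() {
	ln, err := net.Listen("tcp", ":8080") // assumed port
	if err != nil {
		panic(err)
	}
	var players []net.Conn
	for len(players) < 2 {
		conn, err := ln.Accept()
		if err != nil {
			continue
		}
		players = append(players, conn)
		n := len(players)
		fmt.Fprintf(conn, "Player %d connected\n", n) // shows up on the new player's console
		if n == 2 {
			// tell player 1 that player 2 has arrived
			fmt.Fprintf(players[0], "Player 2 connected\n")
		}
	}
}
```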
cblomart/vsphere-graphite | 349819520 | Title: Kibana Dashboard ?
Question:
username_0: Hello,
It works great, but do you have a Kibana (6.x) dashboard template, please?
Thanks for your help.
Status: Issue closed
Answers:
username_1: @username_2: don't you use elasticsearch... would there be some dashboard template that can be shared?
username_2: I use elastic, yes, but have a personal preference for Grafana. There's no technical reason why one couldn't develop the same in Kibana.
The Grafana dashboards are available here: https://grafana.com/dashboards/6902
Let me know if you would like the updated Cluster, Host and VM level dashboards.
username_3: Does anyone have Grafana dashboards for the Prometheus metrics exported from vsphere-graphite?
I have created a VM level dashboard, but not for Host and Cluster level.
username_3: I have published a VM-level dashboard based on the Prometheus datasource for vsphere-graphite; you can find it here: [https://grafana.com/dashboards/9929](https://grafana.com/dashboards/9929)
username_1: Thanks @username_3 I will close this for now.
Status: Issue closed
|