repo_name | issue_id | text
---|---|---
Activiti/Activiti | 770273460 | Title: Unknown property used in expression: ${execution.processBusinessKey}
Question:
username_0: We are trying to migrate from Activiti 5.22 to 6.0.0. According to the migration guide, it should execute the process instances that were created in version 5.x if we set **activiti5CompatibilityEnabled** to true and have the required jars on the class path. I set all the configuration correctly, but it is not working.
If I change my process definition to ${execution.processInstanceBusinessKey}, Activiti 6.x does not complain. We want to avoid this change to our old processes. We are happy to do it for the new process definitions that will be deployed with 6.x.
```
Caused by: javax.el.PropertyNotFoundException: The class 'org.activiti.engine.impl.persistence.entity.ExecutionEntityImpl' does not have the property 'processBusinessKey'.
	at javax.el.BeanELResolver.getBeanProperty(BeanELResolver.java:576)
```
|
coar-repositories/registry | 530918000 | Title: As a data aggregator I want a list of all the OAI-PMH feeds and their related universities
Question:
username_0: What I specifically want (and we have actually mostly built for ourselves) is a list of organisational IDs and the URL for the OAI-PMH endpoint.
On reflection, it might make sense to change 'related universities' to 'organisational identifier' in the title to make it clear that this is an ID to ID relation? Or are there other user stories that are different that should be captured?
Answers:
username_1: What does this "list of all the OAI-PMH feeds" mean? A list of the OAI-PMH interface information (base URL) or something else?
username_0: What I specifically want (and we have actually mostly built for ourselves) is a list of organisational IDs and the URL for the OAI-PMH endpoint.
On reflection, it might make sense to change 'related universities' to 'organisational identifier' in the title to make it clear that this is an ID to ID relation? Or are there other user stories that are different that should be captured?
username_1: I see. I totally agree with your issue. And indeed an identifier could be more appropriate. Hierarchies could be addressed more efficiently (I mean, for example, a repository of a faculty or an institute of a university).
From my perspective it is relevant for bibliometric reasons too.
username_2: I also agree that this should be re-worded to 'organisational identifiers', so I have just taken the liberty of rewording it :-) |
codesmithtools/Templates | 98777577 | Title: Limiting Class Properties of Primary/Foreign Keys
Question:
username_0: ```
Tables:
Products (ProductId, Description, Price, LastUpdatedByPersonId)
People (PersonId, FirstName, LastName)
Current:
Products: Id, Description, Price, LastUpdatedByPerson (person who last
updated the product)
People: Id, FirstName, LastName, Products (list of products updated by
this person)
Desired:
Products: Id, Description, Price, LastUpdatedByPerson (person who last
updated the product)
People: Id, FirstName, LastName
Possible Solution: Add an ExtendedProperty on the class describing the
directionality of the relationship on the generated classes (e.g.
cs_direction: both (default), primary, foreign).
```
Original issue reported on code.google.com by `<EMAIL>` on 24 Aug 2009 at 4:10<issue_closed>
Status: Issue closed |
CatalaLang/catala | 872836416 | Title: Feature: scope context variables decorators (input, output, internal)
Question:
username_0: ## The problem
Scopes in Catala can have many context variables. But as the number of context variables grows, it is more and more difficult to figure out which of these variables are output, which are input and which ones are intermediate variables that are not relevant from outside the scope. Catala users have already started using comments to annotate scope context variable declarations with this classification.
While it is the essence of Catala that context variables are neither input nor outputs by default, since they can be redefined by a calling scope, we could benefit from user annotations to enable helper lints and better code generation in the different backends.
## Specification of the decorations
A regular context variable declaration looks like this:
```
scope Foo:
context a content bool
```
This proposal would allow replacing the `context` with the following keywords:
* `input`: this scope variable has to be defined by the caller, cannot be defined in the scope, and does not appear in the outputs
* `output`: this scope variable cannot be defined by the caller, has to be defined in the scope, and appears in the outputs
* `internal`: this scope variable cannot be defined by the caller, has to be defined in the scope, and does not appear in the outputs
* `context`: this scope variable can be defined by the caller, can be defined in the scope, and appears in the outputs
This specification defines an informal sort of permissiveness lattice between the kinds of scope variables. Here it is, the most permissive being at the top:
```
CONTEXT
/ \
/ \
INPUT OUTPUT
\ /
\ /
INTERNAL
```
## Linting
If we have these four keywords, we can enforce their specification in four different ways.
1. When calling a subscope `Foo`, we can ensure that all the variables of `Foo` redefined in the caller are either `context` or `input`
2. When calling a subscope `Foo`, we can ensure that all the variables of `Foo` used (as outputs) in defining variables of the caller are either `context` or `output`
3. Inside a scope, we can ensure all variables defined are either `context`, `output` or `internal`
4. Inside a scope, we can check that all variables marked as `output` or `internal` have at least one definition.
## Code generation
The decorations can also help us generate code that has easier signatures than the current compilation scheme that exposes all `context` variables in both the output and input structs of the scope. More specifically:
* The input struct should contain the `context` and `input` variables
* The output struct should contain the `context` and `output` variables
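As a rough illustration of this struct-splitting rule, here is a hypothetical sketch using Python dataclasses (not any actual Catala backend; names are invented for illustration):

```python
from dataclasses import dataclass

# Hypothetical sketch of the compilation scheme for a scope like
#   declaration scope Foo: input x, internal y, output z, context w
# Only `context` and `input` variables reach the input struct;
# only `context` and `output` variables reach the output struct;
# `internal` variables never cross the scope boundary.

@dataclass
class FooIn:
    x: int   # input
    w: str   # context

@dataclass
class FooOut:
    z: bool  # output
    w: str   # context

def foo(inp: FooIn) -> FooOut:
    y = inp.x * 2  # internal: used only inside the scope
    return FooOut(z=y > 0, w=inp.w)
```

The point of the sketch is only that the caller-facing signatures shrink: callers see `FooIn` and `FooOut`, never the internal variable `y`.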
## Implementation
The implementation of this feature will impact quite a lot of areas of the compiler:
* Adding syntax keywords
* Extending the `surface`, `desugared` and `scopelang` intermediate representations with the kind for each scope variable
* Implement the lints presented above in the `scopelang` intermediate representation
* Modify the `dcalc` and `lcalc` translations using the variable kind information according to the specification above
* Fix the OCaml backend
Answers:
username_1: Thanks Denis for summarizing the proposal! Here's a few suggestions to simplify, or at least have a first simplified design that can be refined incrementally.
- Can we (for the time being) eliminate `context` since it's the default behavior?
- Linting:
- caller-redefined variables cannot be `internal` or `output`
- caller-bound variables cannot be `internal` or `input`
- points 3. and 4. become optional
- Code-gen
- `input` and `internal` variables do not appear in the output struct
- `output` and `internal` variables do not appear in the input struct
What do you think?
username_0: These are all good suggestions, I updated the design post above accordingly.
username_2: I don't really get the differences between the `no keyword` and the current `context` keyword.
username_0: Having no keyword for the `context` case is a nudge for programmers to clarify the role of their scope variables. Compare
```
declaration scope Foo:
internal x content integer
output y content boolean
context z content date
```
with
```
declaration scope Foo:
internal x content integer
output y content boolean
z content date
```
It is more obvious in the second version that something is missing to qualify `z`, which we want to encourage the programmer to do since it clarifies the use. Also in the case where the programmer has not yet labeled the scope parameters, it is more convenient to write
```
declaration scope Foo:
x content integer
y content boolean
z content date
```
rather than
```
declaration scope Foo:
context x content integer
context y content boolean
context z content date
```
All of these observations lead me to refine my proposal. I propose that we allow both (no keyword) and `context`, both having the same semantics.
username_0: This is definitely more ambitious than the wildcard issue, since you have to go down the entire compilation stack. The general architecture is presented here https://catala-lang.org/ocaml_docs/catala/index.html, and the formalization is here https://hal.inria.fr/hal-03159939. I guess you can take a look at those, and we can schedule a call next week to sync up before you start. Is that good for you?
username_2: Yes, thanks. I guess I can start to look at it and write down some questions.
Status: Issue closed
username_0: Implemented in #185 and #189. |
fortran-lang/fpm | 1060600818 | Title: Skip slow tests
Question:
username_0: ### Description
`pytest` has decorators which allow for a variety of conditional executions. It would be useful to have a way of marking slow tests, or other kinds of conditional test runs.
### Possible Solution
Technically each test can check environment variables (as noted by @username_1), but it would be nicer to have this at the test runner level.
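A hedged sketch of that per-test environment-variable check (illustrated with Python's `unittest` purely because it is concise to show; fpm tests themselves would be Fortran, and the variable name is invented):

```python
import os
import unittest

# Hypothetical workaround: gate slow tests behind an environment variable,
# which is the per-test approach mentioned above.
RUN_SLOW = os.environ.get("RUN_SLOW_TESTS", "0") == "1"

class ConvergenceStudy(unittest.TestCase):
    def test_fast_sanity_check(self):
        # Always runs: a cheap version of the study.
        self.assertAlmostEqual(sum(1 / 2**k for k in range(20)), 2.0, places=5)

    @unittest.skipUnless(RUN_SLOW, "set RUN_SLOW_TESTS=1 to run slow tests")
    def test_full_convergence_study(self):
        # Only runs when explicitly requested.
        self.assertAlmostEqual(sum(1 / 2**k for k in range(200)), 2.0, places=12)
```

A test-runner-level feature (what this issue asks for) would replace the environment check with a flag on the runner itself, so individual tests no longer need the boilerplate.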
### Additional Information
N/A.
Answers:
username_1: The use case we have is generating data for plots into an article. Some of the convergence studies might take a longer time (say several minutes) and we would like the default "fpm test" to finish in a few seconds. One approach is to have "slow tests" that are not executed by default. We can then internally run the full convergence study in the slow test, and a faster version of the same study in the fast test (to ensure that the study works).
username_2: I tend to break my tests into a confidence test that is very quick but exercises a few key features, a collection of short tests that do coverage, and what I usually call benchmark tests that are long-running. When I moved those to fpm, I broke them up by name with the globbing feature, like `run test 'bench_*'`, `run test 'general_*'`, and `run test confidence`; and in one case I made the test program take parameters that select which directory to run, and put the other test programs into directories. None of those quite hit the mark, so I think I will take a look at how pytest does it. Any other examples of models to try? I was thinking about lists of tests or directories that you could give a name to in the fpm.toml file, and then do something like `run test -type plot` or `run test -type benchmark`. Earlier I had a PR that allowed for a "y/n" reply to a prompt, which no one else seemed to like, so I didn't pursue it.
jlippold/tweakCompatible | 419204329 | Title: `WiFi - The Strongest Link` not working on iOS 12.1.1
Question:
username_0: ```
{
"packageId": "com.hackyouriphone.wifitweak",
"action": "notworking",
"userInfo": {
"arch32": false,
"packageId": "com.hackyouriphone.wifitweak",
"deviceId": "iPhone6,1",
"url": "http://cydia.saurik.com/package/com.hackyouriphone.wifitweak/",
"iOSVersion": "12.1.1",
"packageVersionIndexed": false,
"packageName": "WiFi - The Strongest Link",
"category": "HYI - Tweaks",
"repository": "HackYouriPhone",
"name": "WiFi - The Strongest Link",
"installed": "1.1.1",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "com.hackyouriphone.wifitweak",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.4",
"shortDescription": "Automatically switch to the strongest WiFi network around you... and more",
"latest": "1.1.1",
"author": "Samball",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "not working",
"notes": ""
}
``` |
AdoptOpenJDK/openjdk-website | 461954820 | Title: Create a Windows installer for openjdk8 with jfx included
Question:
username_0: <!--
Thank you for reporting an issue.
Please fill in as much of the template below as you're able.
To find out what version your browser is:
http://www.bbc.co.uk/accessibility/guides/which_browser.shtml
-->
* **Browser**:
<!-- Enter your issue details below this comment. -->
Answers:
username_1: We have issues covering this in openjdk-build and openjdk-installer.
Status: Issue closed
|
appbaseio/reactivesearch | 373296984 | Title: Working with floats
Question:
username_0: **Issue Type:**
Bug
**Platform:**
Web
**Description:**

The search is returning a range of results between 0.3 and 8.27, and I am trying to display them using the following component. I am assuming this has to do with floating-point precision in JavaScript. Is there something I can do to resolve this?
```
<DynamicRangeSlider
componentId="RangeSliderCarat"
dataField="weight"
showHistogram={false}
title="Carat"
snap={false}
step={1}
rangeLabels={function (min, max) {
const labels = {
start: `${min}`,
end: `${max}`,
}
return labels
}}
/>
```
**Minimal reproduction of the problem with instructions:**
Render the following component with a floating point number.
```
<DynamicRangeSlider
componentId="RangeSliderCarat"
dataField="weight"
showHistogram={false}
title="Carat"
snap={false}
step={1}
rangeLabels={function (min, max) {
const labels = {
start: `${min}`,
end: `${max}`,
}
return labels
}}
/>
```
**Reactivesearch version:**
`2.12.1`
**Browser:**
all
Answers:
username_1: I guess, you can use `defaultQuery` prop on `DynamicRangeSlider` wherein, you should get the range values in the params and you can round off the floating numbers there in the query. The query looks like this:
```js
defaultQuery = (value, props) => {
if (Array.isArray(value) && value.length) {
return {
range: {
[props.dataField]: {
gte: value[0], // fix the floating point here
lte: value[1], // fix the floating point here
boost: 2.0,
},
},
};
}
return null;
};
```
Let me know if this works.
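For what it's worth, the float artifact behind this report is not specific to JavaScript; any IEEE-754 language shows it. A minimal Python sketch of the round-off idea (`clamp_range` is a hypothetical helper, not part of reactivesearch):

```python
def clamp_range(lo: float, hi: float, ndigits: int = 2):
    """Round raw aggregation bounds before handing them to a range widget."""
    return (round(lo, ndigits), round(hi, ndigits))

# Binary floating point cannot represent 0.3 exactly:
raw_lo = 0.1 + 0.2  # 0.30000000000000004 under IEEE-754 doubles
print(clamp_range(raw_lo, 8.27))  # -> (0.3, 8.27)
```

The same rounding, applied inside the query (or inside `rangeLabels`), is what the suggestion above amounts to.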
username_0: Hey @username_1
Thanks for the suggestion; unfortunately it doesn't work. Is `defaultQuery` definitely a prop of `DynamicRangeSlider`? I could see it in the docs, but it doesn't seem to work at all. I couldn't even console.log the value.
username_1: My bad, `customQuery` should work.
Status: Issue closed
username_2: @username_0 It would be amazing if you can share how you figured out :) |
kubernetes-sigs/kubebuilder | 487005740 | Title: PersistentVolume operations with kubebuilder
Question:
username_0: Hi folks, I have run into issues listing/accessing PersistentVolumes/PersistentVolumeClaims via the client.List API. I was expecting something along the lines of:
```
getmyPVs := &corev1.PersistentVolumeList{}
err := <ref to client.Client>.List(context, getmyPVs, <namespace>)
```
would get the list of PVs, but it does not seem to work. I am guessing others must have used PV/PVCs with the kubebuilder APIs, but I did not turn up any examples. Does the kubebuilder client support PV/PVC operations? I was going on the assumption that the client can handle any type of resource, including PV/PVCs. Thanks for any clarification!
Answers:
username_0: I found that listing PersistentVolumeClaims (as opposed to PersistentVolumes) in the same manner as above worked OK. Perhaps the issue is not with kubebuilder but rather the type of PV/PVCs that I am using. In any case would love to hear how PV/PVCs have worked for others with kubebuilder.
username_1: I ran into a similar problem and was confused until I remembered that PVs are cluster scoped.
[Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) are not namespaced while Persistent Volume Claims are. So make sure you are not setting a namespace in your request
```golang
var pv corev1.PersistentVolume
err = a.client.Get(ctx, types.NamespacedName{
Name: pvc.Spec.VolumeName,
}, &pv)
```
username_2: @username_0 if you don't have any options, the following should just work:
```go
var myPVs corev1.PersistentVolumeList
err := cl.List(context.Background(), &myPVs)
```
(basically what you listed), unless you don't have permissions to fetch cluster-scoped objects with whatever you're using to connect to the cluster
username_0: @username_1 & @username_2 Thanks for the pointers ! I had been passing in a namespace to the List API and like Michael pointed out, PVs do not have a namespace association. Getting rid of it worked for me.
Status: Issue closed
|
cyprusjs/CyprusJS | 549326042 | Title: Introduction to GitHub Actions 🤖
Question:
username_0: In this talk you can learn how to create your first GitHub Action and the things you can do with them including:
- automation with 🤖 probot
- manage labels and projects
- help developers with structure
Answers:
username_0: Additional links:
- https://github.com/actions/github-script
- https://github.com/probot/probot
- https://help.github.com/en/actions/building-actions/creating-a-javascript-action
- https://github.com/marketplace
Status: Issue closed
|
OpenTreeOfLife/feedback | 153669213 | Title: Coniferophyta exists with no children. Conifers presumably in Pinales
Question:
username_0: Some mix up here with https://tree.opentreeoflife.org/opentree/opentree5.0@ott4736806/Pinales
================================================
Metadata | Do not edit below this line
:------------|:----------
Author | [None](https://github.com/username_0)
Upvotes | 0
URL | [tree.opentreeoflife.org/opentree/opentree5.0@ott994093/Coniferophyta](https://tree.opentreeoflife.org/opentree/opentree5.0@ott994093/Coniferophyta)
Target node label | Coniferophyta
Synthetic tree id | opentree5.0
Synthetic tree node id | ott994093
Source tree id(s) |
Open Tree Taxonomy id |
Supporting reference | N/A
Answers:
username_1: I think we can just delete Coniferophyta. It is NCBI-only, contains no species, and seems to be based mainly on a single environmental sequence.
username_0: Agreed
Status: Issue closed
username_1: It looks like Pinales is an order under subclass Pinidae = Coniferophyta. https://tree.opentreeoflife.org/taxonomy/browse?name=Coniferophyta
(odd name for a subclass.)
I think this will work, but let me know if I've got it wrong. |
voila-dashboards/voila | 499298560 | Title: Moving to the voila-dashboards organization
Question:
username_0: Just a quick note to inform everyone following this repo that it has now been moved to the [voila-dashboards](https://github.com/voila-dashboards) organization:
https://github.com/voila-dashboards/voila
The `voila-dashboards` organization will also host other `voila` related projects and templates.<issue_closed>
Status: Issue closed |
prestodb/presto | 280332365 | Title: Improve invalid WKT exception handling
Question:
username_0: ```
```
java.lang.IllegalArgumentException: undefined
at com.esri.core.geometry.WktParser.lineStringStart_(WktParser.java:479)
at com.esri.core.geometry.WktParser.nextToken(WktParser.java:87)
at com.esri.core.geometry.OperatorImportFromWktLocal.polygonText(OperatorImportFromWktLocal.java:500)
at com.esri.core.geometry.OperatorImportFromWktLocal.polygonTaggedText(OperatorImportFromWktLocal.java:179)
at com.esri.core.geometry.OperatorImportFromWktLocal.importFromWkt(OperatorImportFromWktLocal.java:112)
at com.esri.core.geometry.OperatorImportFromWktLocal.executeOGC(OperatorImportFromWktLocal.java:76)
at com.esri.core.geometry.ogc.OGCGeometry.fromText(OGCGeometry.java:470)
```
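The improvement being asked for is essentially to replace the bare `IllegalArgumentException: undefined` with an error that names the offending WKT. A rough Python-flavoured sketch of that pattern (a toy parser for illustration only, not the ESRI library):

```python
import re

class InvalidWktError(ValueError):
    """User-facing error carrying the offending WKT text."""

def parse_wkt(text: str):
    # Toy stand-in for the real parser: accepts only a simple POINT form.
    m = re.fullmatch(r"POINT \((-?\d+(?:\.\d+)?) (-?\d+(?:\.\d+)?)\)", text)
    if m is None:
        # Surface what was actually passed in, instead of "undefined".
        raise InvalidWktError(f"Invalid WKT: {text!r}")
    return (float(m.group(1)), float(m.group(2)))

print(parse_wkt("POINT (1 2)"))  # -> (1.0, 2.0)
```

The design point is only that the wrapper exception carries the input text, so a user sees which WKT string failed rather than an internal parser frame.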
Answers:
username_1: Fixed by #9515
Status: Issue closed
|
MikhailPozdeev/Credit-Card-Number-Validator | 945623032 | Title: The program does not accept card numbers that are not 16 digits long
Question:
username_0: ### Steps to reproduce:
1. Open IntelliJ IDEA
2. Paste the code
3. On line 4 of the code, insert the **American Express (AMEX)** card number = 340224594486354
4. Run the program by pressing Ctrl+Shift+F10
### Expected result:
IDEA displays the message Result is Ok
### Actual result:
IDEA displays the message Result is Fail (see screenshot)
### Environment:
Windows 10, Version 20H2, 64-bit
Java 11.0.11
IntelliJ IDEA Community Edition
### Screenshot
 |
mahmoud/boltons | 98158372 | Title: PermissionError with AtomicSaver on Windows
Question:
username_0: If I try this little code on Windows 7:
```python
from boltons.fileutils import AtomicSaver
with AtomicSaver('foo.txt') as f:
f.write('whatever')
```
I get the following Exception:
```bash
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\boltons\fileutils.py", line 272, in setup
overwrite=self.overwrite_part)
File "C:\Python34\lib\site-packages\boltons\fileutils.py", line 194, in _atomic_rename
os.rename(path, new_path)
PermissionError: [WinError 32] Der Prozess kann nicht auf die Datei zugreifen, da sie von einem ande
ren Prozess verwendet wird: 'C:\\Windows\\System32\\tmphia0tzzm' -> 'foo.txt.part'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python34\lib\site-packages\boltons\fileutils.py", line 281, in __enter__
self.setup()
File "C:\Python34\lib\site-packages\boltons\fileutils.py", line 274, in setup
os.unlink(tmp_part_path)
PermissionError: [WinError 32] Der Prozess kann nicht auf die Datei zugreifen, da sie von einem ande
ren Prozess verwendet wird: 'C:\\Windows\\System32\\tmphia0tzzm'
```
Answers:
username_1: Interesting, looks like a Windows file locking issue? I'll have to dust off my Windows machine and give it a closer look. I bet it affects all versions of Windows, but just in case, is this Windows 7 or?
username_0: Yes it's Windows 7...
username_1: Hey username_0, give it another shot, seems to work great for me on Windows 7 (and Unix).
username_0: That is strange. For me it is still not working. Actually I discovered the bug when using the *pip-tools* library which uses the *AtomicSaver*.
username_1: Interesting. Does that mean you're using the pip installed boltons or are you working off the git trunk, or easiest probably, just downloading [fileutils.py](https://raw.githubusercontent.com/username_1/boltons/master/boltons/fileutils.py) and importing it?
username_0: Currently I'm using the pip-installed boltons (0.6.5)
username_1: Ah, yes, the PyPI version hasn't been updated yet. To test the fix just run `pip install -e git+https://github.com/username_1/boltons.git#egg=boltons` and try the fileutils code in question. Or, if you'd like, I can roll out 0.6.6 and you can test that (I did test on Windows 7 earlier today ;) )
username_0: ... f.write('whatever')
...
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
TypeError: 'str' does not support the buffer interface
```
Am I doing something wrong by passing just the String to the write function?
username_1: Ah, Python 3. You need to encode that string to bytes before writing. `f.write('mystring'.encode('utf8'))` OR `f.write(b'mystring')`.
username_0: Ah, great. That works nicely now. I didn't think about byte encoding. Thank you very much for your help here :+1:
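To spell out the Python 3 str/bytes point for anyone landing here, a minimal sketch using a plain binary-mode file rather than AtomicSaver itself (the `TypeError` above indicates the underlying handle is binary, so the same rule applies):

```python
import os
import tempfile

# Illustration only: any binary-mode handle in Python 3 requires bytes, not str.
path = os.path.join(tempfile.mkdtemp(), "foo.txt")
with open(path, "wb") as f:
    try:
        f.write("whatever")                 # str -> TypeError on Python 3
    except TypeError:
        f.write("whatever".encode("utf8"))  # encode to bytes first

with open(path, "rb") as f:
    assert f.read() == b"whatever"
```

Equivalently, a bytes literal (`b'whatever'`) avoids the explicit `.encode()` call.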
Status: Issue closed
username_1: no problem, enjoy! |
FUSEEProjectTeam/Fusee | 1048614100 | Title: Request High-Performance GPU on multi GPU systems.
Question:
username_0: This is a problem on laptops.
https://stackoverflow.com/questions/17458803/amd-equivalent-to-nvoptimusenablement
Answers:
username_1: ---
Therefore, I currently see no way to implement this easily.
Suggestion: Wontfix & not our department
username_0: Here are some more resources I found.
1. Via Windows registry https://stackoverflow.com/a/59732413
2. Using NvApi https://stackoverflow.com/a/17277085
3. 🥲😨😱☠️ http://lucasmagder.com/blog/2014/05/exporting-data-symbols-in-c-for-nvidia-optimus/ |
hjoon0510/research-journal | 713276317 | Title: Login bug
Question:
username_0: This is a place to record and archive the project's work.
Systematically recording the work performed makes it possible to solve problems effectively, and greatly helps minimize repeated mistakes.
Work details
=======
After completing sign-up, a message such as "sign-up succeeded" or "duplicate account" should appear, but nothing appears and the page is blank.
Cause of the problem
=======

Solution
=======
I need to look through the code for any case where nothing would appear after sign-up.
That is all.
Answers:
username_0: 
Even looking at the if statements in the code, there is no case where nothing would appear.
username_0: * Solution: The reason login did not work after sign-up was that the tables had not been properly added to mysql/mariadb.
hfaran/slack-export-viewer | 309027063 | Title: Mistake In Messages.CS CreateSlackMessageHtmlArchiveFile
Question:
username_0: Good day,
First off, thank you very much for this; it helped me tremendously.
I found a small mistake in your code in the procedure `static void CreateSlackMessageHtmlArchiveFile`: the write to HTML was taking the first 250 lines every time instead of being incremented with `messageIndexPosition`.
Also, the HTML file being produced did not have any CR LF characters, causing it to become unmanageably "wide".
```csharp
StringBuilder fileBody = new StringBuilder();
fileBody.Append("<body>");
for (int i = messageIndexPosition; i < messageIndexPosition + numOfMessagesToTake; i++)
{
    var messageAsHtml = MessageToHtml(messageList[i], channelsMapping);
    fileBody.AppendLine(messageAsHtml);
}
fileBody.AppendLine("</body>");
messageIndexPosition += numOfMessagesToTake;
w.WriteLine(fileBody);
```
Answers:
username_0: crap wrong project.. please close.
Status: Issue closed
|
openboleto/openboleto | 614023722 | Title: Santander - Double discount (payment on different dates)
Question:
username_0: On a Santander boleto, is it possible to schedule discounts based on the payment date?
Example:
Pay by 01/XX/XXXX - Discount of R$ XX,XX
Pay by 05/XX/XXXX - Discount of R$ XX,XX
If this is possible, could it be turned into a feature for development?
Thank you.
Answers:
username_1: Hello, on any boleto you can include these texts in the messages, but the bank has to support it via the remittance (CNAB) file; otherwise it has no effect if the user pays the boleto through electronic means. Banco Santander supports these options, and they are also supported in my library quilhasoft\opencnabphp.
username_0: Good morning! Thanks for the reply. Doesn't the information need to be available on the boleto itself? Only in the CNAB file?
username_1: Good morning. On the boleto it is informational only; it is always good to include it, but if it is not in the CNAB file it has no effect.
username_0: Understood, thanks. I think then the readme just needs to be updated on how to add two discounts in the CNAB file.
Status: Issue closed
|
notion-enhancer/notion-enhancer | 839231837 | Title: Dark+ Mode doesn't show the code block properly
Question:
username_0: **problem**
why is this feature necessary? how will it help? what existing shortcomings does it address?
Dark+ mode is the most beautiful, but the code block is almost invisible due to its dark black background.
See the attached image.
Is there a way to change the code block background to light or grey, so we can identify the code block clearly?

**solution**
how will this feature appear/act?
Changing the code block background to a grey/light color will make it visible in Dark+ mode
**cons**
We can identify code block vs regular text
**alternatives**<issue_closed>
Status: Issue closed |
swagger-api/swagger-ui | 56509634 | Title: in a post method with query and formData parameters, query param is not sent
Question:
username_0: We were using header and query params in our service.
After adding a file input via formData, as shown below (bold part), Swagger UI does not send the supplied audio_text var in the query:
**{
"name": "audio_file",
"in": "formData",
"description": "call audio content.",
"required": false,
"type": "file"
},**
{
"name": "audio_text",
"in": "query",
"description": "call text",
"required": false,
"type": "string"
},
Status: Issue closed
Answers:
username_0: Sorry, my local changes caused this.
npgsql/efcore.pg | 1153262753 | Title: lambda contains convert to sql error
Question:
username_0: Converting a list `Contains` to SQL fails with error 42601: the converted SQL is missing the "in" expression. Can you fix it?
Answers:
username_1: @username_0 please post a runnable code sample.
username_0: before: this is the query lambda; list `Contains` is used.
`sysMenuList = await _sysMenuRep.DetachedEntities
.Where(u => menuIdList.Contains(u.Id))
.Where(u => u.Status == CommonStatus.ENABLE)
.Where(u => u.Application == appCode)
.Where(u => u.Type != MenuType.BTN)
.OrderBy(u => u.Sort).ThenBy(u => u.Id).ToListAsync();`
after: this is the converted SQL, where error 42601 occurred.
`Failed executing DbCommand (3ms) [Parameters=[@__menuIdList_0={ '142307070918746', '222058916036677', '222058916487237', '222058916487238', '222058916487239', ... } (DbType = Object), @__appCode_1='system'], CommandType='"Text"', CommandTimeout='30']
SELECT s."Id", s."Application", s."Code", s."Component", s."CreatedTime", s."CreatedUserId", s."CreatedUserName", s."Icon", s."IsDeleted", s."Link", s."Name", s."OpenType", s."Permission", s."Pid", s."Pids", s."Redirect", s."Remark", s."Router", s."Sort", s."Status", s."Type", s."UpdatedTime", s."UpdatedUserId", s."UpdatedUserName", s."Visible", s."Weight"
FROM sys_menu AS s
WHERE ((s."Id"@__menuIdList_0 AND (s."Status" = 0)) AND (s."Application" = @__appCode_1)) AND (s."Type" <> 2)
ORDER BY s."Sort", s."Id"`
username_1: @username_0 that is not a runnable code sample - I need to be able to reproduce the error in order to investigate. I also don't know which version of the EF Core provider you are using.
See the runnable code sample below which I've prepared, where everything works fine. You can try tweaking that code to make it produce your failure; once you do that, please submit it.
<details>
<summary>Attempted code sample</summary>
```c#
await using var ctx = new BlogContext();
await ctx.Database.EnsureDeletedAsync();
await ctx.Database.EnsureCreatedAsync();
var menuIdList = new[] { 1, 2 };
var appCode = "foo";
var sysMenuList = await ctx.DetachedEntities
.Where(u => menuIdList.Contains(u.Id))
.Where(u => u.Status == CommonStatus.ENABLE)
.Where(u => u.Application == appCode)
.Where(u => u.Type != MenuType.BTN)
.OrderBy(u => u.Sort).ThenBy(u => u.Id).ToListAsync();
public class BlogContext : DbContext
{
public DbSet<DetachedEntity> DetachedEntities { get; set; }
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
=> optionsBuilder
.UseNpgsql(@"Host=localhost;Username=test;Password=<PASSWORD>")
.LogTo(Console.WriteLine, LogLevel.Information)
.EnableSensitiveDataLogging();
}
public class DetachedEntity
{
public int Id { get; set; }
public string Name { get; set; }
public CommonStatus Status { get; set; }
public string Application { get; set; }
public MenuType Type { get; set; }
public int Sort { get; set; }
}
public enum CommonStatus
{
ENABLE,
DISABLE
}
public enum MenuType
{
BTN
}
```
</details> |
jillytot/remote-control | 474769271 | Title: Follow icon is white on page load but red after mouseover
Question:
username_0: 

<issue_closed>
Status: Issue closed |
MicrosoftDocs/azure-docs | 812018479 | Title: MFA methods & Hardware token vs Security Key & Passwordless
Question:
username_0: Hello,
1. Please explain the MFA methods in more detail
Right now if we look in this section in Azure cloud MFA: Verification code from mobile app or hardware token

It is not really clear what kind of hardware token we can use: one that shows a code, or a hardware token in the form of a security key?
Or does it mean we can use either a code from a mobile app or a code from the hardware token?
2. Please help to explain the behavior
If we enable the tenant to use only MFA Methods available to users:
- Notification through a mobile app
- Verification code from mobile app or hardware token
So with no SMS or Call, we could onboard users to a Security Key by excluding the user from the Conditional Access policy that requires MFA. But when adding a security key on the MySignins page, a message shows that the user must be signed in using MFA; the page reloads to one that says "More information required", and the user is stuck again if they cannot use a phone...
Would be great to have some feedback on the questions above and request document review in combination with the security keys.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: f132318f-8670-fd0d-2524-01305ccff2a1
* Version Independent ID: 0849b4a9-ff14-1d61-47f2-a2daecf3cde9
* Content: [Deployment considerations for Azure AD Multi-Factor Authentication](https://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-mfa-getstarted?redirectedfrom=MSDN#)
* Content Source: [articles/active-directory/authentication/howto-mfa-getstarted.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/active-directory/authentication/howto-mfa-getstarted.md)
* Service: **active-directory**
* Sub-service: **authentication**
* GitHub Login: @Justinha
* Microsoft Alias: **justinha**
Answers:
username_1: @username_0
Thanks for your feedback! We will investigate and update as appropriate.
username_2: Hi @username_0 , I've reached out to our engineering group about this to find you a solution.
Best,
James
Status: Issue closed
|
crkn-rcdr/sapindale | 616193905 | Title: Port 6 "d10n" tools to Sapindale.
Question:
username_0: After cleanup we currently have 6 remaining d10n tools, 3 of which are packaging-related. As an exercise in learning how Sapindale tools are created, as well as making the tools easier to use, these tools will be ported.
Status: Issue closed
Answers:
username_0: Makes no sense to have 1 issue for 6 tools, so closing and opening appropriate issues. |
sparcs-kaist/otlplus | 277236025 | Title: [Timetable] JavaScript error when class_title is missing
Question:
username_0: /Users/hsh0908y/Desktop/스크린샷 2017-11-28 오전 11.17.51.png
Answers:
username_0: <img width="352" alt="2017-11-28 11 17 51" src="https://user-images.githubusercontent.com/13213569/33299390-b25ae83c-d42e-11e7-9810-6d05b402c0f7.png">
Status: Issue closed
|
rust-lang/rust | 542628511 | Title: assertion failure with -Zmir-opt-level=2 on ./ui/array-slice-vec/arr_cycle.rs
Question:
username_0: `rustc ./ui/array-slice-vec/arr_cycle.rs -Zmir-opt-level=2`
file:
````rust
// run-pass
use std::cell::Cell;
#[derive(Debug)]
struct B<'a> {
a: [Cell<Option<&'a B<'a>>>; 2]
}
impl<'a> B<'a> {
fn new() -> B<'a> {
B { a: [Cell::new(None), Cell::new(None)] }
}
}
fn f() {
let (b1, b2, b3);
b1 = B::new();
b2 = B::new();
b3 = B::new();
b1.a[0].set(Some(&b2));
b1.a[1].set(Some(&b3));
b2.a[0].set(Some(&b2));
b2.a[1].set(Some(&b3));
b3.a[0].set(Some(&b1));
b3.a[1].set(Some(&b2));
}
fn main() {
f();
}
````
````
thread 'rustc' panicked at 'assertion failed: `(left != right)`
left: `Const`,
right: `Const`: UnsafeCells are not allowed behind references in constants. This should have been prevented statically by const qualification. If this were allowed one would be able to change a constant at one use site and other use sites could observe that mutation.', src/librustc_mir/interpret/intern.rs:167:17
stack backtrace:
0: backtrace::backtrace::libunwind::trace
at /home/matthias/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/libunwind.rs:88
1: backtrace::backtrace::trace_unsynchronized
at /home/matthias/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/mod.rs:66
2: std::sys_common::backtrace::_print_fmt
at src/libstd/sys_common/backtrace.rs:77
3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
at src/libstd/sys_common/backtrace.rs:59
4: core::fmt::write
at src/libcore/fmt/mod.rs:1057
5: std::io::Write::write_fmt
at src/libstd/io/mod.rs:1426
6: std::sys_common::backtrace::_print
at src/libstd/sys_common/backtrace.rs:62
7: std::sys_common::backtrace::print
at src/libstd/sys_common/backtrace.rs:49
8: std::panicking::default_hook::{{closure}}
[Truncated]
at /home/matthias/vcs/github/rust_debug_assertions/src/librustc_interface/util.rs:126
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
error: internal compiler error: unexpected panic
note: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/blob/master/CONTRIBUTING.md#bug-reports
note: rustc 1.42.0-dev running on x86_64-unknown-linux-gnu
note: compiler flags: -Z mir-opt-level=2
query stack during panic:
#0 [optimized_mir] processing `f`
#1 [collect_and_partition_mono_items] collect_and_partition_mono_items
end of query stack
````
rustc @ bbf13723bc22f1a850438bf0b103d09e474a1ef5
Answers:
username_1: cc @wesleywiser @username_2
username_2: Heh, the first victim of const prop acting as if it were creating constants.
Previous discussion on zulip: https://rust-lang.zulipchat.com/#narrow/stream/189540-t-compiler.2Fwg-mir-opt/topic/const.20prop.20breaking.20const.20rules/near/183548094
Status: Issue closed
|
paladin-t/b8 | 281660565 | Title: Awesome tool.Will support more language?
Question:
username_0: I saw this tool on Steam and like it very much.
Will you add more language support in the future?
I hope more people come to like it. (●'ω'●)丿❤
Answers:
username_1: Python is on a [mindstrom list](https://github.com/username_1/b8/projects/2), which would be major versions later even if determined.
The main factor is that there are risks in introducing new languages until it can run across platforms.
username_0: I feel sad. It seems like practicing my English is necessary.
Good luck.(●'ω'●)丿❤
username_1: Thanks. I can use Chinese natively, and English daily.
Chinese or English are both fine, but I'd suggest English; otherwise it looks to everyone else like two people playing ping-pong that they can't join.
username_1: ;D
Status: Issue closed
|
microsoft/fluentui | 785104208 | Title: Opened dropdown is not updating checkbox id when item gets inserted
Question:
username_0: ### Environment Information
- **Package version(s)**: 7.155.3
- **Browser and OS versions**: (fill this out if relevant)
### Please provide a reproduction of the bug in a codepen:
https://codepen.io/xinychen/pen/eYdPoNZ
#### Actual behavior:
Open the dropdown when "No Apple" is displayed, and keep the dropdown open until a new item "Apple" is inserted.
When there is a new item ("Apple") inserted before an existing item ("Banana"), the checkbox id of existing item is not changed.

This is causing duplicate checkbox id between "Apple" and "Banana".

And confusing interface behavior: When user clicks on "Banana", "Apple" gets toggled.

#### Expected behavior:
From UX perspective, user should never have such confusing interface behavior. If you open the dropdown when "Has Apple" is displayed, you can see what is expected from end user.

### Priorities and help requested:
Are you willing to submit a PR to fix? Not yet
Requested priority: Normal
Products/sites affected: (if applicable)
Answers:
username_0: Ping.
username_1: @username_0 - thanks for the issue submitted. I see you are maybe willing to submit a PR (not yet).
If you decide to do one - you can submit a feature to this by submitting a pull request to this github. This [section ](https://github.com/microsoft/fluentui/blob/master/packages/react/README.md#building-the-repo) of the readme provides some good getting started information! Additionally we are certainly here to help.
@username_2 - Would you be able to determine if this is a regression or if this behavior is an issue with the dropdown?
username_2: I think this is happening due to React's quirkness of not updating things if the key hasn't changed, which is what's happening with the Dropdown's options here. You can go around it by passing different keys as can be seen [in this codepen](https://codepen.io/username_2/pen/RwoeYWw?editors=0010). Given this, I don't think there's any action items on our side without significant rework of how the component works internally. Let us know if there's anything else you need from us! |
go-echarts/statsview | 743904519 | Title: feature request: instead of port number have a route
Question:
username_0: Hi!
Access to the graphs currently uses a separate port number. This is OK for local apps, but difficult for apps deployed remotely, because those are behind firewalls. In the case of Heroku, it is not even possible to open (or select) a second port. A special route like '/debug/vars' would work better in those cases.
Answers:
username_1: PING @username_0
The std profiler has been integrated into statsview since v0.2.0, hence you can use statsview as the only profiler in your program now. :)
Enjoy it!
Status: Issue closed
username_0: Aha, sorry, I only saw the port mentioned in the docs!
libssh2/libssh2 | 234560257 | Title: Unresolved external symbols - libssh2 NuGet
Question:
username_0: Hi.
I downloaded libssh2 from NuGet, but I can't make it work. (Using Visual Studio 2017)
```
1>s7c_SFTP.obj : error LNK2019: unresolved external symbol _libssh2_init referenced in function "private: int __thiscall s7c_SFTP::Connect(void)" (?Connect@s7c_SFTP@@AAEHXZ)
1>s7c_SFTP.obj : error LNK2019: unresolved external symbol _libssh2_exit referenced in function "private: int __thiscall s7c_SFTP::Connect(void)" (?Connect@s7c_SFTP@@AAEHXZ)
1>s7c_SFTP.obj : error LNK2019: unresolved external symbol _libssh2_session_init_ex referenced in function "private: int __thiscall s7c_SFTP::Connect(void)" (?Connect@s7c_SFTP@@AAEHXZ)
1>s7c_SFTP.obj : error LNK2019: unresolved external symbol _libssh2_session_handshake referenced in function "private: int __thiscall s7c_SFTP::Connect(void)" (?Connect@s7c_SFTP@@AAEHXZ)
1>s7c_SFTP.obj : error LNK2019: unresolved external symbol _libssh2_session_disconnect_ex referenced in function "private: int __thiscall s7c_SFTP::Connect(void)" (?Connect@s7c_SFTP@@AAEHXZ)
1>s7c_SFTP.obj : error LNK2019: unresolved external symbol _libssh2_session_free referenced in function "private: int __thiscall s7c_SFTP::Connect(void)" (?Connect@s7c_SFTP@@AAEHXZ)
1>s7c_SFTP.obj : error LNK2019: unresolved external symbol _libssh2_hostkey_hash referenced in function "private: int __thiscall s7c_SFTP::Connect(void)" (?Connect@s7c_SFTP@@AAEHXZ)
1>s7c_SFTP.obj : error LNK2019: unresolved external symbol _libssh2_userauth_password_ex referenced in function "private: int __thiscall s7c_SFTP::Connect(void)" (?Connect@s7c_SFTP@@AAEHXZ)
1>s7c_SFTP.obj : error LNK2019: unresolved external symbol _libssh2_session_set_blocking referenced in function "private: int __thiscall s7c_SFTP::Connect(void)" (?Connect@s7c_SFTP@@AAEHXZ)
1>c:\users\7catt\documents\visual studio 2017\Projects\s7c_SFTP\Debug\s7c_SFTP.exe : fatal error LNK1120: 9 unresolved externals
```
I already have in the code:
```
#include <libssh2.h>
#include <libssh2_sftp.h>
#pragma comment(lib, "Ws2_32.lib")
```
Status: Issue closed
Answers:
username_1: You should take that up with whoever built that version. The libssh2 project doesn't ship any binaries at all, we ship source code. |
SpiderStrategies/node-tweet-stream | 49315234 | Title: Track multiple words at once?
Question:
username_0: Can we add a new function (or alter the current track() function) that could accept multiple words at once.
Instead of:
tweet_stream.track("node");
Something like:
tweet_stream.track(["node"]);
Where the array could contain more than one term.
Answers:
username_1: Thanks @username_0
In 1.7.0
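For reference, the calling pattern shipped in 1.7.0 can be sketched along these lines (a stand-in normalizer for illustration, not the library's real internals):

```javascript
// Stand-in for how a track() API can accept either one term or many:
// normalize the argument to an array, then handle each term uniformly.
function normalizeTerms(termOrTerms) {
  return Array.isArray(termOrTerms) ? termOrTerms : [termOrTerms];
}

// tweet_stream.track('node')          -> tracks one term
// tweet_stream.track(['node', 'npm']) -> tracks both terms at once
console.log(normalizeTerms('node'));          // [ 'node' ]
console.log(normalizeTerms(['node', 'npm'])); // [ 'node', 'npm' ]
```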
Status: Issue closed
|
google/ground-platform | 935084973 | Title: [Feature list] Show feature list for each layer
Question:
username_0: @gauravchalana1 :
As an initial prototype:
- [ ] On layer click in the layer list, replace layer list with "feature list" side panel (similar to feature details panel). To do this you'll need to update the URL via NavigationService. URL format might be `#fl=layerId`.
- [ ] Show a scrolling list of all features in that panel. We don't have labels for features yet - @parulraheja98 to provide code for this soon. (Gentle ping :) For now just show the uuid to test.
- [ ] Clicking on a feature in the list opens the feature details panel for that feature. (use `NavigationService`).
We don't have mocks or proper designs for this yet; @jacobmclaws is this something you can work on? We basically need to show a list of features for a particular layer. The header might show the layer name and "Features" heading. In the list we'll show the feature name (generated from imported ID or label) or user defined. Wdyt?
Answers:
username_0: @jacobmclaws Gentle ping. Gaurav has an interim solution in https://github.com/google/ground-platform/pull/780, but it would be nice to have your input before he switches on the feature in his next PR.
username_0: @gauravchalana1, I believe @parulraheja98 implemented the feature label logic. Would you be able to build it into the feature list now as its label?
username_0: Removing assignees due to lack of activity. @DaoyuT @os-micmec we can come up with a draft UX for this just to get the feature working end-to-end and refine iteratively. |
sindresorhus/caprine | 401179570 | Title: Move to TypeScript?
Question:
username_0: @CvX Sounded like you want this and I don't mind. I find TypeScript much easier to work with in larger codebases.
Answers:
username_1: why not coffeescript 🤔
username_0: @username_1 I considered [Dogescript](http://dogescript.io/).
Status: Issue closed
|
kingarthur91/PyCoalTBaA | 551997780 | Title: Drill head - unknown key - phosphate mine
Question:
username_0: I've got a phosphate mine on top of ancient remains. I'm seeing "consumes unknown key: fuel-category-name.drill" on the deployed mine. The icon appears to be a drill head? Simultaneously, when I go to the drill head in the production menu, I see "unknown key: fuel-category-name.drill" in the mouseover.
Is this a locale bug/omission?
Status: Issue closed |
discord-net/Discord.Net | 593327408 | Title: DiscordWebhookClient spams rate limit warning
Question:
username_0: I am using DiscordWebhookClient to send a message to various channels. My Discord.Net package is version 2.1.1 (not preview).
For the most part it is fine, but problem starts with pre-emptive rate limit. The retry seems to be done in a loop, without really waiting for rate limits to expire. Using RetryMode.RetryRatelimit (for other cases I am using exceptions to handle manually).
I am listening to Log event, and then I use DataDog to aggregate structured logs, and my logs are spammed with this (each of the generated logs is for the exact same message):

```cs
try
{
    using (DiscordWebhookClient client = new DiscordWebhookClient(target.Address))
    {
        if (onClientLog != null)
            client.Log += onClientLog;

        await client.SendMessageAsync(
            ping.Content,
            ping.IsTTS,
            ping.Embeds.Select(embed => embed.ConvertToEmbed()),
            ping.DisplayName,
            ping.DisplayAvatarUrl,
            new DiscordNet.RequestOptions()
            {
                CancelToken = cancellationToken,
                RetryMode = DiscordNet.RetryMode.RetryRatelimit
            });
        return true;
    }
}
catch (TimeoutException ex) when (LogWarningWithScope(logger, ex, "A request to Discord has timed out"))
{
    await Task.Delay(discordOptions.TimeoutRetryInterval);
    return false;
}
catch (Discord.Net.HttpException ex) when (LogWarningWithScope(logger, ex, "HttpException occured when making a request to Discord"))
{
    await Task.Delay(discordOptions.HttpExceptionRetryInterval);
    return false;
}
```
My code tries to send the message only once (unless an exception has occurred, which isn't the case here). The calling code, for reference:
```cs
uint attemptsCount = 0;
for (; ;)
{
    if (await DiscordWebhookSenderHelper.TrySendPingAsync(discordPing, target, _discordOptions.CurrentValue, _log, OnClientLog, cancellationToken))
        return;

    if (++attemptsCount >= _discordOptions.CurrentValue.MaxAttemptsCount)
    {
        _log.LogError("Sending message failed after {AttemptsCount} attempts. Sending aborted", attemptsCount);
        return;
    }
}
```
Answers:
username_1: I'm having the same problem, but with role creations.
Executing an API request from a different IP address will make the problem go away temporarily.
The only reason I can think of is that it's sending a lot of invalid requests, resulting in a temporary IP ban or something.
This is a really bad bug, since it's basically making my bot unusable at times.
username_2: Does this still happen in Discord.Net 3.0? |
ibm-openbmc/dev | 683643392 | Title: PFP::presence - Design JSON configuration for logging events and PELs for fans missing
Question:
username_0: Determine a design to allow an optional configuration for logging events and PELs when fans are missing. The configuration should be similar to how PDM is configured to log events for missing fans, where a timer is configured so the event is logged only after the fan has been missing for a period of time.
- [ ] Determine the JSON format and location (an additional JSON file, or contained within the current `config.json`)
- [ ] Design class structure based on the determined configuration details
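Purely as a strawman to anchor the discussion (field names here are hypothetical, loosely mirroring PDM's delayed-reporting style, and not a decided format), the optional configuration might look something like:

```json
{
  "fans": [
    {
      "name": "fan0",
      "presence": { "method": "gpio" },
      "reporting": {
        "log_event": true,
        "create_pel": true,
        "missing_delay_seconds": 5
      }
    }
  ]
}
```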
Status: Issue closed
Answers:
username_1: Going to handle this in #2533 |
kubernetes-sigs/azuredisk-csi-driver | 798053563 | Title: Support pre-created Service Accounts in the Helm chart
Question:
username_0: **Is your feature request related to a problem?/Why is this needed**
**Describe the solution you'd like in detail**
Currently this driver always creates new service accounts; we should also support pre-created Service Accounts in the Helm chart.
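A common Helm convention for this (the values below are hypothetical for this chart and shown only to illustrate the request, not its current schema) is a `serviceAccount.create` toggle plus overridable names:

```yaml
# Illustrative values.yaml fragment, not the chart's actual schema:
serviceAccount:
  create: false                             # skip creating new ServiceAccounts
  controller: csi-azuredisk-controller-sa   # pre-created SA to bind instead
  node: csi-azuredisk-node-sa
```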
**Describe alternatives you've considered**
**Additional context**
spinnaker/spinnaker | 327828236 | Title: Print useful error when attempting to display the BOM versions with invalid Spinnaker version set in halconfig
Question:
username_0: ### Issue Summary:
Print useful error when attempting to display the BOM versions with invalid Spinnaker version set in halconfig
### Cloud Provider(s):
### Environment:
local git
### Feature Area:
halyard
### Description:
Currently, halyard doesn't provide any useful information when Spinnaker's version is set to a bad value and `hal version bom` is called:
```
➜ hal version bom
+ Get current deployment
Success
+ Get Spinnaker version
Success
+ Get current deployment
Success
+ Get Spinnaker version
Success
! ERROR 400
? Try the command again with the --debug flag.
```
We would like to add more helpful error messages:
- invalid version because
- version number does not exist
### Steps to Reproduce:
### Additional Details:
Answers:
username_0: @lwander Could you assign me please
Status: Issue closed
|
Azure/azure-sdk-for-java | 924266587 | Title: Create AzureApplicationCredential
Question:
username_0: This issue is to track the design and implementation of the `AzureApplicationCredential` for applications calling service APIs not suited for `DefaultAzureCredential`. Details can be found [here](https://gist.github.com/9793dbc036d16708fb7d8fe411fe0f1f)
Answers:
username_0: Related PR: https://github.com/Azure/azure-sdk-for-python/pull/19403
username_0: @username_1 could you update this issue please?
Status: Issue closed
|
sparklemotion/mechanize | 55564336 | Title: If multiple submit buttons are present on a form, all of them are included in the query
Question:
username_0: The following line ( https://github.com/sparklemotion/mechanize/blob/master/lib/mechanize/form.rb#L259 ) should NOT include ```+submits``` — otherwise, if you have multiple submit buttons:
```
<input type="submit" value="Delete">
<input type="submit" value="Next">
```
then the query string will include both, whereas they are exclusive — after all, you can't click both at once with a mouse.
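To make the exclusivity concrete, here is a tiny stand-in (a hypothetical helper, not Mechanize's actual implementation) for serializing a form so that only the clicked submit button contributes a pair:

```ruby
# Sketch: include ordinary fields always, but only the one submit button
# that was actually "clicked" -- never all of them.
def build_query(fields, submits, clicked)
  pairs = fields.to_a
  pairs << [clicked[:name], clicked[:value]] if submits.include?(clicked)
  pairs
end

fields  = { 'q' => 'ruby' }
submits = [{ name: 'commit', value: 'Delete' },
           { name: 'commit', value: 'Next' }]

query = build_query(fields, submits, submits[1])
# => [["q", "ruby"], ["commit", "Next"]] -- "Delete" is never sent alongside.
```

In Mechanize itself, the usual way to pick a button is to pass it to `form.submit(button)` (or `agent.submit(form, button)`), which is what makes this exclusive behavior observable.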
Answers:
username_1: This was reverted in d0cbc8f in July 2015 after a conversation in 1<PASSWORD>
Status: Issue closed
|
9958/rinblog | 136617649 | Title: ecshop显示商品已销售数量
Question:
username_0: Count a product's total sales, and optionally its sales over the last six months, three months, or one month. Copy the function below and paste it at the end of goods.php in the site root. Then go back to line 241 of that file and assign the variable:
$smarty->assign('goods_count', get_goods_count($goods_id));
Finally, reference {$goods_count} directly in the template file goods.dwt.
<!--more-->
```php
/**
 * Get the total sales count of a product
 *
 * @access  public
 * @param   integer $goods_id
 * @return  integer
 */
function get_goods_count($goods_id)
{
    /* Statistics period (uncomment to enable)
    $period = intval($GLOBALS['_CFG']['top10_time']);
    if ($period == 1)      // one year
    {
        $ext = " AND o.add_time > '" . local_strtotime('-1 years') . "'";
    }
    elseif ($period == 2)  // six months
    {
        $ext = " AND o.add_time > '" . local_strtotime('-6 months') . "'";
    }
    elseif ($period == 3)  // three months
    {
        $ext = " AND o.add_time > '" . local_strtotime('-3 months') . "'";
    }
    elseif ($period == 4)  // one month
    {
        $ext = " AND o.add_time > '" . local_strtotime('-1 months') . "'";
    }
    else
    {
        $ext = '';
    }
    */

    /* Query this product's sales count */
    $sql = 'SELECT IFNULL(SUM(g.goods_number), 0) ' .
           'FROM ' . $GLOBALS['ecs']->table('order_info') . ' AS o, ' .
           $GLOBALS['ecs']->table('order_goods') . ' AS g ' .
           "WHERE o.order_id = g.order_id " .
           "AND o.order_status = '" . OS_CONFIRMED . "' " .
           "AND o.shipping_status " . db_create_in(array(SS_SHIPPED, SS_RECEIVED)) .
           " AND o.pay_status " . db_create_in(array(PS_PAYED, PS_PAYING)) .
           " AND g.goods_id = '$goods_id'";

    $sales_count = $GLOBALS['db']->getOne($sql);

    return $sales_count;
}
```
Uncomment the period block above to restrict the count to a specific time range (note that you would also need to append `$ext` to the SQL's WHERE clause for the filter to take effect).
OpenAPITools/openapi-generator | 350803665 | Title: Multiple examples in yaml end as response that is array - generrating Spring Boot
Question:
username_0: ##### Description
When a model has multiple examples:
```
example: # Sample objects
one: # Sample object 1
id: 'd7f94ec-072b-4187-0000-6d4a8aa824d9'
sometype: 'Some1'
descritpion: 'Decription1'
two: # Sample object 2
id: 'aaa94ec-072b-8787-0000-6d4a8bb824d9'
sometype: 'Some2'
descritpion: 'Decription2'
```
You end up with something like this in the generated interface:
`ApiUtil.setExampleResponse(request, "application/json", "{ \"one\" : { \"id\" : \"d7f94ec-072b-4187-0000-6d4a8aa824d9\", \"sometype\" : \"Some1\", \"descritpion\" : \"Decription1\" }, \"two\" : { \"id\" : \"aaa94ec-072b-8787-0000-6d4a8bb824d9\", \"agencytype\" : \"Some2\", \"sometype\" : \"Decription2\" }}");`
##### openapi-generator version
3.2.1
##### OpenAPI declaration file content or url
##### Command line used for generation
simple:
-g spring -o directory
##### Steps to reproduce
##### Related issues/PRs
##### Suggest a fix/enhancement
Answers:
username_1: Don't refer to swagger documentation. Multiple examples should be handled in the future: https://github.com/OpenAPITools/openapi-generator/blob/master/modules/openapi-generator/src/main/java/org/openapitools/codegen/examples/ExampleGenerator.java#L64 |
SAP/ui5-webcomponents | 637528725 | Title: ui5-dialog cannot be rendered on mobile simulator
Question:
username_0: **Describe the bug**
https://webclient430mdkdavid10344sampl-x0z1n26k24.dispatcher.int.sap.eu2.hana.ondemand.com/?hc_commitid=6a4f3952248316def660640fed1038c2155c6d95
There are no issues on a desktop browser, but error messages are produced in any mobile browser (both iOS and Android emulators) when the contents of ui5-dialog are rendered.
**To reproduce**
Steps to reproduce the behavior:
1. Open the link on both a desktop browser and a mobile emulator
2. Click on the Hello World button.
**Expected behavior**
Contents of ui5-dialog should be rendered in the same way in both the desktop and mobile browsers
**Screenshots**
If applicable, add screenshots to help explain your problem.

**Context**
- UI5 Web Components version 0.0.0-934b4df24
**Affected components** *(if known)*
- ui5-dialog
**Log output / Any errors** in the console
- As attached
Answers:
username_0: https://webclient430mdkdavid10344sampl-x0z1n26k24.dispatcher.int.sap.eu2.hana.ondemand.com/?hc_commitid=6a4f3952248316def660640fed1038c2155c6d95
URL if the above does not work.
username_1: Hello, @username_0
Thanks very much for your report! We'll look into this.
How to reproduce (the URL above gives 403 Forbidden to me)
https://sap.github.io/ui5-webcomponents/master/playground/main/pages/Dialog/
Use chrome simulator
Open the **stretched** dialog (first button).
username_1: Hello again,
Actually I was wrong, we still can't reproduce it:

Please open the test page in your simulator:
https://sap.github.io/ui5-webcomponents/master/playground/main/pages/Dialog/
and try the dialogs on the page.
Regards,
Vladi
username_0: Hi @username_1,
https://github.com/username_0/ui5-dialog
Could you try this link instead?
How to reproduce:
1. git clone the repo
2. unzip the node modules zip file
3. run command parcel src/index.html
4. check local host on both a mobile simulator and on desktop browser
It should work fine on a desktop browser. The problem lies on the mobile simulator where the ui5-dialog is not rendered properly. Thank you!
Status: Issue closed
username_0: Hi @username_2 , understand that you have already merged a fix. On my end, it still seems that the problem persists. May I know how to update to get the fix?
Thank you! |
OGRECave/ogre-next | 973216415 | Title: Cannot load vulkan rendersystem on archlinux
Question:
username_0: #### System Information
- Ogre Version: ogre-next revision f5ff301701de70d9efc458899aeb17a870eb71e4
- Operating System / Platform: ArchLinux x64
- RenderSystem: Vulkan
- GPU: AMD RX480
#### Detailed description
The Vulkan render system fails to load due to a missing symbol when trying to start the samples.
Tried both the radv and amdvlk drivers.
#### Ogre.log
```
Loading library /usr/lib/OGRE/RenderSystem_Vulkan
An exception has occured: OGRE EXCEPTION(7:InternalErrorException): Could not load dynamic library /usr/lib/OGRE/RenderSystem_Vulkan. System Error: /usr/lib/OGRE/RenderSystem_Vulkan.so.2.3.0: undefined symbol: _ZN7glslang8TProgram10getInfoLogEv in DynLib::load at /mnt/arch_cache/SRC/pkgs/ogre-next-git/src/ogre-next/OgreMain/src/OgreDynLib.cpp (line 108)
```
#### Callstack
Answers:
username_1: My best guess is that Vulkan is being linked against a system-installed `shaderc` instead of the one built by [ogre-next-deps](https://github.com/OGRECave/ogre-next-deps)
Do you have your `CMakeCache.txt`? I'm particularly interested in the values of `Vulkan_SHADERC_LIB_REL`, `Vulkan_SHADERC_LIB_DBG` and `Vulkan_LIBRARIES`
username_0: You are right
CMakeCache.txt:
```
# this library belongs to system installed shaderc
Vulkan_SHADERC_LIB_REL:FILEPATH=/lib/libshaderc_combined.a
# this library belongs to vulkan-icd-loader
Vulkan_LIBRARY:FILEPATH=/lib/libvulkan.so
# no Vulkan_SHADERC_LIB_DBG, did not enable debug build
```
ldd /usr/lib/OGRE/RenderSystem_Vulkan.so:
```
linux-vdso.so.1 (0x00007ffc755a5000)
libOgreMain.so.2.3.0 => /usr/lib/libOgreMain.so.2.3.0 (0x00007f6f7e865000)
libvulkan.so.1 => /usr/lib/libvulkan.so.1 (0x00007f6f7e804000)
libxcb-randr.so.0 => /usr/lib/libxcb-randr.so.0 (0x00007f6f7e7f2000)
libX11-xcb.so.1 => /usr/lib/libX11-xcb.so.1 (0x00007f6f7e7ed000)
libX11.so.6 => /usr/lib/libX11.so.6 (0x00007f6f7e6ac000)
libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x00007f6f7e496000)
libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x00007f6f7e479000)
libc.so.6 => /usr/lib/libc.so.6 (0x00007f6f7e2ad000)
libXt.so.6 => /usr/lib/libXt.so.6 (0x00007f6f7e242000)
libXaw.so.7 => /usr/lib/libXaw.so.7 (0x00007f6f7e1cc000)
libXrandr.so.2 => /usr/lib/libXrandr.so.2 (0x00007f6f7e1bf000)
libpthread.so.0 => /usr/lib/libpthread.so.0 (0x00007f6f7e19e000)
libdl.so.2 => /usr/lib/libdl.so.2 (0x00007f6f7e195000)
libfreeimage.so.3 => /usr/lib/libfreeimage.so.3 (0x00007f6f7e0d2000)
libzzip-0.so.13 => /usr/lib/libzzip-0.so.13 (0x00007f6f7e0c8000)
libz.so.1 => /usr/lib/libz.so.1 (0x00007f6f7e0ae000)
libm.so.6 => /usr/lib/libm.so.6 (0x00007f6f7df6a000)
/usr/lib64/ld-linux-x86-64.so.2 (0x00007f6f7f056000)
libxcb.so.1 => /usr/lib/libxcb.so.1 (0x00007f6f7df40000)
libXau.so.6 => /usr/lib/libXau.so.6 (0x00007f6f7df39000)
libXdmcp.so.6 => /usr/lib/libXdmcp.so.6 (0x00007f6f7df31000)
libSM.so.6 => /usr/lib/libSM.so.6 (0x00007f6f7df27000)
libICE.so.6 => /usr/lib/libICE.so.6 (0x00007f6f7df0a000)
libXext.so.6 => /usr/lib/libXext.so.6 (0x00007f6f7def5000)
libXmu.so.6 => /usr/lib/libXmu.so.6 (0x00007f6f7deda000)
libXpm.so.4 => /usr/lib/libXpm.so.4 (0x00007f6f7dec4000)
libXrender.so.1 => /usr/lib/libXrender.so.1 (0x00007f6f7deb7000)
libjpeg.so.8 => /usr/lib/libjpeg.so.8 (0x00007f6f7de25000)
libjxrglue.so.0 => /usr/lib/libjxrglue.so.0 (0x00007f6f7ddff000)
libOpenEXR-3_1.so.30 => /usr/lib/libOpenEXR-3_1.so.30 (0x00007f6f7daf2000)
libIex-3_1.so.30 => /usr/lib/libIex-3_1.so.30 (0x00007f6f7da74000)
libImath-3_1.so.29 => /usr/lib/libImath-3_1.so.29 (0x00007f6f7da1f000)
libopenjp2.so.7 => /usr/lib/libopenjp2.so.7 (0x00007f6f7d9bd000)
libraw.so.20 => /usr/lib/libraw.so.20 (0x00007f6f7d8b0000)
libpng16.so.16 => /usr/lib/libpng16.so.16 (0x00007f6f7d879000)
libtiff.so.5 => /usr/lib/libtiff.so.5 (0x00007f6f7d7e5000)
libwebpmux.so.3 => /usr/lib/libwebpmux.so.3 (0x00007f6f7d7d9000)
libwebp.so.7 => /usr/lib/libwebp.so.7 (0x00007f6f7d768000)
libuuid.so.1 => /usr/lib/libuuid.so.1 (0x00007f6f7d75f000)
libjpegxr.so.0 => /usr/lib/libjpegxr.so.0 (0x00007f6f7d71b000)
libIlmThread-3_1.so.30 => /usr/lib/libIlmThread-3_1.so.30 (0x00007f6f7d712000)
libjasper.so.4 => /usr/lib/libjasper.so.4 (0x00007f6f7d6b0000)
liblcms2.so.2 => /usr/lib/liblcms2.so.2 (0x00007f6f7d64c000)
libgomp.so.1 => /usr/lib/libgomp.so.1 (0x00007f6f7d608000)
libzstd.so.1 => /usr/lib/libzstd.so.1 (0x00007f6f7d4f9000)
liblzma.so.5 => /usr/lib/liblzma.so.5 (0x00007f6f7d4d1000)
```
username_0: I am building ogre-next from AUR: https://aur.archlinux.org/packages/ogre-next-git
I modified the PKGBUILD to link Ogre against the shaderc from ogre-next-deps, and the problem is gone.
For those facing the same problem, I have posted my workaround in the above link.
username_1: So, should I close then?
Looks like a packaging problem
username_0: It seems to be an incompatibility caused by the new shaderc version, so I described my solution as a workaround. I think it is still a problem to be solved.
Anyway, I changed my title to describe the real problem.
username_0: OK, I think I found the problem. It is a packaging problem: Arch Linux does not include libSPIRV in libshaderc_combined.a.
So either link to libshaderc_shared.so or link to a custom-built libshaderc_combined.a.
Status: Issue closed
|
themyth92/ngx-lightbox | 327331917 | Title: when centerVertically, initial loading view does not show
Question:
username_0: Hi, thanks for this great lib. But this problem still exists, could you look into it?
username_1/angular2-lightbox#46
Answers:
username_1: Can you try it without Semantic UI? Most likely it is a CSS conflict, but I am not really sure.
username_1: Closed, feel free to open if you still experience any issues
Status: Issue closed
|
ikedaosushi/tech-news | 560996561 | Title: Python basics to remember: how to freely manipulate Excel cells and sheets | Nikkei xTECH (Cross Tech)
Question:
username_0: Python basics to remember: how to freely manipulate Excel cells and sheets | Nikkei xTECH (Cross Tech)<br>
<br>
https://ift.tt/2Usk2Hl |
jdi-testing/jdi-light | 677484298 | Title: Implement native Android element: Checkbox
Question:
username_0: Reference to Epic issue - #2245
1. Implement native Android element: Checkbox. See its definition here: https://developer.android.com/guide/topics/ui/controls/checkbox
2. Cover its methods with "kind of unit" tests that verify it works properly (they should contain AssertThat and similar non-JDI methods).
3. Create test examples of how to use this element (with some tests).
4. Create an article in documentation with test examples from item 3. (Use the iOS element article as an example.)
Status: Issue closed |
DragonCherry/AssetsPickerViewController | 241436680 | Title: Install issue using cocoapods
Question:
username_0: I tried to install AssetsPickerViewController using cocoapods.
But it installs several other libraries such as Dimmer, FadeView and OptionalTypes.
I got the following issues:
Analyzing dependencies
Pre-downloading: `AssetsPickerViewController` from `https://github.com/username_1/AssetsPickerViewController`, branch `swift3`
Downloading dependencies
Installing AssetsPickerViewController (1.1.1)
Installing Dimmer (1.0.0)
Installing FadeView (1.0.2)
Installing OptionalTypes (1.0.3)
[!] Error installing OptionalTypes
[!] /usr/bin/git clone https://github.com/username_1/OptionalTypes.git /var/<KEY>T/d20170708-22171-brgj2f --template= --single-branch --depth 1 --branch 1.0.3
Cloning into '/var/<KEY>d20170708-22171-brgj2f'...
warning: Could not find remote branch 1.0.3 to clone.
fatal: Remote branch 1.0.3 not found in upstream origin
How can I solve this problem?
I am using Xcode 8.3.2 on macOS 10.12.4.
Regards
Answers:
username_1: Did you try 'pod repo update' before 'pod update'? It should have installed 1.0.4 of OptionalTypes, not 1.0.3.
Status: Issue closed
username_1: Closing this. Please notify me if it occurs again. |
kaicataldo/material.vim | 421667055 | Title: Thank you
Question:
username_0: This is the best Material Theme for Vim, the others don't even come close.
Thank you :)
Status: Issue closed
Answers:
username_1: Thanks for the kind words. I'm going to close this just so it's not an outstanding issue, but thanks again for dropping me a line - it's very much appreciated! 😄 |
rethinkdb/rethinkdb | 13577902 | Title: Push nightly source builds to the download service
Question:
username_0: When Jenkins builds a nightly tgz, we need to collect them in a folder on `dr-doom` and `rsync` them to the download service.
A counter-proposal is welcome (particularly since it's not clear how many builds we should keep before purging old builds) but right now a link is broken on our site since we have no way to provide nightly builds.
Answers:
username_1: @username_0 are you still interested in having this? There hasn't been much demand for it and we have gotten pretty fast at releasing point releases.
Status: Issue closed
username_0: I don't think there's a lot of demand for it, closing. |
lukewaite/logstash-input-cloudwatch-logs | 230645896 | Title: Docker AWS Logging driver
Question:
username_0: When using the AWS logging driver, each container creates a log stream within a log group. If a container moves to a different host or gets re-provisioned, it creates a new log stream, which results in lots of log streams being generated.
While running Logstash in debug mode, I can see that the recursive scan tries to read and parse all the log streams ever created, before it runs out of memory and fails:
:message=>"A plugin had an unrecoverable error. Will restart this plugin.\n Plugin: <LogStash::Inputs::CloudWatch_Logs region=>\"eu-west-1\"
How have others solved this problem?
Answers:
username_1: I'm running it at the moment with 1 GB allocated for the JVM heap. I haven't hit the "out of memory" issue yet, but it has *terrible* performance: scanning the various log streams is very slow, taking as much as 200 ms per stream. We'll still give it a try, but it seems this plugin is not useful for us. As an alternative, I'm considering a Lambda function that reads from CloudWatch Logs and puts events onto a Redis list, then using Logstash to retrieve the events from there.
username_0: We have a 1g heap too. Would be interesting to see if anyone else is getting similar issues
username_2: @username_0 Are you ingesting a single log group? Or are you using the ingest by prefix mode?
The plugin has largely not needed to be touched since it was written; there's definitely room for performance improvements. I've been meaning to begin a refactor. Will see how much headway I can make today while travelling.
username_1: @username_2 In the AWS ECS case, each container creates a new stream in the same log group. If you're unlucky and push a container which fails to start, it will be restarted in an endless loop, and it's not uncommon for such a container to create several thousand streams under one log group.
username_0: @username_1 so this is exactly the issue :)
username_2: Ah - Thanks! I haven't run in production against ECS as the log source myself, so hadn't considered the possibility of a failing container in a service generating thousands of streams.
I thought I had a pretty good test case with lambdas generating a few hundred streams a day...
Ok, I have some thoughts on how to re-structure to improve performance.. Will see how it goes.
username_2: I've just tagged `v1.0.0.pre`, a pre-release which fixes a few memory leaks that I think were the culprit here.
I'd appreciate any feedback, initial testing on my end looks good.
https://github.com/username_2/logstash-input-cloudwatch-logs/releases/tag/v1.0.0.pre
Status: Issue closed
|
denoland/deno_std | 892707649 | Title: std/io/bufio.ts - `readLines` and `readStringDelim` should allow to specify the character encoding
Question:
username_0: Hi,
This is an improvement proposal. It should somehow be possible to tell [readLines](https://deno.land/[email protected]/io/bufio.ts#L703) and [readStringDelim](https://deno.land/[email protected]/io/bufio.ts#L691) which encoding the `Reader` data actually has. That way we could easily read files line by line in any supported encoding.
The idea would be to somehow parameterize both functions in order to affect the creation of the `TextDecoder` [here](https://deno.land/[email protected]/io/bufio.ts#L696).
What do you think?
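To illustrate the requested behavior (sketched here in Python, since its standard library has the analogous building blocks; the Deno version would similarly parameterize the `TextDecoder` created inside `readLines`):

```python
import io

def read_lines(raw, encoding="utf-8"):
    # Wrap the byte stream in a decoder for the caller-specified
    # encoding, then yield one line at a time -- the behavior this
    # proposal asks readLines to support.
    text = io.TextIOWrapper(raw, encoding=encoding)
    for line in text:
        yield line.rstrip("\n")

# A UTF-16 byte stream (with BOM) is decoded correctly once the
# caller can state the encoding.
data = "hello\nworld\n".encode("utf-16")
lines = list(read_lines(io.BytesIO(data), encoding="utf-16"))
```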
Status: Issue closed |
db-migrate/node-db-migrate | 115663401 | Title: npm install issue with version 0.10.x
Question:
username_0: Hello,
Whilst specifying version "db-migrate": "0.10.0-beta.4" in the package.json file and then calling npm install, it seems to skip installing node-db-migrate. After the install is complete, all the packages have been installed apart from db-migrate. If I specify the version as 0.9.x, this issue doesn't occur. This is slightly problematic if you want to use the module's API options and run it via a .js file. Any idea what might be going on?
Package.json content - https://gist.github.com/username_0/44ec86475c7f15085a53
Answers:
username_1: I need more information from you:
- node version used
- npm version used
However, I have tried with many npm versions and node versions and all worked. How did you come to the conclusion that it was skipped?
username_0: Information:
- Node 0.10.29
- Npm - 2.11.3
Here are the steps I used
1. rm -rf node_module/
2. npm install
3. ls -l node_modules
Everything is listed apart from db-migrate. If I then call npm update && npm install, then it gets installed. But I always have to call npm update first.
If I change the version to 0.9.x for db-migrate then everything is fine with just npm install. This happens across develop, integration and all other environments where jenkins handles the builds.
Status: Issue closed
username_0: It was a local build issue. Sorry about that. All fixed now
username_1: @username_0 Great to hear! Do not hesitate to ask again if you encounter any issues. |
dropwizard/dropwizard | 154038781 | Title: can't register exceptionmapper in 0.9.2
Question:
username_0: **This is my exception mapper.**

```java
@Provider
public class RuntimeExceptionMapper implements ExceptionMapper<Throwable> {
    private static final Logger LOG = Logger.getLogger(RuntimeExceptionMapper.class.getName());

    @Override
    public Response toResponse(Throwable exception) {
        Response defaultResponse = Response.status(Response.Status.NOT_FOUND)
                .entity(new ErrorView("/500.mustache"))
                .build();
        if (exception instanceof WebApplicationException) {
            return handleWebApplicationException(exception, defaultResponse);
        }
        LOG.warn(exception.getMessage(), exception);
        return defaultResponse;
    }

    private Response handleWebApplicationException(Throwable exception, Response defaultResponse) {
        WebApplicationException webAppException = (WebApplicationException) exception;
        if (webAppException.getResponse().getStatus() == 404) {
            return Response
                    .status(Response.Status.NOT_FOUND)
                    .entity(new ErrorView("404.mustache"))
                    .build();
        }
        LOG.warn(exception.getMessage(), exception);
        return defaultResponse;
    }
}
```
**I have defined:**

```yaml
registerDefaultExceptionMappers: false
```

in my config.yml, and I have checked that it doesn't register any default Jersey exception mappers.

**I register my exception mapper with the lines:**

```java
RuntimeExceptionMapper runtimeExceptionMapper = new RuntimeExceptionMapper();
environment.jersey().register(runtimeExceptionMapper);
```

**but when I throw:**

```java
throw new WebApplicationException(response);
```

**it doesn't trigger my exception mapper and I get:**

```json
{
  "type": "error",
  "message": "No HTTP resource was found that matches the request URI xxxxxxxx."
}
```

as the response.

What have I missed? What have I done wrong? I'm losing my mind here. Is there a bug?
Answers:
username_1: Isn't the whole idea with the `WebApplicationException` that you don't need an exception mapper?
Have you tried throwing a `RuntimeException`?
username_0: Yes, I have tried RuntimeException; I even tried writing a custom exception and exception-mapping it.
It's like it doesn't register providers at all.
username_1: Have you tried changing the generic type from `Throwable` to something else?
username_0: Yes, I have tried "RuntimeException", "NotFoundException" and my own "CustomException".
Is there any way to see that the exception mapper actually gets registered at startup?
username_1: It shows up in the log file on startup.
username_0: Okay, I just checked my log; apparently it doesn't get listed as a provider at startup. Now the question is: why?
username_1: Have you tried registering `RuntimeExceptionMapper.class` instead of an instance of it?
username_0: yeah and it didn't work
username_0: Okay, after much research, this is what I have found out.
The order of registration matters. If you register providers after resources, they will not be registered. By raising my provider to the top of all registered objects, it suddenly got registered in the Jersey environment.
But disabling the Jersey exception mappers did work. The next problem is that apparently there is another exception mapper underneath that will map out an HTML response, and the log will print a warning that there is already a mapper for those exceptions; that one apparently has priority over my mapper.
If I don't disable the Jersey exceptions, I get a JSON object. When disabling them, I get an HTML response that overrides my mapper.
If I now make a custom exception, everything suddenly works and I can build my own HTML.
Bear in mind I'm only a junior dev, so I don't have enough experience concerning this. But if someone could shed some light on the multiple exception mappers, please do; I'd love to learn more about it.
username_2: I was able to create a test project that shows that registration order does not matter (with and without `registerDefaultExceptionMappers=false`), so unless a test can be demonstrated (preferably within this repo), I'm closing this
I do acknowledge that the docs in this area sorely needs some love, which yours truly will get to.
Status: Issue closed
username_3: It seems Jersey internally handles any WebApplicationException prior to any custom registered exception mappers. I have not found any way to customize this behavior unfortunately.
Have a look in the class org.glassfish.jersey.server.ServerRuntime and do a search for 'WebApplicationException'; you'll find what happens.
kubernetes-client/java | 669450298 | Title: There is no ' io.kubernetes.client.util.Yaml'
Question:
username_0: Because I upgraded to k8s 1.18.0, I upgraded client-java to 9.0.0. Now I find that there is no io.kubernetes.client.util.Yaml. Has something changed?

Answers:
username_1: https://github.com/kubernetes-client/java/blob/master/util/src/main/java/io/kubernetes/client/util/Yaml.java
don't think so? otherwise we will preserve compatibility in terms of deprecating classes
username_0: But actually I can't find it in the Maven dependency?

username_1: can you retry w/ `mvn clean compile -U`?
username_0: Sorry, it was something wrong with my Maven & IDEA configuration.
Thanks for the reply. :)
Status: Issue closed
|
XX-net/XX-Net | 636799801 | Title: Error when transferring traffic quota
Question:
username_0: XX-Net version: 4.0.5
OS: Windows 10
Problem description: when using the <EMAIL> account, transferring traffic quota reports an error.

Transferring from the <EMAIL> account to <EMAIL> works fine, but the <EMAIL> account cannot transfer traffic.
Answers:
username_1: Got it. This seems to be a bug in this client version; transferring with the old client should work. I'll fix it later.
username_0: OK
apache/trafficcontrol | 515005235 | Title: Quick How-to page for MSO references dead UI
Question:
username_0: ## I'm submitting a ...
- bug report
## Traffic Control components affected ...
- Documentation
## Current behavior:
The "Quick How-To" page _Configuring Multi-Site Origins_ has screenshots of and instructions for the now-removed Traffic Ops UI.
## Expected / new behavior:
Any and all UI instructions and/or screenshots should only pertain to the only supported and documented UI: Traffic Portal. |
ossrs/srs | 223017863 | Title: APPLICATION: HTTP API get the number of frames received or sent for FPS.
Question:
username_0: To get the FPS, SRS must provide the number of frames; users can get it from the HTTP API.
Answers:
username_1: amazing
username_0: Frame rate is generally computed over a time window, e.g. the FPS over 10 s, 30 s or 300 s, to reflect how the stream is being received or sent.
For example, you can sample the data every 10 seconds (the minimum statistics interval):
```
In the first 10 s, the frame count read is 1000
In the second 10 s, the frame count read is 1500
In the third 10 s, the frame count read is 1700
```
Then for the 1st 10-second window, fps = (1500 - 1000) / 10 = 50 fps.
For the 2nd 10-second window, fps = (1700 - 1500) / 10 = 20 fps.
For the 1st 30-second window, fps = (1700 - 1000) / 30 = 23.33 fps.
If the stream itself is 25 FPS, then in the 1st 10 s the GOP cache probably sent a burst of extra frames at once, while in the 2nd 10 s there was actually stuttering and the frame rate was insufficient. Looking at the 30 s window, the frame rate is slightly low.
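A minimal sketch of the window arithmetic described in the comment above, from cumulative frame counts sampled every 10 seconds:

```python
def window_fps(frames_start, frames_end, seconds):
    # Average FPS over a window, given cumulative frame counts at
    # the window's start and end.
    return (frames_end - frames_start) / seconds

counts = [1000, 1500, 1700]  # cumulative counts, one reading per 10 s

fps_a = window_fps(counts[0], counts[1], 10)  # 50.0
fps_b = window_fps(counts[1], counts[2], 10)  # 20.0
```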
username_2: Good
username_0: 
Status: Issue closed
username_0: The update granularity is controlled by this config:
```
# the normal packet timeout in ms for encoder.
# default: 5000
normal_timeout 7000;
```
You can tune this setting, i.e. the receive timeout, to change how the statistics update. It should not be set too short, otherwise the encoder will be considered timed out; generally 3-5 seconds or more.
The FPS calculation itself cannot be real-time anyway, because TCP sends packets in pulse-like bursts rather than one by one, so lowering this interval to 1 second is pointless; 3-5 seconds or more will not affect the FPS calculation.
username_3: How can I get the duration of a live stream?
facebookresearch/PyTorch-BigGraph | 429116779 | Title: Feature request: Initial embeddings
Question:
username_0: Having a feature to allow the user to define initial embeddings would be very helpful!
Currently the only easy workaround I could think of is modifying checkpoint files, but less convoluted methods are not yet available.
For streaming behaviour data (think of reddit and twitter), it's meaningful to allow embeddings to change a little but also not too much from the previous training cycle.
It would be cool if train.py could accept custom initial embeddings and create embeddings for new entities in edge_list.
Thanks FAIR for open sourcing this awesome tool!
Answers:
username_1: I think PBG does more or less what you're asking for. Imagine you have a sequence of edgelists and want to train on one at a time, by bootstrapping the embeddings for an edgelist with the embeddings learned on the previous edgelist. Then, on the very first one, you leave `init_path` unset, which means embeddings are initialized randomly. When training on the second one, you set `init_path` to the `checkpoint_path` of the previous run (and you set `checkpoint_path` to a new empty directory, otherwise PBG would think that training is already complete and do nothing). Then you continue like this until you covered all edgelist files. This should work fine. Note however that, apart from the edgelist path, the configs of all these runs must be identical, including the entity counts.
The TSV importer does support this use-case quite well too, as you can pass to it an arbitrary number of TSV edgelist files and it will partition all their entities and consistently bucket all the edgelists.
Problems may arise if you don't yet have all edgelists available when you start training on the first one. (I'm guessing this is what you meant by "streaming behavior"). In that case you may not be aware at the beginning of entities that will only start appearing at a later time. Off the top of my head there are two adjustments that need to be made for this to work:
- The `import_from_tsv` script needs to be updated to support reading a previously-produced dictionary file and use it to initialize the assignments from entities to partitions + offsets. (Observe that while the assignment of an entity to a partition should be uniformly at random, the offsets within a partition don't need to be shuffled, so it's fine to just append new entities at the end).
- There needs to be a script that allows to "enlarge" a checkpoint, i.e., take a `.h5` file containing the embeddings of a partition of N entities and produce one for N+M entities, by copying over the old embeddings and initializing the new ones with random data.
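The chaining described above can be sketched as a loop over edgelist files (the paths and config keys here are illustrative, not the full PBG config schema):

```python
# Hypothetical sequence of edgelist directories, trained one at a time.
edge_paths = ["edges_part_0", "edges_part_1", "edges_part_2"]

runs = []
prev_checkpoint = None
for i, edges in enumerate(edge_paths):
    config = {
        "edge_paths": [edges],
        # A fresh, empty directory per run; otherwise PBG would think
        # training is already complete and do nothing.
        "checkpoint_path": f"model/run_{i}",
    }
    if prev_checkpoint is not None:
        # Bootstrap this run's embeddings from the previous run's output.
        config["init_path"] = prev_checkpoint
    prev_checkpoint = config["checkpoint_path"]
    runs.append(config)
```

Apart from the edge paths, all other settings (including entity counts) would stay identical across the runs, as noted above.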
username_2: Yes, this is a common use case. @username_1 I think we also want to make it easy to initialize from an *externally produced* set of initial embeddings for some of the entities, not just from an existing PBG checkpoint. E.g. it might be useful to initialize with a (properly normalized) SVD of the adjacency matrix or initial embeddings from some smaller-scale embeddings package.
I think we need to add additional support in tsv_to_checkpoint to also take in a tsv file in the embeddings output format (i.e. each row contains entity name followed by tab-separated embeddings) and automatically create an initial checkpoint. This would technically support both use cases (initial embeddings from PBG / initial embeddings from elsewhere). I don't think it's as important to support adding entities to a checkpoint "in place" (although I could be wrong).
username_0: Those are exactly what I'm looking for! Certainly makes the package more robust and simple to use.
I apologize for the loose use of vocabulary in the previous post. I meant to refer to datasets which regularly receive new entries (and are therefore streaming). User behaviour data are often like that.
username_0: Hi @username_1, let me check with you real quick. Help me catch the errors in this plan:
`init_path` should be a folder containing the `embeddings_{entity}_{part}.v{version}.h5` files. `version` will use the contents of `checkpoint_version.txt` if present in `init_path`; otherwise the version will be assumed to be 0, meaning `embeddings_{entity}_{part}.v0.h5` is perfectly valid.
Also, `embeddings.h5` can contain only embeddings, in which case a zeroed optimiser state will be assumed.
username_1: All you said is correct.
If the `.h5` file contains no dataset named `optimizer/state_dict` then the optimizer will be initialized with its default values.
Status: Issue closed
|
dotnet/runtime | 673817895 | Title: Make GraphQL a part of .NET
Question:
username_0: _This issue has been moved from [a ticket on Developer Community](https://developercommunity.visualstudio.com/content/idea/1132825/make-graphql-a-part-of-net.html)._
---
<p>I'd suggest making GraphQL functionality a part of .NET (System.Net.Http.GraphQL).</p> <p>There are community projects for it, as you can see here https://graphql.org/code/#c-net, but the demand for GraphQL is increasing. Having built-in GraphQL classes and methods would provide a convenient way to build an API with a GraphQL endpoint. For instance, there could be a template for a WebApi with GraphQL.</p>
---
### Original Comments
#### Feedback Bot on 8/3/2020, 03:17 AM:
Thank you for taking the time to provide your suggestion. We will do some preliminary checks to make sure we can proceed further. We'll provide an update once the issue has been triaged by the product team.
Answers:
username_1: @username_3 is doing some investigation about the maturity of GraphQL libraries on .NET. As a general rule, we want to strengthen the general .NET Library ecosystem so that there are great, trustworthy libraries to fulfil all needs. Only in certain cases is the best answer to put an implementation in the base .NET libraries.
username_2: Also, we would need an actual API proposal as well :)
username_1: I wouldn't suggest to put in the work to make an API proposal without consensus here that we would be open to it. As I suggested above we would rather support our ecosystem and that's what @username_3 is looking into.
username_3: Looping in @jamesmontemagno who is doing focused work here as well. As @username_1 mentioned, right now we aren't looking to build `System.GraphQL`. We are in the process of evaluating existing libraries and speaking with customers to understand what their end-to-end scenarios are and where the pain points in .NET exist. That will drive our next steps to determine how we can help deliver the best experience for .NET developers.
username_1: @username_4 where do you plan to track issues relating to GraphQL work (if any)?
username_4: @username_1 Not decided yet.
username_5: At one point in time, `System.Text.Json` did not exist, which was fine because `Newtonsoft.Json` worked very well (and still does). But, Microsoft still created and supported a NuGet package for Json support. I don't see why GraphQL can't also have a Microsoft package for support.
When you create a new ASP.NET Core project, it defaults to using REST. Having a package available like `Microsoft.AspNet.GraphQL` (just an example) would be nice.
username_4: @username_5 Have you tried [Hot Chocolate from ChilliCream](https://chillicream.com/docs/hotchocolate)? Don't be put off by the name; it's a really well done, professional GraphQL platform.
We are currently in the process of deciding how to best support GraphQL in .NET. Feedback on the advantages of a Microsoft solution verses leveraging third-party packages would be much appreciated.
/cc @username_3
username_5: I have tried both `HotChocolate` and `graphql-dotnet` -- I even have a contribution towards the `graphql-dotnet` project. Both are really good. I'm not sure what the decision to create `System.Text.Json` was when even ASP.NET was using `Newtonsoft.Json`, but it happened. I imagine the same will probably happen with the GraphQL protocol.
username_3: @username_5 `System.Text.Json` addresses a much lower level in the stack than GraphQL. For example, ASP.NET Core relies on JSON serialization as part of its engine and functionality, so the choice of a library impacts the entire framework. GraphQL is an extension to the core capabilities and therefore is more flexible and doesn't require us to commit to a specific library for it to work.
Having said that, there are probably features that make sense for developers independent of the GraphQL library they use. For example, `.graphql` and `.gql` schemas are well-defined and work the same for any library, so one consideration is providing support for these (validation, IntelliSense).
Can you share the benefits you would gain from a Microsoft GraphQL implementation over using a community solution like HotChocolate? A big part of this work item is understanding what developers like you need and are missing so we can help fill that gap. Your feedback will be very valuable to help us craft the best solution to address your requirements.
username_3: Adding this cross-reference for further discussion:
[Issue #5852: .NET Developers have a first class experience working with GraphQL](https://github.com/dotnet/core/issues/5852)
Status: Issue closed
username_6: Closing this in favor of https://github.com/dotnet/core/issues/5852. |
mlcommons/inference | 1127641647 | Title: Equal Issue mode only implemented for Offline
Question:
username_0: We have an equal issue mode implementation for Offline scenario, through PR1032 (https://github.com/mlcommons/inference/pull/1032), and this works flawlessly for 3D-UNet Offline runs.
Now for 3D-UNet SingleStream scenario, we are missing the equal issue mode support from LoadGen, and it is problematic as below:
* The 3D-UNet KiTS19 input set has 42 samples, with 15 different shapes; the total voxel count of each sample ranges from 7.8 million to 64 million.
* The first 1050 samples using the RNG seed for the v2.0 submission produce a sequence of samples whose total voxel count is 33,341,833,216.
* With equal issue mode, the first 1050 samples give a total voxel count of 34,000,076,800.
* If one produces logs satisfying min_query_count=1024 for SingleStream, the work is about 2% less than it should be, and the performance metric (90th-percentile latency) would be optimistic as a result.
* Next round, with a different RNG seed, the measurement may flip to the pessimistic side.
* Overall, unless equal issue mode is introduced, there will be conflicting 'official results' round by round, even with the same system (HW & SW) running exactly the same 3D-UNet SingleStream.
* Without equal issue mode, the early stopping backbone (the statistical model) is broken, limiting the usefulness of the early stopping model.

It is important for LoadGen to support equal issue mode for scenarios other than Offline.
For v2.0, we probably want to add the support to 3D-UNet SingleStream specifically, and after the submission we can make the implementation more generic and unified.

Answers:
username_1: @ashwin @pgmpablo157321 @psyhtest for visibility
Status: Issue closed
|
gchq/Gaffer | 260904065 | Title: Filter, Transform, and Aggregate should be validated
Question:
username_0: When the above operations are used via the REST API, it is difficult to discern any potential issues with the query, so adding validation to each of them would aid utilisation of them.
Answers:
username_1: Merged into develop.
Status: Issue closed
|
tarantool/doc | 489691681 | Title: None
Question:
username_0: See https://github.com/tarantool/tarantool/issues/3904
Answers:
username_1: Added follow status as per https://github.com/tarantool/tarantool/issues/3904.
But as I was testing the behaviour in different versions, I found that in versions 2.1 and 2.2 the downstream is shown as follows:
```
downstream:
status: follow
idle: 0.79490400000941
vclock: {3: 1, 1: 6}
```
which means there is an additional `idle` field in downstream. Note this is not the case for the 1.10 versions I have tried.
Is there a correct description for this parameter available anywhere? @username_0
username_0: See https://github.com/tarantool/tarantool/commit/a4a7744ce54b02bda2fc6c6bccc82eb7a36a6dc4
It is the current time minus the time when the last row was sent by the relay.
username_1: Is it the "sent" time from the other server (so additional time-sync issues are possible), or the local time at the moment it was received?
username_0: As I see it, it is a purely local value: when the relay sends a row, it updates the `last_row_time` value with the current time. When box.info() is called, `idle` is set to current_time - last_row_time.
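A small sketch of the bookkeeping described above; `idle` is pure local-clock arithmetic on the sending side:

```python
import time

class Relay:
    # Sketch of the relay-side bookkeeping: last_row_time is stamped
    # on every send, and idle is computed lazily when info is queried.
    def __init__(self):
        self.last_row_time = time.monotonic()

    def send_row(self, row):
        # ... send the row to the replica, then stamp the local clock ...
        self.last_row_time = time.monotonic()

    def idle(self):
        # What box.info() reports as downstream.idle
        return time.monotonic() - self.last_row_time

relay = Relay()
relay.send_row("insert ...")
```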
Status: Issue closed
|
chrisjshull/homebridge-nest | 647125633 | Title: Home Occupied Not Working Anymore
Question:
username_0: Hi Chris
Still enjoying your excellent work.
I have been having an issue for a couple of days: my EU Nest Thermostat E is not doing **Home Occupied** anymore.
I think Google may have changed something in their API. I also tried generating a new pair, but no luck there.
Changing to ECO and Temp still work.
Status: Issue closed
Answers:
username_0: I redeployed HOOBS and got the latest version, 3.2.6, and then I could update your plugin to 4.4.6, which resolved the issue.
Everything is fine again. |
h7ris/VAWTCleanCase | 445233509 | Title: improve the case
Question:
username_0: We want to make this run faster (less computationally expensive), and otherwise improve the mesh.
Play around, and make it faster -- the results should not change.
Play around, and make it better -- the "continuity" residuals get smaller, faster, we are getting better. (look at noIterations) --
```
smoothSolver: Solving for omega, Initial residual = 3.30116e-05, Final residual = 6.49584e-11, No Iterations 12
```
Try running locally (with 0.5s instead of 10s), so you can compare. Use the existing case as a ground truth, and compare other schemes to it.
(Do not update the timestep -- Haris will be working on that.)
1. Look at `fvSchemes`:
Look at the divergence schemes, gradient schemes, etc.
The most important things to look at are the time-discretisation schemes (ddt) (never use Euler; use backward or CrankNicolson)... and the "divergence" and "grad" schemes.
2. Look at `fvSolution`:
There may be other solvers that work better and run faster.
Also, look at tolerances and relaxationFactors, and symmetric vs. non-symmetric solvers.
3. Look at 1.txt, look at hwall_n and thickness and ratio (at the bottom of the file)
And look at transfinite lines... (mesh controls)
Answers:
username_0: I need to come up with a game plan for
divergence schemes and hwall_n, thickness, transfinite lines.
Decide what I want to test and write it up. How many simulations do we need? How long will it take? We may need to test this on a supercomputer, and Haris will look into computer time.
username_1: https://www.cfd-online.com/Tools/yplus.php
username_0: 1. divergence scheme test:
* div(phi,U): linearUpwind, linear, SFCD
* div(phi,k) and div(phi,omega) (they stay together): limitedLinear, linear, SFCD
3*3 is 9 cases
2. Mesh test
* 2 and 3 times (or 1.5 and 2.5 times) of all numbers in
Transfinite Line {1:8} = 150 Using Progression 1;
Transfinite Line {9, 27} = 800 Using Progression 1;
Transfinite Line {15,17,19,21} = 300 Using Progression 1;
Transfinite Line {16, 18, 20, 22} = 6 Using Progression 1;
* 3 points around thickness Field[1].thickness = 1.5e-2;...
* 3 points around Field[1].hwall_n = 1e-3;
(Use the y-plus calculator!)
username_0: We decided to skip the Mesh test, and just run the divergence scheme test. |
worlduniting/revdavethompson.com | 343764749 | Title: RDT site should be wrapped in rails framework and hosted on heroku with SSL
Question:
username_0: In order to have a full feature set, RDT site should be wrapped in a rails framework and hosted on Heroku with SSL enabled. This will allow for #2 donation issue to be resolved, as Stripe Elements require SSL enabled in order to use. |
laurencedawson/reddit-sync-development | 103829847 | Title: Captcha not visible (black)
Question:
username_0: The captcha is black when I try to post on a subreddit.
Android m developer preview 3
Nexus 5
Screenshot
Answers:
username_0: http://imgur.com/k58VMVl
username_1: It isn't just a problem for M developer preview, the problem seems to be happing with anyone using the material designed app
username_2: This is also happening to me, Galaxy S6 Edge, Android 5.1.1 - very frustrating! Tried disabling AMOLED and night views made no difference.
username_3: This has since been fixed. Cheers.
Status: Issue closed
|
pbihler/atom-macros | 688685198 | Title: Rebuild Failed
Question:
username_0: from /home/<user\>/.atom/.node-gyp/.cache/node-gyp/5.0.13/include/node/v8.h:25,
from /home/<user\>/.atom/.node-gyp/.cache/node-gyp/5.0.13/include/node/node.h:63,
from ../../nan/nan.h:52,
from ../src/common.h:6,
from ../src/main.cc:1:
/home/<user\>/.atom/.node-gyp/.cache/node-gyp/5.0.13/include/node/v8config.h:307:49: warning: ‘BufferReference’ is deprecated: Use MemorySpan<const uint8_t\> [-Wdeprecated-declarations]
declarator __attribute__((deprecated(message)))
^
/home/<user\>/.atom/.node-gyp/.cache/node-gyp/5.0.13/include/node/v8.h:4444:3: note: in expansion of macro ‘V8_DEPRECATED’
V8_DEPRECATED("Use CompiledWasmModule::GetWireBytesRef()",
^~~~~~~~~~~~~
In file included from /home/<user\>/.atom/.node-gyp/.cache/node-gyp/5.0.13/include/node/node.h:63:0,
from ../../nan/nan.h:52,
from ../src/common.h:6,
from ../src/main.cc:1:
/home/<user\>/.atom/.node-gyp/.cache/node-gyp/5.0.13/include/node/v8.h:4386:58: note: declared here
V8_DEPRECATED("Use MemorySpan<const uint8_t\>", struct) BufferReference {
^~~~~~~~~~~~~~~
In file included from /home/<user\>/.atom/.node-gyp/.cache/node-gyp/5.0.13/include/node/v8-internal.h:13:0,
from /home/<user\>/.atom/.node-gyp/.cache/node-gyp/5.0.13/include/node/v8.h:25,
from /home/<user\>/.atom/.node-gyp/.cache/node-gyp/5.0.13/include/node/node.h:63,
from ../../nan/nan.h:52,
from ../src/common.h:6,
from ../src/main.cc:1:
/home/<user\>/.atom/.node-gyp/.cache/node-gyp/5.0.13/include/node/v8config.h:307:49: warning: ‘v8::WasmModuleObject::SerializedModule’ is deprecated: Use OwnedBuffer [-Wdeprecated-declarations]
declarator __attribute__((deprecated(message)))
^
/home/<user\>/.atom/.node-gyp/.cache/node-gyp/5.0.13/include/node/v8.h:4457:3: note: in expansion of macro ‘V8_DEPRECATED’
V8_DEPRECATED("Use CompiledWasmModule::Serialize()",
^~~~~~~~~~~~~
In file included from /home/<user\>/.atom/.node-gyp/.cache/node-gyp/5.0.13/include/node/node.h:63:0,
from ../../nan/nan.h:52,
from ../src/common.h:6,
from ../src/main.cc:1:
/home/<user\>/.atom/.node-gyp/.cache/node-gyp/5.0.13/include/node/v8.h:4380:55: note: declared here
std::pair<std::unique_ptr<const uint8_t[]\>, size_t\> SerializedModule;
^~~~~~~~~~~~~~~~
/home/<user\>/.atom/.node-gyp/.cache/node-gyp/5.0.13/include/node/v8.h: In member function ‘v8::Local<v8::Boolean\> v8::Value::ToBoolean() const’:
/home/<user\>/.atom/.node-gyp/.cache/node-gyp/5.0.13/include/node/v8.h:10172:62: warning: ‘v8::MaybeLocal<v8::Boolean\> v8::Value::ToBoolean(v8::Local<v8::Context\>) const’ is deprecated: ToBoolean can never throw. Use Local version. [-Wdeprecated-declarations]
return ToBoolean(Isolate::GetCurrent()-\>GetCurrentContext())
^
In file included from /home/<user\>/.atom/.node-gyp/.cache/node-gyp/5.0.13/include/node/v8-internal.h:13:0,
from /home/<user\>/.atom/.node-gyp/.cache/node-gyp/5.0.13/include/node/v8.h:25,
from /home/<user\>/.atom/.node-gyp/.cache/node-gyp/5.0.13/include/node/node.h:63,
from ../../nan/nan.h:52,
from ../src/common.h:6,
from ../src/main.cc:1:
/home/<user\>/.atom/.node-gyp/.cache/node-gyp/5.0.13/include/node/v8.h:2388:63: note: declared here
V8_WARN_UNUSED_RESULT MaybeLocal<Boolean\> ToBoolean(
^
/home/<user\>/.atom/.node-gyp/.cache/node-gyp/5.0.13/include/node/v8config.h:322:3: note: in definition of macro ‘V8_DEPRECATE_SOON’
declarator __attribute__((deprecated(message)))
^~~~~~~~~~
In file included from ../../nan/nan_converters.h:67:0,
from ../../nan/nan.h:221,
from ../src/common.h:6,
from ../src/main.cc:1:
../../nan/nan_converters_43_inl.h: In static member function ‘static Nan::imp::ToFactoryBase<v8::Boolean\>::return_t Nan::imp::ToFactory<v8::Boolean\>::convert(v8::Local<v8::Value\>)’:
../../nan/nan_converters_43_inl.h:18:51: warning: ‘v8::MaybeLocal<v8::Boolean\> v8::Value::ToBoolean(v8::Local<v8::Context\>) const’ is deprecated: ToBoolean can never throw. Use Local version. [-Wdeprecated-declarations]
val-\>To ## TYPE(isolate-\>GetCurrentContext()) \
[Truncated]
gyp ERR! build error
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack at ChildProcess.onExit (/usr/share/atom/resources/app/apm/node_modules/npm/node_modules/node-gyp/lib/build.js:194:23)
gyp ERR! stack at ChildProcess.emit (events.js:198:13)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:248:12)
gyp ERR! System Linux 5.4.0-42-generic
gyp ERR! command "/usr/share/atom/resources/app/apm/bin/node" "/usr/share/atom/resources/app/apm/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /home/<user\>/.atom/packages/atom-macros/node_modules/pathwatcher
gyp ERR! node -v v10.20.1
gyp ERR! node-gyp -v v5.1.0
gyp ERR! not ok
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] install: `node-gyp rebuild`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/<user\>/.atom/.apm/_logs/2020-08-30T09_09_28_068Z-debug.log
Answers:
username_1: Same issue on 1.51.0 |
cortexlabs/cortex | 647809958 | Title: How to reuse the port and identity the API by URI
Question:
username_0: If I want to use the same port for different API, for example:`http:localhost:8888/uri1` and `http:localhost:8888/uri2`. How should I set in `cortex.yaml` and `predictor.py`, is there any example?
Answers:
username_1: I think it's not possible at the moment with local deployments. Good news is that it's been raised before, see #961
Status: Issue closed
username_2: @username_1 yes, that is correct, thank you for linking to #961; I'll go ahead and close this issue given that we have #961 to track this |
angular/material | 178934397 | Title: md-chips with md-autocomplete[md-require-match="true"] and [md-add-on-blur]
Question:
username_0: **Actual Behavior**:
When using md-chips with md-autocomplete and md-require-match set to true, it should not be possible to add new labels. But when md-add-on-blur is also true, chips get added on blur, even though require-match should prohibit adding them.
**Expected Behavior**:
When require-match is used the chip should not be added on blur.
**Angular Versions**:
- `Angular Version: 1.5.8`
- `Angular Material Version: 1.1.1`
Answers:
username_1: Duplicate of #9582.
Status: Issue closed
username_0: Workaround which fixes this issue but not #9582
```ts
angular.module( "mdChipsOverride", [] ).directive( "mdChips", mdChips );
function mdChips()
{
return {
restrict: "E",
require: "mdChips", // Extends the original mdChips directive
link: function ( _scope:IScope, _element:IAugmentedJQuery, _attributes:IAttributes, mdChipsCtrl:any )
{
mdChipsCtrl.onInputBlur = function ( this:any, _event:JQueryEventObject )
{
this.inputHasFocus = false;
let chipBuffer = this.getChipBuffer().trim();
// Update the custom chip validators.
this.validateModel();
let isModelValid = this.ngModelCtrl.$valid;
if( this.userInputNgModelCtrl )
{
isModelValid &= this.userInputNgModelCtrl.$valid;
}
// Only append the chip and reset the chip buffer if the chips and input ngModel is valid.
if( this.addOnBlur && chipBuffer && isModelValid && !this.requireMatch )
{
this.appendChip( chipBuffer );
this.resetChipBuffer();
}
};
}
};
}
``` |
autofac/Autofac | 149043284 | Title: Add netstandard support
Question:
username_0: RC2 is getting close. Autofac should support netstandard.
Answers:
username_1: The RC2 package on our MyGet feed has already been updated to netstandard.
https://www.myget.org/feed/autofac/package/nuget/Autofac
We push across to NuGet.org after Microsoft pushes their release packages over.
Need to check out what has changed since we did the netstandard update. There's always something. =)
username_2: Given we're tracking the .NET Core tasks in #594 and this is on the list of things to do, I'm going to close this as a duplicate. If you're interested in tracking progress, that's a good issue to check out.
Status: Issue closed
|
cri-o/cri-o | 497521664 | Title: cri-o passes (possibly) reused namespaces on network teardown
Question:
username_0: **Description**
CRI-O tears down networking after a reboot, risking pid reuse.
**Steps to reproduce the issue:**
Sadly, reproducing this is probabilistic. It should still be easy to fix, though
1. Reboot the node. Create some containers, so they have a low pid number
2. Reboot the node again
3. Kubelet starts tearing down sandboxes that were killed because of the reboot
4. cri-o issues a CNI delete with /proc/<pid>/ns/net, even though <pid> is meaningless since the reboot.
Even if you don't get a pid collision, I was able to see pretty clearly getting a CNI DEL for a stale pid. For example, from crio logs at level Info:
```
Sep 24 07:40:06 ip-10-0-141-144 crio[1106]: time="2019-09-24 07:40:06.625440600Z" level=info msg="Got pod network &{Name:ale
rtmanager-main-1 Namespace:openshift-monitoring ID:9ca4f5c165c1f5057fc2014662836565ece49df963d99fa2ac3887df74c08ec7 NetNS:/p
roc/8036/ns/net Networks:[] RuntimeConfig:map[]}"
-- reboot --
Sep 24 07:40:19 ip-10-0-141-144 crio[1106]: time="2019-09-24 07:40:19.384497028Z" level=info msg="About to del CNI network lo (type=loopback)"
Sep 24 07:40:19 ip-10-0-141-144 crio[1106]: time="2019-09-24 07:40:19.386740838Z" level=error msg="Error deleting network: failed to Statfs "/proc/8036/ns/net": no such file or directory"
```
This clearly shows that it is *looking* for `/proc/8036...`, and it happens to not be a process. However, reboot enough times and you will eventually lose. We typically see this in about 1-in-10 reboots.
**Describe the results you received:**
We got a CNI Delete with the netns of `/proc/<pid>/ns/net`, which is correct, except that the node was rebooted in the meantime, and `/proc/<pid>/ns/net` pointed to the root netns.
**Describe the results you expected:**
The CNI delete should be with an empty netns parameter, which signifies to the plugins that the namespace is gone and only bookkeeping operations (e.g IPAM cleanup) are to be done. CRI-O should only pass the netns parameter if it points to a known-good crio-created process that is still running.
**Output of `crio --version`:**
```
crio version 1.14.10-0.19.dev.rhaos4.2.gita86dae7.el8
```
Answers:
username_0: Another solution would be to bind-mount the network namespace to `/var/run/netns/<containerid>`, and only pass that. Then, if there's a reboot, it will be invalidated automatically. This is what rkt does.
username_1: @username_0 I am currently working on fixing this up in CRI-O. I am returning an empty string as the netns path, but I keep getting this error when trying to stop the network and clean up
```
level=error msg="Error deleting network: failed to Statfs "": no such file or directory"
34.709227408-04:00" level=error msg="Error while removing pod from CNI lo network: failed to Statfs "": no such file or directory"
```
This probably needs to be fixed in ocicni before we can patch this up in CRI-O
username_0: Very good point. Filed https://github.com/cri-o/ocicni/issues/64
username_2: Using ```ocicnitool``` supplying "" for netns seems to behave. What call is returning the ```failed to Statfs``` error?
username_3: I think the stat is coming from `internal/lib/sandbox/sandbox.go:372` `NetNsPath()` which returns the path given the pause pid. I am guessing the pause pid is no longer valid, and so the stat fails? If the pause container is down, the net namespace is probably not valid anymore. what is the proper course of action there?
username_3: I actually think it comes from
`vendor/github.com/opencontainers/runc/libcontainer/container_linux.go`
because we try to grab the NetNsPath in `server/container_create_linux.go:618` and immediately pass it to the runtime.
username_3: We have mitigated this with the option `manage_ns_lifecycle` in cri-o, which does not allow for races with kernel namespace teardown. As such, I am closing this. Please reopen if you disagree.
Status: Issue closed
username_0: @username_3 cool, thanks for following up. That's a decently large enough change to grok, so one question for you: we don't pass namespaces with pids to CNI anymore, right?
username_3: nope! they're bind mounted in a similar way that rkt did it |
cech12/CeramicBucket | 755127685 | Title: Add Whitelist for breaking fluids
Question:
username_0: Hot fluids like lava can break ceramic buckets. Unfortunately the temperature is not everytime the best parameter because every mod developer can set it by its own. An additional whitelist could enable a better configuration.
Two options: a tag or a config option
Status: Issue closed
Answers:
username_0: will be released in 2.4.0 |
feenkcom/gtoolkit | 484854475 | Title: BlTaskItStatus display Failed queue entries
Question:
username_0: Extend BlTaskItStatus to display information about failed tasks.
Answers:
username_1: We have
```
gtStatusFor: aView
<gtView>
^aView textEditor
title: 'Status' translated;
priority: 1;
look: BrGlamorousCodeEditorLook;
text: [ self statusString asRopedText ]
```
We can open a new one with more specifics if needed.
Status: Issue closed
|
alexedwards/scs | 255558652 | Title: Can't load session multiple times in same request and get saved value
Question:
username_0: ```go
package main
import (
"fmt"
"net/http"
"net/http/httptest"
"github.com/username_1/scs"
)
var manager = scs.NewCookieManager("u46IpCV9y5Vlur8YvODJEhgOY8m9JVE4")
type Site struct {
}
func (site Site) ServeHTTP(w http.ResponseWriter, req *http.Request) {
session := manager.Load(req)
session.PutString(w, "key", "value")
fmt.Println(session.GetString("key"))
session2 := manager.Load(req)
// should be able to get saved value
fmt.Println(session2.GetString("key"))
}
func main() {
Server := httptest.NewServer(Site{})
http.Get(Server.URL)
}
```
https://github.com/username_1/scs/blob/master/session.go#L41
Maybe should save session into request's context, and load it from context next time?
Answers:
username_1: Sorry, I should have spotted that.
I've added a new `manager.Multi` middleware, which loads the session into the request context. If you wrap your router (or routes) with that, it now works as expected.
```go
package main
import (
"fmt"
"net/http"
"net/http/httptest"
"github.com/username_1/scs"
)
var manager = scs.NewCookieManager("u46IpCV9y5Vlur8YvODJEhgOY8m9JVE4")
type Site struct {
}
func (site Site) ServeHTTP(w http.ResponseWriter, req *http.Request) {
session := manager.Load(req)
session.PutString(w, "key", "value")
fmt.Println(session.GetString("key"))
session2 := manager.Load(req)
// should be able to get saved value
fmt.Println(session2.GetString("key"))
}
func main() {
Server := httptest.NewServer(manager.Multi(Site{}))
http.Get(Server.URL)
}
```
Status: Issue closed
|
npgsql/EntityFramework6.Npgsql | 330660008 | Title: The store type 'jsonb' could not be found in the Npgsql provider manifest
Question:
username_0: I m trying to use Npgsql to connect to postgresql database with Entityframework 6.
I have a mapping problem with 'jsonb' data type :
this is sample code :
```
class Program
{
static void Main(string[] args)
{
using (var context = new db_Entities())
{
var customers = context.Customer.ToList();
foreach (var cust in customers)
{
Console.WriteLine(cust.Id);
}
}
Console.ReadLine();
}
public partial class db_Entities : DbContext
{
public db_Entities() : base(nameOrConnectionString: "Default") { }
public DbSet<Customer> Customer { get; set; }
}
public class Customer
{
[Key]
[Column("id_customer")]
public int id { get; set; }
public string customer { get; set; }
public string nit { get; set; }
public string address { get; set; }
[Column(TypeName = "jsonb")]
public string SerializdJson { get; set; }
}
}
```
The app.config
```
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<configSections>
<section name="entityFramework" type="System.Data.Entity.Internal.ConfigFile.EntityFrameworkSection, EntityFramework, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
<!-- For more information on Entity Framework configuration, visit http://go.microsoft.com/fwlink/?LinkID=237468 -->
</configSections>
<startup>
<supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.6.1" />
[Truncated]
```
<?xml version="1.0" encoding="utf-8"?>
<packages>
<package id="EntityFramework" version="6.2.0" targetFramework="net461" />
<package id="EntityFramework6.Npgsql" version="3.1.1" targetFramework="net461" />
<package id="Npgsql" version="4.0.0" targetFramework="net461" />
<package id="System.Runtime.CompilerServices.Unsafe" version="4.5.0" targetFramework="net461" />
<package id="System.Threading.Tasks.Extensions" version="4.5.0" targetFramework="net461" />
<package id="System.ValueTuple" version="4.5.0" targetFramework="net461" />
</packages>
```
When I execute this program I get this error:
`The store type 'jsonb' could not be found in the Npgsql provider manifest`
If I remove the attribute `[Column(TypeName = "jsonb")]` the program works fine.
Is there something missing?
Answers:
username_1: I have the same issue, do you have a temporary work around?
I am following your sample on http://www.npgsql.org/ef6/
username_0: @username_1 No, I did not find any workaround.
I was testing Npgsql to migrate from Devart to Npgsql instead,
but I faced a lot of problems,
so I will continue to use Devart for now, as Npgsql is not yet stable enough to use with EF6.
lorenmt/mtan | 533217963 | Title: file structure
Question:
username_0: I wish you would show me your file (data) structure. I can't run this program since I don't know how to prepare the data.
Answers:
username_0: Also, I wonder: is there an npy file in the original data? I see the program loads an npy file in create_dataset.py.
username_1: Please carefully read the readme document, which already explains how to prepare the data. It's quite simple: just download the files I attached in the Dropbox links.
Status: Issue closed
username_0: Thanks for your reply, but I can't open that url in China.
username_0: Thank you, all is ok |
magda-io/magda | 519684958 | Title: Misplaced tooltip doesn't do anything
Question:
username_0: **Describe the bug**
There is a tooltip in the `/dataset/add` flow that doesn't do anything.
Also, the tooltip is slightly bottom-aligned.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '/dataset/add'
1. Scroll down to where it says 'Spatial extent'
1. Notice the tooltip that doesn't do anything.
**Expected behavior**
The tooltip, on mouse-over, should show a helpful message, or just not be there at all.
**Screenshots**

Answers:
username_1: @username_2 can you please confirm the expected behaviour here is:
"Select the appropriate region that best matches the spatial extent of your dataset.
💡 This helps data users understand the scope of your dataset"
username_2: @username_1 do you mean removing the tooltip and replacing it with a lightbulb icon? If so, yes
username_0: Yep, removing that question mark and replacing it with a lightbulb so that the whole text reads like:
Select the appropriate region that best matches the spatial extent of your dataset.
:bulb: This helps data users understand the scope of your dataset
username_2: Sounds good
Status: Issue closed
|
dm4t2/vue-currency-input | 708224358 | Title: Glitch when using DistractionFree...
Question:
username_0: This control is awesome and just what I was looking for. However, I found a glitch when using the directive with the option distractionFree: {hideCurrencySymbol: false}. Using your codesandbox to illustrate the issue, change to the following:
```html
<currency-input ref="input" v-currency="{currency: 'USD', locale: 'en', distractionFree: {
    hideNegligibleDecimalDigits: false,
    hideCurrencySymbol: false,
    hideGroupingSymbol: false
  }
}"
v-model="value2"></currency-input>
<p><button @click="value2 = 55">Button</button></p>
```
Using the modified codesandbox example, click the button, tab into the directive control, do not edit, then tab off. You will notice the value resets to the previously edited value on lost focus. It would appear that there is an internal value that is being retained and not reset when the v-model value is reset outside the control.
Note that this only happens when "hideCurrencySymbol: false".
Other things to note:
It doesn't matter if the 55 is a number or a string;
It can be any combination of the distractionFree options, so long as hideCurrencySymbol is set to false, it occurs.
Answers:
username_1: Hi, you have to use [setValue](https://username_1.github.io/vue-currency-input/api/#setvalue) if you want to set the value programmatically:
https://codesandbox.io/s/vue-currency-input-169-do3c8?file=/src/App.vue
Status: Issue closed
|
ioBroker/ioBroker.sonos | 1176763103 | Title: Setting the coordinator volume no longer works correctly
Question:
username_0: Hallo,
wollte gerade einen Taster zur Volumensteuerung installieren dabei ist mir folgender Bug aufgefallen:
Wenn der Coordinator in der Gruppe ist, wird einfach ein ganz anderer Wert umgesetzt. Selbst wenn man es bei den Objekten direkt eingibt wird aus einem "8" z. B. ein "11". Bei den anderen Player der Gruppe funktioniert es nach wie vor einwandfrei.
Wenn man den Coordinator alleine (ohne andere Player) betreibt funktioniert das Volumen setzen auch wie erwartet. Kann nur leider nicht sagen seit wann das Problem besteht. Im Log taucht nichts verdächtiges auf, er ändert den Wert einfach.
Sonos Adapter Version: 2.1.7
Admin 5.3.1
JS-Controller 4.0.21
Node.js 14.19.0
npm 6.14.16
Wäre nett, wenn ihr euch das mal anschauen könntet. |
vpereira/ctags-web | 308553698 | Title: add search operator file: xxxx
Question:
username_0: If you are searching, it would be good to be able to give an operator as an option, like:
```my_super_func file: super.c``` and then the search would limit its scope to just files with that name |
gravitee-io/issues | 502145008 | Title: [management] Override context path while importing an OAI spec
Question:
username_0: It would be a good option to allow overriding the context path while importing an API. (Maybe only when the current context path already exists?)
Answers:
username_0: Closed by https://github.com/gravitee-io/issues/issues/4295
Status: Issue closed
|
github-vet/rangeloop-pointer-findings | 771500955 | Title: renproject/shamir: vss.go; 4 LoC
Question:
username_0: [Click here to see the code in its original context.](https://github.com/renproject/shamir/blob/4bbf85e33bd4851c359528484635b76d1edab4a5/vss.go#L370-L373)
<details>
<summary>Click here to show the 4 line(s) of Go which triggered the analyzer.</summary>
```go
for i, ind := range indices {
(*vshares)[i].Share = shares[i]
polyEval(&(*vshares)[i].Decommitment, &ind, coeffs)
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
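As background for classifying this pattern, here is a small, hypothetical Go sketch (unrelated to the shamir code itself) of the range-variable pitfall the analyzer looks for. Before Go 1.22 the loop variable is a single reused location, so a pointer that outlives its iteration ends up aliasing the final element; a per-iteration copy avoids that. Taking `&ind` only to pass it to a call that does not retain the pointer, as `polyEval` appears to do here, is typically safe.

```go
package main

import "fmt"

// collect returns one pointer per element; the per-iteration copy keeps each
// pointer pointing at a distinct value even on Go versions before 1.22,
// where the range variable itself is a single reused location.
func collect(indices []int) []*int {
	var ptrs []*int
	for _, ind := range indices {
		ind := ind // remove this copy and Go < 1.22 aliases every pointer to the last value
		ptrs = append(ptrs, &ind)
	}
	return ptrs
}

func main() {
	for _, p := range collect([]int{10, 20, 30}) {
		fmt.Println(*p) // prints 10, 20, 30
	}
}
```
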
commit ID: 4bbf85e33bd4851c359528484635b76d1edab4a5 |
htongtongx/htongtongx.github.io | 772735536 | Title: Go treasure chest | Huang Tongwei's notebook
Question:
username_0: https://huangtongwei.cn/2019/12/24/go-note/
Using go mod

```shell
# xxx is the root module path of the project
go mod init "xxx"
# add a replace mapping
go mod edit -replace=golang.org/x/[email protected]=github.com/golang/[email protected]
# running the following command automatically analyzes the project's dependencies and syncs them into go.mod, creating the go.sum file at the same time
go mod tidy
# using this command directly, you can take the GOPA
```
|
rook/rook | 715158917 | Title: Support all the rook secrets to be stored in Vault
Question:
username_0: <!-- **Are you in the right place?**
1. For issues or feature requests, please create an issue in this repository.
2. For general technical and non-technical questions, we are happy to help you on our [Rook.io Slack](https://slack.rook.io/).
3. Did you already search the existing open issues for anything similar? -->
**Is this a bug report or feature request?**
* Feature Request
**What should the feature do:**
In general, 30+ secrets are created by a rook-ceph deployment, and some contain very sensitive information (password, admin-id, admin-key, certificate). The k8s secrets can be easily accessed/attacked, as a weak encryption algorithm is used. We need to raise the security level by using Vault or similar methods.
**What is the use case behind this feature:**
This is an extension of #6105
**Environment**:
<!-- Specific environment information that helps with the feature request -->
Answers:
username_0: Vault integration is planned for v1.5, scratch my previous comment.
_Originally posted by @travisn in https://github.com/rook/rook/issues/6105#issuecomment-702857257_
username_0: @travisn Does any design or anything similar exist?
username_1: I'll soon open a PR which will serve as a foundation for supporting your use case.
username_0: @username_1 Do you have an update on this feature?
username_1: The PR is here: https://github.com/rook/rook/pull/6474.
username_0: @username_1 #6476 (related to OSD encryption) has a different scope. This ticket is for storing the rook secrets in Vault, i.e., user/passwd.
username_1: Yes, but it will serve as a foundation for further improvements.
So once it merges you can start working on a broader implementation if you want.
username_0: Please keep it open. |
youzan/vant | 930651044 | Title: [Bug Report] With async-change enabled on stepper, the value after blur should also be left to the developer to control
Question:
username_0: ### Device / Browser
Chrome
### Vant version
2.12.18
### Vue version
2.5.22
### Reproduction link
<a href="https://jsbin.com/sevitic/edit?js,console,output" target="_blank">https://jsbin.com/sevitic/edit?js,console,output</a>
### Describe the problem
First, the scenario: suppose max is 10, but for various reasons the backend returns an error for any value greater than 3. In that case I expect:
1. The user types a value greater than 3 into the input box, the value is sent to the backend, and a reject is returned.
2. The stepper's value then falls back to the last change value; for example, going from 1 to 14 should fall back to 1 on blur.
However, in reality (see the code in the link for details):
1. When the user starts from 1 and types a value between 4 and 10, say 8, the value goes straight to 8: the component's internal currentValue is 8 while the passed-in value is still 1.
2. When the user starts from 1 and types a value greater than 10, say 12, the value goes straight to 10: the internal currentValue is 10 while the passed-in value is still 1. The symptom is that the stepper displays 10, and the view cannot be changed by re-assigning the prop; it can only be changed by assigning the internal currentValue.
I'd like to ask: once async-change is enabled, should the component's value be entirely controlled by the developer? Should the component's default handling of the value, e.g. on blur (such as automatically clamping input above max to max), also be left to the developer?
Answers:
username_1: When the async-change prop is enabled, onBlur indeed should not modify currentValue directly; this will be adjusted in the next version
Status: Issue closed
username_1: Fixed in version 2.12.23 |
zyla/rybu | 159819936 | Title: Check exhaustiveness in pattern matching
Question:
username_0: Non-exhaustive pattern matches cause invalid Dedan code to be generated ([498. Undefined continuation service in action](https://github.com/zyla/rybu/wiki/Known-problems---Troubleshooting#semantic-error-498-undefined-continuation-service-in-action)).
We need to compare the return values of server actions with the processes' pattern matches.
Probably strongly typed return values, instead of a global enumeration, would help in this matter. |
kamranahmedse/driver.js | 577918405 | Title: Is it possible to call moveNext when I click on driver-popover-item?
Question:
username_0: I'm trying to trigger the moveNext function when the user clicks inside the popover chained to the highlighted element. Is that possible?
I tried to attach a click event inside driver-popover-item, but sometimes the element doesn't exist yet. |
moby/moby | 257932520 | Title: Out of memory on windows container,
Question:
username_0: **Steps to reproduce the issue:**
1. Run docker for windows.
2. Switch to windows containers.
3. Run container.
**Describe the results you received:**
OutOfMemoryException

**Describe the results you expected:**
No exception.
**Additional information you deem important:**
Here is my code:
```csharp
static void Main(string[] args)
{
    int count = 2048 * 1024;
    List<byte[]> list = new List<byte[]>(count);
    for (int i = 0; i < count; i++)
    {
        list.Add(new byte[1024]);
        Console.WriteLine((long)(i + 1) / 1024 + "MB");
    }
    Console.WriteLine(list.Count);
    Console.ReadLine();
}
```
**Output of `docker version`:**
```
Client:
Version: 17.06.2-ce
API version: 1.30
Go version: go1.8.3
Git commit: cec0b72
Built: Tue Sep 5 19:57:19 2017
OS/Arch: windows/amd64
Server:
Version: 17.06.2-ce
API version: 1.30 (minimum version 1.24)
Go version: go1.8.3
Git commit: cec0b72
Built: Tue Sep 5 19:59:47 2017
OS/Arch: windows/amd64
Experimental: true
```
**Output of `docker info`:**
```
C:\Users\111>docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 5
Server Version: 17.06.2-ce
[Truncated]
Operating System: Windows 10 Pro
OSType: windows
Architecture: x86_64
CPUs: 4
Total Memory: 7.903GiB
Name: DESKTOP-8HO8BQH
ID: EGP5:UGMM:HTWN:LI3W:KRCA:UJA4:WJL3:R2JP:4KHA:VXUG:52L7:XYOT
Docker Root Dir: C:\ProgramData\Docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: -1
Goroutines: 38
System Time: 2017-09-15T13:34:32.1201138+08:00
EventsListeners: 0
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
```
Answers:
username_1: Please provide the exact steps to reproduce and what happened; is the container killed, the host? Is the error occurring inside the container or on the host? How is the container started, and what image is used to run the container?
username_2: @username_0 I can't reproduce this. What version of .NET did you target? The information @username_1 asked for is required too.
username_0: @username_2 Thanks, I have resolved this problem by using "-m 2G" when I run that image. I think when switching to Windows containers, the default memory is 1G.
Status: Issue closed
username_1: Looks like this was resolved then |
tailscale/tailscale | 1187265878 | Title: None
Question:
username_0: Help is scoped by sub-command, because commands accept different flags. See `tailscale up --help`, which does list (among many others) the authkey parameter.
Answers:
username_1: That has not been my experience, and I have enough. For example, the `ip` command with a bunch of subcommands behaves similarly:
```
username_1@tsdev:~$ ip help
Usage: ip [ OPTIONS ] OBJECT { COMMAND | help }
ip [ -force ] -batch filename
where OBJECT := { link | address | addrlabel | route | rule | neigh | ntable |
tunnel | tuntap | maddress | mroute | mrule | monitor | xfrm |
netns | l2tp | fou | macsec | tcp_metrics | token | netconf | ila |
vrf | sr | nexthop | mptcp }
OPTIONS := { -V[ersion] | -s[tatistics] | -d[etails] | -r[esolve] |
-h[uman-readable] | -iec | -j[son] | -p[retty] |
-f[amily] { inet | inet6 | mpls | bridge | link } |
-4 | -6 | -I | -D | -M | -B | -0 |
-l[oops] { maximum-addr-flush-attempts } | -br[ief] |
-o[neline] | -t[imestamp] | -ts[hort] | -b[atch] [filename] |
-rc[vbuf] [size] | -n[etns] name | -N[umeric] | -a[ll] |
-c[olor]}
username_1@tsdev:~$ ip --help
Usage: ip [ OPTIONS ] OBJECT { COMMAND | help }
ip [ -force ] -batch filename
where OBJECT := { link | address | addrlabel | route | rule | neigh | ntable |
tunnel | tuntap | maddress | mroute | mrule | monitor | xfrm |
netns | l2tp | fou | macsec | tcp_metrics | token | netconf | ila |
vrf | sr | nexthop | mptcp }
OPTIONS := { -V[ersion] | -s[tatistics] | -d[etails] | -r[esolve] |
-h[uman-readable] | -iec | -j[son] | -p[retty] |
-f[amily] { inet | inet6 | mpls | bridge | link } |
-4 | -6 | -I | -D | -M | -B | -0 |
-l[oops] { maximum-addr-flush-attempts } | -br[ief] |
-o[neline] | -t[imestamp] | -ts[hort] | -b[atch] [filename] |
-rc[vbuf] [size] | -n[etns] name | -N[umeric] | -a[ll] |
-c[olor]}
```
Status: Issue closed
psi-4ward/docker-contao | 317607282 | Title: DocumentRoot Problem in Contao 4.5
Question:
username_0: Hi,
in Contao 4.5 the document root is not set because of the second condition you use in
https://github.com/username_1/docker-contao/blob/f9c6bfcd1aa999f861c5187b815369684cd45cfc/rootfs/run-httpd#L4
I have to add a nonsense line in composer.json to satisfy the script. ;)
Answers:
username_1: So what's the way to detect the Contao version?
username_0: In the `require` section of composer.json there are version numbers from the different core bundles:
```json
"require": {
    "php": "^7.1",
    "contao/calendar-bundle": "^4.5",
    "contao/comments-bundle": "^4.5",
    "contao/faq-bundle": "^4.5",
    "contao/listing-bundle": "^4.5",
    "contao/manager-bundle": "4.5.*",
    "contao/news-bundle": "^4.5",
    "contao/newsletter-bundle": "^4.5"
},
```
Maybe this could be an approach?
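For illustration, a hedged sketch of that approach (the function name and the choice to read any `contao/*-bundle` constraint are assumptions; the real `run-httpd` script is shell, this just shows the logic in Python):

```python
import json
import re

def detect_contao_version(composer_json):
    """Extract a Contao major.minor version from composer.json by scanning
    the "require" section for any contao/*-bundle constraint (assumed to
    look like "^4.5", "~4.5" or "4.5.*")."""
    require = json.loads(composer_json).get("require", {})
    for package, constraint in require.items():
        if package.startswith("contao/") and package.endswith("-bundle"):
            match = re.search(r"(\d+)\.(\d+)", constraint)
            if match:
                return "{}.{}".format(match.group(1), match.group(2))
    return None

sample = '{"require": {"php": "^7.1", "contao/manager-bundle": "4.5.*"}}'
print(detect_contao_version(sample))  # → 4.5
```

Any of the core bundles would do, since they all carry the same major.minor constraint.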
Status: Issue closed
username_1: Please test the latest in about 30 minutes; Docker Hub needs some time to build the image.
username_0: It works. Thanks!
username_1: Nice, I'll tag a new version
dmwm/CRABServer | 130847802 | Title: check for dead posJobs
Question:
username_0: see #4082
3) The PostJob script should update the file state timestamp once every N minutes (N=15?); if there have been no updates in 3*N minutes, mark the job as failed.
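A minimal sketch of that staleness rule (the interval N, the function name, and the use of epoch seconds are assumptions, not CRAB3 code):

```python
N_MINUTES = 15  # assumed heartbeat interval from the proposal above

def is_postjob_dead(last_update_ts, now_ts, n_minutes=N_MINUTES):
    """True when the PostJob's state timestamp has not been refreshed
    for more than 3*N minutes, i.e. the job should be marked failed."""
    return (now_ts - last_update_ts) > 3 * n_minutes * 60

# A 10-minute-old heartbeat is fine; a 50-minute-old one exceeds 3*N = 45 min.
print(is_postjob_dead(0, 10 * 60))  # → False
print(is_postjob_dead(0, 50 * 60))  # → True
```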
Answers:
username_0: let's review once the task process is working and the PostJob's role is much decreased
username_1: Could someone clarify what problem we're trying to solve here, or if there even is such a problem anymore? To check if the PostJob is somehow hung on some part of the code is an interesting problem, but not something that has a clear solution I don't think.
username_0: the problem is exactly to detect situations where PostJobs get hung.
It is certainly true that this has not happened in recent times. But it is part of a good tool to have this kind of self-diagnosing capability, to lessen the need for someone to check things periodically.
It is surely not an urgent topic given that current PostJobs run nicely (other than things like bad CEPH volumes, which we notice anyhow).
Maybe we can generalize as:
The Task Process should detect if the task status has not been updated for an abnormally long time. I can't say what is abnormal, but we can imagine starting with things as simple as "no update in 24h" and refining the checks as we hit things.
I think the milestone here refers more to reviewing what TaskProcess does vs. what PostJob does than to implementing any specific final solution.
Status: Issue closed
username_1: To summarize, while this would be nice to have, my view is that this is something hard to achieve and not worth the effort knowing that the PostJobs don't seem to die randomly.
newman55/unity-mod-manager | 709704848 | Title: It's impossible to get the version if it requires several method calls
Question:
username_0: To get the version in Desperados III you need to call something like `MiVersion.Version.ToFullString()` where `Version` is a getter. As far as I can tell the parser always sees `MiVersion.Version` as a type and tries to call the `ToFullString` method on it, but since `MiVersion.Version` is not actually a type it fails.
I would be up to implement a fix but I wanted to ask first what you think the best solution is. My two ideas would be to either check every combination to see if one of them is a getter, or to introduce some special syntax to separate the type from the calls, like maybe `MiVersion::Version.ToFullString` similar to how Java does it. Although the `:` would clash with the `:After`, so maybe something else would be better, maybe `$` or `@`. On the other hand, since the `:After` modifiers don't make sense for the version anyways, it could just be separated. And of course, the old method still has to work in any case.
Quick question while I'm at it: The guide for adding a new game says that the starting point should be as early as possible. Is there a reason to have it really early (like even before the main menu) besides maybe that mods could then modify very early code?
And another thing, maybe it would be a good idea to close issues after they are solved? Otherwise, it gets a bit hard to see the ones that are actually still open.
Answers:
username_1: 1. The version is interpreted as a static method or static field. So you can just write it in the config like this.
<GameVersionPoint>[Assembly-CSharp.dll]MiVersion.Version.ToFullString</GameVersionPoint>
Other ways can be implemented through additional processing [script](https://github.com/username_1/unity-mod-manager/blob/8be8e021853dd21159d033993a1665c2d1ee72db/UnityModManager/Games.cs#L103).
2. There is no reason. I wrote this for those who do not understand what they need. You can choose any position.
3. I think some of the answers will warn against duplicating topics. In addition, you can close the issues yourself.
username_0: [Assembly-CSharp.dll]MiVersion.Version.ToFullString
Yeah, I get that and tried that already but `MiVersion.Version` is a getter so what I need to do is more like `MiVersion.get_Version().ToFullString()`. I don't think that's currently possible, right? I can see that it would be possible if the last call was `ToString` since that is called automatically but that doesn't return a proper version string for this game.
GameScripts seem like a possible solution. Do I understand that correctly that I should just create a PR that adds a game script to the file you linked? If that's the preferred way then I'll do that.
I was just wondering if it wouldn't be better to implement a more general solution that wouldn't require a game script for every game where you can't get the version with a direct static field access or method call. But I guess that's maybe not that common so game scripts might be the easiest solution. It's your call, I would be happy to implement a solution that works for the XML config but I also don't mind just creating a game script.
2. Ok, thanks for the info.
3. That's true, though I don't think that really applies to issues asking to add a game. I would close them myself if I could but only you or the person that created the issue can close it. But I guess it doesn't really matter, I just wanted to point it out.
username_1: You're right, two methods cannot be used at once. But now I have no other solution.
Your script will look something like this.
```csharp
class DesperadosIII : GameScript
{
public override void OnBeforeLoadMods()
{
string ver = "Get value via reflection";
gameVersion = ParseVersion(ver);
}
}
```
username_0: Yes, I know, that's exactly why I'm offering to implement a solution 😅
I'm not sure if I'm somehow not communicating my point properly.
Basically, I think the best solution would be if there were a way to do this without GameScripts. I'm totally fine with writing a GameScript but I think it would be better overall if this would be possible without a GameScript since it would make it easier for other (future) games that might have the same problem and not require a special GameScript for each of them.
After thinking about it a bit, I think it would be best if you could do this: `<GameVersionPoint>[Assembly-CSharp.dll]MiVersion.get_Version().ToFullString()</GameVersionPoint>`. The old way could still work but UMM could for example detect if the string contains `()` and then parse it the new way.
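For what it's worth, a sketch of that `()` detection (UMM itself is C#; the function name and the split-on-`.` tokenization below are assumptions, and a real parser would need to be more careful):

```python
def parse_version_point(spec):
    """Split "[Assembly.dll]Type.get_X().ToY()" into (assembly, type path,
    chain of method calls); the old dotted form yields an empty call chain,
    so the existing behaviour can be kept as a fallback."""
    assembly, rest = spec.lstrip("[").split("]", 1)
    segments = rest.split(".")  # parameterless "()" never contains a dot
    calls = [s[:-2] for s in segments if s.endswith("()")]
    type_path = ".".join(s for s in segments if not s.endswith("()"))
    return assembly, type_path, calls

print(parse_version_point("[Assembly-CSharp.dll]MiVersion.get_Version().ToFullString()"))
# → ('Assembly-CSharp.dll', 'MiVersion', ['get_Version', 'ToFullString'])
```

The chain would then be resolved via reflection: look up the static member on the type, invoke each call in order, and parse the final string as a version.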
If you think this sounds like a good idea then I would implement this in UMM. If you have a better idea for how this could work I would also be happy to implement that. I'm also fine if you think this sounds like a stupid idea in which case I'm totally okay with just writing a GameScript.
I just want to have a clear answer on what you prefer. So far it feels like you have been kind of avoiding my question and just pointing at the game scripts without saying anything about whether or not you would accept those changes, and I'm not really sure what to make of that. If you don't want me to make any changes, that's totally fine with me, but please just tell me.
username_1: If this does not break compatibility, I'm all for it.
username_0: Ok, thanks, here is the PR: #60
Status: Issue closed
dbeaver/dbeaver | 351635582 | Title: java.lang.NullPointerException while opening ssh tunnel
Question:
username_0: I have a _java.lang.NullPointerException_ while opening ssh tunnel
"Test tunnel configuration" runs ok but when I open the database the null pointer exception is returned.
DBever v5.1.5.201808130751
The tunnel config is the same as previous version
Console error is:
`2018-08-17 17:17:15.704 - SSH INFO: Connecting to XXXXX port 22
2018-08-17 17:17:15.709 - Connection failed (mysql5-1574bb9e2c8-629fe0c048a1d356)
2018-08-17 17:17:15.762 - org.jkiss.dbeaver.model.exec.DBCException: Can't initialize tunnel
org.jkiss.dbeaver.model.exec.DBCException: Can't initialize tunnel
at org.jkiss.dbeaver.registry.DataSourceDescriptor.connect(DataSourceDescriptor.java:739)
at org.jkiss.dbeaver.runtime.jobs.ConnectJob.run(ConnectJob.java:70)
at org.jkiss.dbeaver.runtime.jobs.ConnectJob.runSync(ConnectJob.java:98)
at org.jkiss.dbeaver.ui.actions.datasource.DataSourceHandler.connectToDataSource(DataSourceHandler.java:106)
at org.jkiss.dbeaver.registry.DataSourceDescriptor.initConnection(DataSourceDescriptor.java:658)
at org.jkiss.dbeaver.model.navigator.DBNDataSource.initializeNode(DBNDataSource.java:147)
at org.jkiss.dbeaver.model.navigator.DBNDatabaseNode.getChildren(DBNDatabaseNode.java:195)
at org.jkiss.dbeaver.model.navigator.DBNDatabaseNode.getChildren(DBNDatabaseNode.java:1)
at org.jkiss.dbeaver.ui.navigator.NavigatorUtils.getNodeChildrenFiltered(NavigatorUtils.java:564)
at org.jkiss.dbeaver.ui.navigator.database.load.TreeLoadService.evaluate(TreeLoadService.java:49)
at org.jkiss.dbeaver.ui.navigator.database.load.TreeLoadService.evaluate(TreeLoadService.java:1)
at org.jkiss.dbeaver.ui.LoadingJob.run(LoadingJob.java:86)
at org.jkiss.dbeaver.ui.LoadingJob.run(LoadingJob.java:71)
at org.jkiss.dbeaver.model.runtime.AbstractJob.run(AbstractJob.java:95)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:60)
Caused by: org.jkiss.dbeaver.DBException: Cannot establish tunnel
at org.jkiss.dbeaver.model.net.ssh.SSHImplementationJsch.setupTunnel(SSHImplementationJsch.java:81)
at org.jkiss.dbeaver.model.net.ssh.SSHImplementationAbstract.initTunnel(SSHImplementationAbstract.java:125)
at org.jkiss.dbeaver.model.net.ssh.SSHTunnelImpl.initializeTunnel(SSHTunnelImpl.java:72)
at org.jkiss.dbeaver.registry.DataSourceDescriptor.connect(DataSourceDescriptor.java:734)
... 14 more
Caused by: com.jcraft.jsch.JSchException: java.lang.NullPointerException
at com.jcraft.jsch.Util.createSocket(Util.java:394)
at com.jcraft.jsch.Session.connect(Session.java:215)
at org.jkiss.dbeaver.model.net.ssh.SSHImplementationJsch.setupTunnel(SSHImplementationJsch.java:73)
... 17 more
Caused by: java.lang.NullPointerException
at org.jkiss.dbeaver.model.net.ssh.SSHTunnelImpl.matchesParameters(SSHTunnelImpl.java:86)
at org.jkiss.dbeaver.model.exec.DBExecUtils.findConnectionContext(DBExecUtils.java:75)
at org.jkiss.dbeaver.runtime.net.GlobalProxySelector.select(GlobalProxySelector.java:66)
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:394)
at java.base/java.net.Socket.connect(Socket.java:591)
at java.base/java.net.Socket.connect(Socket.java:540)
at java.base/java.net.Socket.<init>(Socket.java:436)
at java.base/java.net.Socket.<init>(Socket.java:213)
at com.jcraft.jsch.Util$1.run(Util.java:362)
at java.base/java.lang.Thread.run(Thread.java:844)`
Answers:
username_1: I can see how this may happen in theory but I can't reproduce it.
Looks like there is no host name in your target database socket URI.
Anyhow, I'll add the fix.
Please check this in EA version (https://dbeaver.io/files/ea/)
username_0: With the EA version it runs ok and opens the ssh tunnel as expected.
Thank you for fast service :-)
username_1: Thanks for testing :)
Status: Issue closed
getgrav/grav | 157534916 | Title: How do I get the last item of a collection in some other part of the website
Question:
username_0: Hi;
I have a page template that must pull its summary from the last published page in an (unpublished) blog section in another part of the website.
I've gotten to the point where I've been able to print all of the summaries of the pages below the section *seasons*, but I can't find a way to print just the last one. Here is the code I have working in my template:
```
<div class="header">
<div class="header_content">
<h1>{{ page.header.title }}</h1>
{% for p in page.find('/seasons').children if p != page %}
<p class="lead">{{p.route}}—{{ p.summary|striptags}}</p>
{% endfor %}
</div>
</div>
```
This is the structure of the site:
```
_
|
— contents
|
— seasons
|
— Page 1
|
— Page 2
|
— Page 3
```
What I want is to display just Page 3 as a summary in another page (which is a blog of the Contents section), but I get all Pages 1, 2 and 3.
I'm learning GravCMS, but even though the documentation is great, I can't find a way to use `isLast()` properly. In my case, I'm sorting by 'date' to get the last one.
Can anyone help with this please?
Answers:
username_1: According to Twig docs (http://twig.sensiolabs.org/doc/filters/last.html) you should be able to do this:
```
<div class="header">
<div class="header_content">
<h1>{{ page.header.title }}</h1>
{% for p in page.find('/seasons').children|last if p != page %}
<p class="lead">{{p.route}}—{{ p.summary|striptags}}</p>
{% endfor %}
</div>
</div>
```
Status: Issue closed
username_0: Hi username_1! Thank you for your reply! I tried your suggestion
```
{% for p in page.find('/seasons').children|last if p != page %}
<p class="lead">{{p.route}}—{{ p.summary|striptags}}</p>
{% endfor %}
```
and for some reason that returned empty, so I clicked the link you suggested, and that reminded me about [this in the Grav cookbook](https://learn.getgrav.org/cookbook/twig-recipes). Following that I changed it to
```
{% for p in page.find('/seasons').children.order('date', 'desc').slice(0, 1) %}
```
and that did it. It's now correctly displaying the Page with the latest date as a summary in my Contents blog. Thank you for the suggestions!
ceu-lang/ceu-maker | 327278935 | Title: [pico-ceu] API for Serial ports
Question:
username_0: Same API used in `ceu-arduino`:
https://github.com/ceu-arduino/driver-usart/blob/master/usart.ceu
- Also needs to receive a `port` parameter.
- Note: *When addressing ports larger than COM9 in Windows you will have to specify the port thusly: "COM10" becomes "\\\\.\\COM10" See: http://support.microsoft.com/default.aspx?scid=kb;EN-US;q115831*
Answers:
username_1: The link wasn't working. Instead, I used [this one](https://support.microsoft.com/en-us/help/115831/howto-specify-serial-ports-larger-than-com9) as a base.
picturae/OpenSeadragonMagnifier | 144556129 | Title: Magnifier
Question:
username_0: I tested it and it mostly works fine. There are some minor bugs that i´ll try to fix.
I think that would be more useful that display region could move with mouse cursor. Do you think it´s possible? I´m workin on it.
Thx!
Answers:
username_1: Note: _I got confused when thinking of regions, so I am going to call the box that moves around a "highlight region/box" and the box in the corner that shows the zoomed image a "display region/box"._
That was the initial idea. To be able to drag the highlight & display regions around with the mouse and resize them by dragging a corner. Resizing the regions could then change the zoom ratio on the display region accordingly. For the display box, it might be better to drag it by a handle in the upper-left corner so it can still be navigated as a normal viewer, but that might be better as an option.
On another note: How do you plan to finish this? You can submit pull requests in this repo or fork it. I can ask for access for you to commit directly here, but this is a company-owned repo, so I don't know if you can be given access. If you decide to just fork, please let me know when you reach a more stable state of the plugin so I can add a link to your fork in the README for future visitors.
username_0: Ok. I meant that another kind of magnifier would be nice: I click a button to activate the magnifier, then I move around the image (the main viewer, where the display region is located) with the mouse, and at the same time I see the zoomed image in the "highlighted region". I think it's an inverted process. Currently, you need to move within the highlighted region.
username_0: I don't have access, but I would like to support it. Let me know how I can do it.
Kind regards!
Kind regards!
username_1: @username_2 @dthornley How can we let him support this plugin? Repo access? If no one else at picturae is going to finish this, maybe just hand it over entirely?
username_2: Well, in my opinion he can just create a fork and open a pull request. And if it's useful and doesn't break existing code, we can merge it back into the plugin.
username_0: There is a bug when I rotate an image. The highlight region/box gets resized in the wrong way. I think the display region/box should rotate too. Any thoughts and suggestions? Kind regards.
moov-io/watchman | 537818762 | Title: search: ?q=24 incorrectly matches against partial remarks ID
Question:
username_0: Reported from @atonks2
A query like: `?q=24` incorrectly matches against a partial remarks ID from this SDN:
```
23410,"WORLD WATER FISHERIES LIMITED",-0- ,"LIBYA3",-0- ,-0- ,-0- ,-0- ,-0- ,-0- ,-0- ,"D-U-N-S Number 56-558-7594; V.A.T. Number MT15388917 (Malta); Trade License No. C 24129 (Malta); Company Number 4220856; Linked To: DE<NAME>."
```
The extracted remarks ID from this SDN record is `C 24129`, which is correct. Our current logic for when a space exists is to just `strings.Contains` the query and parsed ID. This logic needs to change such that "all the numeric parts have to exactly match".
- `?q=24129` should return this SDN
- `?q=C+24129` should return this SDN
- `?q=24` should **not** return this SDN
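A hedged sketch of the proposed rule (watchman itself is Go; the Python below only illustrates the logic, and the function name and tokenization are assumptions): every numeric token in the query must exactly equal a numeric token of the parsed ID, instead of the current substring match.

```python
import re

def remarks_id_matches(query, parsed_id):
    """Proposed replacement for the plain strings.Contains check:
    every numeric token in the query must exactly equal some numeric
    token in the parsed ID, so "24" no longer partially matches "24129"."""
    def numeric_tokens(s):
        return re.findall(r"\d+", s)
    id_tokens = set(numeric_tokens(parsed_id))
    query_tokens = numeric_tokens(query)
    if not query_tokens:
        return False
    return all(tok in id_tokens for tok in query_tokens)

print(remarks_id_matches("24129", "C 24129"))    # → True
print(remarks_id_matches("C 24129", "C 24129"))  # → True
print(remarks_id_matches("24", "C 24129"))       # → False
```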
Status: Issue closed |
programadores-br/geral | 1159897315 | Title: [Remote] Fullstack Software Developer at Pixida do Brasil
Question:
username_0: ## Descrição da vaga:
Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios.
Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/vagas/fullstack-software-developer-174317151?utm_source=github&utm_medium=programadores-br-geral&modal=open) com o pop-up personalizado de candidatura. 👋
<p><strong>Pixida do Brasil</strong> is seeking <strong><ins>Front-End Software Developer</ins></strong> to work remotely in Brazil for the company's office.</p>
<p>We are seeking a Fullstack Software Developer, who wants to strengthen our software development and industrial automation team creating completely new degrees of freedom beyond proprietary industrial automation systems for the automated configuration, operation, maintenance and continuous change of production plants. Our customer is closing the communication gap between IT and OT and develops an enabler for AI based self-optimizing production plants and the next technological leap in productivity for the world.</p>
## Pixida do Brasil:
<p>Pixida Group is an innovative technology consulting corporate group with focus on digitalization and mobility solutions for different industries. Our consultants and developers offer expertise in the fields of IoT, Telematics, Location-based Services, Multimedia, Driver Assistance Systems, Cloud Solutions and Data Analytic. They design tailor-made products and applications for highly challenging technical environments to meet individual customer requirements across all sectors at a global level.</p>
<p>Pixida is international. The group’s cooperation and exchange of knowledge transcend national borders beyond Germany, USA, Brazil and China.</p><a href='https://coodesh.com/empresas/pixida-do-brasil'>See more on the website</a>
## Skills:
- React.js
- HTML 5
- CSS
- AWS
- Flutter
- Microservices
- Terraform
- JSON
- Docker
## Location:
100% Remote
## Requirements:
- Deep knowledge in one or more of the following technology areas from experience in a professional environment or a considerable private project
- Backend with e.g. Java, NodeJS, SQL Databases, No-SQL Databases, or more
- Frontend with e.g. JavaScript, React, flutter, TypeScript, or more
- Cloud Platforms and DevOps with e.g. AWS, Azure, Microservices, Terraform, or more
- Web Security with Authentication or JSON Web Token
- Git, Gitlab or Docker
- Practical knowledge encompassing at least one full tech stack
- Job definition:
- Take responsibility for developing new cloud-based products or major features using the latest technologies and methodologies
- Create concepts for future developments of existing systems
- Advance and foster our culture of best practices, clean code, and testing in our development processes
## Nice to have:
- Cloud training and certifications
## Benefits:
- Flexible working hours, part- and full time working models, mobile working/home office – we have the best solution for you!
- Individual guidance and mentoring from the start
- Personal development to improve your professional growth
- Working with the latest technology and state-of-the-art equipment
- Short- and long-term international opportunities
- The opportunity to be part of company development with your own ideas
## How to apply:
Apply exclusively through the Coodesh platform via the following link: [Fullstack Software Developer at Pixida do Brasil](https://coodesh.com/vagas/fullstack-software-developer-174317151?utm_source=github&utm_medium=programadores-br-geral&modal=open)
After applying via the Coodesh platform and validating your login, you will be able to follow and receive all interactions of the process there. Use the **Pedir Feedback** (Request Feedback) option between one stage and the next of the job you applied to. This will notify the **Recruiter** responsible for the process at the company.
## Labels
#### Allocation
Remote
#### Contract type
CLT
#### Category
Full-Stack |