repo_name (string, 4–136 chars) | issue_id (string, 5–10 chars) | text (string, 37–4.84M chars) |
---|---|---|
akka/akka-meta | 239068986 | Title: Akka Sprint Plan 2017-06-26
Question:
username_0: # Sprint plan for the core Akka team
3 weeks
Start: 2017-06-26
End: 2017-07-14
## Data center aware cluster membership
* Membership, leaders
* Failure detection
* Split Brain Resolver
* Singleton
* Sharding
## Other
* Complete hardening of Artery lanes > 1, and performance optimization by implementing `PartitionHub`
* Akka HTTP Quickstart guide
* Run performance and scalability tests on Google Cloud if we get access to the full scale environment
* More requirements gathering of other multi-DC features
* Help / unblock Play team if they need help after the 2.6 release
## Bugs and failures
[Bug](https://github.com/akka/akka/issues?q=is%3Aopen+is%3Aissue+label%3Abug) start count: 12
[Failure](https://github.com/akka/akka/issues?q=is%3Aopen+is%3Aissue+label%3Afailed) start count: 38
Akka HTTP [Bug](https://github.com/akka/akka-http/issues?q=is%3Aopen+is%3Aissue+label%3Abug) start count: 38
Akka HTTP [Failure](https://github.com/akka/akka-http/issues?q=is%3Aopen+is%3Aissue+label%3Afailed) start count: 12
Answers:
username_1: Checklist for things to finish this week here:
- [ ] Complete remaining tasks and final polish of the multi-dc stuff, perhaps some additional tests?
- [ ] Review Getting Started PR.
- [ ] Final review and merge of the akka.io PR
- [ ] Review HTTP Quickstart PR, and involve Ruth.
- [ ] Publish new sharding example to TechHub
username_1: Data center aware cluster membership
----------------------------------------
// Note: "done" here means "for this sprint", by definition we will re-visit those after more distributed testing and continue covering edge cases. These features are not yet ready to be released.
- Membership, leaders - done
- DatacenterReachable events - we have the events done
- Not passing the new cross-DC tests yet, could be failure detector issues?
- TODO: Rethink when to fire the event - when all nodes are split off or just a few
- Failure detection - done, though needs more testing
- Split Brain Resolver - worked on adjusting it to multi DC settings; Is not aimed to cover automatic downing of DCs, that's not the usual use case
- Singleton - should work, assuming events work
- Sharding - started, but not finished
- Additional: worked on fixes for DData
Other
-------
- Complete hardening of Artery lanes > 1, and performance optimization by implementing PartitionHub - done, final review and merge https://github.com/akka/akka/pull/23102
- Akka HTTP Quickstart guide
- Content is there, polishing/editing still in progress, will not hinder us in the next sprint; it's going on async
- Run performance and scalability tests on Google Cloud if we get access to the full scale environment
- Slipped waiting for environment to be provided
- Gathering more requirements of other multi-DC features - done and in progress still
- Help / unblock Play team if they need help after the 2.6 release
- We’ll continue on this, more actively this sprint
Bugs and failures
------------------
- Bug start count: 12
- Bug end count: 16 (closed 513)
- Failure start count: 38
- Failure end count: 44 (closed 373) -- many new in the branch about multi-dc
- Akka HTTP Bug start count: 38
- Akka HTTP Bug end count: 42 (closed 83)
- Akka HTTP Failure start count: 12
- Akka HTTP Failure end count: 13 (closed 25)
Status: Issue closed
username_0: Have you created issues for remaining dc work so that we know what is left? E.g. What is missing for sharding?
username_1: Not yet, will do though. Sharding mostly you researched if I remember correctly, so we weren't as sure what's missing.
username_0: No, I implemented it.
username_1: Team memory about it was a bit vague :-) Will go over issues and open things in my morning :)
username_1: Fixed status of singleton / sharding, opened follow up tickets |
KenSuenobu/rust-pushrod | 424005148 | Title: Event: Mouse click up inside widget
Question:
username_0: Create callback for when a mouse clicks up _inside_ a widget. (This will trigger an `on_click` callback) Apple equivalent event is `MouseEventTouchUpInside`
Answers:
username_0: Uses a new button map in the run loop. Code completed!
Status: Issue closed
|
loris/api-query-params | 380597598 | Title: Exclude key=""
Question:
username_0: Really nice lib!
The only thing I'm having trouble figuring out is how to exclude "" values:
```
{
id: 23423,
key1 : "test",
key2 : ""
}
```
I tried &key2!="" with no effect.
Thanks!
Status: Issue closed
Answers:
username_1: Hi @username_0
Filtering on existing but empty values is indeed not possible at the moment.
One workaround is to use the `filter` param which give you access to more advanced mongo operations, for instance `key1=test&filter={"key2":{"$ne":""}}` would work.
I was considering changing the way "exists" / "do not exists" operators work in the lib:
Instead of mapping exactly the $exists mongodb operator (ie, `&foo` would translate to `{ foo: { $exists: true }}` and `&!bar` would translate to `{ bar: { $exists: false }}`), I was considering adding a test to check if the values are null or empty too: ie `&foo` would translate to `{ foo: { $exists: true, $nin: [null, '', []]}}`. This would be a breaking change tho, so next major version.
username_0: Thank you!
username_2: @username_1 Thank you very much for providing this awesome tool.
How is the $exists changing going? |
CutiaGames/CutiaEngine | 322153352 | Title: Add AnimatorController
Question:
username_0: AnimatorController is the animation controller.
A component that manages sprites according to time/frame.
- [ ] Store images in a vector
- [ ] Iterate through images at arbitrary time intervals
- [ ] Change animation according to state<issue_closed>
Status: Issue closed |
PietSmietde/Bugs | 482966579 | Title: Bug: Incorrect error message for tags
Question:
username_0: **Describe the bug**
When you click on a tag (the X+) while not logged in, the message only says "Du musst dafür angemeldet" without the required "sein".
**Reproduction**
Steps to reproduce the bug:
1. Be logged out
2. On a video, click the X+ next to a tag
**Expected behavior**
"Du musst dafür angemeldet sein" should appear
**Screenshots**
<img width="543" alt="2019-08-20_18h36_42" src="https://user-images.githubusercontent.com/54325828/63366200-7bfabd80-c379-11e9-9d66-6864b6d04a5c.png">
**Desktop (please complete the following information):**
- Operating system: Windows
- Browser: Firefox
- Browser version: 68.0.2 (64-bit)
Status: Issue closed
Answers:
username_1: Fixed |
stackblitz/core | 605135193 | Title: Minor: Angular generator icon for components is tiny
Question:
username_0: 
Presumably this is unintentional.
Status: Issue closed
Answers:
username_1: Thanks for submitting the issue @username_0!
Yes, definitely unintentional - this is fixed now :) |
MasterAler/SampleYUVRenderer | 974951818 | Title: SharpGL does not work
Question:
username_0: ```
public unsafe void PushFrame(AVFrame* frame)
{
    switch ((AVPixelFormat)frame->format)
    {
        //case AVPixelFormat.AV_PIX_FMT_NV12://VGA decode
        //    break;
        case AVPixelFormat.AV_PIX_FMT_YUV420P://h264 decode
            break;
        default: throw new NotSupportedException(((AVPixelFormat)frame->format).ToString());
    }
    AVFrame* clone = av_frame_clone(frame);
    frames.Enqueue(clone);
    Console.WriteLine($"Pushed {clone->pts}, WxH: {clone->width}x{clone->height} linesize:{clone->linesize[0]},{clone->linesize[1]},{clone->linesize[2]}");
    //Console Output: Pushed 3196433, WxH: 1080x2400 linesize:1152,576,576
}
```
Can you look a bit my code? Thank.
[InitShader](https://github.com/username_0/TqkLibrary.Media.VideoPlayer/blob/90edd8d676dd5df818aa4ca89804c5f79ed609df/TqkLibrary.Media.VideoPlayer.OpenGl/OpenGlVideoPlayer.xaml.cs#L135) (sub function: [BuildShader](https://github.com/username_0/TqkLibrary.Media.VideoPlayer/blob/90edd8d676dd5df818aa4ca89804c5f79ed609df/TqkLibrary.Media.VideoPlayer.OpenGl/ShadersHelper.cs#L63))
[RenderFrame](https://github.com/username_0/TqkLibrary.Media.VideoPlayer/blob/90edd8d676dd5df818aa4ca89804c5f79ed609df/TqkLibrary.Media.VideoPlayer.OpenGl/OpenGlVideoPlayer.xaml.cs#L184)<issue_closed>
Status: Issue closed |
abynim/Sketch-Headers | 923715597 | Title: This file does not contain any Objective-C runtime information.
Question:
username_0: When I run `class-dump -H sketchtool -o /output`, it shows the error "This file does not contain any Objective-C runtime information." Please help me.
Answers:
username_1: Yeah it seems like sketchtool is a different kind of executable and class-dump is unable to read it. You might need to use a [disassembler](https://www.hopperapp.com) to peek at the headers instead of using class-dump.
I'm curious why you're looking for sketchtool headers though.
username_0: sorry, my mistake, I use "class-dump -H Sketch -o /output" |
freelawproject/juriscraper | 208458333 | Title: Bunch of DeprecationWarnings in Python3 due to invalid escape sequences
Question:
username_0: Some scrapers still have potentially issue-prone regex patterns that could be an issue in Py3.7+. Guess I didn't catch these before.
Simple fix is to set these string literals to raw string literals.
```
Finds all the $module_example* files and tests them with the sample ... /Users/dave/src/freelawproject/juriscraper/juriscraper/opinions/united_states/federal_appellate/ca8.py:23: DeprecationWarning: invalid escape sequence \d
case_name_regex = re.compile('(\d{2}/\d{2}/\d{4})(.*)')
/Users/dave/src/freelawproject/juriscraper/juriscraper/opinions/united_states/federal_appellate/ca8.py:33: DeprecationWarning: invalid escape sequence \d
case_date_regex = re.compile('(\d{2}/\d{2}/\d{4})(.*)')
/Users/dave/src/freelawproject/juriscraper/juriscraper/opinions/united_states/federal_appellate/ca8.py:41: DeprecationWarning: invalid escape sequence \d
docket_number_regex = re.compile('(\d{2})(\d{4})(u|p)', re.IGNORECASE)
/Users/dave/src/freelawproject/juriscraper/juriscraper/opinions/united_states/federal_district/dcd.py:81: DeprecationWarning: invalid escape sequence \?
regex = re.compile('(\?)(\d+)([a-z]+)(\d+)(-)(.*)')
/Users/dave/src/freelawproject/juriscraper/juriscraper/opinions/united_states/federal_district/dcd.py:101: DeprecationWarning: invalid escape sequence \s
judge = re.search('(by\s)(.*)', judge_string, re.MULTILINE).group(2)
/Users/dave/src/freelawproject/juriscraper/juriscraper/opinions/united_states/federal_district/dcd.py:113: DeprecationWarning: invalid escape sequence \?
regex = '(\?)(\d+)([a-z]+)(\d+)(\-)(.*)'
/Users/dave/src/freelawproject/juriscraper/juriscraper/opinions/united_states/federal_special/acca_p.py:21: DeprecationWarning: invalid escape sequence \d
self.docket_case_name_splitter = re.compile('(.*[\dX]{5,8})(.*)')
/Users/dave/src/freelawproject/juriscraper/juriscraper/opinions/united_states/state/fla.py:22: DeprecationWarning: invalid escape sequence \d
self.regex = re.compile("(S?C\d+-\d+)(.*)")
/Users/dave/src/freelawproject/juriscraper/juriscraper/opinions/united_states/state/fladistctapp_3.py:72: DeprecationWarning: invalid escape sequence \d
text = re.search('(\d{2}-\d{2}-\d{4})', text).group(1)
/Users/dave/src/freelawproject/juriscraper/juriscraper/opinions/united_states/state/fladistctapp_5.py:31: DeprecationWarning: invalid escape sequence \d
self.case_regex = '(5D.*-.*\d{1,3})([- ]+[A-Za-z].*)'
/Users/dave/src/freelawproject/juriscraper/juriscraper/opinions/united_states/state/miss.py:38: DeprecationWarning: invalid escape sequence \d
date_re = re.compile('(\d{2}-\d{2}-\d{4})')
/Users/dave/src/freelawproject/juriscraper/juriscraper/opinions/united_states/state/nc.py:37: DeprecationWarning: invalid escape sequence \d
date_cleaner = "\d+ \w+ [12][90]\d\d"
/Users/dave/src/freelawproject/juriscraper/juriscraper/opinions/united_states/state/nc.py:105: DeprecationWarning: invalid escape sequence \(
download_url = re.search('viewopinion\("(.*)"',
/Users/dave/src/freelawproject/juriscraper/juriscraper/opinions/united_states/state/nc.py:71: DeprecationWarning: invalid escape sequence \(
'viewopinion\("(.*)"',
/Users/dave/src/freelawproject/juriscraper/juriscraper/opinions/united_states/state/nc.py:130: DeprecationWarning: invalid escape sequence \d
docket_number = re.search('(.*\d).*?',
/Users/dave/src/freelawproject/juriscraper/juriscraper/opinions/united_states/state/nc.py:135: DeprecationWarning: invalid escape sequence \d
if not re.search('^\d\d.*\d\d$', neutral_cite):
/Users/dave/src/freelawproject/juriscraper/juriscraper/opinions/united_states/state/nd.py:47: DeprecationWarning: invalid escape sequence \d
citation_pattern = '^.{0,5}(\d{4} ND (?:App )?\d{1,4})'
/Users/dave/src/freelawproject/juriscraper/juriscraper/opinions/united_states/state/or.py:29: DeprecationWarning: invalid escape sequence \d
docket_numbers.append(' & '.join(re.findall('S\d+', s)))
/Users/dave/src/freelawproject/juriscraper/juriscraper/opinions/united_states/state/pacommwct.py:24: DeprecationWarning: invalid escape sequence \s
self.set_regex("(.*)(?:- |et al.\s+)(\d+.*\d{4})")
/Users/dave/src/freelawproject/juriscraper/juriscraper/opinions/united_states/state/ri_p.py:82: DeprecationWarning: invalid escape sequence \(
regex = '(.*?)(\((\w+\s+\d+\,\s+\d+)\))(.*?)'
/Users/dave/src/freelawproject/juriscraper/juriscraper/opinions/united_states/state/ri_p.py:101: DeprecationWarning: invalid escape sequence \s
'(.*?)(,?\sNos?\.)(.*?)',
/Users/dave/src/freelawproject/juriscraper/juriscraper/opinions/united_states/state/ri_p.py:103: DeprecationWarning: invalid escape sequence \s
'(.*?)(,?\s\d+-\d+(,|\s))(.*?)',
/Users/dave/src/freelawproject/juriscraper/juriscraper/opinions/united_states/state/ri_p.py:106: DeprecationWarning: invalid escape sequence \s
'(.*?)(,?\s(?:\w+-)?\d+-\d+(,|\s))(.*?)',
/Users/dave/src/freelawproject/juriscraper/juriscraper/opinions/united_states/state/sd.py:46: DeprecationWarning: invalid escape sequence \d
case_name = re.search('(.*)(\d{4} S\.?D\.? \d{1,4})', s, re.MULTILINE).group(1)
/Users/dave/src/freelawproject/juriscraper/juriscraper/opinions/united_states/state/sd.py:62: DeprecationWarning: invalid escape sequence \d
neutral_cite = re.search('(.*)(\d{4} S\.?D\.? \d{1,4})', s, re.MULTILINE).group(2)
/Users/dave/src/freelawproject/juriscraper/juriscraper/opinions/united_states_backscrapers/federal_district/dcd_2013.py:101: DeprecationWarning: invalid escape sequence \s
judge = re.search('(by\s)(.*)', judge_string, re.MULTILINE).group(2)
/Users/dave/src/freelawproject/juriscraper/juriscraper/opinions/united_states_backscrapers/federal_district/dcd_2013.py:113: DeprecationWarning: invalid escape sequence \?
regex = '(\?)(\d+)([a-z]+)(\d+)(\-)(.*)'
/Users/dave/src/freelawproject/juriscraper/juriscraper/oral_args/united_states/federal_appellate/ca3.py:20: DeprecationWarning: invalid escape sequence \d
self.regex = '(\d{2}-\d{3,4})?(.+)\.(:?(wma)|(mp3))'
```
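For instance, taking the first `ca8.py` warning above, the change is just the raw-string prefix on the pattern literal (a minimal before/after sketch, not a patch against the actual module):
```python
import re

# Before: '\d' inside a normal string literal triggers the DeprecationWarning
case_name_regex = re.compile('(\d{2}/\d{2}/\d{4})(.*)')

# After: raw string literal, identical regex, no warning on Python 3.7+
case_name_regex = re.compile(r'(\d{2}/\d{2}/\d{4})(.*)')
```
Raw strings leave the backslashes for the `re` engine to interpret, so the compiled patterns behave exactly as before.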
Answers:
username_1: There are a lot more problems than just this when trying to run Juriscraper in Python 3. Tried running:
`````
python3 setup.py test
````
Are python2 and python3 compatibility desired?
username_2: Py3 is desired in the broad sense, but "nobody" is asking for it yet. Until CourtListener itself is Py3 ready, doing Juriscraper is good, but not a huge thing. IIRC, we turned off Travis testing for py3 a while back and with a lot of sadness.
All of that said, I'm totally in favor of and enthusiastic about Py3 compatibility, especially if, like here, it sounds fairly easy.
username_3: in https://github.com/freelawproject/juriscraper/commit/c7b6fef5b9e7177481542be7651ac35aa5571aa3 @username_2 cited requests-mock (which now seems to be python3 over at https://github.com/jamielennox/requests-mock) and jsondate, which doesn't seem likely to get updated on its own (https://github.com/rconradharris/jsondate/issues/7)
username_2: If jsondate is the only issue, I wonder if the easiest path here is either:
- Forking and hosting our own version of jsondate (as sometimes happens when py2 stuff is abandoned), or
- Dropping our dependency on it and figuring out a different way forward.
I think either should be fairly simple. I don't think we do a *ton* with jsondate.
Thanks for doing the digging on this, @username_3. |
angular/angular | 110168754 | Title: [test] Text bindings are not bound element anymore
Question:
username_0: With the new compiler (introduced in alpha-38), elements with a simple text binding like `<div>{{username}}</div>` are not bound elements anymore.
When writing tests, that means they don't appear in `rootTC.debugElement.componentViewChildren` or in `rootTC.debugElement.query(By.css('div'))`.
It leads to writing less readable tests, relying on something like `DOM.childNodes(rootTC.debugElement.nativeElement)[4]`.
These tests are less readable and more fragile.
Is there a plan to fix or improve this?
Answers:
username_1: @username_0 did you try alpha.39? There were number of fixes that went into it and it was released just yesterday.
username_0: @username_1 I've got to admit I did not yet (I wanted to have all my 300 tests green before moving on to 39). As far as I saw in the commits in 39 though, there are no changes on this part.
username_1: @username_0 .38 had several problems and .39 has several fixes so I would advise going straight to .39. .38 is just too unstable.
username_0: @username_1 I can confirm this is the same in alpha-39
username_2: I don't think we will make them bound elements. @tbosch please comment. The reason we had to do it was that we used a different mechanism of locating text nodes. The new mechanism is faster, so it is very unlikely that we would add this back.
Perhaps you could suggest better syntax for your tests and we could work on that?
username_0: Thanks for considering this. I understand your point and that's not really a problem. I think what we really need is an easy way to query elements in tests, among other things. I've got around 300 tests right now for my ebook, testing pretty much everything, and I think it's not friendly enough (at least with what I know/we have right now).
Let's take a simplified example.
````
let html = `
  <form (ng-submit)="register()">
    <div>
      <label>Username</label><input [(ng-model)]="user.username">
      <small>{{ user.username }} is an awesome username!</small>
    </div>
  </form>
`;
tcb.overrideTemplate(RegisterFormCmp, html)
  .createAsync(RegisterFormCmp)
  .then((formCmp) => {
    formCmp.debugElement.componentInstance.user.username = 'Cédric';
    formCmp.detectChanges();
    // asserts
    let input = formCmp.debugElement.query(By.css('input')).nativeElement;
    expect(input.value).toBe('Cédric');
    let small = DOM.childNodes(DOM.childNodes(formCmp.debugElement.query(By.css('form')).nativeElement)[1])[4];
    expect(small).toHaveText('Cédric is an awesome username!');
    async.done();
  });
````
A few things could be straightforward/less verbose:
- `formCmp.debugElement.componentInstance.user.username = 'Cédric';` is really long. Maybe we could have something like `formCmp.user.username = 'Cédric';`
- `DOM.childNodes(DOM.childNodes(formCmp.debugElement.query(By.css('form')).nativeElement)[1])[4];` is horrible. As the elements with text bindings are not bound elements, the css query `formCmp.debugElement.query(By.css('small')).nativeElement` does not work anymore.
`DOM.childNodes` is a quick fix to make my tests green right now, but I don't know what I could do to access an element that can't be queried at the moment... (Am I missing something obvious?)
Something like `$(formCmp).find('small')` would be better perhaps?
That would give:
````
let html = `
  <form (ng-submit)="register()">
    <div>
      <label>Username</label><input [(ng-model)]="user.username">
      <small>{{ user.username }} is an awesome username!</small>
    </div>
  </form>
`;
tcb.overrideTemplate(RegisterFormCmp, html)
  .createAsync(RegisterFormCmp)
  .then((formCmp) => {
    formCmp.user.username = 'Cédric';
    formCmp.detectChanges();
    // asserts
    let input = $(formCmp).find('input');
    expect(input.value).toBe('Cédric');
    let small = $(formCmp).find('small');
    expect(small).toHaveText('Cédric is an awesome username!');
    async.done();
  });
````
username_3: I agree that this is annoying, I ran into it myself yesterday. Assigning myself to take a look and see if we can modify `DebugElement` to search through text bindings in addition to normal bound elements.
FYI this is going in the 'After AngularConnect' priority bucket for me.
username_3: @rkirov and I were thinking that maybe the `DebugElement` API just isn't useful for most users. It seems like getting at the native element and doing plain old DOM manipulation from there is what most people will want. So @username_0 your test would look something like:
```js
// Note that this should all already work with angular as in master right now.
let html = `
  <form (ng-submit)="register()">
    <div>
      <label>Username</label><input [(ng-model)]="user.username">
      <small>{{ user.username }} is an awesome username!</small>
    </div>
  </form>
`;
return tcb.overrideTemplate(RegisterFormCmp, html)
  .createAsync(RegisterFormCmp)
  .then((formCmp) => {
    formCmp.componentInstance.user.username = 'Cédric'; // the need for `debugElement` here is now removed
    formCmp.detectChanges();
    // asserts
    let htmlRoot = formCmp.nativeElement; // again, no debugElement
    let input = htmlRoot.querySelector('input');
    expect(input.value).toBe('Cédric');
    let small = htmlRoot.querySelector('small');
    expect(small).toHaveText('Cédric is an awesome username!');
  });
```
It would require significant changes to `DebugElement#query` to make it work with text nodes - at the moment, it only returns other `DebugElement`s, and those must be backed by an `ElementRef`. Bound text nodes no longer have an `ElementRef`, and as Misko mentioned, probably won't be changed.
Thoughts @tbosch?
username_0: Indeed, I rewrote my tests to look exactly like what you suggest. DOM queries are more readable than using `query` or `componentViewChildren` for a lot of cases.
In my apps, the only time that I can see `query` needed is when we want to grab a reference to the children directive to test something. At least, that's what I suggest in our ebook, feel free to tell me if there is something I'm missing:
```
let pony = fixture.query(By.directive(PonyCmp));
expect(pony.componentInstance.name).toBe('Rainbow Dash');
```
Status: Issue closed
username_3: @username_0 yeah, that looks good to me.
There's a similar discussion at https://github.com/angular/angular/issues/5385 - let's continue there and close this issue for now, since the particular case of the bound text element is shut. |
aspnetboilerplate/aspnetboilerplate | 326277006 | Title: EntityNotFoundException is not returning translation of a new language.
Question:
username_0: * Your Abp package version: 3.5.0
* Your base framework: .Net Core.
- I added a new Language es-CO
- In the localization file (xml file), I added the following line
<text name="EntityNotFound" value="No existe una entidad {0} con id = {1}!" />
- Every time I throw an EntityNotFoundException I'm getting [Entity Not Found], instead of the translation.
Answers:
username_1: @username_0
* Is your xml file marked as embedded resource ?
* If so, could you share the xml file you have created ?
Status: Issue closed
|
linq2db/linq2db | 84020723 | Title: Oracle: Need to dispose command parameters but also their values when disposing command
Question:
username_0: https://community.oracle.com/thread/2279050
https://fromthefiefdom.wordpress.com/2011/08/26/what-to-dispose-in-oracle-odpnet/
We recently ran into asp.net pool crashes whose source lies somewhere deep inside oracle odp.net. Investigation showed that we are not the only ones hitting this behavior. The links above quote Oracle support saying that not only the command but also all of its parameters and their values must be disposed. People report this as the cause of the crashes.
fix:
file: C:\Projects\linq2db\Source\Data\DataConnection.cs:551
```
public void DisposeCommand()
{
    if (_command != null)
    {
        foreach (var param in _command.Parameters)
        {
            DisposeParameter(param);
        }
        _command.Dispose();
        _command = null;
    }
}

private void DisposeParameter(object param)
{
    if (param == null)
        return;
    // possibly worth wrapping in try/catch
    var dbParam = param as System.Data.Common.DbParameter;
    if (dbParam != null && dbParam.Value != null && dbParam.Value is IDisposable)
        ((IDisposable)dbParam.Value).Dispose();
    if (dbParam is IDisposable)
        ((IDisposable)dbParam).Dispose();
}
```
tags: w3wp exception 0xc0000374 odp.net
Answers:
username_1: why should I dispose the Value? it can still be used in other parts of the program!
username_0: Ok, I think that disposing only parameters will be enough. But it must be done, do you agree?
username_1: If the Oracle Dokumentation says so, I think it's ok, but the value not!
Status: Issue closed
|
palash1992/GEM | 290858502 | Title: Test script gives error when executed on karate edgelist
Question:
username_0: Hello,
Replacing the input dataset in the test script with the karate edgelist, I get the following error upon execution:
python test.py
```
Using TensorFlow backend.
Couldn't import dot_parser, loading of dot files will not be possible.
Traceback (most recent call last):
File "test.py", line 28, in <module>
models.append(GraphFactorization(2, 50000, 1*10**-4, 1.0))
File "/code/GEM/gem/embedding/gf.py", line 44, in __init__
for key in dictionary:
TypeError: 'int' object is not iterable
```
Status: Issue closed
Answers:
username_1: The issue has now been fixed |
sabeechen/hassio-google-drive-backup | 999320224 | Title: An unexpected error occurred: 400 Client Error: Bad Request for url: https://www.googleapis.com/oauth2/v4/token
Question:
username_0: Please add some information about your configuration and the problem you ran into here.
More info really helps speed up debugging, if you don't even read this and help me understand what happened, I probably won't help you.
Remember that its just one guy back here doing all of this.
If english isn't your first language, don't sweat it. Just try to be clear and I'll do the same for you. Some things you might consider including:
* What were you doing when the problem happened?
* A screenshot if its something visual.
* What configuration options are you using with the add-on?
* What logs is the add-on printing out? You can see the detailed logs by clicking "Logs" at the right of the web-UI.
* Are there any problematic looking logs from the Hassio supervisor? You can get to them from the Home Assistant Interface from "Hass.io" > "System" > "System Log"
Traceback (most recent call last):
File "/app/backup/coordinator.py", line 109, in _sync
self._buildModel().sync(self._time.now())
File "/app/backup/model.py", line 127, in sync
self._syncSnapshots([self.source, self.dest])
File "/app/backup/model.py", line 205, in _syncSnapshots
from_source: Dict[str, AbstractSnapshot] = source.get()
File "/app/backup/drivesource.py", line 86, in get
self._info.drive_folder_id = self.getFolderId()
File "/app/backup/drivesource.py", line 79, in getFolderId
return self._getParentFolderId()
File "/app/backup/drivesource.py", line 204, in _getParentFolderId
self._folderId = self._validateFolderId()
File "/app/backup/drivesource.py", line 222, in _validateFolderId
if self._verifyBackupFolderWithQuery(folder_id):
File "/app/backup/drivesource.py", line 200, in _verifyBackupFolderWithQuery
raise e
File "/app/backup/drivesource.py", line 189, in _verifyBackupFolderWithQuery
folder = self._get(id)
File "/app/backup/drivesource.py", line 233, in _get
return self.drivebackend.get(id)
File "/app/backup/driverequests.py", line 154, in get
return self.retryRequest("GET", URL_FILES + id + "/?" + urlencode(q), is_json=True)
File "/app/backup/driverequests.py", line 304, in retryRequest
send_headers = self._getHeaders(refresh=refresh_token)
File "/app/backup/driverequests.py", line 80, in _getHeaders
"Authorization": "Bearer " + self.getToken(refresh=refresh),
File "/app/backup/driverequests.py", line 143, in getToken
raise e
File "/app/backup/driverequests.py", line 140, in getToken
resp = self.retryRequest("POST", URL_AUTH, is_json=True, data=data, auth_headers=self._getAuthHeaders(), cred_retry=False)
File "/app/backup/driverequests.py", line 358, in retryRequest
response.raise_for_status()
File "/usr/lib/python3.8/site-packages/requests/models.py", line 940, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://www.googleapis.com/oauth2/v4/token
An unexpected error occurred: 400 Client Error: Bad Request for url: https://www.googleapis.com/oauth2/v4/token
Answers:
username_1: You are using a very old verison of the addon (more than a year old). Could I have you update it and see if you're still running into this issue?
username_0: It was rare; it took a long time to ask for an update. The addon is updating now, will update you after.
username_0: The issue was solved with the update. Not sure why the addon didn't ask for an update until a few hours later, and also why, when I installed it, it installed the same version with the issues.
Thanks
Status: Issue closed
|
javers/javers | 813769303 | Title: Does not ignore fields defined in withIgnoredProperties
Question:
username_0: Expect to ignore properties that are marked as ignored
```
Javers javers = JaversBuilder.javers()
.registerEntity(EntityDefinitionBuilder.entityDefinition(entity.getClass())
.withIdPropertyName("id")
.withIgnoredProperties("createdBy", "updatedBy", "created", "modified").build())
.build();
Diff diff = javers.compare(record, entity);
```
**Javers' Version**
5.14.0
Error message:
```
JaversException ENTITY_INSTANCE_WITH_NULL_ID: Found Entity instance 'com.vw.asa.entities.acl.AclUsers$HibernateProxy$L3u3Am7D' with null Id-property 'id'
```
The file `AclUsers` is one of the ignored properties (an instance of `createdBy`)
Answers:
username_1: Please follow the https://github.com/javers/javers/blob/master/CONTRIBUTING.md and create a failing test case |
jruby/activerecord-jdbc-adapter | 51281932 | Title: Invalid SQL for DB2 with limit + offset + order when order refers to a column not in the result set
Question:
username_0: See my comment on https://github.com/jruby/activerecord-jdbc-adapter/commit/b33063534c606b06c29b08e4f82b3f8fe5b061df
for details.
In brief, that commit introduced a limitation on ordering. Prior to that commit, you could join on another table and sort by the joined table's column(s) even if they weren't included in your result set. With that commit, doing so results in invalid SQL.
Status: Issue closed
Answers:
username_1: https://github.com/jruby/activerecord-jdbc-adapter/commit/b33063534c606b06c29b08e4f82b3f8fe5b061df got reverted to NOT mangle with *OVER ( ORDER BY ... )* https://github.com/jruby/activerecord-jdbc-adapter/commit/0ec8f12d616f98e9cbd6a712a50b15a03ac808b5 as there was no test-case for the feature, use the test-case provided so that if it's re-introduced it's done in a more consistent way, thx!
username_0: @username_1 Thanks for following up on this! |
apache/airflow | 699386070 | Title: StreamLogWriter has no close method, clashes with abseil logging (Tensorflow)
Question:
username_0: **Apache Airflow version**: 1.10.12
**What happened**:
If any Python code run in an operator has imported `absl.logging` (directly or indirectly), `airflow run` on that task ends with `AttributeError: 'StreamLogWriter' object has no attribute 'close'`. This is not normally seen, as it only happens in the `--raw` inner stage. The task is marked as successful, but the process exits with exit code 1.
The full traceback is:
```python
Traceback (most recent call last):
File "/.../bin/airflow", line 37, in <module>
args.func(args)
File "/.../lib/python3.6/site-packages/airflow/utils/cli.py", line 76, in wrapper
return f(*args, **kwargs)
File "/.../lib/python3.6/site-packages/airflow/bin/cli.py", line 588, in run
logging.shutdown()
File "/.../lib/python3.6/logging/__init__.py", line 1946, in shutdown
h.close()
File "/.../lib/python3.6/site-packages/absl/logging/__init__.py", line 864, in close
self.stream.close()
AttributeError: 'StreamLogWriter' object has no attribute 'close'
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/.../lib/python3.6/logging/__init__.py", line 1946, in shutdown
h.close()
File "/.../lib/python3.6/site-packages/absl/logging/__init__.py", line 864, in close
self.stream.close()
AttributeError: 'StreamLogWriter' object has no attribute 'close'
```
Abseil is Google's utility package, and is used in Tensorflow. The same issue would be seen if you used `import tensorflow`.
**What you expected to happen**:
I expected the task to exit with exit code 0, without issues at exit.
**How to reproduce it**:
- Install abseil: `pip install absl-py`
- Create a test dag with a single operator that only uses `import abs.logging`
- Trigger a dagrun (no scheduler needs to be running)
- execute `airflow run [dagid] [taskid] [execution-date] --raw`
**Anything else we need to know**:
What happens is that `absl.logging` sets up a logger with a custom handler (`absl.logging.ABSLHandler()`, which is a proxy for either `absl.logging.PythonHandler()`), and that proxy will call `.close()` on its `stream`. By default that stream is `sys.stderr`. However, airflow has swapped out `sys.stderr` for a `StreamLogWriter` object when it runs the task under the `airflow.utils.log.logging_mixin.redirect_stderr` context manager. When the context manager exits, the logger is still holding on to the surrogate stderr object.
Normally, the abseil handler would handle such cases, it deliberately won't close a stream that is the same object as `sys.stderr` or `sys.__stderr__` (see [their source code](https://github.com/abseil/abseil-py/blob/06edd9c20592cec39178b94240b5e86f32e19768/absl/logging/__init__.py#L852-L870)). But *at exit time* that's no longer the case. At that time `logging.shutdown()` is called, and that leads to the above exception.
Since `sys.stderr` has a close method, the best fix would be for `StreamLogWriter` to also have one.<issue_closed>
Status: Issue closed |
thp/urlwatch | 569400806 | Title: Grep options?
Question:
username_0: It would be helpful to use grep options:
`-i, --ignore-case`
Is this possible? Can we set this in the config file somehow?
If this is already possible would someone provide a working example or notate it in the man page?
Answers:
username_1: No grep command line options are supported. Urlwatch doesn't actually use the `grep` utility internally.
However, you can use the inline flag `(?i)` to turn on case insensitive mode.
https://www.regular-expressions.info/modifiers.html#bodytext
https://docs.python.org/3/library/re.html#re.I
username_0: Thanks, this is exactly what I needed.
Please provide an example of using the inline flag in a urlwatch filter. I am not sure of the placement.
username_0: Thanks, this is exactly what I needed.
Here is an example that works for me:
`- grep: (?i)olympic`
username_0: Thank you so much, this is exactly what I needed. I found the answers to my other questions using your first link. I couldn't find that information.
Here is an example that works for me:
`- grep: (?i)olympic`
And when searching for more than one string the first inline flag affects the entire line.
`- grep: (?i)olympic|time standards|2020`
The case will stay the same for the entire expression.
If you want to specify case sensitivity in the middle of the expression you can turn off case insensitive mode using the minus sign:
`- grep: (?i)olympic|(?-i)Time Standard|2020`
username_0: I am only able to get the (?i) flag to work. Other flags fail... e,F
Is there something I am missing or not understanding?
username_0: Was not able to use the (?v) flag either to exclude items from grep search.
username_0: I have this rule. I am trying to use grep to remove an item from results but I am having difficulty.
```name: MacRumors Apple Releases
kind: url
url: "https://www.macrumors.com"
filter:
- xpath: '(//div[contains(@class,"wrapper")]//h2)/a'
- grep: (?i)Apple Seeds|Apple Releases
- html2text: re
---
```
These are the results:
```$ uwtf 22
Apple Seeds Fifth Beta of watchOS 6.2 to Developers
Apple Seeds Fifth Betas of iOS and iPadOS 13.4 to Developers [Update: Public Beta Available]
Apple Seeds Fifth Beta of Upcoming macOS Catalina 10.15.4 Update to Developers [Update: Public Beta Available]
Apple Seeds Fifth Beta of Upcoming tvOS 13.4 Update to Developers [Update: Public Beta Available]
```
I tried the following grep line but it fails with an error.
```
- grep: (?v)watchOS|(?i-v)Apple Seeds|Apple Releases
```
```
re.error: unknown extension ?v at position 1
```
Any idea how to exclude using grep or awk?
username_1: `(?v)` is not a valid regex inline flags.
Again, urlwatch does NOT use the grep utility internally. The name of the filter is a bit misleading. It's regex only, not grep. See https://docs.python.org/3/library/re.html for references. Don't use grep's references.
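For illustration, in plain `re` syntax the usual way to say "match this but not that" is a negative lookahead rather than an inverted flag; a small standalone Python sketch (not a urlwatch job definition, just the regex behaviour):
```python
import re

lines = [
    "Apple Seeds Fifth Beta of watchOS 6.2 to Developers",
    "Apple Seeds Fifth Betas of iOS and iPadOS 13.4 to Developers",
]

# Keep lines mentioning "Apple Seeds" while excluding anything with "watchOS",
# using a negative lookahead instead of a (nonexistent) "(?v)" flag.
pattern = re.compile(r"(?i)^(?!.*watchos).*apple seeds")
kept = [line for line in lines if pattern.search(line)]
print(kept)  # only the iOS/iPadOS line survives
```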
username_0: This may be a different issue but since we have this "help" thread going on grep I thought I would ask here.
urlwatch is accepting a grep format in one job but not in another which has me baffled. I thought the pipe could be used to separate strings, but obviously it doesn't like the` ' ` character next to the` | ` character. I tried to escape the pipe `\|` but it didn't like that either. I am trying to grep for two strings with a space between them and avoid the two words if they are not next to each other.
my job:
```
name: (30)MacRumors Apple Releases
kind: url
url: "https://www.macrumors.com"
filter:
- xpath: '(//div[contains(@class,"article")]//h1)//a|//div[contains(@class,"byline")]'
- grep: 'Apple Seeds'|'MacBook Air'
- html2text: re
---
```
The error on exiting:
```
Parsing failed:
======
while scanning a block scalar
in "/Users/john/Library/Application Support/urlwatch/urls.edit.yaml", line 185, column 23
expected chomping or indentation indicators, but found "'"
in "/Users/john/Library/Application Support/urlwatch/urls.edit.yaml", line 185, column 24
======
```
The other job which is not flagged with a formatting error:
```
name: (25)SwimSwam News "NAG"
kind: url
url: https://swimswam.com/news
filter:
  - xpath:
      path: '(//div[contains(@class,"item")]//h3)/a|(//div[contains(@class,"item")]//h4)/a'
  - grep: (?i)'National Age Group'|'NAG'
---
```
username_2: In the master branch (not yet in a release) there's now the `shellpipe` filter which you can use to pipe the content through a shell command, so you can use the `grep` command line utility with any parameters you want.
This is documented with examples here:
https://urlwatch.readthedocs.io/en/latest/filters.html#using-a-shell-script-as-a-filter
Status: Issue closed
username_2: This is now in urlwatch 2.19. |
cdnjs/cdnjs | 191828524 | Title: [Request] Add Themify Icons
Question:
username_0: **Library name:** Themify Icons
**Git repository url:** https://themify.me/themify-icons
**npm package url(optional):**
**License(s):**
**Official homepage:**
**Wanna say something? Leave message here:**
=====================
Notes from cdnjs maintainer:
You are welcome to add a library by sending a pull request,
it'll be faster than just opening a request issue,
and please don't forget to read the guidelines for contributing, thanks!!
Answers:
username_1: Hi @username_0,
this library doesn't have notable popularity (100 stars/watchers on GitHub is a good examples).
You can take a look on our [README](https://github.com/cdnjs/cdnjs/blob/master/README.md) for more details :)
The issue will be closed if it has no response in 7 days.
Thanks for your request.
username_0: @username_1 thanks for your comment.
Status: Issue closed
|
cityofaustin/atd-data-tech | 637975185 | Title: [BUG] Look at why Supervisor field are blank on Employee object table
Question:
username_0: Queried and saw that 327 records have no Supervisor attached to them!!
Status: Issue closed
Answers:
username_0: Looked at the Employee list and the supervisors look like they're populated now. Not sure if the automated script updated them, but the last time I checked, ~300 records didn't have supervisors. |
brutella/hc | 467610774 | Title: tv example panics immmediately
Question:
username_0: ```
panic: adding a linked should be done after the server was added to an accessory
goroutine 1 [running]:
github.com/username_1/hc/service.(*Service).AddLinkedService(...)
/Users/username_0/work/hc/service/service.go:96
main.addInputSource(0xc0000b8bc0, 0x1, 0x13f45f6, 0x6, 0x3)
/Users/username_0/work/hc/_example/tv/main.go:23 +0x65c
main.main()
/Users/username_0/work/hc/_example/tv/main.go:139 +0x423
```
was there an api update that didn't make it in the example? or is it maybe something wrong with my environment?
Status: Issue closed
Answers:
username_1: Fixed by b7c9de178b3f6e48ffc8cee40c6cb294cfca357b |
fastai/fastprogress | 557361632 | Title: TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'
Question:
username_0: Hi there,
I got an error
```
File "/home/venv/lib/python3.7/site-packages/fastprogress/fastprogress.py", line 264, in add_child
self.child.prefix = f'Epoch {self.main_bar.last_v+1}/{self.main_bar.total} :'
TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'
```
I solved it by commenting that particular line. But, is there any better workaround for this? Because the workaround I did means that I have to change the script in the installation.
Thanks!
Answers:
username_1: ```
self.last_v = 0
if parent is None: self.leave,self.display = leave,display
else:
    self.leave,self.display=False,False
    parent.add_child(self)
# self.last_v,self.comment = None,''
self.last_v,self.comment = 0,''
```
This works; the author must have written it that way by mistake. @username_0
Status: Issue closed
username_2: As mentioned in #57, please reopen with a clear reproducer if needed. |
helium/miner | 1116178284 | Title: No witness, no beacon sent by helium hotspot
Question:
username_0: Hi guys, for the last 5 days no beacon has been sent by my helium hotspot, and in the Nebra dashboard the witness count shows 0. Please advise.
Reward dropped 5 times from 0.2 HNT/day to 0.04 HNT/day.
Answers:
username_1: You have to troubleshoot that, and you can't expect much sympathy when you've heard the word "FLATLINE" before; that means 0.0000000000 HNT/day. I have gone more than 10 days without sending a beacon.
username_1: I just have to point out that we see so many issues about getting 0.000000 HNT/day. I don't think you're going to get much help making 0.04 HNT/day. That's a good day for me.
username_2: Heh, the saga continues … make sure you're up to the latest firmware, and not relayed… you're not on the deny list if you're still earning, which is a good sign. You're going to be down for at least 3 hours for the new firmware upgrade, so leave it on and don't mess with it for at least 3 hours… then check your logs to see your lovely "failed to connect to challenger" … you're going to be pleasantly surprised with at least 10000 lines of those errors, and you're going to feel like all of us 🤪 |
PokemonGoers/PokeMap-2 | 174925798 | Title: Real pokemon data on map
Question:
username_0: Replace mock pokemon data with real pokemon data.
- Show all data the PokemonData REST API sends
- Think about what function to use (getByTime or getByCoordinates)
Answers:
username_1: Remember to set the API location as a configurable parameter! :) Aka:
```javascript
const uri = option.uri || "http://somethingsomething.com"
```
and when calling routes
```javascript
http.get(uri + '/something/else')
```
username_1: UH! and also: http and https behave differently, that should also be configurable!
username_2: I think we have to wait for the issue PokemonGoers/PredictPokemon-2#60 to be fixed before we can finally implement the predictions.
username_1: Is fixed now |
StackStorm-Exchange/stackstorm-ghost2logger | 352375781 | Title: Feature Request: for multiple "AND" conditions to detect an event
Question:
username_0: Hi,
I am new to StackStorm and ghost2logger. And nice work, David, on providing this package.
Just wondering if there is any way to define 2 or more patterns with an AND condition.
It seems that in the current version only one trigger.pattern can be defined.
In some cases, we need to check multiple parameters, like interfaceX with a recent state of up or down.
I have tried with regex, and these types of events can be detected with ghost2logger.
But to attach this event to an action, we need to extract the interface name from the syslog message and forward this info to another module, e.g. execute an Ansible playbook to change that interface's configuration.
I am not sure if there is any other way to do this, but if it is possible using ghost2logger it would be really nice.
Regards,
username_0
Answers:
username_1: If I understand your question correctly, yes, this is entirely possible. The provided [example rule](https://github.com/StackStorm-Exchange/stackstorm-ghost2logger#rules) even illustrates how to do this:
```yaml
name: rule_1
pack: ghost2logger
ref: ghost2logger.rule_1
criteria:
  trigger.host:
    pattern: 192.168.16.1
    type: eq
  # implicit AND condition
  trigger.pattern:
    pattern: thing [0-9]$
    type: eq
...
```
All you have to do is provide more than one `criteria` object.
You can also create this in the StackStorm web UI in the "Rules" tab and using the "Add Criteria" button:
<img width="889" alt="screen shot 2018-08-20 at 9 34 18 pm" src="https://user-images.githubusercontent.com/597113/44380633-4cdd1200-a4c1-11e8-8bac-86b338c0dd78.png">
Note the "and" between the criteria - that indicates that both criteria must match to pass the rule and create a trigger instance. Furthermore, please note that the example rule and the screenshot I posted are not for the same rule.
username_0: Thanks for the detailed explanation.
I think I was not able to explain the problem correctly.
I am aware that there can be multiple criteria defined.

But I need to define multiple trigger.pattern, so when I add another trigger.pattern like following, the previous one seems to be re-written.

* you can see here the previous one is back to default value.
Regards,
username_0
username_0: To add an example: in my environment I can now detect the following syslog message, using two AND criteria, the host IP and the syslog message.
<189>166: Aug 21 11:37:39.394 JST: %LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet3, changed state to down
What I need is to parse the affected interface and the fault from this message, so I thought I could define the criteria like:
trigger.pattern1 == GigabitEthernet.* (<- interface name only)
trigger.pattern2 == "changed state to down"
In this way, I can pass the trigger.pattern1 value to some other modules, like ansible to execute some commands, like "show interface GigabitEthernet1".
If the scenario is not clear enough please let me know.
Regards,
username_0
username_2: Hi,
Sorry for any delay here. I've been off the grid to unwind!
Ghost2Logger can be viewed as a two-step pipeline where Ghost2Logger binary itself does the regular expressions. Even if you change the operation type in the ST2 rule, the Ghost2Logger binary will always do regex. It was a huge amount of work to insert every kind of match type so went for the most useful and made it a documented limitation. I suspect I need to make it more clear, so thank you for the nudge.
http://ipengineer.net/2017/05/stackstorm-ghost2logger-pack/
Ghost2Logger (regex) -> ST2 Rule Engine (eq / regex)
Ok, next bit. With regards to the UI issue you are seeing, I ran into similar problems a few versions back (2.0 I think or 2.1) where on creating or altering rules and saving, the rule was not as I had set. Sometimes the rule needed deleting or the browser needed refreshing. Ghost2logger has no effect on the rules unfortunately, so I can't help you there. You should be able to record a short video on the problem then create an issue following the GitHub template with the ST2 core team.
In terms of pipeline, the ST2 rule-set is read unidirectionally from ST2 towards the Ghost2Logger binary and not the other way around. If the rule is deleted from ST2 then it is deleted from Ghost2Logger too, but not vice-versa.
Hope all of that helps?! Thanks also for the kind words. I spent quite an amount of time building Ghost2Logger and it brings me joy knowing that someone is using it!
Thanks,
David
username_0: Thanks, David. Hope you have a nice relaxing time^^
I may understand you wrong so please point me out if I am.
What I need, or I think if its available could make this package better is to have options something like following,
```
name: rule_1
pack: ghost2logger
ref: ghost2logger.rule_1
criteria:
  trigger.host:
    pattern: 192.168.16.1
    type: eq
  trigger.pattern1:
    pattern: link.*down$
    type: eq
  trigger.pattern2:
    pattern: ge-0/0/[0-9]*
    type: regex
enabled: true
tags: []
```
If its possible we can get two variables, {{trigger.pattern1}} and {{trigger.pattern2}}.
pattern1 is to know that something, eventually pattern2, is in a down state now.
By getting these two variables we can pass related info to our action module, say use ansible to get that interface info.
What I am facing is that I cannot define trigger.pattern more than once. It may be a GUI problem, or it may be that only one trigger.pattern can be defined.
I will try to troubleshoot this problem and report back to St2 team if needed :)
Regards,
username_0 |
dart-lang/sdk | 201263095 | Title: dartk: Static setter/getter should be taken into account when resolving references
Question:
username_0: From `tests/language/static_setter_get_test.dart`:
```
class Class {
static set o(_) {}
m() => o; /// 01: static type warning
noSuchMethod(_) => 42;
}
main() {
Expect.throws(() => new Class().m()); /// 01:
}
```
Answers:
username_1: That's not really well-defined since there can be both an `id` and an `id=` declaration in the same scope, so which one is it? It matters, because it's only a warning if one is static and the other is dynamic, and the program should have a semantics in that case.
Implementations favor the getter over the setter (reasonably, I'd say, but not mandated).
(Another interesting thing is that this looking at declarations includes abstract getters and setters too).
username_2: Fixed in Fasta! Hooray.
Status: Issue closed
|
gforge/htmlTable | 326460742 | Title: error when the table to display has just one row using `css.cell`
Question:
username_0: The `htmlTable` function raises an error when the user wants to display a data frame of one row and the `css.cell` argument is specified as a matrix of two rows (one for the header and one for the data).
For example:
```{r}
library(htmlTable)

myDF <- data.frame(X = rnorm(1),
                   Y = 5 * rnorm(1),
                   Z = c("toto"))
htmlTable(myDF,
          css.cell = rbind(rep("padding-right: 2em;", times = ncol(myDF)),
                           matrix(rep("padding-right: 1em;",
                                      times = ncol(myDF) * nrow(myDF)),
                                  ncol = ncol(myDF),
                                  nrow = nrow(myDF))))
```
error message:
`Error in css.cell[row_nr, ] : incorrect number of dimensions`
It works fine when the dataframe has more than one row.
Session information:
Package htmlTable version 1.11.2
R version 3.4.4 (2018-03-15)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1 )
Answers:
username_0: Sorry I didn't see that issue was already raised: issue #54 .
My bad ><"
Status: Issue closed
|
InfoWings/Knowledge-Net | 415565440 | Title: Range validation fails for [positive value..+inf] and [-inf..negative value]
Question:
username_0: Internally, infinity is represented with 0 and corresponding flags, but the validation method does not take the flags into account and just compares the bounds. So any validation for the range [1..+Inf] looks like 1 should be greater than 0. Same for negative infinity |
FofanovLab/MTSv | 461041674 | Title: Offline mirror of Genbank/Refseq for use in database creation.
Question:
username_0: Due to our not having Internet access on our analysis network, we try to mirror portions of NCBI that are used in our work. If this is a duplication of effort in terms of the offline_download scripts, maybe some of what we have could be used to build the databases internally.
I won't include the entire higher-level directory structure, but assuming all of the following are within a single ncbi folder and are using the structure of the ncbi ftp, here is our mirror layout as you requested:
genomes/refseq/*
genomes/genbank/*
genomes/Viruses/*
genbank/*.seq.gz
pub/taxonomy/*
blast/db/nt* and nr* and the v5 versions
refseq/release/plasmid/*
I can also add other sections of the ftp if needed, if it would facilitate an easier internal build.
Thanks for any help in getting this working. |
trailofbits/ebpfpub | 765908746 | Title: 宣化妹子真实找上门服务m
Question:
username_0: 宣化哪里有真实大保健(找特色服务【威信781372524美女】近日,首届湾区歌手乐队大赛颁奖典礼在西湾公园固戍码头顺利举办本次活动由宝安区流行音乐协会主办、宝安区西乡街道办事处承办并获得了宝安区宣传文化体育发展专项资金的特别支持。此次大赛以“湾区新时代宝安好未来”为主题展开结合中华人民共和国成立周年举国欢庆的热烈文化氛围为广大青年音乐爱好者提供展示自我的超级舞台。此次大赛现场邀请到了三位重量级嘉宾他们分别是深圳市资深音乐人、吉他演奏家、宝安区流行音乐协会会长潘卫东先生深圳市音乐家协会副主席、深圳市流行音乐协会理事宝安区流行音乐协会常务副会长兼秘书长、青年歌唱家林晓薇女士广东省音乐家协会会员、宝安区流行音乐协会副会长、中国好声音深圳区评委安凯先生。大赛历时三个多月历经初赛场、复赛场、决赛场共决出组获奖歌手和组获奖乐队。本次大赛的组获奖歌手及奖项分别为钟佳欣荣获“至尊歌神奖”農里、区裔烨荣获“霸气歌王奖”陈凯、田海明、黎艳成荣获“实力唱将奖”组获奖乐队及奖项分别为长生鸟乐队荣获“湾区超神乐队”蓝色基因乐队荣获“湾区称霸乐队”凤唐大道乐队、计划人声阿卡贝拉乐团、星座乐队荣获“湾区唱响乐队”每一位歌手每一组乐队都是我们心中最闪耀的那颗星星他们持有对音乐对热爱对文化对传承对观众的尊敬去唱响每一首歌。每一首歌都带着他们的热情献给湾区观众献给深圳市民更献给中国人民。本次活动的开展推动了湾区文化的发展进程和深圳文化的繁荣兴盛丰富了深圳市民的精神文化活动展现了“立足宝安、面向深圳、走向世界”的宏伟愿景为深圳市的文化发展和中华传统文化的发扬作出了卓越的贡献。路永亿蘸在https://github.com/trailofbits/ebpfpub/issues/103?48806 <br />https://github.com/trailofbits/ebpfpub/issues/152?05534 <br />https://github.com/trailofbits/ebpfpub/issues/99 <br />https://github.com/trailofbits/ebpfpub/issues/146 <br />https://github.com/trailofbits/ebpfpub/issues/125 <br />https://github.com/trailofbits/ebpfpub/issues/71 <br />https://github.com/trailofbits/ebpfpub/issues/119 <br />hmzwegqctyjevtayvaxcktgaocwicdouivn |
ccyang/ysx-stackdriver-alerts | 420505285 | Title: [ALERT] GKE Container - CPU usage for ysx-cloud-platform on ysx-cloud device-app
Question:
username_0: Date: March 13, 2019 at 09:31PM<br>
<EMAIL><br>
<br>
<div style="width: 600px; margin: 0 auto;">
<table style="width: 100%; padding: 34px 6px; border-spacing: 0;">
<tr>
<td style="padding: 0;"><img src="http://www.gstatic.com/stackdriver/notification/google_stackdriver_logo.png" alt="Google Stackdriver" style="vertical-align: top;"></td>
<td style="text-align: right; padding: 0; font-family: inherit;"><a href="https://app.google.stackdriver.com/incidents/0.l53pqshx05c3?project=ysx-cloud" style="font-weight: 500; text-decoration: none;">VIEW DETAILS</a></td>
</tr>
</table>
<div style="background-color: white; border-top: 4px solid #d40001; border-left: 1px solid #eee; border-right: 1px solid #eee; border-radius: 6px 6px 0 0; height: 24px;"></div>
<div style="background-color: white; border-left: 1px solid #eee; border-right: 1px solid #eee; padding: 0 24px; overflow: hidden;">
<table style="width: 100%; border-spacing: 0;">
<tr>
<td style="width: 35px; padding: 0;"><img src="http://www.gstatic.com/stackdriver/notification/exclamation_mark.png" alt="exclamation mark" style="vertical-align: top;"></td>
<td style="padding: 0; font-family: inherit;"><span style="color: #d40001; font-size: 130%; font-weight: bold;">Alert firing</span></td>
</tr>
</table>
<div style="margin-left: 35px;">
<h1>GKE Container - CPU usage for ysx-cloud-platform,</h1>
<p>CPU request utilization for ysx-cloud device-app is above the threshold of 1.3.</p>
<h2>Summary</h2>
<p><strong>Start time</strong><br>
March 13, 2019 at 1:27PM UTC (~3 min, 45 sec ago)</p>
<p><strong>Project</strong><br>
ysx-cloud (<a href="https://console.cloud.google.com/?project=ysx-cloud" style="text-decoration: none;">Cloud Console</a> | <a href="https://app.google.stackdriver.com/?project=ysx-cloud" style="text-decoration: none;">Stackdriver</a>)</p>
<p><strong>Policy</strong><br>
<a href="https://app.google.stackdriver.com/policy-advanced/7838395135008489506?project=ysx-cloud" style="text-decoration: none;">CPU request utilization (containers)</a></p>
<p><strong>Condition</strong><br>
GKE Container - CPU usage for ysx-cloud-platform,</p>
<p><strong>Metric</strong><br>
<a style="color: inherit; cursor: text; text-decoration: none;">kubernetes.io/container/cpu/request_utilization</a></p>
<p><strong>Threshold</strong><br>
above 1.3</p>
<p><strong>Observed</strong><br>
1.302</p>
</div>
<div style="height: 54px;"></div>
<div style="text-align: center;"><a href="https://app.google.stackdriver.com/incidents/0.l53pqshx05c3?project=ysx-cloud" style="display: inline-block; background-color: #4285f4; color: white; padding: 10px 18px; border-radius: 2px; text-decoration: none;">VIEW DETAILS</a></div>
</div>
<div style="background-color: white; border-left: 1px solid #eee; border-right: 1px solid #eee; border-bottom: 1px solid #eee; height: 58px;"></div>
<div style="padding: 62px 6px; text-align: center; color: #757575;">
<img src="http://www.gstatic.com/stackdriver/notification/google_logo.png" alt="Google" style="vertical-align: top;">
<p>© 2017 Google LLC<br>
<a style="color: inherit; cursor: text; text-decoration: none;">1600 Amphitheatre Parkway, Mountain View, CA 94043</a></p>
<p>You have received this mandatory service announcement to update you about important changes to Google Cloud Platform or your account.</p>
<p><a href="https://app.google.stackdriver.com/policy-advanced/edit/7838395135008489506?project=ysx-cloud" style="text-decoration: none;">Manage notifications</a></p>
</div>
</div>
<br> |
thinreports/thinreports-generator | 90539346 | Title: Italic-style Text does not fit the width specified in the Editor and gets cut off
Question:
username_0: I confirmed the problem described in the title in a PDF generated with Generator version 0.8.0 from the layout file (Editor version 0.7.7.2) of [Advanced List - Thinreports Examples](https://github.com/thinreports/thinreports-examples/tree/master/list/advanced).
The result was the same even after re-saving the layout with Editor version 0.8.2.
| File name |Generator |Editor |
| ---------------------- | ------------ | --------- |
| advanced_list.pdf | 0.7.7 | 0.7.7.2 |
| result.pdf | 0.8.0 | 0.7.7.2 |

Answers:
username_1: @username_0 By the way, what happens if you remove the italic style?
username_0: @username_1
With italic removed in Editor 0.7.7.2, the PDF generated with Generator 0.8.0 was still cut off.
However, the PDF generated after removing italic in Editor 0.8.2 was not cut off.
Incidentally, the width seems to differ slightly between Editor versions.

username_1: @username_0 I see, this looks like a deep one. Thank you.
username_1: @username_0
Could you confirm with the latest versions? Please close this issue if the problem does not reproduce.
username_1: I believe this issue has already been fixed in the latest version, so I will close it.
Status: Issue closed
|
mautic/mautic-zapier | 440656970 | Title: Zapier to mautic - no field or place to map the ip address to create a contact with
Question:
username_0: Hi,
I'm trying to use a webhook to pass data into Mautic from the form on my site. I am able to pass all of the information I need, including date, time and remote IP address. However, I do not see any place to map that data so that the created contact shows the location and last-active info correctly populated in the two columns - https://take.ms/0wtSi
Is there any core field in Mautic that I can map the IP address to, so the contact is created with the IP address and the location info shows up?
Is there any other way to get this info from the webhook/Zapier into the "create a contact" option in Mautic?
Answers:
username_1: Also interested in a solution for this! Currently there seems to be no way to link previous IP-registered visits without an e-mail address to a later registration with an e-mail address, since the IP address of the registration cannot be passed from the website via Zapier to Mautic. Or is there an alternative approach?
username_2: Hi, have you found a resolution? |
SpongePowered/SpongeForge | 188344190 | Title: New User Error
Question:
username_0: Minecraft: 1.10.2
SpongeAPI: 6.0.0-SNAPSHOT-bd88c91
SpongeForge: 1.10.2-2107-6.0.0-BETA-1873
Minecraft Forge: 12.18.2.2116
http://pastebin.com/qhefz3G6
Answers:
username_1: That... should not be happening at all. Is this reproducible? What operating system are you using?
username_0: Probably some sort of Linux distribution, most likely Debian, it's a shared hosting so I can't tell
username_0: Unsure, it started spamming it for that specific user, it looks like it started when they joined the server for the first time
username_1: Has it happened for anyone else?
username_0: I believe not, but I just switched over to bungee, not sure if it's related to that.
Status: Issue closed
username_1: If this happens again please reply to this issue. In the meantime, I'm going to close it.
username_0: This happened again:
http://pastebin.com/JCG02DMN
username_1: This has been fixed in 1.11: https://github.com/SpongePowered/SpongeCommon/commit/4fd3b2754c60b42024f7f1b6f4df854230f609e7.
username_0: Awesome, thanks |
scripting/Scripting-News | 482967901 | Title: Testing Dave-by-email
Question:
username_0: I've been steadily working on my own MailChimp. I'll be looking for people to test it.
Please, you have to be good at error reports of the form:
1. I did this,
2. Expected this, but
3. This is what actually happened.
Screen shots can be helpful. Also in case of errors, check the JavaScript console in the browser.
The emails will be sent shortly after midnight every night, they will contain the contents of Scripting News for the day.
There will be an Unsubscribe link at the bottom of each email. Other people will not be able to unsub on your behalf if you forward the emails.
It uses the same Twitter-based identity system that _Likes_ currently use, so you don't have to sign on again if you're already signed on that way.
I'll post a link here once public testing starts. ;-)
Answers:
username_0: A bunch of small changes to the email template, you will see the result in tonight's email.
I also changed how the email page works if you're not signed in.
It now puts up a button which, when clicked, directs you to Twitter to sign in.
When you do sign in, it redirects you back to the email page, which stores the access info in local storage that's accessible on all username_0.com pages. You're now logged in, and can Like posts, and comment on them, in addition to subscribing to email.
At this point, if you enter your email address and confirm by clicking the link in the email it sends, you should be registered to receive the emails.
I know JY was having trouble with this yesterday, which of course is good for the testing process.
Also if you aren't already following this thread, please do so now, so you can see the updates. Thanks. ;-)
username_1: I think this is awesome. I really forgot how much I missed DaveNet and Scripting News in my mailbox. They're formatted nicely, easy to read, and have the distinction of being one of only two email subscriptions -- the other being Monday Note. Now if it was only like the old, old days, where you could see all the mail addresses and chat with the titans of tech. Fat chance, eh?
username_2: The first two times I tried to subscribe, I seem to have waited too long (55 and 17 minutes) and the link expired. Other than that no problems.
username_0: It expires after five minutes |
nrenner/brouter-web | 226421292 | Title: z-index issue on http://brouter.damsy.net
Question:
username_0: The CSS z-index values of the _dropdown-menu_ and _leaflet-top leaflet-left_ are the same (=1000), which results in this overlap:

Answers:
username_1: Yes, should be fixed on master via https://github.com/nrenner/brouter-web/commit/6e4794131f2a1aa2b91676872f6c1b601cc<PASSWORD>
Will update server tomorrow ;).
username_1: Updated & fixed, thanks!
Status: Issue closed
|
dart-lang/http | 682263131 | Title: Treat HTTP 422 as error
Question:
username_0: `HTTP 422` is currently treated as a normal response rather than an error. According to https://stackoverflow.com/questions/16133923/400-vs-422-response-to-post-of-data, treating it as an error may be a better choice
And [angular/http](https://www.npmjs.com/package/@angular/http) DOES treat it as error, from https://thinkster.io/tutorials/building-real-world-angular-2-apps/intercept-and-manipulate-http-requests
```dart
import 'dart:convert';
import 'package:http/http.dart' as http;
void main(List<String> arguments) {
print('Hello world!');
var body = {
'user': {
'username': '',
'password': '',
}
};
http.post(
'https://conduit.productionready.io/api/users/login',
body: jsonEncode(body),
headers: {
'Content-Type': 'application/json',
'Accept': 'application/json',
},
).catchError((error) {
print('onError $error');
}).then((response) {
print('onResponse statusCode ${response.statusCode}');
print('onResponse body ${response.body}');
});
}
```
Answers:
username_1: This package doesn't treat any http status code as an error AFAIK. If you want an exception for invalid status codes, you'll need to add that check yourself.
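For illustration, such a check could be layered on top of the calls shown in this thread. This is only a sketch; the wrapper name and the thrown exception are illustrative choices, not part of `package:http` itself:
```dart
import 'package:http/http.dart' as http;

/// Posts to [url] and throws if the server answers with a 4xx/5xx status,
/// which package:http itself never does.
Future<http.Response> postChecked(String url,
    {Map<String, String> headers, Object body}) async {
  final response = await http.post(url, headers: headers, body: body);
  if (response.statusCode >= 400) {
    throw http.ClientException(
        'Request failed with status ${response.statusCode}', Uri.parse(url));
  }
  return response;
}
```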
username_0: No, it doesn't, even on `400`. Could it be?
```dart
import 'package:http/http.dart' as http;
void main(List<String> arguments) {
http
.post(
'https://httpstat.us/400',
)
.catchError((error) {
print('onError $error');
}).then((response) {
print('onResponse statusCode ${response.statusCode}');
print('onResponse body ${response.body}');
});
}
```
username_1: For what it's worth, I don't think this package _should_ throw on invalid status codes, since an `HttpResponse` with a `statusCode` of `400` is still an `HttpResponse` and thus, not exceptional behavior.
There are higher-level packages like [dio](https://pub.dev/packages/dio) which do throw on unexpected status codes, maybe you're better off using those?
username_0: I encounter the problem in AngularDart. Is `Dio` compatible?
username_2: That is a question best asked to the authors of that project. I would assume it is compatible with the web platform.
Status: Issue closed
|
cupy/cupy | 552746064 | Title: Support NumPy 1.18
Question:
username_0: - non-compatible changes
- [x] #2852 [`np.can_cast(np.uint64, np.timedelta64, casting='safe')` is now `False`](https://numpy.org/devdocs/release/1.18.0-notes.html#np-can-cast-np-uint64-np-timedelta64-casting-safe-is-now-false)
- [x] #2883 error message in `numpy.testing`
- [x] #2974 `isscalar(num)` -> `isscalar(element)`
- [x] #2975 `step` returned by `linspace(..., retstep=True)`
- [expired deprecations](https://numpy.org/devdocs/release/1.18.0-notes.html#expired-deprecations)
- [x] #2992 "The deprecation of `expand_dims` out-of-range axes in 1.13.0 has expired. ([gh-14051](https://github.com/numpy/numpy/pull/14051))"
- [ ] #3005 cupy documentation |
MoneroOcean/xmr-node-proxy | 319994002 | Title: Warning during npm update command execution
Question:
username_0: gyp WARN download NVM_NODEJS_ORG_MIRROR is deprecated and will be removed in node-gyp v4, please use NODEJS_ORG_MIRROR
gyp WARN download NVM_NODEJS_ORG_MIRROR is deprecated and will be removed in node-gyp v4, please use NODEJS_ORG_MIRROR
gyp WARN download NVM_NODEJS_ORG_MIRROR is deprecated and will be removed in node-gyp v4, please use NODEJS_ORG_MIRROR`
Thanks.
Alexandre
Answers:
username_1: You just need to run the `unset NVM_NODEJS_ORG_MIRROR` shell command if you do not want to see these warnings. Not sure if it is possible to fix on my side.
Status: Issue closed
|
aquasecurity/trivy | 1163842376 | Title: Can't install with brew
Question:
username_0: ## Description
```
$ brew install aquasecurity/trivy/trivy
==> Tapping aquasecurity/trivy
Cloning into '/usr/local/Homebrew/Library/Taps/aquasecurity/homebrew-trivy'...
fatal: unable to access 'https://github.com/aquasecurity/homebrew-trivy/': The requested URL returned error: 400
Error: Failure while executing; `git clone https://github.com/aquasecurity/homebrew-trivy /usr/local/Homebrew/Library/Taps/aquasecurity/homebrew-trivy --origin=origin --template=` exited with 128.
$ git clone https://github.com/aquasecurity/homebrew-trivy /usr/local/Homebrew/Library/Taps/aquasecurity/homebrew-trivy --origin=origin --template=
Cloning into '/usr/local/Homebrew/Library/Taps/aquasecurity/homebrew-trivy'...
fatal: unable to access 'https://github.com/aquasecurity/homebrew-trivy/': The requested URL returned error: 400
```
## What did you expect to happen?
Trivy to install
## What happened instead?
It didn't
## Output of run with `-debug`:
```
It didn't install anything to debug
```
## Output of `trivy -v`:
```
It didn't install anything to put -v on
```
## Additional details (base image name, container registry info...):
Following instruction on installing with brew from here :
https://aquasecurity.github.io/trivy/v0.24.2/getting-started/installation/
Answers:
username_0: Stupid GitHub have expired my token.
_Thanks_ GitHub.
Status: Issue closed
|
cgeo/cgeo | 413978889 | Title: Button to change Map from Satellite to OSM-Offline not visible
Question:
username_0: ##### Detailed steps causing the problem:
- choose any cache
- tap the button for navigation (arrow at the lower right)
- choose maps
- if the view was "satellite", the button for navigating (arrow to the point) is not visible, because it is a white button on a white background
- tap on the space between the buttons "position" and "menu", then the maps dialog pops up and you can choose another map (like OSM Offline)
The button for "navigation" is working but not visible in satellite mode.
##### Version of c:geo used:
2019.02.23
##### Is the problem reproducible for you?
Yes
##### System information:
```
--- System information ---
Device: SM-G930F (heroltexx, samsung)
Android version: 7.0
Android build: NRD90M.G930FXXU2DRC1
c:geo version: 2019.02.23
Google Play services: disabled - 15.0.90 (040408-231259764)
Low power mode: inactive
Compass capabilities: yes
Rotation vector sensor: present
Orientation sensor: present
Magnetometer & Accelerometer sensor: present
Direction sensor used: rotation vector
Hide own/found: false
Map strategy: detailed
HW acceleration: enabled (default state)
System language: de_DE
System date format: dd.MM.yy
Debug mode active: no
System internal c:geo dir: /data/user/0/cgeo.geocaching (16,3 GB free) internal
User storage c:geo dir: /storage/emulated/0/cgeo (16,3 GB free) external non-removable
Geocache data: /storage/0001-4880/Android/data/cgeo.geocaching/files/GeocacheData (37,8 GB free) external removable
Database: /storage/emulated/0/Android/data/cgeo.geocaching/files/databases/data (10,3 MB) on user storage
Fine location permission: granted
Write external storage permission: granted
Geocaching sites enabled:
geocaching.com: Logged in (Anmeldung OK) / PREMIUM
opencaching.de: Logged in (Anmeldung OK)
Geocaching.com date format: dd.MM.yyyy
Installed c:geo plugins: none
--- End of system information ---
sry, my english is worse
Answers:
username_0: 
username_1: Thanks for reporting
username_1: @username_0 Could you check how the OSM map looks for you?
username_0: 
OSM offline map
username_2: Looks like light theme is selected and respected for satellite map, but OSM new map choses dark theme. And light theme uses the wrong button, then.
username_0: If I use Settings->Appearance ->NOT Light Skin
and use the maps
Google Maps (white menu background -> button not visible)
Google Satellite (white background -> not visible)
OSM: Mapnik (dark background -> visible
any OSM offline-map (dark background - visible)
If I change the Settings->Appearance -> Light Skin
and use the maps
Google Maps (dark menu background -> button visible)
Google Satellite (dark background -> visible)
OSM: Mapnik (dark background -> visible
any OSM offline-map (dark background - visible)
So, when I reported the issue, the light theme was NOT selected.
username_2: Ok, thanks for clarifying. So it's the other way around - dark theme chosen, but Google map view does not adhere to that choice. Might be a side effect of the targetSDK26 changes? (@username_1 Do you still have a device available with pre-targetSDK26 installed and can compare this?)
username_1: I would need to reinstall an older release, which is not a big deal, but what is interesting here is that for me the title bar is always black on gmaps. mmh...
username_0: It's a bit amazing. With my "old" Acer tablet with Android 4.2.2 and NB 2019.02.23 or NB 2019.02.25 everything is working fine. If I choose the dark theme in settings, all the background (menu) bars for all map types are black and the button is visible.
My smartphone still uses Android 7.0. And if I understand correctly, SDK26 is for Android 8.0 and higher.
If nobody else has this problem with the hidden button, then close the issue. I know how to work around it. Thanks for your response.
username_3: I have the same effect on android 7. The colors and icons differ between the android versions.
username_1: Setting label "High". Lots of users report that different maps are no longer available... they are, but the button is invisible.
Should be fixed rather urgently.
username_1: I played around with my Android 8.0 device:
When I use the default black skin I see this "older" looking design with black background and cyan line on GMaps and in Settings.
However when I switch to light skin GMaps and Settings show a similar design as with black skin on new map and other parts of the app.
Could it be that this is somehow just mixed up?
I mean, if we achieve that the white-skin look on GMaps and Settings is used while the black skin is activated, the whole app would look pretty much the same, at least on my Android 8 device.
username_1: Can't produce screenshots as I am on a business trip, but if you take an Android 8 emulator it should be reproducible.
username_3: Maybe it is a problem raised by the appCompat Library?
I created a PR so that the result can be checked (without a maps API key).
username_4: I have an idea for the reason, will try to check it later today.
username_4: Currently I do not have any means to reproduce this issue, unfortunately. I tried emulators from API 17 up to 26 (omitting 20 and 21), but everywhere the google map api action bar was dark...
As we have to use a 'native' theme there (we cannot derive from the appcompat activity), this might be influenced by the possibly vendor-specific theme of the device as such.
Anybody with a samsung device who can cross-check?
username_4: Different approach as a workaround: Add an outline to the icon as the others have (see my PR)
Status: Issue closed
|
gdotdesign/elm-ui | 227208213 | Title: Missing some REALLY important components
Question:
username_0: I've used some UI frameworks out there, and the availability of components and ease of use helped me build rich, intuitive, organized and beautiful UIs. All of that because they stuck to very good design guidelines AND provided some really important components.
I miss some components in Elm UI. I tried to build a simple UI and soon missed lists, cards and text. Maybe these would be some GREAT improvements around here!
Some examples:
- https://semantic-ui.com/elements/list.html
- https://semantic-ui.com/views/card.html
- https://material.angularjs.org/latest/demo/list
- https://material.angularjs.org/latest/demo/card
Answers:
username_0: Not to mention, of course, other important frameworks, like Bootstrap and Foundation.
- https://v4-alpha.getbootstrap.com/components/card/
- https://v4-alpha.getbootstrap.com/components/list-group/
- http://foundation.zurb.com/sites/docs/card.html
username_1: I'm using http://package.elm-lang.org/packages/rundis/elm-bootstrap with Elm-UI, and it seems to work fine. Appearance looks similar. |
basisproject/tracker | 672372261 | Title: Cost tracking model
Question:
username_0: There are two approaches here.
1. [vf::Process](https://docs.rs/vf-rs/0.3.10/vf_rs/vf/struct.Process.html)
2. Cost tags (original approach)
## Process
The essence of a process is a somewhat short-lived combination of inputs that produces a set of outputs.
### Pros vs Cost tags
- Encapsulates costs in a direct and meaningful way vs tracking costs over time. A widget that cost 3C yesterday might cost 2C today because we updated our equipment. With cost tags, that 2C could potentially take a long time to be realized (one year or however long the cycle is).
- Allows directing costs internally via paths.
- Naturally lends to resource transformations (crude oil -> gasoline/etc)
- Costs of production are immediately encapsulated, meaning if we order inventory for 10 shirts, we have inputs and outputs for those 10 shirts. With cost tags, the first shirt produced will cost `(inventory + labor) / 1`. With processes, we can determine that the cost of the ten shirts is `(inventory + labor) / 10` without needing amortization BS.
- Allows more detailed tracking of production of some Thing, vs cost tags which obscure the production
## Cost tags
Cost tags are long-lived buckets that costs are directed into and out of proportionally. Their purpose is to encapsulate costs over a long period of time.
### Pros vs Processes
- Naturally spread costs over a longer period.
- Offers inherent safeguards against "lost" costs (although not foolproof)
- Much simpler to derive vs processes, which requires network traversal (sometimes recursive).
- Do not have a defined end, meaning their attribution to outputs is fairly simple to derive.
- Does not require intimate knowledge of the flow of resources within a company (companies can use whatever accounting they see fit)
- Much more naturally spread overhead costs onto outputs.
Answers:
username_0: Haven't figured this out yet. This is a huge looming question. It might be the case that processes without outputs, like cost tags, are just assigned to outputs in equal proportion. Then, instead of having to traverse the graph to find dead-end processes and *disallow them* we effectively tack on their costs to our outputs in equal proportion.
Not sure how this looks yet, but it seems achievable.
username_0: Ok need to add more here. Given that we're moving from a sort of costs-over-time stream based approach (cost tags) to a more granular approach (processes), I think it makes sense for companies to account for their costs internally as a total. This would be a sum of all processes.
Effectively it means:
- amortization is deprecated. processes become cost aggregators, and it's up to the company to decide how those costs are distributed from the processes. whether this is based on a stream or very defined processes with start/end dates no longer matters.
- companies will have a total cost cap. once it is reached, they must receive orders before they can make outgoing orders or record labor.
- dead-end processes don't matter. if you don't connect it, the costs will balloon, and eventually the company will bankrupt
I love this idea because it creates an economic incentive to account for costs properly. It also means amortization doesn't have to be a special process but can be a core function of processes and instead of the community managing how costs are amortized at what rates and by what amounts, it means they can just adjust one number: total cost cap. it's up to companies to manage their costs such that they stay under that cap.
This is a much simpler model for both companies and regions.
username_0: Maybe processes don't have cost tags, but rather the outputs specify a proportion of the costs they receive.
If a process only has one output, then the cost is divided as `cost = inputs / # outputs`. However, if there are multiple outputs of a process, then each output should have some proportion of the costs it uses.
It might be nice to do this on a somewhat automated basis. For instance, give a process a "unit" (like hours), and whenever it's an input to another process, if that event uses the same unit (like `use 3d printer for 5 hours`) then it would know that 5 hours of the machine's total 5000 hour expected lifetime means 1/1000 of the cost of the machine should be transferred to the next process. However, if an event that doesn't use hours is attached as an output, I'm not sure how this would be handled (or even if it's allowed?).
Might be better to just have each output event "take" whatever costs it needs from the process without trying to do fancy unit conversions.
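As a rough illustration of the simpler proportional option (the type and field names here are assumptions made for the sketch, not the tracker's actual API):
```rust
// Split a process's accumulated costs across its outputs by explicit
// proportions (e.g. [0.25, 0.75]); no unit conversion involved.
#[derive(Clone, Copy, Debug)]
struct Costs {
    labor: f64,
    resources: f64,
}

fn split_costs(total: Costs, proportions: &[f64]) -> Vec<Costs> {
    proportions
        .iter()
        .map(|p| Costs {
            labor: total.labor * p,
            resources: total.resources * p,
        })
        .collect()
}

fn main() {
    let process_costs = Costs { labor: 10.0, resources: 30.0 };
    // One output takes a quarter of the costs, the other takes the rest.
    let outputs = split_costs(process_costs, &[0.25, 0.75]);
    println!("{:?}", outputs);
}
```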
username_0: Note that one thing I'm still fairly stumped on is how to do overhead costs. A janitor might not make widgets, but keeps the factory clean for everyone. The janitor's labor allows the widget makers to make more widgets than they could if they were doing their own sweeping/mopping.
How does the janitor's labor get attributed to the manufacturing process? Perhaps some kind of hour-based method, like our 3d printer machine one comment up.
One thing that's bugging me is that this should be automated. We shouldn't have to connect the janitor's labor to the manufacturing process more than once, and after the initial setup, the janitor's labor should flow into the costs of the build process without us having to touch anything, and the costs should be spread evenly based on hourly production.
username_0: Thinking about this more, it seems to be that resources should ultimately hold the costs, not processes. Processes can aggregate costs, but ultimately the resources themselves (the stamping machine, the widgets, etc) would have the costs. This works much like Basis v1 except we'll be tracking resources internally as a means to aggregate/divide costs.
My issue here is that resources are not the only thing that has cost. Labor has cost. The janitor doesn't create some resource directly, so there's no resource to attach the cost *to*. It's almost like resources *and* processes need cost. The more I mull this over, the more it makes sense for both to contain costs.
username_0: Ok, both processes and resources will have `Costs` objects. A company's total costs will be the sum of all processes and resources. Closing this for now since the decision is made. Cost tags are gone, forever. Farewell, my friends.
username_0: closed
Status: Issue closed
|
Bluegrams/Vividl | 555040125 | Title: Portable version
Question:
username_0: I am glad that there is a portable version of [Vividl](https://sourceforge.net/projects/vividl/) (I always prefer portable programs) but in my opinion [version](https://sourceforge.net/projects/vividl/files/) [0.1.0](https://sourceforge.net/projects/vividl/files/v.0.1.0/) is not fully portable as it still writes to AppData (**user.config** file):
`C:\Users\User\AppData\Local\Bluegrams\Vividl.exe_Url_2emrq3scocpo2mxqxzwbdn50ixbadvyb\0.1.0.0\user.config`
I think that for the portable version of [Vividl](https://github.com/Bluegrams/Vividl) this **user.config** file should be saved next to **portable.config** file in program folder (the folder which contains the executable- **Vividl.exe**) which would make the program fully portable (not writing outside program folder).
Answers:
username_1: I've always tried to release portable versions of my apps since you made me aware of this a while ago :)
What you describe is indeed a bug in the current implementation as Vividl _should_ write all configuration to the `portable.config` file in the executable folder. I've already fixed this and the fix will be released with the next version. Thanks for reporting.
Status: Issue closed
username_0: Excellent, thanks, I am waiting for the new release.
username_0: @username_1
The portable version is really fixed- I downloaded and tested version [0.1.1](https://sourceforge.net/projects/vividl/files/v.0.1.1/) and [Vividl](https://github.com/Bluegrams/Vividl) no longer writes outside its own folder, excellent, thanks. |
careless-gazelles/Bangazon-Site | 253350893 | Title: User can select product category when selling product
Question:
username_0: **Given** a user is authenticated
**And** performs a gesture on the *Sell a product* hyperlink
**When** the product form is rendered
**Then** there should be a dropdown that displays all product categories
**Given** a user has filled out the product form, but not chosen a category
**When** the user clicks on the *Sell* button
**Then** the user should be alerted to select a product category
Answers:
username_1: COMPLETE.
Status: Issue closed
|
microsoft/BotFramework-Composer | 569303493 | Title: [Accessibility][Keyboard Navigation - User Input]: 'All' and specific user input section is not accessible via Keyboard.
Question:
username_0: **User Experience:**
For keyboard users it is difficult to complete the task is if any control is not accessible via keyboard.
**Test Environment:**
Environment: Chromium Edge (Anaheim) + Narrator
Build Version: 2004 (OS Build 19564.1005)
Microsoft Edge Version: 82.0.425.0 (Official build) canary (64-bit)
Test URL: http://localhost:3000/home
**Repro Steps:**
1. Open URL: http://localhost:3000/home
2. Select 'User Input' link from left navigation.
3. Observe the issue.
**Actual Result:**
'All' and specific user input sections are not accessible via keyboard. On clicking with the mouse, these controls are interactive, as the action is performed, but they are not accessible with the keyboard.
**Note:**
Same issue is repro for 'Bot responses'.
**Expected Result:**
'All' and specific user input section should be accessible via Keyboard.
**MAS Reference:**
https://microsoft.sharepoint.com/:w:/r/sites/accessibility/CELA/Official_Microsoft_Accessibility_Standards/MAS%202.1.1%20%E2%80%93%20Keyboard.docx?d=w1f29acf9a3e7414fbd8d7e3598d877bb&csf=1&e=LSJeSF
**Attachment:**
[MAS2.1.1_Bot responses_.zip](https://github.com/microsoft/BotFramework-Composer/files/4239008/MAS2.1.1_Bot.responses_.zip)
[MAS2.1.1_User Input_Bot responses.zip](https://github.com/microsoft/BotFramework-Composer/files/4239009/MAS2.1.1_User.Input_Bot.responses.zip)
Answers:
username_1: 
Focus does not move to green box area after tabbing from focus on 'Start bot' button
username_0: As checked and verified on below environment, the issue is fixed now. So, we are closing the issue.
**Test Environment**
Browser: Microsoft Edge Dev {Version 83.0.467.0 (Official build) dev (64-bit)}
OS build: 2004 (19588.1000)
Screen Reader: Narrator
URL: [Bot Framework Composer](http://localhost:3000/dialogs/Main)
[**Prerequisite Here**](https://github.com/microsoft/BotFramework-Composer)
Status: Issue closed
|
wenzhixin/bootstrap-table | 63051220 | Title: Header/Cell formatter
Question:
username_0: Would it be possible to format the column headers? (I would like to include some filters just like MS EXCEL)
It would also be great to be able to return a JavaScript object from the "table formatter" functions rather than just HTML strings.
Thank you a lot!
Answers:
username_1: @username_0
There are some good examples of the column 'formatter' attribute to be found at http://bootstrap-table.username_2.net.cn/examples/#format .
I personally use a generic formatter and changed bootstrap-table.js to pass this.header.fieldname through where needed, but it's perfectly usable without that - I just prefer a switch(context) to multiple functions.
Also, you 'could' use the onPostHeader() callback and manipulate the header <th> elements themselves even more (though you may need to add the this.trigger() call in the bootstrap-table.js file, depending on version, 2sec job with many examples in other functions). I do something similar in #678 where I use that for my column filters.
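For illustration, a cell formatter plus a post-header hook might look roughly like this. It is only a sketch: the selector, field names and the appended filter markup are assumptions, and onPostHeader requires a version that actually triggers it, as noted above:
```javascript
$('#table').bootstrapTable({
  columns: [{
    field: 'price',
    title: 'Price',
    // column formatter: runs per cell and returns the HTML to render
    formatter: function (value, row, index) {
      return '$' + value;
    }
  }],
  // runs after the header has been (re)built, so the <th> elements exist
  onPostHeader: function () {
    $('#table thead th').each(function () {
      $(this).append(' <span class="header-filter">&#9660;</span>');
    });
  }
});
```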
username_2: It is not supported, but I think you can use the extension to do this.
http://bootstrap-table.username_2.net.cn/extensions/#table-filter-control
http://issues.username_2.net.cn/bootstrap-table/#extensions/filtercontrol.html
Status: Issue closed
|
soufianesakhi/youtube-playlist-helper | 932316012 | Title: Chrome Version 91.0.4472.114 (Official Build) (64-bit) manifest not found
Question:
username_0: Hello,
After unzipping the extension and trying to install it to said chrome in Windows you get error:
`Manifest file is missing or unreadable`
Fixed by moving manifest to root and adding src/'s to paths.
Status: Issue closed
Answers:
username_1: Hello, thanks for opening the issue, but this is neither a feature nor a bug fix.
If you need to debug the extension locally, download the repository using the available links on GitHub and import the src folder directly into Google Chrome (Load unpacked) or Firefox (in about:debugging).
username_0: To this day I thought the normal way of installing Chrome extensions was through drag&drop. My bad for opening this issue. Sorry for the trouble. :) |
objectify/objectify | 260990931 | Title: Support new Java 8 date/time classes
Question:
username_0: As the Java 8 runtime is now supported as a standard environment, it would make sense to support the new Java 8 date/time classes: OffsetDateTime, ZonedDateTime, etc.
Problem: New Java8 date/time classes contain both timestamp info (Instant) and offset or timezone info (OffsetDateTime, ZonedDateTime). OTOH, Datastore natively supports only timestamp fields which do not contain offset and/or timezone info.
Qs: How to support new Java8 time/date classes?
Could this be done via custom serializer or extension?
Answers:
username_1: Objectify6 will be JDK8+ so it will at least support Instant out of the box.
I'm not sure what to do about the more exotic time classes. One option is to convert to/from the ISO-8601 string representation, but that won't sort. Another is to use a custom string format that satisfies the `compareTo` of the Java classes, but that wouldn't necessarily be human-friendly.
This requires some discussion; I'm open to ideas. It's easy to add arbitrary translators; perhaps any experimental translators should require being registered explicitly (like the joda translators) until we decide on a permanent format.
username_0: Since timestamp + offset/zone do not map to any of Datastore native fields, we'd have to use two fields that are logically linked. This would probably rule out using translators which only do single property <> single field translation?
Maybe we could use embedded objects?
username_1: There's really no value in using multiple fields; it makes indexing and querying difficult and expensive (lots more indexes required, including datastore-indexes.xml). The best approach will be some sort of String translation. We can generate something that matches comparator behavior; OffsetDateTime would be something like: "2017-09-28T10:50:12.3456Z2017-09-28T18:50:12.3456" (Z is an implicit separator; the second part is the localdatetime so it sorts correctly; the offset is easy to calculate).
https://docs.oracle.com/javase/8/docs/api/java/time/OffsetDateTime.html#compareTo-java.time.OffsetDateTime-
ZonedDateTime is also Comparable but the behavior does not appear to be documented. I'd have to look at the JDK source but my guess is it is similar to OffsetDateTime, although the same format won't work for us (we'll need to add an explicit TZ to reconstitute the original form).
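A rough sketch of that kind of encoding for `OffsetDateTime` (illustrative only, not Objectify code; it assumes a fixed-width pattern and years within 0000-9999 so that lexicographic order matches `compareTo`):
```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.OffsetDateTime;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class OffsetDateTimeStrings {

    // Fixed-width pattern so lexicographic order matches chronological order.
    private static final DateTimeFormatter FIXED =
            DateTimeFormatter.ofPattern("uuuu-MM-dd'T'HH:mm:ss.SSSSSSSSS");

    /** Instant part first (normalized to UTC), then the local part, mirroring compareTo. */
    static String toSortableString(OffsetDateTime value) {
        String instantPart = FIXED.format(value.withOffsetSameInstant(ZoneOffset.UTC));
        String localPart = FIXED.format(value.toLocalDateTime());
        return instantPart + "Z" + localPart;
    }

    /** The offset is recoverable as the difference between the two parts. */
    static OffsetDateTime fromSortableString(String stored) {
        String[] parts = stored.split("Z", 2);
        LocalDateTime utc = LocalDateTime.parse(parts[0], FIXED);
        LocalDateTime local = LocalDateTime.parse(parts[1], FIXED);
        int offsetSeconds = (int) Duration.between(utc, local).getSeconds();
        return local.atOffset(ZoneOffset.ofTotalSeconds(offsetSeconds));
    }
}
```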
username_0: Imho, the point of having datetimes with offset is to answer queries like: find all datetimes where local time is 5pm (for instance this is useful for sending notifications at certain local time). Would the proposed format enable such queries?
ZonedDateTime is altogether more complicated because time offset of geographical zones is not fixed, i.e. it changes based on different (local) daylight saving definitions. JVM library keeps an updated list of zones/offsets/dst definitions to be able to translate geo zone -> time offset.
username_1: Assuming you have an OffsetDateTime field, how would you even issue this query? Objectify would have to include some sort of query language/builder that understands the time objects. Plus, what do you do about the people that want different kinds of ordering (ie, the Comparator<?> ordering)? This could end up needing a half-dozen indexed properties just to support one field, plus now users will need various combinations of them in datastore-indexes.xml.
This is the kind of thing that users can manage pretty well on their own (ie, inside your `MyEntity.setMyOffsetDateTimeField()` you can break down the value into component parts and save them as indexed private fields). There's a lot of different approaches; to me this doesn't feel like the kind of behavior that should be built into Objectify.
username_0: Hmm, agreed.
Status: Issue closed
username_0: There seems to be very little value-add in supporting Java 8 date-time classes (OffsetDateTime, ZonedDateTime) beyond the basic Instant class. |
jasp-stats/jasp-issues | 416907219 | Title: Forecasting - ARIMA
Question:
username_0: Is forecasting available in JASP? Specifically ARIMA? Thanks!
Answers:
username_1: Great suggestion, I would love to see the time series stuff added. We haven't done this yet though.
E.J.
username_0: E.J.,
Thanks for the quick response. I wish I could help, but I'm hopeless when it comes to coding.
I'll just have to hope someone smarter than me takes an interest in ARIMA and builds it.
Warm regards,
godotengine/godot | 581129304 | Title: Mac OSX Export Template compilation produces a not executable template
Question:
username_0: macOS Catalina 10.15.3
Macbook Air
Steps:
```
-downloaded 3.2.1 Godot source from https://github.com/godotengine/godot/releases
-compiled code using http://docs.godotengine.org/en/stable/development/compiling/compiling_for_osx.html
-compiled templates using https://godotforums.org/discussion/20550/how-to-create-export-templates using username_1's response
-used template in exporting a mac build from both windows and same mac
-running the template gives "LSOpenURLsWithRole() failed with error -10810 for the file"
```
This is preventing me from supporting mac steam build.
Answers:
username_1: The export template will have its executable flag disabled due to macOS security restrictions. I'm not sure if we can do anything about it just yet. See also #36332 where I had the same problem.
username_0: The compilation also fails for creating the Editor application with the same error (using 3.2.1 source).
In effect, I can't compile the engine from source.
username_0: Yeah, turns out this happens if the folder structure required by the module I was trying to integrate is not maintained. It's not a Godot issue. Closing.
If anyone's hitting the same wall as the dumb me, don't skip the
```NOTE: For OSX, the libsteam_api.dylib and steam_appid.txt must be in the Content/MacOS/ folder in your application zip or the game will crash```
part of the godot steam tutorial.
Status: Issue closed
|
Caliburn-Micro/Caliburn.Micro | 72055154 | Title: Support for Eto.Forms
Question:
username_0: Just a thought that the community may want to consider.
Curtis is building a pretty kick-ass tool suite for cross-platform development called [Eto](https://github.com/picoe/Eto) which may be beneficial to support in Caliburn Micro. It may assist with getting #142 off the books too.
Answers:
username_1: It's certainly a different library, looks very similar to Xamarin.Forms in conception but with a broader reach.
Given its functionality set, it doesn't look like using this on top of Xamarin would give us any benefits, and it could potentially hamper things like data binding and conventions.
Thanks for bringing it to my attention though I'll keep an eye on it.
Status: Issue closed
|
wareismymind/Secundatives | 520096710 | Title: We should have a "FlatMap" abstraction for Maybe<T>
Question:
username_0: There are certain situations that seem to come up when handling things like:
Maybe transforming an object based on the existence of another.
traditionally:
```cs
var thingToTransform = GetThing();
// 'thing' here is a possibly-null transform (e.g. a Func<T, T>) obtained elsewhere
if (thing != null)
    thingToTransform = thing(thingToTransform);
//... repeat
```
Currently to express that we have to do:
```cs
var thingToTransform = GetThing();
thingToTransform = thing.Map(x => x(thingToTransform)).UnwrapOr(thingToTransform);
```
Ideally we could express this (Maybe not with "FlatMap") as:
```cs
thingToTransform = thing.NewFunc(x => x(thingToTransform));
```
Would be nice to be able to coalesce these concepts. I'm open to either names or nicer ways to express this as well.<issue_closed>
Status: Issue closed |
rbind/support | 523143534 | Title: rbind.io subdomain request
Question:
username_0: <!--
Please use this template for new rbind.io subdomain requests.
A volunteer will help you create the subdomain later. We don't really have enough human resources here, so please be serious about your website. We hope to see you really make use of your website in the future, instead of simply getting a free subdomain and letting it collect dust in a corner. Thank you!
-->
## Netlify website address
naughty-joliot-9c2e94.netlify.com
## Preferred rbind.io subdomain
aboreal.rbind.io
### Agreement
- [x] By submitting this request, I promise I will at least write one blog post or create one web page on my website after I get the rbind.io subdomain.
Answers:
username_1: @username_0 -we have configured the rbind subdomain you requested. Please:
1. Add the rbind subdomain in your Netlify account as the custom domain to your site, as is shown below. Simply use your `subdomain.rbind.io` as the record, no `www` needed.

2. There might be a hint "Check DNS configuration" in the domain section on Netlify after adding the rbind subdomain -- it can be safely ignored.
3. You may also find the last two sections of [this post](https://yihui.name/en/2017/11/301-redirect/) helpful for redirecting HTTP traffic to HTTPS automatically.
Thanks!
-Nan
username_0: Thanks a lot @username_1 !
Status: Issue closed
|
Stephan-S/FS19_AutoDrive | 838178789 | Title: [BUG] autodrive will not load into CP
Question:
username_0: **AutoDrive Version:** 1.1.0.8
**Release or Custom Zip:** Release
**Map (If applicable):** All
**Vehicle / Type (If applicable):**
**Multiplayer or Singleplayer:** Singleplayer
**Describe the bug**
I load a CP field course, then I attempt to load an AutoDrive route to the barn and return to the field, but it will not load into CP. I try loading any AutoDrive route and none will load.
Here is My Log:
[log.txt](https://github.com/Stephan-S/FS19_AutoDrive/files/6185807/log.txt)
Here is a Video: https://youtu.be/UJRfQf5jvBg
**If applicable - Steps to reproduce the behavior**
**Screenshots**
If applicable, add screenshots to help explain your problem.
Answers:
username_1: ??? I don't use that method to combine CP and AD .....
Better to use CP for the loading path and activate the item "Use AD for unloading / loading", while in AD you set the delivery function: set the field number as the starting point, and for unloading set the unloading trigger and the type of product. In AD also click on the button with the hexagon and the C so that the C becomes yellow. At that point start CP, and the vehicle will do its collection work; when full it starts AD and goes to deliver ....
Right now I can't make screenshots; as soon as I can, I will post them.
username_2: Maybe you are too far away from an AD course point; that's when it doesn't work for me.
Drive the tractor near an AD point and try again; that works for me with CP v6.03.00051 and AD 1.1.0.8 or 1.1.0.9-rc1.
Lg
Andreas
username_3: Got nothing to do with loading an AD course into CP...
Status: Issue closed
username_4: No Bug in AD - just try to use driveto in AD to check if the route network is OK or not. |
jpsim/Yams | 298098345 | Title: Are multiline values supported?
Question:
username_0: I wonder whether multiline values like
```
key: |
  value1
  value2
```
are supported by your library. I always get parse errors trying to read such YAML files.
Answers:
username_1: What parse errors are you getting? Yes, multiline strings with the syntax you shared are supported by Yams.
username_1: Here's a small sample program demonstrating how this is parsed:
```swift
let yaml = """
key: |
  value1
  value2
"""
let parsedYAML = try! Yams.load(yaml: yaml)!
print(parsedYAML)
// Prints:
// [AnyHashable("key"): "value1\nvalue2"]
```
Status: Issue closed
username_0: Oh, thank you! It turned out that it was just my inability to write YAML files correctly. Sorry for that. |
googleforgames/open-match-docs | 522544392 | Title: Slack invite link is no longer active
Question:
username_0: Prior to KubeCon we should fix the link so that new folks can join the slack community.
Answers:
username_1: The Read more... is provided by the docsy and there's no way to override it :(. Working on a fix to update the link!
Status: Issue closed
username_1: #116 overrides the Readmore text and should looks much better. |
Klemen1337/node-thermal-printer | 325970351 | Title: Can you make the "timeout" configurable in the NetPrint function?
Question:
username_0: Can you make the "timeout" configureable in the NetPrint function?
I found myself getting too many Error: Socket Timeout error. 3 seconds might not be sufficient for network printers.
Thanks in advance
function NetPrint(host, port) { this.timeout = 3000;
....
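For illustration, the kind of change being asked for might look roughly like this (a sketch only, not the library's actual code; the options parameter is an assumption):
```javascript
// Accept an optional timeout while keeping 3000 ms as the default,
// so existing callers keep working unchanged.
function NetPrint(host, port, options) {
  options = options || {};
  this.host = host;
  this.port = port;
  this.timeout = options.timeout || 3000;
  // ... rest of NetPrint unchanged
}

// e.g. new NetPrint("192.168.1.50", 9100, { timeout: 10000 });
```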
Answers:
username_1: @Klemen1337 any updates on this?
I have an implementation [here](https://github.com/username_1/node-thermal-printer/commit/c964d657e9406696c93b749ccbfac83a3028dfb4)
If this is ok, I'm happy to raise a PR
username_0: Please raise a PR. Thanks |
opencontainers/artifacts | 833709003 | Title: Mixing "manifests" and "layers" in an artifact?
Question:
username_0: I understand that [support for "artifact indexes"](https://github.com/opencontainers/artifacts/blob/master/artifact-authors.md#future-scope) similar to "image indexes" is expected to come in the future.
The [OCI image specification](https://github.com/opencontainers/image-spec) doesn't support mixing _image references_ (the [`manifests` list of an image index](https://github.com/opencontainers/image-spec/blob/master/image-index.md#image-index-property-descriptions)) with _image payload_ and _image config_ (the [`config` and `layers` list of an image manifest](https://github.com/opencontainers/image-spec/blob/master/manifest.md#image-manifest-property-descriptions)). Is it planed to make the _OCI artifacts specification_ drift a bit from it allowing a mixture of both?
The use-case how I came to this is following. An artifact consists of:
- A configuration file (`config` in a manifest).
- Some files (`layers` in a manifest).
- A container image (`manifests` in an index).
But in general the idea is that an artifact can have it's own files and references to other artifacts (being a container image a special artifact type), being the artifacts tools capable of handle the referred artifacts as part of the artifact itself... For example, push/pull an artifact including all referred artifacts.
Of course you can put an image name or artifact name in the configuration and then recursively resolve those references, but having it all integrated makes the management much easier.
Answers:
username_1: Hi @username_0,
These are well-timed discussions.
Please see #29 with example usage at: https://github.com/deislabs/oras/blob/prototype-2/docs/artifact-manifest.md
Also, see Justins "One Manifest to Rule Them All" WIP proposal: #37 |
GemeenteUtrecht/zaakafhandelcomponent | 1011097522 | Title: Fix updating rols when notifications are received
Question:
username_0: Recently we experienced bursts of thousands of rol notifications that caused the collapse of OZ, ON and ZAC services.
The root cause is the updating of "betrokkene" for a given zaak where we add the initials + last name of the "medewerker" where only an identification is referenced. The instances zac-test and zac-acc are competing with each other and triggering notifications for each other, which goes into an infinite loop.
The error and options how to solve it are described here - https://gist.github.com/username_1/888c2e744a24e96e31b1ba78e1d2f2ad
**The 1st option is chosen:**
* ZAC will try to update notifications only when receiving `create` rol notifications
* ZAC will update only one rol.
* If the rol already contains non-empty name attributes it is not updated
* If the related user in ZAC doesn't have name attributes, the rol is not updated
* The suggested hash is not implemented yet.<issue_closed>
Status: Issue closed |
a2-4am/4cade | 547901755 | Title: Rescue Raiders crashes on //c+
Question:
username_0: Just after the copyright info past the first intro screen. Crashes to monitor. Let me know what I can do to diagnose.
Answers:
username_0: Tried this on actual hardware with some breakpoints installed (wasn't able to recreate in MAME). Crashes at 63C8:JSR $D800 because ROM is switched in instead of Bank 1 of LC.
username_0: Don't see it using WAIT/FCA8 anywhere. ?
username_0: Confirmed "+READ_RAM1_WRITE_RAM1" just before the prelaunch JMP $404A fixes Rescue Raiders with //c+. Doesn't affect any other machines I tested.
username_0: I was wrong about C060 and annunciators. They had no effect. But I will admit I don't know where it's being switched back to ROM.
username_1: It's the disk access code which is checking which bank was active at the time in order to know how to switch back to it on exit, since the I/O code is in banked LC.
On the IIc+, we must be seeing a colliding value that's confusing the switching code.
Status: Issue closed
username_0: fixed |
hyb1996-guest/AutoJsIssueReport | 277783175 | Title: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
Question:
username_0: Description:
---
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
at com.stardust.view.ViewBinder.invokeMethod(ViewBinder.java:126)
at com.stardust.view.ViewBinder.access$000(ViewBinder.java:16)
at com.stardust.view.ViewBinder$5.onClick(ViewBinder.java:117)
at android.view.View.performClick(View.java:5231)
at android.view.View$PerformClick.run(View.java:21240)
at android.os.Handler.handleCallback(Handler.java:739)
at android.os.Handler.dispatchMessage(Handler.java:95)
at android.os.Looper.loop(Looper.java:179)
at android.app.ActivityThread.main(ActivityThread.java:5739)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:784)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:674)
Caused by: java.lang.reflect.InvocationTargetException
at java.lang.reflect.Method.invoke(Native Method)
at com.stardust.view.ViewBinder.invokeMethod(ViewBinder.java:124)
... 11 more
Caused by: java.lang.NullPointerException: Attempt to invoke virtual method 'void com.stardust.scriptdroid.ui.main.script_list.MyScriptListFragment.newScriptFile()' on a null object reference
at com.stardust.scriptdroid.ui.main.MainActivity.createScriptFile(MainActivity.java:200)
... 13 more
Device info:
---
<table>
<tr><td>App version</td><td>2.0.10b Beta</td></tr>
<tr><td>App version code</td><td>127</td></tr>
<tr><td>Android build version</td><td>eng.root.20171025.112716</td></tr>
<tr><td>Android release version</td><td>6.0.1</td></tr>
<tr><td>Android SDK version</td><td>23</td></tr>
<tr><td>Android build ID</td><td>A57_11_A.23_171025</td></tr>
<tr><td>Device brand</td><td>OPPO</td></tr>
<tr><td>Device manufacturer</td><td>OPPO</td></tr>
<tr><td>Device name</td><td>A57</td></tr>
<tr><td>Device model</td><td>OPPO A57</td></tr>
<tr><td>Device product name</td><td>A57</td></tr>
<tr><td>Device hardware name</td><td>qcom</td></tr>
<tr><td>ABIs</td><td>[arm64-v8a, armeabi-v7a, armeabi]</td></tr>
<tr><td>ABIs (32bit)</td><td>[armeabi-v7a, armeabi]</td></tr>
<tr><td>ABIs (64bit)</td><td>[arm64-v8a]</td></tr>
</table> |
microsoft/BotFramework-Composer | 871526236 | Title: Publish process should detect if LUIS is required but absent from publishing profile
Question:
username_0: Currently, if you publish a bot that uses LUIS, but the publishing profile does not have a LUIS resource, you get a misleading error:
`Error - failed to bind luis prediction resource to luis applications`
Yes, of course it failed! Because we have no keys set.
Rather than allowing this to fail, the publishing process should determine if all required resources are present and direct the user to the "edit publishing profile" process if one or more is missing.
This would also apply to QNA maker.
Answers:
username_0: Dupe of #7740
Status: Issue closed
|
dotnet/aspnetcore | 916702257 | Title: Typing @ in an attribute value in a .razor file replaces the quotes with two @ characters
Question:
username_0: _This issue has been moved from [a ticket on Developer Community](https://developercommunity.visualstudio.com/t/Typing--in-an-attribute-value-in-a-raz/1442223)._
---
Start with `<img src=`
Type " to add the quotes for the src attribute value.
Type an @ character.
Expected result: `<img src="@"`
Actual result: `<img src=@@`
---
### Original Comments
#### Feedback Bot on 6/6/2021, 11:03 PM:
We have directed your feedback to the appropriate engineering team for further evaluation. The team will review the feedback and notify you about the next steps.
---
### Original Solutions
(no solutions)
Status: Issue closed
Answers:
username_1: Fixed in 17.0-p2 |
Malowa/hello-world | 369434883 | Title: Assistance
Question:
username_0: Hi guys I would like to develop a corruption reporting web application kindly assist

Answers:
username_0: Hi guys I would like to develop a corruption reporting web application kindly assist |
GC-spigot/AdvancedEnchantments | 424980559 | Title: Multiple Arrow Effect
Question:
username_0: When I shoot once with a bow, it should have a chance to shoot a few more arrows, like I set
Answers:
username_1: The only way to do it right now is using effect SPAWN_ARROWS as there's no enchant type that activates upon shooting an arrow.
Status: Issue closed
|
robotframework/robotframework | 593287609 | Title: Expose keyword line numbers via listener API v2
Question:
username_0: Test line numbers were exposed already in RF 3.2 (#3451) via `start/end_test` methods, but for technical reasons we couldn't add this info to `start/end_keyword` methods. The technical reason is that the execution side code didn't make this possible, and it is also the reason why we haven't been able to add `start/end_keyword` methods to the listener API v3 (#3296).
Answers:
username_0: We have PR #3518 about adding the keyword source information to `start/end_test` methods. It is compatible with RF 3.2, but unfortunately it cannot be merged due to the [reasons explained in its comments](https://github.com/robotframework/robotframework/pull/3518#issuecomment-608361005). If you happen to need this information badly, you can use the version in the PR until this feature is added to Robot Framework itself.
username_0: Probably the biggest benefit of getting keyword source information via the listener API (v2 or v3) would be making it a lot easier to create debuggers. If there are lot of needs for that, or for this functionality in general, we can consider creating RF 3.3 where the main enhancement would be cleaning up the execution code so that this and #3296 could be implemented. The current RF 4.0 has also many other big enhancements which means that it will take quite long time until it is released. RF 3.3 with better execution side logic would directly benefit other features planned for RF 4.0 (e.g. IF/ELSE), so it shouldn't post-pone RF 4.0 release much. Having an intermediate release with important features like this one might very well be worth that.
username_1: line number and source would also be extremely beneficial to show on stdout when a test fails.
Now showing only the failure reason.
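For example, once `source` and `lineno` are exposed in the keyword attributes as proposed here, a small listener v2 module could print them on failure. This is a sketch: the attribute names follow this proposal, and the file name is illustrative.
```python
# lineno_listener.py -- use with: robot --listener lineno_listener.py tests/
ROBOT_LISTENER_API_VERSION = 2

_stack = []  # (source, lineno) of keywords currently being executed

def start_keyword(name, attrs):
    _stack.append((attrs.get('source'), attrs.get('lineno')))

def end_keyword(name, attrs):
    source, lineno = _stack.pop()
    if attrs.get('status') == 'FAIL':
        print(f'FAIL in keyword {name} at {source}:{lineno}')
```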
username_1: Related draft.
https://github.com/robotframework/robotframework/pull/3672
username_1: This is now added to development-4.0 branch as https://github.com/robotframework/robotframework/commit/76c821f0e1b8b4d25effa44e290b3429677431af
username_0: This has been implemented by 67fd7f76203f94402977bb8b53a8a415a6897f5a. The version in alpha 3 didn't report line numbers correctly with the new IF/ELSE and this apparently isn't yet documented in the User Guide either.
username_0: It turned out source information isn't reported correctly in all cases. Fixing that is somewhat complicated and we decided to postpone it to beta 2. If we get `start/end_keyword` added to listener v3 (#3296), supporting source with listener v2 will be easy and the current line number implementation can be simplified as well.
Status: Issue closed
|
ether/etherpad-lite | 656843282 | Title: HTML Export produces broken lists with multiple unordered lists
Question:
username_0: **Describe the bug**
HTML export is broken, i.e. `<ul class="bullet"></li><ul class="bullet"></ul></li></ul>`, when multiple bullet/unordered lists are used.
**To Reproduce**
Steps to reproduce the behavior:
1. Open a pad, ctrl+a to select everything
2. ctrl+shift+L to make an unordered list
3. tab to make a second level of unordered list
4. export HTML
Answers:
username_1: What's the expected html output and the actual output?
username_0: Sorry for not being clear.
Actual relevant output:
`<ul class="bullet"></li><ul class="bullet"><li>test</ul></li></ul>`
Expected:
`<ul class="bullet"><ul class="bullet"><li>test</li></ul></ul>` or maybe `<ul class="bullet"><li><ul class="bullet"><li>test</li></ul></li></ul>`
With two lines (first one bullet level, second two levels) this becomes:
`<ul class="bullet"><li>UL1<ul class="bullet"><li>UL2</ul></li></ul>` vs. expected: `<ul class="bullet"><li>UL1<ul class="bullet"><li>UL2</li></ul></li></ul>`
username_0: Yeah, with content the first `<li>` is correct; without content it's a single closing `</li>`. However, in both cases the last closing `</li>` seems to be in the wrong place
username_0: This is a minor issue because browsers actually display it correctly when viewing the exported HTML, so the lists are readable and correctly indented. Also it seems to work when importing the exported HTML again. So I am not sure now if this is a bug or not... :-)
username_1: It's not semantically clean HTML so ideally I will get time to fix before the 21st.
username_1: Confirmed.
Status: Issue closed
|
ros2/ros2cli | 436195454 | Title: ros2component verb to start standalone component
Question:
username_0: ## Feature request
#### Feature description
From some discussion around a recent show-and-tell:
Add a `ros2 component` verb to be an equivalent of `nodelet standalone`, where a component container is started with a single component loaded. It should also support the arguments that `ros2 component load` has (renaming, remapping, etc).<issue_closed>
Status: Issue closed |
Jays2Kings/tachiyomiJ2K | 1007505953 | Title: Accounts to sync data across different devices
Question:
username_0: ### Why/User Benefit/User Problem
(explain why this feature should be added)
If you read manga on many different devices, having to manually sync everything can be a hassle.
### What/Requirements
(explain how this feature would behave)
Settings > Account: make a Tachiyomi account and sign in.
Answers:
username_1: Great idea! Please provide a monthly payment for the servers, as this is a free app without servers.
Status: Issue closed
|
JirkaDellOro/Softwaredesign | 743292501 | Title: Review of Jirka's Lesson 07
Question:
username_0: https://github.com/username_0/Softwaredesign/blob/master/L07_Modularisierung/TypeScript/readme.md#reuquirejs
` var myVariable = require("/.nameOfTheFile"); `
Is it really like this, or shouldn't it rather be:
`var myVariable = require("./nameOfTheFile"); `
Answers:
username_0: https://github.com/username_0/Softwaredesign/blob/master/L07_Modularisierung/TypeScript/readme.md#webpack
Example code, second line: should String be capitalized? |
inaturalist/iNaturalistAPI | 174648982 | Title: more stuff for the taxon endpoint
Question:
username_0: We'll need more info for the new taxon page, namely:
- [ ] ancestors
- [ ] children
- [ ] taxon names
- [ ] conservation statuses
- [ ] listed taxa / establishment means
Answers:
username_1: Can you provide a little more detail about what you need? For ancestors/children would you need more than what details currently provides http://api.inaturalist.org/v1/taxa/12727 ? Seems like you'd minimally want scientific name, rank, and common name.
What about conservation statuses and establishment means? The API currently returns the best matching value of each when given a place. Do you need more than that? Do you need data about these across all places?
username_0: Definitely need sci name, common name, and rank for all ancestors and children.
For conservation statuses I need
* place.name
* status
* iucn
* url
* authority
* description
For establishment I need listed taxon
* place.name
* establishment_means
* list.title
* id (or again, enough info to link to the listed taxon)
For ConservationStatus I need all of them associated with this taxon (not descendants or ancestors). For establishment and listed taxa... let's just stick with the requested place for now. I feel like we might get a better list of places if/when Scott's atlas idea pans out. In addition to what's already in the API, it might be nice to get the listed taxon's `list.title`, but not critical if it's a pain.
username_1: Do you only need the listed taxa with establishment means? Right now I have it returning all listed_taxa with a place or means (and not filtering by preferred_place_id)
username_1: I added the taxon photos, but didn't get the "taxon if descendant" bit in yet. The new code needs new data in ES, which is generated by a new rails branch. I indexed the first few thousand taxa on Staging and have the API running this branch there if you want to test it out
username_0: Two more requests to help configure the map:
1) GBIF ID
2) Enough info to configure the taxonMap to show a range (I think this is just a boolean)
username_1: I turned `taxon.photos` into `taxon.taxon_photos` which now includes the taxon of the clade where the photo came from. Now that I think of it, I wonder how useful this will be since it's the taxon of the clade the photo is attached to, not the species featured in the photo. If it's not that useful, then maybe `taxon_photos` is overkill and we go back to `photos`.
I was thinking about doing the map stuff separately from taxon details. Right now it seems like we fetch GBIF IDs on demand, not in bulk on some schedule. There can also be good reasons to expire cached IDs and look them up again, and we probably don't want to do that for all taxa all the time. So that is the kind of thing we might want to fetch asynchronously when maps are rendered, not every time we return taxon details from the API. Maybe there's a separate taxon map layers API that the taxon map JS code can call and append the map layers menu accordingly.
username_0: I think the taxon photos stuff will still be useful. I'm happy to go with whatever seems the most reasonable to you regarding taxon map parameters. Happy to make another request if that seems cleaner to you.
username_1: pretty sure this is done now
Status: Issue closed
|
VATSIM-UK/UK-Sector-File | 197501816 | Title: New Durham Tees Valley (EGNV) SMR / ground map
Question:
username_0: In GitLab by @CharlieW198 on May 9, 2016, 20:02
# Summary of issue/change
Added new Durham EGNV ground map
# Affected areas of the sector file (if known)
Airports\EGNV\SMR\Regions.txt
Airports\EGNV\SMR\Geo.txt
Airports\EGNV\SMR\Labels.txt
Answers:
username_0: In GitLab by @hsugden on May 24, 2016, 14:14
Changed title: **{-EGNV SMR-}** → **{+New Durham Tees Valley (EGNV) SMR / ground map+}**
username_0: In GitLab by @hsugden on May 25, 2016, 20:02
mentioned in merge request !111
username_0: In GitLab by @hsugden on Aug 9, 2016, 16:28
Milestone removed
username_0: In GitLab by @hsugden on Dec 24, 2016, 16:44
closed
Status: Issue closed
|
wenqili/knowtebook | 494133001 | Title: Responsive frame embed
Question:
username_0: ```html
<!-- 16:9 wrapper: padding-top 56.25% (= 9/16) keeps the iframe's aspect ratio responsive -->
<div style="position: relative; width: 100%; height: 0; padding-top: 56.25%;">
<iframe style="position: absolute; top: 0; left: 0;" width="100%" height="100%" src="" frameborder="0" scrolling="yes" allowfullscreen></iframe>
</div>
<!-- 9:16 (portrait) wrapper: padding-top 177.78% (= 16/9) -->
<div style="position: relative; width: 100%; height: 0; padding-top: 177.78%;">
<iframe style="position: absolute; top: 0; left: 0;" width="100%" height="100%" src="" frameborder="0" scrolling="yes" allowfullscreen></iframe>
</div>
``` |
domenico-simone/SLUBI_training_brodde | 855813079 | Title: Invite Mikael to repository
Question:
username_0: He<NAME>!
I just wrote an email, but am not sure if your old SLU address still works (and how about your Swedish number? ^^).
Jan told me that Mikael is continuing with the pipeline, can you invite him to this repository?
His username is: mikdur
Best wishes,
Laura |
giampaolo/psutil | 768782389 | Title: [OS] title
Question:
username_0:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.6/dist-packages/psutil/__init__.py", line 89, in <module>
import psutil._pslinux as _psplatform
File "/usr/local/lib/python2.6/dist-packages/psutil/_pslinux.py", line 21, in <module>
import _psutil_linux
ImportError: /usr/local/lib/python2.6/dist-packages/_psutil_linux.so: undefined
symbol: prlimit
What version of psutil are you using? What Python version?
psutil-1.1.1
Python 2.6.6
On what operating system?
OS: Debian GNU/Linux 6.0
Uname: Linux 3.2.0-0.bpo.4-amd64 #1 SMP Debian 3.2.41-2+deb7u2~bpo60+1 x86_64 GNU/Linux
```
_Original issue: http://code.google.com/p/psutil/issues/detail?id=442_ |
ipython/ipython | 336490459 | Title: IPython.utils.signatures backport for Python 2 is deprecated in IPython 6, which only supports Python 3
Question:
username_0: Getting a `DeprecationWarning` in `ipython`, which appears to be issued by `ipython` itself. It points to [this line]( https://github.com/ipython/ipython/blob/6.4.0/IPython/lib/pretty.py#L91 ) as problematic. Warning included below. I should add that this showed up when running under Python 3.
```python
/opt/conda3/lib/python3.6/site-packages/IPython/lib/pretty.py:91: DeprecationWarning: IPython.utils.signatures backport for Python 2 is deprecated in IPython 6, which only supports Python 3
from IPython.utils.signatures import signature
```
Answers:
username_1: Thanks! Marking that as an easy first issue if anyone would like to contribute.
username_2: I'd like to help with this issue. Can you explain in more detail what needs to be done?
username_0: My guess is that this `import` would be replaced with [`import`ing the relevant `inspect` function]( https://github.com/ipython/ipython/blob/6.4.0/IPython/utils/signatures.py#L11 ).
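A minimal sketch of that swap (illustrative only; the helper below is a hypothetical example, the point is simply that the stdlib `inspect.signature` replaces the deprecated backport import):
```python
# Before (triggers the DeprecationWarning in IPython/lib/pretty.py):
#     from IPython.utils.signatures import signature
# After (Python 3 only, no warning):
from inspect import signature

def parameter_names(func):
    """Example use: return the parameter names of *func* via the stdlib API."""
    return list(signature(func).parameters)
```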
Status: Issue closed
|
ddxygq/ddxygq.github.io | 625743289 | Title: sqoop用法之mysql与hive数据导入导出 | 柯广的博客
Question:
username_0: http://www.ikeguang.com/2019/12/20/sqoop-usage/
Table of contents: 1. Introduction to Sqoop; 2. Importing MySQL data into Hive; 3. Exporting Hive data to MySQL; 4. Incremental import of MySQL data into Hive: 4.1 Append import based on an incrementing column (1) create the Hive table, 2) create the job, 3) run the job); 4.2 Lastmodified import in practice (1) create a new table, 2) initialize the Hive table). |
zircote/swagger-php | 122634339 | Title: Support Swagger extensions for AWS API Gateway
Question:
username_0: The Amazon API Gateway Importer tool (https://github.com/awslabs/aws-apigateway-importer) supports two extensions to a Swagger spec. Not sure if these are in the spec also, but at least they are in the importer tool. Anyway, there are 2 extensions:
- x-amazon-apigateway-integration
- x-amazon-apigateway-auth
I would like to extend my current PHP annotations to also specify values for these 2 new extensions.
Answers:
username_1: Swagger PHP has support for vendor properties (x-***)
`@SWG\Info(x={"amazon-apigateway-auth"=12345})`
username_0: Awesome. I was hoping it was there and I just couldn't find it.
Thanks,
David
>
Status: Issue closed
username_2: How can I do this in .NET. I think I would have to add it in the SwaggerConfig.cs file as an OperationFilter but not sure how. |
danielyxie/bitburner | 974301141 | Title: You can create a file that creates a broken folder.
Question:
username_0:
```js
export async function main(ns) {
    ns.write(`/file.txt`, "test", "w");
}
```
calling write with / at the beginning of the filename creates a / directory that can't be accessed with any terminal commands<issue_closed>
Status: Issue closed |
dglo/dash | 818032567 | Title: [jacobsen on 2007-08-01 19:30:09] : pDAQ: dash - Process2ndBuild.py - make .sn.tar hard links world-writeable [CHECK AT SPS BEFORE CLOSING]
Question:
username_0: ...so that Alex can delete them
Answers:
username_0: [jacobsen on 2007-08-23 15:07:23]
1679: dash - Process2ndBuild.py - make .sn.tar hard links world-writeable
username_0: [jacobsen on 2007-09-24 20:02:18]
Verified at SPS
Status: Issue closed
username_0: No associated GitHub commit
Status: Issue closed
|
romeric/Fastor | 668048670 | Title: Writing to views of TensorMap fails
Question:
username_0: Hello,
this MWE does not compile for me on g++ 10:
```
using namespace Fastor;
double data[4] = { };
auto A = TensorMap<double, 2, 2>(data);
Tensor<double, 2, 1> b;
b = { {1},{1}};
A(fseq<0, 2>(), fseq<1, 2>()) = b;
```
It seems that writing to blocks of TensorMap fails, while it works for standard Tensors as intended.
Thank you in advance :-)
Status: Issue closed
Answers:
username_1: Thanks for reporting the issue. It is now fixed 6d1ecc3. The fix is available if you download the latest copy of Fastor. |
noryb009/lick | 1146870796 | Title: Adding the Hash check after the fact?
Question:
username_0: I installed lick on a Win10 box with the new replacement for BIOS booting. It won't run Puppy Linux because of this and my own stupidity. I see in the doc where I was supposed to respond to a screen on first boot-up to tell lick to allow booting by directing it to a hash check file, but I managed to not do that, so... It's operator error, but how can I get lick to check the hash check file for Puppy Linux after it's been installed, without that being specified on first run? I tried to reinstall, but now I have two sets of options to boot Puppy and never seemed to get the chance to select the hash check.
Answers:
username_1: The screen to enroll the hash should only appear if it's required, and it isn't on all systems, so you may not see it at all. Even if you miss it the first time, it will reappear on subsequent boots.
What happens when you try to boot to Puppy Linux?
Status: Issue closed
username_1: Closing due to inactivity - feel free to reopen if you want. |
Huachao/vscode-restclient | 417713370 | Title: JSON in request body with a blank line will be cut off
Question:
username_0: - VSCode Version: 1.31.1
- OS Version: Windows_NT x64 10.0.17134
- REST Client Version: 0.21.2
Steps to Reproduce:
1. Create a request to a host, with a blank line in the JSON body
```
POST https://example.com/comments HTTP/1.1
content-type: application/json

{
    "name": "sample",

    "time": "Wed, 21 Oct 2015 18:27:50 GMT"
}
```
2. The JSON data will be cut off at the blank line; it will look like this:
```
{
"name": "sample",
```
Status: Issue closed
Answers:
username_1: @username_0 this is by design, you should remove the blank lines in request body except for `multipart/xxx` MIME type |
fbpic/fbpic | 297857722 | Title: Better error handling when reaching out-of-memory
Question:
username_0: Out of memory (on GPU) usually happens at this line:
https://github.com/fbpic/fbpic/blob/dev/fbpic/boundaries/particle_buffer_handling.py#L283
Thus, it would be good to put this line in a `try`/`except` statement (a minimal sketch is given below), where the `except`:
- should capture a `CudaAPIError`
- should print a useful message for the user
- should call MPI Abort so that the simulation stops running if one of the GPUs fails due to out-of-memory!<issue_closed>
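For illustration, a minimal sketch of what such a guard could look like (the allocation below is only a stand-in for the actual buffer reallocation at the linked line, and the exact message/abort behaviour is an assumption, not the final implementation):
```python
import sys

from mpi4py import MPI
from numba import cuda
from numba.cuda.cudadrv.driver import CudaAPIError

n_particles = 50_000_000  # illustrative size; the real value comes from the simulation

try:
    # Stand-in for the GPU buffer reallocation performed in
    # particle_buffer_handling.py (the line linked above).
    sorted_buffer = cuda.device_array(n_particles, dtype="float64")
except CudaAPIError:
    # Tell the user what most likely went wrong ...
    sys.stderr.write(
        "ERROR: the GPU ran out of memory while reallocating particle buffers.\n"
        "Consider using more GPUs, or fewer particles per GPU.\n")
    # ... and abort the whole MPI run so that the other ranks do not hang.
    MPI.COMM_WORLD.Abort(1)
```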
Status: Issue closed |
TRQ1/nodejs-study | 463567023 | Title: how to install express.js
Question:
username_0: What is Express.js?
It is a web framework used with Node.js.
How do you install it?
1. Install Node.js
2. Generate a package.json with the `npm init` command
3. Run `npm install express` inside the project directory
A project can be set up with the steps above.
Answers:
username_0: When using gitignore, the ignore list needed for Node.js can be found at the link below:
https://github.com/github/gitignore/blob/master/Node.gitignore
Status: Issue closed
|
vauvenal5/ProvNet | 384809938 | Title: Add new tag is not working.
Question:
username_0: Creating new tags is currently broken since a string is being passed instead of an id to the corresponding web3 contract function in the responsible epic. This can either be fixed by a client side generated id coming from the tag length or by refactoring the tag list in the contract code to work like the URL list. |
dart-lang/sdk | 504499381 | Title: Unused imports does not detect unused import as part of a repeated alias
Question:
username_0: Suppose you have:
```
import 'package:foo' as foo;
import 'package:foo_extension' as foo;
```
And later down the road someone removes all uses of the symbols imported from `foo_extension` from the library.
The analyzer will not report the `foo_extension` import as unused as long as foo itself is used even if it's just for the symbols in `package:foo`.
I'd expect that the check goes one level deeper with aliased imports and verifies that indeed something from the import is being used through the alias.
Answers:
username_1: @srawlins – you are machine! |
dask/dask | 211936394 | Title: distributed.scheduler - ERROR - error from worker Python int too large to convert to C long
Question:
username_0: Hi,
I am reading CSV files in order to convert them into Parquet; the numbers are around 10e22.
The error is:
```
Exception: OverflowError('Python int too large to convert to C long',)
distributed.scheduler - ERROR - error from worker tcp://127.0.0.1:46891: Python int too large to convert to C long
```
Does that mean I will not be able to use Dask for that?
I tried Apache drill and it did not report any errors during the conversion process.
Answers:
username_1: Hi Solomon, it would be really helpful here if you can provide a minimal
failing example that we can use to easily reproduce the issue. Please see
http://stackoverflow.com/help/mcve for guidelines.
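For illustration, a minimal failing example in this situation might look roughly like the sketch below (hypothetical file and column names; whether this actually reproduces the same `OverflowError` is exactly what would need to be confirmed, and forcing a float dtype for the huge column is only a guessed workaround, not a confirmed fix):
```python
import pandas as pd
import dask.dataframe as dd

# Write a tiny CSV containing integers around 1e22, far beyond the int64 range.
pd.DataFrame({"id": [1, 2], "value": [10**22, 3 * 10**22]}).to_csv(
    "big_numbers.csv", index=False)

# Convert to Parquet with dask; explicitly requesting float64 for the huge
# column is one way to sidestep the "int too large to convert to C long" path.
df = dd.read_csv("big_numbers.csv", dtype={"value": "float64"})
df.to_parquet("big_numbers.parquet")
```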
Status: Issue closed
|
Originate/exosphere | 261438610 | Title: Unwanted Terraform changes
Question:
username_0: I've been running into some unwanted changes that Terraform wants to apply, even when I've made absolutely no changes to the application:
<img width="1674" alt="screen shot 2017-09-28 at 11 55 50 am" src="https://user-images.githubusercontent.com/7258240/30986995-4420961e-a44a-11e7-92ae-4138e459783d.png">
<img width="1674" alt="screen shot 2017-09-28 at 11 55 57 am" src="https://user-images.githubusercontent.com/7258240/30986998-4689cd94-a44a-11e7-84cc-cedfa3939920.png">
Looks like a new Exocom instance is always being created, and its corresponding route53 record updated. This has been happening for a while.
What's new is that now a new revision for each of the task definitions is being created and in turn the ECS service has to point to that new definition. I did a diff on the task definition JSON in the web console and the only difference is the revision number and active state, so it doesn't seem like any meaningful changes are actually happening.
<img width="1526" alt="screen shot 2017-09-28 at 12 45 07 pm" src="https://user-images.githubusercontent.com/7258240/30987180-ec1baf20-a44a-11e7-9202-fa73f95ad028.png">
<img width="1526" alt="screen shot 2017-09-28 at 12 45 17 pm" src="https://user-images.githubusercontent.com/7258240/30987179-ec199d16-a44a-11e7-9d27-c39bc2ef6715.png">
Status: Issue closed
Answers:
username_0: Closing this for now as it doesn't seem to be a blocking issue
jspm/jspm-cli | 129143516 | Title: Rate limit reached – Teamcity build
Question:
username_0: This one is more of a question we are having when integrating JSPM into our Teamcity build process, rather than an issue directly with JSPM.
When we have been testing out switching to using JSPM as part of our build, we've started seeing the 'Github Rate limit' message appear due to too many requests from our build server. I've seen a previous issue relating to this #20, and subsequently how to configure authorisation for [Travis builds](https://gist.github.com/topheman/25241e48a1b4f91ec6d4).
I was wondering how this might apply for a build server such as Teamcity and if anyone has any idea how to configure the auth to ensure this doesn’t happen. Has anyone had any experience of this, or know if this is possible?
Status: Issue closed
Answers:
username_0: Will try upgrading to 0.17 – thanks for the advice |
hojak99/TIL | 366602271 | Title: Configuring transactions with multiple datasources in Spring Boot
Question:
username_0: Define a transaction manager for each datasource, and then
```
@Transactional(value="admin")
.
.
.
.
@Transactional(value="user")
```
if you attach the `value` like this, each is said to run in its own transaction.
Answers:
username_0: Reference URL
- https://okky.kr/article/308953
username_0: Post completed
Status: Issue closed
|
utkuozbulak/pytorch-cnn-visualizations | 655165545 | Title: Hi! different functions of feature maps.
Question:
username_0: Great work! Can you tell me the function of the different feature maps, i.e. what are they used to tell us? Thank you!
Status: Issue closed
Answers:
username_1: Hello,
You can read relevant papers to get informed on this topic (papers are linked in the README).
username_0: Hi! Does the guided backprop function mean that during backprop, the gradient = 1 where the gradient > 0 and the gradient = 0 where the gradient < 0 through the ReLU activation? Is that true? Thank you!
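For reference, the guided-backprop rule for ReLU as it is usually described does not set any gradient to 1: it keeps the incoming gradient only where both the forward input and the gradient itself are positive, and zeroes it elsewhere. An illustrative PyTorch-style sketch of that rule (not the exact code from this repository):
```python
import torch

def guided_relu_backward(forward_input: torch.Tensor,
                         grad_output: torch.Tensor) -> torch.Tensor:
    """Guided-backprop rule for a ReLU layer (standard formulation).

    The incoming gradient is kept only where BOTH the forward input and
    the gradient itself are positive; everywhere else it is set to 0
    (it is never set to 1).
    """
    keep = (forward_input > 0).float() * (grad_output > 0).float()
    return grad_output * keep
```
|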
rspamd/rspamd | 394124453 | Title: Please extend the antivirus module for ESET Mail Security Linux / FreeBSD
Question:
username_0: ### Classification (Please choose *one* option):
* [ ] Crash/Hang/Data loss
* [ ] WebUI/Usability
* [ ] Serious bug
* [ ] Ordinary bug
* [ ] Feature
* [X] Enhancement
### Reproducibility (Please choose *one* option):
* [ ] Always
* [ ] Sometimes
* [ ] Rarely
* [ ] Unable
* [ ] I didn’t try
* [X] Not applicable
### Rspamd version:
### Operation system, CPU:
### Description (Please provide a descriptive summary of the issue):
I'd like to see ESET antivirus support.
Please extend the antivirus module for ESET Mail Security for Linux / FreeBSD:
https://www.eset.com/business/server-antivirus/mail-security-linux/
Thanks,
username_0
### Compile errors (if any):
### Relevant logs (see details [here](https://rspamd.com/doc/faq.html#how-to-debug-some-module-in-rspamd)):
### Expected results:
### Actual results:
### Debugging information (see details [here](https://rspamd.com/doc/faq.html#how-to-figure-out-why-rspamd-process-crashed)):
### Configuration (e.g. `rspamadm configdump module`):
### Additional information:
Answers:
username_1: You can tell postfix to use the content_filter method to integrate ESET:
https://help.eset.com/ems_linux/4/en-US/setting_esets_postfix.html
What would be better on integrating it into rspamd?
username_0: The main difference is that Milter happens pre-queue, i.e. before Postfix accepts the mail. Content filtering happens post-queue.
The downside to the post-queue approach is that Postfix cannot reject the mail in real-time.
username_1: That's true but for antivirus scanning it's ok to get involved after the mail was accepted by rspamd - I think. Better than no antivirus...
username_0: ESET has an antispam component, too
username_2: As far as I can see none of the protocols for eset mail security are sufficient for rspamd (don't know exactly for cgp and gwia):
https://help.eset.com/ems_linux/4/en-US/architecture_overview.html
Would icap (gateway security) be an option? |
akhodakivskiy/VimFx | 118229873 | Title: can't scroll on news.ycombinator.com
Question:
username_0: Commit e1fe9923fa57168764b97adebf04094bcb654f79 breaks scrolling (j/k/u/d/<space>) on http://news.ycombinator.com.
Answers:
username_0: Here is a minimal patch that makes things work. It seems fragile and I can't even guess whether it will still solve the problem that the original patch was meant to address. But maybe it will at least point in the correct direction.
```
diff --git a/extension/lib/scrollable-elements.coffee b/extension/lib/scrollable-elements.coffee
index 024e380..d664393 100644
--- a/extension/lib/scrollable-elements.coffee
+++ b/extension/lib/scrollable-elements.coffee
@@ -45,6 +45,7 @@ class ScrollableElements
@updateLargest()
isScrollable: (element) ->
+ return true if element == @window.document.documentElement
return element.scrollTopMax >= @MINIMUM_SCROLL or
element.scrollLeftMax >= @MINIMUM_SCROLL
@@ -67,6 +68,7 @@ class ScrollableElements
# making it possible for unscrollable elements to slip in. This method tells
# whether the largest element really is scrollable, updating it if needed.
hasOrUpdateLargestScrollable: ->
+ return true if @largest == @window.document.documentElement
if @largest and @isScrollable(@largest)
return true
else
```
Having played with things a bit because of this issue, I found another problem that predates e1fe992 and still exists even with the patch above:
1. From the main http://news.ycombinator.com page, navigate to a comment page for any one of the items listed.
2. Use any method to return to the previous page.. for example shift-H
3. None of the scroll keys work (j/k/u/d/space/etc)
username_0: So for the second problem described above, where scroll keys stop working after going back to the previous page, the offending commit is d41fc98be643e88e5afa7941ccf7bdaa1e7d02bf. HTH.
Status: Issue closed
username_1: Thanks for your great bug report @username_0!
I was shocked when I found the reason scrolling didn’t work on Hackernews: They don’t have a doctype on their pages, meaning they run in quirks mode! Thought hackers would know that everything is easier in standards mode (_with_ a doctype). Perhaps the site isn’t run by hackers.
Anyway, VimFx needs to be able to handle pages in quirks mode as well. While there was _some_ code for it before, it was not enough since the commit you pointed out. So your patch here was not the solution, but thanks anyway for trying! :+1:
As for your second problem (returning to the main page by going backwards in history), I could not reproduce it in latest master + your patch. Please open a new issue if you can still reproduce it on latest master.
username_0: Hi Simon,
Thanks for taking a look at this so quickly. Your fix works to solve both problems described above.
Cheers! |
robolectric/robolectric | 1039517639 | Title: Robolectric cannot be used to test code which uses libraries built on Guava 16
Question:
username_0: ### Description
We updated to the most recent Robolectric version and have the following problem situation:
Our Android project uses a lot of shared legacy libraries which internally use a very old Guava version 16.
It is legacy code and it would be a huge effort to update all those libraries.
With the previous Robolectric version 3.7.1 we have been able to compile them by setting the Android project to Guava version 19. (With that, the affected methods are deprecated but still present and working.)
Compiling an Android test project using Robolectric version 4.6.1 seems to implicitly enforce a higher version than Guava v19.
We then are getting the following errors:
```
java.lang.NoSuchMethodError: com.google.common.util.concurrent.MoreExecutors.sameThreadExecutor()Lcom/google/common/util/concurrent/ListeningExecutorService;
```
We previously used Robolectric version 3.7.1, and there the Guava 16 code in the libraries was not an issue.
Maybe it would be possible to configure this somehow with Gradle, but this topic was not covered in the migration guide.
**So our major question is:**
Is there a possibility to run Robolectric (4.6.1) tests on code that uses libraries which make use of an old Guava version, e.g. v16?
Or is it possible to still use the latest Robolectric version 4.6.1 without implicitly forcing the Guava version to be elevated for the code under test?
Thanks in advance.
### Steps to Reproduce
1. Have code that uses an external JAR library which makes use of Guava 16 classes/methods (that are not available in later Guava versions) and invokes methods like `com.google.common.util.concurrent.MoreExecutors.sameThreadExecutor()`
2. Set up a Robolectric (4.6.1) project invoking test code which calls a method of a JAR library that internally makes use of Guava 16
3. Run the test; the above message appears
### Robolectric Version
Robolectric: 4.6.1
### Link to a public git repo demonstrating the problem:
n/a
Answers:
username_1: You are welcome to configure the Gradle dependency using https://docs.gradle.org/current/userguide/dependency_downgrade_and_exclude.html.
I am guessing that you are using Guava from a JAR file? This means you will have two versions of Guava on the classpath. Robolectric does not do anything specifically to elevate any dependencies. If you refactor your code to use Gradle to depend on v16 of Guava, then Gradle will resolve the right version and you will be OK. However, if you use Guava from a JAR, you will have two versions on the classpath, and then you leave it up to the application class loader to select one.
Closing this as there is really nothing we can do :(
Status: Issue closed
username_0: Hi username_1
if I downgrade Guava to the version that is being used, by configuring it in Gradle with
```
testImplementation ("com.google.guava:guava:19.0"){
    force = true
}
```
then it can be compiled, but during runtime `org.robolectric.internal.bytecode.ShadowProviders` has a problem because it needs a newer Guava version than 19. That is what I meant by the term "elevated": it does elevate the requirements of the library used in the code under test. Running the tests then produces a lot of failing tests with the message:
```
java.lang.NoSuchMethodError: com.google.common.collect.ImmutableList.sortedCopyOf(Ljava/util/Comparator;Ljava/lang/Iterable;)Lcom/google/common/collect/ImmutableList;
    at org.robolectric.internal.bytecode.ShadowProviders.<init>(ShadowProviders.java:25)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
```
So configuring Gradle as you mentioned above seems not to be an option. It seems that the code under test needs to align its Guava version with the Guava version of the testing framework. That is very sad... It would be nicer if it did not enforce those requirements on the code under test.
username_1: The contract is that app code and libraries defer to Gradle for handling dependencies. So if your app has `implementation("com.google.guava:guava:16")` and Robolectric has `implementation("com.google.guava:guava:27")`, gradle will resolve the dependency and select Guava version 27, which will then be used in both your app and test code.
The issue is that your app code seems to have a jar file containing Guava, which causes two copies of Guava to exist on the classpath. This bypasses Gradle's ability to manage dependencies.
Another option is for you to `shade` the Guava dependency in your app (i.e. rename the Guava classes in your JAR). There are some tools like `jarjar` that can potentially do this.
If you feel strongly that Robolectric should adhere to Guava 19 APIs, you are welcome to scope out the work and see how much code will have to change. |
lunu-bounir/search-all-tabs | 729746538 | Title: Implement sidebar design in firefox
Question:
username_0: Hello,
Thank you for this great extension.
I am writing to request a sidebar implementation so that users can use this extension in the Firefox sidebar pane. Currently it is only available as a popup. A problem with the popup implementation is that once the user clicks on a search result, it switches to the relevant tab and immediately closes the popup. I want the results window in the popup to persist, to be able to navigate the tabs one by one until I find the correct tab. However, since the popup closes immediately, I have to initiate a new search and type the search phrase again. A sidebar implementation in Firefox is perfect for this use case since it will not close the extension unless the user wants to.<issue_closed>
Status: Issue closed |
kubernetes/test-infra | 274044185 | Title: How to judge the e2e test Success in case of some test failed
Question:
username_0: In my environment, when I run e2e tests, some test cases occasionally fail because of timeouts. I have configured it so that if any single test case fails, the CI fails.
But I find that the e2e tests in kubernetes/kubernetes also have timeout failures and still report SUCCESS. I want to know how this is judged: by some label?
Answers:
username_1: some jobs have `GINKGO_TOLERATE_FLAKES=y` configured, like https://github.com/kubernetes/test-infra/blob/master/jobs/env/pull-kubernetes-e2e.env#L2, if that what you are asking.
username_0: If set GINKGO_TOLERATE_FLAKES=y, the failed case will rerun, if any of the attempts succeed, the suite will not be failed. But any failures will still be recorded. It means that the case failed in above finally success with rerun, but still record as failed in the result?
username_0: GINKGO_TOLERATE_FLAKES works with FLAKE_ATTEMPTS?
FLAKE_ATTEMPTS=1
if [[ "${GINKGO_TOLERATE_FLAKES}" == "y" ]]; then
FLAKE_ATTEMPTS=2
fi
.......
"${ginkgo}" "${ginkgo_args[@]:+${ginkgo_args[@]}}" "${e2e_test}" -- \
"${auth_config[@]:+${auth_config[@]}}" \
--ginkgo.flakeAttempts="${FLAKE_ATTEMPTS}" \
username_1: well, seems `GINKGO_TOLERATE_FLAKES` hard-coded `FLAKE_ATTEMPTS` to 2, set `FLAKE_ATTEMPTS` locally would not have any effect from the snippet
username_0: Thanks @username_1 , now I set FLAKE_ATTEMPTS to 2 and the by chance failed test-cases wil rerun and finnally result SUCCESS :)
username_1: sgtm, I'll close the issue.
Status: Issue closed
|
davidakachaos/workless_revived | 239654457 | Title: Scaler :heroku_cedar still exists?
Question:
username_0: I've noticed issues with the dynos not shutting down. I think this can be related to it. Any idea?
Answers:
username_0: Trying to understand the logic of the scalers, I was digging in the files of the gem repo, and found the `:heroku_cedar` was removed as of [a3e0ff0](https://github.com/username_1/workless_revived/commit/a3e0ff01e054e7aaae9f2b2251f0bc50983bf8f4)
So reading the `:heroku` scaler, I don't see any issues with the logic, but maybe with the variable type. In [line 16](https://github.com/username_1/workless_revived/blob/v2.0.0/lib/workless/scalers/heroku.rb#L16) you check `return if workers == workers_needed`. Is it possible that the `workers` method in line 21 is returning a string? (Line 10 uses the same variable in a conditional.)
I've sent a [small pull-request](https://github.com/username_0/workless_revived/commit/5531d7424d7675add47e4609ca3579e91af4b417) to see if that's the problem causing dynos to not scale down.
username_0: After some more debugging, I've found that the worker just kept trying to complete the job over and over again, and it was already the 11th try.
The [Failing Jobs](https://github.com/username_1/workless_revived#failing-jobs) description says that `workless` tries just 3 times and then removes the job. Seems that setting isn't working 😕
username_1: The documentation needs work; it still reflects the old gem (workless), so I need to comb through the documentation and update it where needed (PRs very welcome!)
About the failing jobs, I don't know what exactly is going on there. I'll need to find some time to dig into this. Thank you for your report and the PR!
username_0: Thanks for your quick attention on this! I changed the name of the issue to reflect the correct occurrence.
username_1: Please go to this issue on the [Workless](https://github.com/lostboy/workless/issues/) repository. Thank you!
Status: Issue closed
|
Axelrod-Python/Axelrod | 86587909 | Title: Need documentation for the noisy tournaments
Question:
username_0: We now have noisy tournaments but this is not documented (http://axelrod.readthedocs.org/en/latest/usage.html). It would be good to include this, e.g. with something like the sketch below.
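For illustration only, usage might look roughly like this sketch (assuming the `noise` keyword argument on `Tournament` is the intended API; the exact parameter names should be checked against the code before documenting them):
```python
import axelrod

players = [axelrod.Cooperator(), axelrod.Defector(), axelrod.TitForTat()]

# Hypothetical example: a tournament where each action is flipped
# with probability 0.05 (5% noise).
tournament = axelrod.Tournament(players, turns=200, repetitions=10, noise=0.05)
results = tournament.play()
```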
Answers:
username_0: #207 gets the first set of strategies from Axelrod's initial tournament. Once that's in and we're happy with the format etc I will get started on the second tournament.
username_0: Sorry that comment should have been on #199.
Status: Issue closed
|