repo_name (string, 4-136 chars) | issue_id (string, 5-10 chars) | text (string, 37-4.84M chars)
---|---|---|
devkitPro/SDL | 924629308 | Title: Running SDL in the OverlayDisp applet context
Question:
username_0: I'm working on a custom OverlayDisp replacement app. It's the applet that renders the quick action menu (when you long press home), the notifications and the power menu.
I've successfully managed to get a framebuffer that overlays the running title using the vi service, and now I'd like to use SDL for hardware acceleration.
I've already implemented the code to set the viDisplay and viLayer fields as an overlay without having to edit the source, but when testing I discovered that SDL_CreateWindow fails with `Could not initialize EGL`.
This error is thrown [here](https://github.com/devkitPro/SDL/blob/switch-sdl2/src/video/SDL_egl.c#L436) when `eglInitialize` fails (the function pointer is set [here](https://github.com/devkitPro/SDL/blob/switch-sdl2/src/video/SDL_egl.c#L377)).
Note that this happens before I try to edit SDL's driver data. The code looks like this:
```c
if (SDL_Init(SDL_INIT_VIDEO) < 0)
    SDLLOG(1)
sdl_win = SDL_CreateWindow("sdl2_gles2", 0, 0, 1280, 720, 0);
if (!sdl_win)
    SDLLOG(2)
```
To run my app I build the NSP and use it to replace the exefs of OverlayDisp with Atmosphère's LayeredFS.
I've also tried running the NRO from the homebrew launcher and, surprisingly, that renders fine, obviously without overlaying.
At this point I'm not sure what to do. Is there some permission or system flag I'm overlooking?
One thing I noticed is that the NSO/NSP I get is ~4 MB while the NRO is around 9 MB. Could it be possible that some part of the OpenGL driver is left out?
(I'm pretty sure this isn't related to SDL itself, but the mesa repo doesn't have an issue tracker and I don't know whom to ask.) |
renrizzolo/react-native-sectioned-multi-select | 1096497534 | Title: Adding option to hide console logs
Question:
username_0: Can console logs, such as the one on line 1105, be turned off, or better yet, could there be a prop to turn them on/off? Thanks
`console.log({ styles, colors })`
Answers:
username_1: This was unintentionally committed! Removed in `0.9.1`.
Status: Issue closed
|
bcgov/entity | 791501846 | Title: FTP Poller : SFTP Listening for ACK and FEEDBACK files
Question:
username_0: ## Description:
- Listen for a TRG file for the ACK file, and change the status of the disbursement
- Listen for a TRG file for the FEEDBACK file, upload it to minio, and trigger an event for reconciliation.
Acceptance for a Task:
- [ ] Requires deployments
- [ ] Add/ maintain selectors for QA purposes
- [ ] Test coverage acceptable
- [ ] Linters passed
- [ ] Peer Reviewed
- [ ] PR Accepted
- [ ] Production burn in completed<issue_closed>
Status: Issue closed |
andra1/Memento | 522813558 | Title: Do user research on subreddit for people who smoke weed
Question:
username_0: Ask them a series of questions. Make a google form/ survey.
1. would they want a product like this?
2. how much would they pay for it?
3. what features would they love? and how much would they pay?
4. how are they currently solving this problem? just recording the voice memos
Answers:
username_0: I talked to <NAME> and did a user interview with him. This was not a pain point for him as he is introverted and doesn't want things recorded of him. He said that he wants this when he is at weddings or just out and about.
username_0: Here is the list of people who said they would "totally use something like this"
<NAME>
Sayali
Kiran
Dad
What are the similar qualities of these people? How can I attract more of this niche? I can also totally see Shiv using something like this. |
flathub/org.kde.okular | 606061715 | Title: Text to speech options not working at all!
Question:
username_0: Hi. I tested the "Speak whole document" & "Speak current page" options & found that they never worked at all!
I went to:
Settings | Accessibility | Speech | Engine
but "Engine" did not detect anything when I tried it ......
Is this a permission problem or an uncorrectable limitation of the flatpak package?
I'm on Fedora Linux 30 x64 with espeakNG & eSpeak GStreamer multimedia playback both installed on my system ......
Answers:
username_0: Kindly, see the following screenshot:

As you can see, no TTS engine at all! I tried clicking it to see if something would be shown, but nothing!
Kindly fix this.
username_1: I opened a bug for the KDE runtime: https://bugs.kde.org/show_bug.cgi?id=420583
username_0: @username_1
Thank you very much for your care.
username_2: Any updates?
username_0: Still not working at all with 20.12.3
username_0: @username_1
The bug you opened at KDE was already fixed on 3 / 7 / 2020, but this issue is still not fixed!
username_0: Issue still not fixed on version 21.08.1 of Okular! |
geomoose/gm3 | 551166530 | Title: demo could benefit from multiple="false" (radio buttons) in catalog
Question:
username_0: Right now the demo lets users add multiple basemap layers at a time. While this could be useful in the case of transparency, the more common case will be inefficiency with multiple basemaps on and confusion over why the desired layer is not displaying.
Looking at #351, this has come up before. Do we want to look at it again? Maybe a restricted option for 1:1 cases. Maybe we could come up with a mapbook validator that would check things like this?
Answers:
username_1: hmm, this is an interesting one.
We've run into this from time to time, but it hasn't really been a big
deficiency because we have a smaller and fairly static user group. I can
see this being more of a problem with occasional use.
What about using some sort of status bar with circles and numbers inside
indicating how many of which type of layer are on/active.
alt tags for describing what each number represents.
bobb
username_2: I think when we start having larger user groups that aren't as accustomed to GIS, it will become more of an issue. Right now our site has several high resolution imagery layers plus a couple of lidar hillshades, which will use a lot of resources if users don't know to turn off one layer to switch to a layer below it (plus the inconvenience of not being able to click once to choose which 'basemap' layer you want)
username_3: refs: PR #451
username_0: Radio buttons appear to work in the demo, closing.
Status: Issue closed
|
rrrene/inch | 197356050 | Title: updating to a newer YARDoc
Question:
username_0: yard-0.8.7.5 uses a deprecated method in Rake, and with the release of Rake 12, that method (`Rake::TaskManager#last_comment`) is gone for good. Is there any particular reason to pin an older version of yard?
Answers:
username_1: Oh yeah, we should update that. 👍
It's not working out-of-the-box though, I'll have to investigate how YARD's API has changed:
```
/home/rene/.rvm/gems/ruby-2.1.6@inch/gems/yard-0.9.5/lib/yard/code_objects/base.rb:364:in `method_missing': undefined method `meths' for #<yardoc constant CONFIG> (NoMethodError)
from /home/rene/.rvm/gems/ruby-2.1.6@inch/gems/yard-0.9.5/lib/yard/code_objects/proxy.rb:194:in `method_missing'
from /home/rene/.rvm/gems/ruby-2.1.6@inch/gems/yard-0.9.5/lib/yard/code_objects/method_object.rb:137:in `overridden_method'
from /home/rene/projects/inch/lib/inch/language/ruby/provider/yard/object/method_object.rb:62:in `overridden?'
from /home/rene/projects/inch/lib/inch/code_object/converter.rb:75:in `public_send'
from /home/rene/projects/inch/lib/inch/code_object/converter.rb:75:in `block in to_hash'
from /home/rene/projects/inch/lib/inch/code_object/converter.rb:73:in `each'
from /home/rene/projects/inch/lib/inch/code_object/converter.rb:73:in `to_hash'
from /home/rene/projects/inch/lib/inch/code_object/proxy.rb:14:in `for'
from /home/rene/projects/inch/lib/inch/codebase/object.rb:32:in `initialize'
from /home/rene/projects/inch/lib/inch/codebase/objects.rb:19:in `new'
from /home/rene/projects/inch/lib/inch/codebase/objects.rb:19:in `block in initialize'
from /home/rene/projects/inch/lib/inch/codebase/objects.rb:18:in `map'
from /home/rene/projects/inch/lib/inch/codebase/objects.rb:18:in `initialize'
from /home/rene/projects/inch/lib/inch/codebase/proxy.rb:7:in `new'
from /home/rene/projects/inch/lib/inch/codebase/proxy.rb:7:in `initialize'
from /home/rene/projects/inch/lib/inch/codebase/proxy.rb:12:in `new'
from /home/rene/projects/inch/lib/inch/codebase/proxy.rb:12:in `parse'
from /home/rene/projects/inch/lib/inch/codebase.rb:12:in `parse'
from /home/rene/projects/inch/lib/inch/cli/command/base_list.rb:24:in `prepare_codebase'
from /home/rene/projects/inch/lib/inch/cli/command/suggest.rb:26:in `run'
from /home/rene/projects/inch/lib/inch/cli/command/base.rb:52:in `run'
from /home/rene/projects/inch/lib/inch/cli/command_parser.rb:100:in `run_command'
from /home/rene/projects/inch/lib/inch/cli/command_parser.rb:62:in `run'
from /home/rene/projects/inch/lib/inch/cli/command_parser.rb:52:in `run'
from /home/rene/projects/inch/bin/inch:23:in `<top (required)>'
from /home/rene/.rvm/gems/ruby-2.1.6@inch/bin/inch:23:in `load'
from /home/rene/.rvm/gems/ruby-2.1.6@inch/bin/inch:23:in `<main>'
from /home/rene/.rvm/gems/ruby-2.1.6@inch/bin/ruby_executable_hooks:15:in `eval'
from /home/rene/.rvm/gems/ruby-2.1.6@inch/bin/ruby_executable_hooks:15:in `<main>'
```
username_0: Yup. `rake test` spewed dozens of screens of angry errors and crashed before reporting that tests failed... but running `inch` in-tree works fine and produces (apparently) sensible output?
username_1: Not when run on Ruby itself (that's what the stacktrace above is from). I will have to find some time to fix, unless somebody else wants to dive into it and submit a PR.
username_2: This is what I get from the GitHub dependency graph:
The yard dependency defined in Gemfile.lock has a known high severity security vulnerability in version range < 0.9.11 and should be updated. https://nvd.nist.gov/vuln/detail/CVE-2017-17042
Gemfile.lock update suggested:
yard ~> 0.9.11
username_3: Yep. I need to update my gems which use `yard`, due to the security issue @username_2 mentioned above, but can't, because `yard` needs to be updated here first.
```sh
Bundler could not find compatible versions for gem "yard":
In Gemfile:
yard (~> 0.9.12)
inch (~> 0.7) was resolved to 0.7.1, which depends on
yard (~> 0.8.7.5)
```
username_3: Anyone? 👋
It seems that @username_1 (hope he's O.K. 😞) abandoned the project, at least for now, since there are also lots and lots of unresolved issues on [inch_ci-web](https://github.com/inch-ci/inch_ci-web) and the [site](https://inch-ci.org) itself doesn't work properly anymore.
username_4: I've been trying to make progress on this, but it is difficult because the test suite isn't passing. I have a small PR that makes some progress on the test suite, but still need to keep digging.
username_5: @username_3 Don't worry, @username_1 is good, he was just very very busy throughout 2017: http://trivelop.de/2018/01/01/looking-back/
username_1: @username_4 I merged your PR and updated YARD to `~> 0.9.12`. There are fewer tests failing than with previous `0.9.x` versions of YARD, but we still have to fix at least the `rake test:ruby` ones before publishing the next "real" release of Inch (you can, of course, point your installations to use `username_1/inch` in the meantime).
@username_0 @username_2 @username_3 Sorry for not being more responsive on this. I am trying to maintain my OSS projects as best I can, but my available open-source time has been decreasing gradually over the last 1.5 years. Inch is often neglected in terms of "time spent on it", since I no longer use Ruby professionally (this is not meant as an excuse but rather an explanation).
I am really dependent on contributions here, which is why I am super grateful to @username_4 and anybody else who chose to spend time on this.
@username_5 I am not sure what you are adding to the conversation here. As with previous written communications between us, I get the distinct impression that you are either (a) really, really bad at communicating with humans or (b) one of those trolls who really like to *use* and *complain about* open source software, but really dislike actually *contributing* something back.
username_1: For the record (since I believe in admitting one's errors): @username_5 and I had a chat via email and it now really seems like a case of "lost in translation" (I won't edit the comment above, so this comment makes sense).
On the subject at hand: I hope we can get Inch to a state where we can release a pre-version for `v0.8.0` and continue work from there. :+1: Volunteers are very welcome!
Inch was my first somewhat original Open Source project and I would very much like for it to live on.
username_4: @username_1 I would find it helpful to see some documentation about how Inch interacts with YARD. I think it would be a good use of your limited time to explain the architecture behind the gem. As with most things that introspect other programs, it's a difficult piece of code to dig into without any prior knowledge.
I'll see what I can do to get the test suite closer to passing. I think a good first step might be to fix all of the warnings that pop up while running the suite.
Thanks for your work in writing and maintaining Inch over the years - it's a great tool. I hope we can round up a community to help you maintain it!
username_1: Thanks to @username_4's efforts, Inch `0.8.0.rc1` is now on RubyGems. :+1:
username_6: Guys any chance on this one? Would love to be able to use inch but it locks the yard dependency to a really old version :(
username_6: @username_1 I tested it. There's nothing wrong. It's just the fact that it is a beta and due to the way we upgrade gems automatically, locking to a beta (which is required in the gems file) will cause it to stop being upgraded.
username_1: @username_6 :+1: Here you go: https://rubygems.org/gems/inch/versions/0.8.0 |
ONSdigital/design-system | 1165295171 | Title: Secondary header navigation
Question:
username_0: ### What feature would you like to add to the ONS Design System?
A new navigation menu for secondary level items.
### Why should this new feature be added to the Design System?
Primarily for the Service Manual website, to serve the top-level sections of each area. The current navigation components are not satisfactory for handling such a deep hierarchy.
This was user tested in Feb 2022: https://collaborate2.ons.gov.uk/confluence/x/OjZkB
### Supporting material
Prototype: https://deploy-preview-32--ons-prototype-kit.netlify.app/prototypes/sub-navigation/service-manual/
### Contacts
DS Team |
mattfrear/Swashbuckle.AspNetCore.Filters | 382696564 | Title: Null values in example are not included
Question:
username_0: Due to line 26 in the [`SerializerSettingsDuplicator`](https://github.com/username_1/Swashbuckle.AspNetCore.Filters/blob/8ae28f5/src/Swashbuckle.AspNetCore.Filters/Examples/SerializerSettingsDuplicator.cs#L26), the response examples in the generated swagger definition are missing their null values, while they should be there.
I've just tested it in my project and it seems to work if I skip the linked `Ignore` assignment. The result is in the image below. If I do not skip the ignore line, the `farm_uuid` line would be missing (which is not desirable).

The only thing I'm unsure about is the comment on that line, which seems to link to a somewhat unrelated issue (at least, it seems so to me). When I check the swagger definition, the following is generated, which seems perfectly fine to me:
```json
{
  "responses": {
    "200": {
      "description": "Success",
      "schema": {
        "$ref": "#/definitions/Location"
      },
      "examples": {
        "application/json": {
          "uuid": "11111111-2222-3333-aaaa-bbbbbbbbbbbb",
          "location_type": "CowLocation",
          "location_name": "My Named Location 1",
          "location_number": 12345678,
          "capacity": 1000,
          "parent_uuid": "44444444-5555-6666-cccc-dddddddddddd",
          "farm_uuid": null,
          "iso_tag": 999000000000001
        }
      }
    }
  }
}
```
So, I'm not sure why this is there, but I would like to remove it and use the actual settings as used by the controller, in order to have a consistent definition.
Answers:
username_1: Released https://www.nuget.org/packages/Swashbuckle.AspNetCore.Filters/4.5.2
Status: Issue closed
username_0: You're awesome, many thanks! |
starkbaum/sucon | 138228683 | Title: Snippets view is not working properly
Question:
username_0: description:
after adding one snippet, the overview wasn't available any more
objective:
snippets should work properly


Answers:
username_1: I can't reproduce this error :(
username_0: 
--> Request approval ("Freigabe anfordern")
Result:

Status: Issue closed
username_1: done |
inbo/crow | 550160621 | Title: Add altitude scale in feet
Question:
username_0: The original (static) VP charts contain an altitude scale in feet, which was explicitly asked for by the Airforce. Can such a scale be added on the right side?
Or, alternatively: a checkbox or radio button to switch between meters and feet.
Answers:
username_1: Having the axis on the right side sounds like a good idea
username_2: Needs polishing, but a basic version is now visible: https://inbo.github.io/crow/
username_1: Nice. Would make sure the same number format is used on both (currently `####` vs `##,###`). I think it is fine to use `####`
username_2: Done!
Status: Issue closed
|
openucx/ucx | 304760720 | Title: Optimization: Expose `ucp_context_attr.request_size` as a constant
Question:
username_0: `ucp_context_attr.request_size` is used to allocate stack space for a request before calling a method such as `ucp_tag_send_nbr`. However, this reservation is intended to use `alloca`, which can't be optimized as efficiently by LLVM as a known-in-advance fixed-size reservation (e.g. a fixed-size array).
Additionally, wrapping languages (such as Rust) do not support `alloca`.
Given that this field value never changes - it is a compile-time constant - is there a way this value could be exposed in the API (e.g. as a `#define`)? Obviously, this isn't simple, but it would allow more optimizations - and that is a good thing for users of UCX. Alternatively, we could expose an over-estimate and use a test case to make sure it's always sufficient.
(As an aside, in my Rust wrapper, I'm going to just hard code this value to 256 [or another power of two] - the current 1.3 branch on Musl / Linux for my choice is less than this).
Answers:
username_1: exposing this field is prone to breaking ABI compatibility, if such compatibility is not required - a user can detect the size of the request by a ./configure script (and hope it would not grow)
username_0: Yep, it will break between versions of UCX; I'd argue that was a good thing. If I compile for a particular version of UCX (or any other library), then I *should not* expect my code to work with any other version with the *exception* of minor security fixes.
In C land, this view of mine is best enforced by using static linkage. It's far more secure and far more robust in the face of complex administrative challenge.
What would be involved in adding a little something to a ./configure that could work when UCX is being cross-compiled (another viewpoint: all libraries should always be treated as if cross-compiled; it stops build system assumptions making their way into production). |
EricDarve/numerical_linear_algebra | 388073322 | Title: a couple typos and clarifications
Question:
username_0: Page 177 - (second line) approximation spelled incorrectly.
Page 243 - (in the Arnoldi code) H = zeros(kmax, kmax) is declared twice.
Page 250 - (in number 1.) the 'three' \lambda I? Should the 'three' be there?
Page 261 - (3rd paragraph from the bottom) smaller 'than' not 'then'
Page 279 - (First paragraph in Section 5) "Here's we'll see." Should be "Here we'll see"
Page 354 - (Second paragraph) Entries below can skipped - need 'be'.
Page 357 - (First paragraph after the code snippet) "The example matrix is the one ______ in this chapter above." Should be used instead of use.
Answers:
username_1: I rewrote this. It's a short explanation but hopefully clear. This is kind of a minor point.
Status: Issue closed
|
facebook/react-native | 56090190 | Title: Provide more descriptive error message when source file is not found
Question:
username_0: I required the wrong file by mistake and got this message:

The actual message was obscured in Xcode console:
```
Error:
stack:
ModuleError@http://localhost:8081/Examples/Movies/MoviesApp.includeRequire.runModule.bundle:105:23
require@http://localhost:8081/Examples/Movies/MoviesApp.includeRequire.runModule.bundle:202:28
http://localhost:8081/Examples/Movies/MoviesApp.includeRequire.runModule.bundle:28141:33
require@http://localhost:8081/Examples/Movies/MoviesApp.includeRequire.runModule.bundle:243:30
http://localhost:8081/Examples/Movies/MoviesApp.includeRequire.runModule.bundle:27651:26
require@http://localhost:8081/Examples/Movies/MoviesApp.includeRequire.runModule.bundle:243:30
http://localhost:8081/Examples/Movies/MoviesApp.includeRequire.runModule.bundle:946:27
require@http://localhost:8081/Examples/Movies/MoviesApp.includeRequire.runModule.bundle:243:30
applyWithGuard@http://localhost:8081/Examples/Movies/MoviesApp.includeRequire.runModule.bundle:871:25
require@http://localhost:8081/Examples/Movies/MoviesApp.includeRequire.runModule.bundle:194:39
global code@http://localhost:8081/Examples/Movies/MoviesApp.includeRequire.runModule.bundle:28295:9
URL: http://localhost:8081/Examples/Movies/MoviesApp.includeRequire.runModule.bundle
line: 202
message: Requiring unknown module "./test/actions/RepoActionCreators". It may not be loaded yet. Did you forget to run arc build?"
```
It would be nice to display the actual error.
Answers:
username_1: @nicklockwood fixed this a while ago
Status: Issue closed
|
symfony/symfony | 487785262 | Title: Way to optimize `ContainerBuilder::findTaggedServiceIds`
Question:
username_0: `ContainerBuilder::findTaggedServiceIds()` always iterates over all definitions: https://github.com/symfony/symfony/blob/4.4/src/Symfony/Component/DependencyInjection/ContainerBuilder.php#L1303
I think there should be a way to index definitions by tags somehow.
Unfortunately, due to the mutable `Definition`, I don't see an easy way right now: only hacks. One possible hack is to subscribe the container to definition tag changes:
```php
<?php
namespace Symfony\Component\DependencyInjection;

// ...

/** final */ class Definition
{
    private $changesCallback;

    /**
     * @internal
     */
    public static function subscribeOnChanges(self $definition, \Closure $callback): void
    {
        if ($definition->changesCallback !== null) {
            throw new \LogicException('Definition already has a changes callback.');
        }
        $definition->changesCallback = $callback;
    }

    // ...

    public function setTags(array $tags)
    {
        $this->tags = $tags;
        if ($this->changesCallback !== null) {
            $changesCallback = $this->changesCallback;
            $changesCallback($this, 'tags', $this->tags);
        }
        return $this;
    }

    public function clearTag($name)
    {
        unset($this->tags[$name]);
        if ($this->changesCallback !== null) {
            $changesCallback = $this->changesCallback;
            $changesCallback($this, 'tags', $this->tags);
        }
        return $this;
    }

    public function clearTags()
    {
        $this->tags = array();
        if ($this->changesCallback !== null) {
            $changesCallback = $this->changesCallback;
            $changesCallback($this, 'tags', $this->tags);
        }
        return $this;
    }
}
```
Note that `subscribeOnChanges` is declared as static to exclude it from the autocomplete list when working with an instance of `Definition`.
Then `Definition::subscribeOnChanges($definition, $this->definitionChangesCallback)` must be called somewhere from the `ContainerBuilder`.
Such a hack breaks BC if someone extended `Definition` with their own class, but this could be detected during `Definition` registration: in that case, the index by tags must be disabled.
What do you think?
Answers:
username_0: It's a relatively insignificant impact, even if this function in total takes ~0.3 sec on our container.
I'm closing it for now, but it would be cool to find a way to build indexes in the future. For example, to find all services within a specific namespace or with a specific interface.
Status: Issue closed
|
Eugeny/tabby | 1176765512 | Title: Translation request PT_BR
Question:
username_0: Greetings
I have completed the Brazilian Portuguese translation. How should I proceed so that it can be included in the next release?
Thank you
<!--
# RULES:
* **ENGLISH ONLY** - this issue tracker is English-only. Please respect the people who take time to help you with your problems.
* Search existing issues first: https://github.com/username_1/tabby/issues
-->
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
Answers:
username_1: Thanks! Will add them in a moment. How do I correctly name it in the language list - _Português do Brasil_?
Status: Issue closed
|
influxdata/telegraf | 1064050765 | Title: Make "Merge" plugin work with aggregator results
Question:
username_0: ## Feature Request
### Proposal:
The Merge plugin only works with raw input data (every 1 minute); it doesn't play nicely with aggregator outputs.
### Current behavior:
```
[agent]
collection_jitter = "0s"
[[inputs.cpu]]
percpu = false
totalcpu = true
collect_cpu_time = false
report_active = false
fieldpass = ["usage_idle"]
[[aggregators.quantile]]
period = "5s"
drop_original = true
quantiles = [0.95]
order = 1
[[aggregators.basicstats]]
drop_original = true
period = "5s"
stats = ["count", "rate", "sum"]
order = 2
[[aggregators.merge]]
drop_original = true
order = 3
fieldpass = ["usage_idle_*"]
```
Output:
```
cpu,cpu=cpu-total usage_idle_095=82.67260012136957 1637893260000000000
cpu,cpu=cpu-total usage_idle_count=5,usage_idle_sum=402.1946738461496,usage_idle_rate=0.28685386306114324 1637893260000000000
cpu,cpu=cpu-total usage_idle_count=5,usage_idle_sum=390.2688202755629,usage_idle_rate=-2.64097419767619 1637893265000000000
cpu,cpu=cpu-total usage_idle_095=83.12142252112594 1637893265000000000
```
You can see the lines have the same measurement, tags, and timestamp but didn't merge.
From the debug log it looks like the merge works at a different time window than `quantile` or `basicstats`.
### Desired behavior:
Merge aggregate results.
Answers:
username_1: Hi,
To ensure I understand your request, you were expecting the output to only show these two lines?
```
cpu,cpu=cpu-total usage_idle_095=82.67260012136957 1637893260000000000
cpu,cpu=cpu-total usage_idle_095=83.12142252112594 1637893265000000000
```
Thanks!
username_1: Hi,
Can you clarify what you were looking for, otherwise I am going to close this issue.
Thanks!
username_0: @username_1 Hi, I need to combine `usage_idle_095`, `usage_idle_count`, `usage_idle_sum`, and `usage_idle_rate` into a single line.
username_1: I have not played with aggregators like this before, but I do not believe aggregators can be combined as you are assuming. I added some debug output to the three aggregators you are trying to use and found:
* The basicstats and quantile aggregators are receiving the same data
* The merge aggregator is not receiving any metrics in the first place
Per the [aggregator docs](https://github.com/influxdata/telegraf/blob/master/docs/AGGREGATORS_AND_PROCESSORS.md#aggregator), aggregators run against metrics collected in the time period, and metrics are not passed between aggregators. If you run with the `--debug` option you will see additional output from the aggregators about the ranges in which they are looking for metrics:
```
2021-12-20T14:54:51Z D! [aggregators.merge] Updated aggregation range [2021-12-20 07:54:30 -0700 MST, 2021-12-20 07:55:00 -0700 MST]
2021-12-20T14:54:51Z D! [aggregators.quantile] Updated aggregation range [2021-12-20 07:54:50 -0700 MST, 2021-12-20 07:54:55 -0700 MST]
2021-12-20T14:54:51Z D! [aggregators.basicstats] Updated aggregation range [2021-12-20 07:54:50 -0700 MST, 2021-12-20 07:54:55 -0700 MST]
2021-12-20T14:54:55Z D! [aggregators.basicstats] Updated aggregation range [2021-12-20 07:54:55 -0700 MST, 2021-12-20 07:55:00 -0700 MST]
2021-12-20T14:54:55Z D! [aggregators.quantile] Updated aggregation range [2021-12-20 07:54:55 -0700 MST, 2021-12-20 07:55:00 -0700 MST]
2021-12-20T14:55:00Z D! [aggregators.quantile] Updated aggregation range [2021-12-20 07:55:00 -0700 MST, 2021-12-20 07:55:05 -0700 MST]
2021-12-20T14:55:00Z D! [aggregators.basicstats] Updated aggregation range [2021-12-20 07:55:00 -0700 MST, 2021-12-20 07:55:05 -0700 MST]
2021-12-20T14:55:00Z D! [aggregators.merge] Updated aggregation range [2021-12-20 07:55:00 -0700 MST, 2021-12-20 07:55:30 -0700 MST]
```
In the output below, you will see that the hashes calculated by the basicstats and quantile aggregators are the same: the same data is getting passed into both, not the previous aggregator's data. Additionally, the merge aggregator receives nothing:
```s
2021-12-20T14:46:29Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"ryzen", Flush Interval:10s
merge received 0 metrics
cpu map[cpu:cpu-total host:ryzen] map[usage_idle:98.3016920526716] 1640011600000000000
basicstats metric hash: 15405109440427227228
cpu map[cpu:cpu-total host:ryzen] map[usage_idle:98.3016920526716] 1640011600000000000
quantile metric hash: 15405109440427227228
cpu,cpu=cpu-total,host=ryzen usage_idle_095=98.3016920526716 1640011605000000000
cpu,cpu=cpu-total,host=ryzen usage_idle_count=1,usage_idle_sum=98.3016920526716 1640011605000000000
``` |
postcss/postcss-custom-properties | 306008820 | Title: Feature request: add support for `noValueNotifications: 'off'`
Question:
username_0: Would be great to be able to pass `"off"` to the `noValueNotifications` option in addition to `"warning"` and `"error"` (just like eslint rules) to be able to selectively turn off these notifications without having to disable all warnings.
Will gladly submit a PR if you agree on the idea.
Thanks for the great work,
Answers:
username_1: Yes, I would accept a PR, especially if you show prior art for the option name/value.
username_1: Are you sure this doesn’t already do what you want?
Status: Issue closed
|
drmingdrmer/xptemplate | 88336727 | Title: Some problems caused by using the <Tab> key
Question:
username_0: This error occurs quite randomly. Sometimes the <Tab> key works fine, but sometimes (not always), when I'm halfway through writing code and want to use the Tab key to indent, the error below is produced. Forgive me for not being able to pin down exactly under what circumstances pressing Tab triggers this; I have deliberately tried many times to reproduce it and failed every time, and then today it appeared again while I was writing code. After the error occurs, continuing to use Tab inside a snippet causes gvim to freeze completely, and no further operations are possible. I wonder whether anyone else has run into this problem.
Error detected while processing function XPTforceForward..<SNR>87_FinishCurrent:
E716: Key not present in Dictionary: mark
Error detected while processing function XPTforceForward..<SNR>87_FinishCurrent:
E15: Invalid expression: renderContext.leadingPlaceHolder.mark
..........
Answers:
username_1: What do your key mappings look like? Could you paste your .vimrc file so I can take a look at your configuration?
username_0: """""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
"""""""""""""""""""""""""""""""""""Vunble""""""""""""""""""""""""""""""""""
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
filetype off "required!
set rtp+=~/.vim/bundle/vundle/
call vundle#rc()
" let Vundle manage Vundle
Bundle 'gmarik/vundle'
"power-liner
Bundle 'https://github.com/LoKaltog/vim-powerline.git'
"vim-scripts repos
Bundle 'L9'
Bundle 'FuzzyFinder'
"cpp.vim
"Bundle 'cpp.vim'
"NERD tree
Bundle 'https://github.com/scrooloose/nerdtree'
Bundle 'octol/vim-cpp-enhanced-highlight'
""""""Syntax checking
Bundle 'scrooloose/syntastic'
""""""""""Python
""Bundle 'orenhe/pylint.vim'
"""""""YCM
Bundle 'Valloric/YouCompleteMe'
""Minibuffer
Bundle 'minibufexpl.vim'
""Taglist
Bundle 'Taglist.vim'
""xtemplete
Bundle 'username_1/xptemplate'
""jedi: Python completion
Bundle 'davidhalter/jedi'
filetype plugin indent on "
""""""""""""""""""""""""""""""end of Vundle""""""""""""""""""""""""""""""""
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
"""""""""""""""""""""""""""vim-powerline""""""""""""""""""""""""""""""""""
set laststatus=2
set t_Co=256
let g:Powerline_symbols='unicode'
set encoding=utf8
""""""""""""""""""""""""""""end of powerline""""""""""""""""""""""""""""""
""""""""""""""""""""""""""""""nerdter"""""""""""""""""""""""""""""""""""""
autocmd vimenter * NERDTree "open NERDTree automatically when vim starts
let g:NERDTreeWinSize=16
autocmd StdinReadPre * let s:std_in=1
autocmd VimEnter * if argc() == 0 && !exists("s:std_in") | NERDTree | endif
map <C-n> :NERDTreeToggle<CR>
autocmd bufenter * if (winnr("$") == 1 && exists("b:NERDTreeType") && b:NERDTreeType == "primary") | q | endif
""""""""""""""""""""""""""""""""""end""""""""""""""""""""""""""""""""""""""
"""""""""""""""""""""""""""""""ctags"""""""""""""""""""""""""""""""""""""""
set tags=tags;
"""""""""""""""""""""""""""""end"""""""""""""""""""""""""""""""""""""""""""
"""""""""""""""""""""""""""""""Taglist"""""""""""""""""""""""""""""""""""""
let Tilst_Ctags_Cmd='/usr/bin/ctags'
let Tlist_Auto_Open=1 "start automatically when vim opens
let Tlist_Show_One_File=1 "only show tags for the current file
let Tlist_Exit_OnlyWindow=1
let Tlist_Use_Right_Window=1
""""""""""""""""""""""""""""""""end""""""""""""""""""""""""""""""""""""""""
"""""""""""""""""""""""""""""""map setting"""""""""""""""""""""""""""""""""
[Truncated]
""""""""""""""""""""end""""""""""""""""""""""""""""""""""""""""""""""""""""
""""""""""""""""""""YCM""""""""""""""""""""""""""""""""""""""""""""""""""""
set completeopt=longest,menu,preview
autocmd InsertLeave * if pumvisible()==0|pclose|endif
inoremap <expr><CR> pumvisible()?'<C-y>':'<CR>'
let g:ycm_collect_identifiers_from_tags_files=1
let g:ycm_min_num_of_chars_for_completion=2
let g:ycm_cache_omnifunc=0
let g:ycm_seed_identifiers_with_syntax=1
let g:ycm_confirm_extra_conf=0
""""""""""""""""""""""""end""""""""""""""""""""""""""""""""
""""""""""""""""""""""""syntastic""""""""""""""""""""""""""
""let g:syntastic_ignore_files=[".*\.py$"]
""""""""""""""""""""""""xptemplate"""""""""""""""""""""""""
let g:xptemplate_vars="SPcmd=&BRloop=\n"
let g:xptemplate_vars.="&BRfun= "
let g:xptemplate_vars.="&SParg="
let g:xptemplate_brace_complete ='([{"<'
let g:xptemplate_vars.="&author=liu"
let g:xptemplate_vars.="&email=<EMAIL>"
Status: Issue closed
username_1: I made a fix; updating to the latest master should resolve this problem. It was caused by switching to another buffer while a template was in use, which corrupted the recorded map information. After some fiddling I found a way to reproduce it: expand a template, switch to another file, come back, and then type a few characters outside of the placeholder (at this point a message appears below: "XPTemplate session ends: XPT:changes outside of place holder"); pressing <tab> at that point reproduces the problem.
If this procedure reproduces the error on the version before the update and no longer does on the updated version, that confirms the problem is fixed.
Thank you very much for the feedback you provided; it has helped me a lot :+1: |
tfuqua/good_meadow | 189794939 | Title: Home - Description text size
Question:
username_0: ### Location
Home screen 'about' area
### Description
The body text looks huge. Let's make it the same size as the rest of the site's body text.
### Screenshot
<img width="1241" alt="screen shot 2016-11-16 at 1 10 47 pm" src="https://cloud.githubusercontent.com/assets/17103891/20359527/49ee8c8a-abfe-11e6-8962-18e8145aeac8.png">
Answers:
username_1: Updated to 16px to match other sites. Idk why I had it set to 22px -__-
Status: Issue closed
username_0: Excellent! |
elytra/BetterBoilers | 288310302 | Title: Boiler still thinks the structure is complete even if it is actually broken
Question:
username_0: Minecraft Version: 1.12.2
Forge Version: 14.23.1.2586
Mod Version: 1.1
Singleplayer/Multiplayer: Singleplayer
Additional Information:

Answers:
username_1: That's a fault with the boiler scanning code. It only checks to see if it's a functioning multiblock every five seconds. I'm working on better scanning code at the moment, but right now I'm afraid there isn't really anything I can do.
Status: Issue closed
username_1: I'm gonna keep this open to keep it on the radar for working on new stuff.
|
linkerd/linkerd2 | 739024277 | Title: Option to disable automatic cronjob injection at the controller
Question:
username_0: ## Feature Request
I would like to be able to configure the injector to prevent injection into cronjobs' pods.
### What problem are you trying to solve?
Jobs do not work well with the sidecar injected: they never end and require workarounds to fix that.
### How should the problem be solved?
The injector should be able to ignore pods created by jobs.
### Any alternatives you've considered?
Create a mutation webhook to inject the disable injection annotation to each job template in the cluster.
### How would users interact with this feature?
Config option to the injector.
*Currently, the promise of installing Linkerd on a live cluster smoothly does not hold, since if you have cronjobs, you will break their behavior.
Going through all the cronjob manifests in the company, which reside in multiple GitHub repositories, is not viable.*
Answers:
username_1: Another option would be to extend [`linkerd-await`](https://github.com/username_1/linkerd-await) to shutdown the proxy after the wrapped process exits. This would also require that the proxy expose a (localhost-only) admin endpoint to support shutdown.
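(For illustration only, below is a minimal sketch of such a wrapper: run the Job's real command, then ask the sidecar proxy to shut down so the pod can complete. The admin endpoint shown - `POST http://localhost:4191/shutdown` - is a placeholder for the localhost-only shutdown endpoint described above, not an existing API.)
```go
// Hypothetical wrapper: execute the wrapped command, then tell the proxy to exit.
package main

import (
	"net/http"
	"os"
	"os/exec"
)

func main() {
	if len(os.Args) < 2 {
		os.Exit(2) // nothing to wrap
	}

	// Run the Job's actual command, forwarding stdio.
	cmd := exec.Command(os.Args[1], os.Args[2:]...)
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	runErr := cmd.Run()

	// Best-effort shutdown request to the assumed proxy admin endpoint; failures
	// are ignored so the wrapper still works when no proxy is injected.
	if resp, err := http.Post("http://localhost:4191/shutdown", "text/plain", nil); err == nil {
		resp.Body.Close()
	}

	// Preserve the Job's success/failure semantics.
	if runErr != nil {
		os.Exit(1)
	}
}
```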
username_0: If I understand correctly it will still require a change in the job manifest to add the linkerd-await container.
I'm looking for a more transparent way to make sure that when Linkerd is installed on a production cluster it doesn't break anything, including cronjobs.
username_2: FWIW, the answer to this is to not opt-in entire namespaces. You're going to get breakage almost no matter what if you do that (completely excluding the cronjob issue).
username_0: I'm rolling out Linkerd slowly, but the end goal is to roll it out to all the namespaces, so things like that need to be sorted out in advance.
username_2: Yup!
username_0: Sounds good, I'll probably pick that soon.
username_0: That's what we've got so far: https://github.com/Soluto/linkerd-disable-injection-mutation-webhook
It's in working condition. We currently support only Job annotations, because this is the only resource for which it makes sense to disable injection across the whole cluster. |
hexojs/hexo | 235736172 | Title: Local preview correctly shows the latest version, but after uploading to GitHub the online site still shows the old version, even though the files on GitHub have been replaced
Question:
username_0: 
The above is the local version, showing the most recently published post.

This is the online version; I have already published the code to GitHub.

Could you tell me why this happens? After I replaced the files, the content shown is still from the previous version. #
Answers:
username_1: It is probably caused by network or browser caching.
Status: Issue closed
|
fossasia/pslab-firmware | 683668632 | Title: Logic analyzer: Cannot remap inputs in two channel mode
Question:
username_0: [Two channel](https://github.com/fossasia/pslab-firmware/blob/feb2acd0ee48178ca62b74def8607cd01a45d79f/PSLab_Original/proto2_main.c#L738) logic analyzer mode is supposed to be able to use any two digital inputs. However, in practice only ID1 and ID2 work. When requesting another input, the ADC buffer block which is supposed to hold timer values for that channel is all zeros.
Status: Issue closed
Answers:
username_0: No, it's working as intended. It's just that the `channel_number` sent to the device when starting the logic analyzer is not the same `channel_number` sent when requesting data. The former changes depending on which channel should be sampled, while the latter is always 0 and 1 in two-channel mode. I'll add a note of this in pslab-python. |
totalspectrum/spin2cpp | 41756087 | Title: spin2cpp.c - array index out of bounds error
Question:
username_0: As compiled on OS X:
spin2cpp.c:717:13: error: array index 3 is past the end of the array (which contains 3 elements) [-Werror,-Warray-bounds]
optchar[3] = 0;
^ ~
spin2cpp.c:713:13: note: array 'optchar' declared here
char optchar[3];
^
1 error generated.
Status: Issue closed
Answers:
username_1: Thanks! I've modified the Makefile a bit to put build outputs inside a build directory; hopefully that will make the OSX build a little cleaner (it certainly makes cross-compilation easier).
username_2: sorry, did not mean to reference this issue |
kubernetes-sigs/controller-runtime | 783115157 | Title: Add old object for webhook Defaulter interface
Question:
username_0: The admission [Defaulter interface](https://github.com/kubernetes-sigs/controller-runtime/blob/master/pkg/webhook/admission/defaulter.go) looks like this:
```go
// Defaulter defines functions for setting defaults on resources
type Defaulter interface {
	runtime.Object
	Default()
}
```
It does not include the old object, and sometimes we need to get the old value from the old object. Of course, we can get the old object from the apiserver with a client, but this would introduce unnecessary overhead.
We could change the interface to this; if it is acceptable, I can submit a PR.
```go
type Defaulter interface {
	runtime.Object
	Default(old runtime.Object)
}
```
Answers:
username_1: You can get the old values from the self pointer, it's filled in for you with whatever the object is. See https://github.com/username_1/rabbitmq-operator/blob/main/api/v1beta1/rabbitvhost_webhook.go#L39-L41 for an example.
username_0: @username_1 Thanks; for some reason, sometimes we can't get the old value from the object.
For example, I have a VM CRD with a field `macAddress`; the defaulter webhook generates a new macAddress if it is not specified in the YAML file, like below:
1. kubectl apply -f vm.yaml
``` yaml
# vm.yaml
apiVersion: xxx.io/v1
kind: VirtualMachine
metadata:
  name: centos
spec:
  networkInterfaces:
    - name: eth0
      macAddress:
```
2. The defaulting webhook generates a macAddress; after the webhook has handled it, the VM resource is:
```
apiVersion: xxx.io/v1
kind: VirtualMachine
metadata:
  name: centos
spec:
  networkInterfaces:
    - name: eth0
      macAddress: 00:01:02:03:04:05
```
3. kubectl apply -f vm.yaml again. Because in vm.yaml the macAddress is still empty, in the webhook the new VM object's `macAddress` is empty, not `00:01:02:03:04:05`. This is caused by CRD resources not supporting:
```
// +patchMergeKey=name
// +patchStrategy=merge
```
related k8s issue [53558](https://github.com/kubernetes/kubernetes/pull/53558).
This is the problem I am facing now. Of course, it is a problem of k8s CRDs, not controller-runtime. Updating the Defaulter interface is a workaround, and maybe there are other situations that need this too.
username_1: Yes, it is doing what you told it to. `macAddress:` is the same as `macAddress: null` which is overwriting the previous value so it's `nil` again by the time you see it again. Remove the field from the second apply and try again.
username_0: Nope, removing the `macAddress` field from the YAML file couldn't fix the problem, because `macAddress` is a subfield of `networkInterfaces`, and `networkInterfaces` is a list. The merge strategy is described in [merge-individual-elements-of-a-list-of-complex-elements](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/#merge-individual-elements-of-a-list-of-complex-elements). Note that CRD resources don't support this merge strategy.
username_1: Fair, that would do it too. Regardless, the value as seen in the webhook body is available in the object. That's all you can get without making your own get call which introduces all kinds of race conditions so I don't think there is anything that can be improved on the controller-runtime side here.
If you do want to make a client call, you can use the lower level webhook API and pass in the client object.
username_2: Hi @codenrhoden, I think what we need here is to let controller-runtime pass what's inside the AdmissionRequest's `OldObject` field to the Default interface, to make it possible to default some value for the new object from the old one, especially for a List type field. Without a built-in ability to get the old object in the Default interface, we have to use a lower-level webhook API, which is more verbose and complicated.
Say that we have a CRD like below:
```go
type GuestbookSpec struct {
	Foo  string `json:"foo,omitempty"`
	Bars []Bar  `json:"bars,omitempty"`
}

type Bar struct {
	Name string `json:"name,omitempty"`
	UUID string `json:"uuid,omitempty"`
}
```
Now what we want to achieve in the defaulting webhook here is to fill in the UUID field with a generated UUID if it's empty when creating a GuestBook resource. For updates, we want to keep previously generated UUID values if they're empty in the YAML file used in `kubectl apply -f`.
We would assume that in the `func (r *Guestbook) Default()` call for updates, the given GuestBook object should include the previously generated UUID values, but that's not the case. When updating a GuestBook object with empty UUID fields, the Default func always sees empty UUID fields.
However, if we use a lower-level webhook API, one that is much closer to the Kubernetes webhook API, we can get the old GuestBook object from the AdmissionRequest's `OldObject` field, which is exactly what we need here to re-fill the new object with UUID fields from the old one.
I believe this is a common problem for CRDs with a List type field and some generate-once field of its elements, which could easily be solved if controller-runtime would parse the `OldObject` field and pass it into the Default func call (or a new interface to preserve backward compatibility). Without this help from controller-runtime, one would have to turn to the lower-level API, which can be both verbose and unwieldy.
I'd give a PR here if that's something welcome.
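(For illustration, a minimal sketch of that lower-level approach follows. This is not an existing controller-runtime feature: the handler name and webhook wiring are made up, `Guestbook` is assumed to be the CRD type wrapping the `GuestbookSpec` shown above, and decoder-injection details differ between controller-runtime versions.)
```go
// Sketch of a low-level mutating handler that reads AdmissionRequest.OldObject
// to carry generate-once values (Bar.UUID) from the old object into the new one.
package v1

import (
	"context"
	"encoding/json"
	"net/http"

	"sigs.k8s.io/controller-runtime/pkg/webhook/admission"
)

type guestbookDefaulter struct {
	decoder *admission.Decoder
}

// InjectDecoder is called by the webhook machinery to supply a decoder.
func (d *guestbookDefaulter) InjectDecoder(dec *admission.Decoder) error {
	d.decoder = dec
	return nil
}

func (d *guestbookDefaulter) Handle(ctx context.Context, req admission.Request) admission.Response {
	obj := &Guestbook{}
	if err := d.decoder.Decode(req, obj); err != nil {
		return admission.Errored(http.StatusBadRequest, err)
	}

	// OldObject is populated for UPDATE requests; use it to keep previously
	// generated UUIDs when the newly applied spec leaves them empty.
	if len(req.OldObject.Raw) > 0 {
		old := &Guestbook{}
		if err := d.decoder.DecodeRaw(req.OldObject, old); err != nil {
			return admission.Errored(http.StatusBadRequest, err)
		}
		for i := range obj.Spec.Bars {
			if obj.Spec.Bars[i].UUID != "" {
				continue
			}
			for _, oldBar := range old.Spec.Bars {
				if oldBar.Name == obj.Spec.Bars[i].Name {
					obj.Spec.Bars[i].UUID = oldBar.UUID
					break
				}
			}
		}
	}

	marshaled, err := json.Marshal(obj)
	if err != nil {
		return admission.Errored(http.StatusInternalServerError, err)
	}
	return admission.PatchResponseFromRaw(req.Object.Raw, marshaled)
}
```
Such a handler would be registered directly on the manager's webhook server (e.g. via a `webhook.Admission{Handler: ...}` registration) instead of the generated `Default()` wiring; the trade-off is exactly the extra verbosity discussed above.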
username_1: Given how niche of a thing that is, I don't think that needs to be exposed in the high-level interface. Using the low-level one seems fine. |
microsoft/coe-starter-kit | 893908911 | Title: Guidance on upgrading from older version
Question:
username_0: **Is your feature request related to a problem? Please describe.**
As the Starter Kit has grown and evolved a lot since it was first published, I would like to see some guidance on how to upgrade from older versions, as to enable companies using it to leverage any new additions (e.g. automatic archiving).
**Describe the solution you'd like**
- What needs to be done to upgrade from a much older version to a current version
- Any other upgrade guidelines (e.g. from a relatively recent version to a current version)
- How to ensure that no existing data is lost
- How to deal with any customisations
**Additional context**
We deployed the starter kit around a year ago, and have some older versions deployed (Core 1.45; Nurture 1.13; Compliance and Report 1.17). We also did some smaller customisations via our own solutions: e.g. we added a few more fields to the apps table (to capture the country/region of an app, for example), and updated accordingly where needed (show those fields in the Developer Compliance Center and the Power Platform Admin view). As the structure of the starter kit has changed a good amount (some new environment variables? slight structure change?), it is difficult to figure out how to get to the latest version.
Answers:
username_1: Hello.
Here is our documentation on how to [extend the starter kit](https://docs.microsoft.com/power-platform/guidance/coe/setup#extending-the-starter-kit) and on [installing updates to the starter kit](https://docs.microsoft.com/power-platform/guidance/coe/setup#installing-updates)
You may run into one-off issues like this one when upgrading from a version that old as well: https://github.com/microsoft/powerapps-tools/issues/491
Unfortunately there isn't a simple way to realign divergent versions.
Status: Issue closed
username_1: No further action for toolkit so closing. Thanks for using CoE |
oizo/intellij-colorblind-scheme | 267839360 | Title: Cannot Import to v2017.2.5
Question:
username_0: Hi,
I am also a color blind dev working primarily in Java and a newcomer to Intellij. I like your theme from the screenshots. When I try to import this in v2017.2.5 it tells me the `icls` file does not contain any settings. Do you know what I'd need to do to import this into this version of Intellij?
Answers:
username_1: Hey, I'll have a look at it as soon as i have time, and get back to you.
username_1: Oh, if you want to import the settings, then you'll need to import the `ColorBlind.jar`. To use the `ColorBlind.icls` file you'll have to copy/paste it into your `/path/to/IntelliJ-settings/colors` folder, and then in IntelliJ select the scheme from Settings->Editor->Colors & Fonts
Status: Issue closed
|
jlippold/tweakCompatible | 449118331 | Title: `ReProvision` working on iOS 12.0.1
Question:
username_0: ```
{
"packageId": "com.repo.xarold.com.reprovision",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.repo.xarold.com.reprovision",
"deviceId": "iPhone10,2",
"url": "http://cydia.saurik.com/package/com.repo.xarold.com.reprovision/",
"iOSVersion": "12.0.1",
"packageVersionIndexed": false,
"packageName": "ReProvision",
"category": "System",
"repository": "(null)",
"name": "ReProvision",
"installed": "0.4.2",
"packageIndexed": false,
"packageStatusExplaination": "This tweak has not been reviewed. Please submit a review if you choose to install.",
"id": "com.repo.xarold.com.reprovision",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.5",
"shortDescription": "Re-sign applications on your device",
"latest": "0.4.2",
"author": "<NAME>",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": ""
}
```<issue_closed>
Status: Issue closed |
qlik-oss/core-website | 335764722 | Title: Write the use-cases/assisted-prescriptions.md page
Question:
username_0: Shall cover information on the use case Assisted Prescriptions. The outline structure is already there:
- [ ] Business Scenario
- [ ] Technologies
- [ ] Architecture (diagram)
- [ ] Source Code (links)
Status: Issue closed
Answers:
username_0: Closing as Won't Fix. We are directly linking into the GitHub repos for more detailed information. |
davisking/dlib | 356700661 | Title: COMPILE_TIME_ASSERTS fail when referencing dlib as a shared library on OSX
Question:
username_0: ## Expected Behavior
According to the instructions (http://dlib.net/compile.html - Using dlib from C++), and following the tutorial (https://github.com/davisking/dlib/blob/master/examples/CMakeLists.txt) I would expect to be able to reference dlib as a shared library when `make`-ing my c++ shared library.
## Current Behavior
cmake succeeds, `make` command fails with lots of `COMPILE_TIME_ASSERT`s failing. (Output attached)
[make-dlib-jni.txt](https://github.com/davisking/dlib/files/2347722/make-dlib-jni.txt)
Actually, they all seem to be the same assertion:
```
COMPILE_TIME_ASSERT( pixel_traits<out_pixel_type>::has_alpha == false );
```
## Steps to Reproduce
### Attempt 1:
1. clone dlib from github
2. cd dlib && mkdir build && cd build && cmake && make && sudo make install
3. Create a new local repo (dlib-jni in this case).
4. Use the following CMakeLists.txt:
```
cmake_minimum_required(VERSION 2.8.12)
project(dlib-jni)
set(CMAKE_CXX_STANDARD 11)
add_subdirectory(../dlib dlib_build)
include_directories(src)
message(STATUS "Using dlib-${dlib_VERSION}")
find_package(OpenCV 3 REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})
find_package(JNI REQUIRED)
message (STATUS "JNI_INCLUDE_DIRS=${JNI_INCLUDE_DIRS}")
message (STATUS "JAVA_INCLUDE_PATH =${JAVA_INCLUDE_PATH}")
message (STATUS "JNI_LIBRARIES=${JNI_LIBRARIES}")
message (STATUS "JAVA_JVM_LIBRARY=${JAVA_JVM_LIBRARY}")
include_directories(${JNI_INCLUDE_DIRS})
add_library(dlib_jni SHARED src/dlib-jni.cpp)
target_link_libraries(dlib_jni dlib::dlib opencv_core opencv_highgui)
```
5. Run the cmake command: `cmake -DOpenCV_DIR=/usr/local/opt/opencv/share/OpenCV/OpenCVConfig.cmake ../`
Output:
```
-- The C compiler identification is AppleClang 9.1.0.9020039
-- The CXX compiler identification is AppleClang 9.1.0.9020039
-- Check for working C compiler: /Library/Developer/CommandLineTools/usr/bin/cc
-- Check for working C compiler: /Library/Developer/CommandLineTools/usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /Library/Developer/CommandLineTools/usr/bin/c++
-- Check for working CXX compiler: /Library/Developer/CommandLineTools/usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
[Truncated]
include_directories(${JNI_INCLUDE_DIRS})
add_library(dlib_jni SHARED src/dlib-jni.cpp)
target_link_libraries(dlib_jni dlib::dlib opencv_core opencv_highgui)
```
I can attach my dlib-jni.cpp file if you like, but it's _extremely_ hacky. Like, horribly so. I'm not a C++ programmer really, and am just trying to get a POC working.
* **Version**: 19.15.99
* **Where did you get dlib**: First attempt, from github master. Second attempt, from brew
* **Platform**: MacOS High Sierra 10.13.6
* **Compiler**:
Obtained using `g++ --version`
```
Apple LLVM version 9.1.0 (clang-902.0.39.2)
Target: x86_64-apple-darwin17.7.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin
```
Also:
`cmake version 3.12.1`
Answers:
username_0: Ok, I'm closing this. It was a coding error. 🙄
For those who care, I had done this:
```
dlib::cv_image<rgb_pixel_apha> img(image); // this image can be used inside dlib
```
But everything else was templated to `rgb_pixel`, eg:
```
std::vector<dlib::matrix<rgb_pixel>> faces;
for (auto face : detector(img))
...
```
Status: Issue closed
|
xforce/anno1800-mod-loader | 988332259 | Title: Cannot start the game with Mod Loader 8.02 as always advise the application was unable to start (0xc000142) Please Help
Question:
username_0: Hi Team:
Thank you for continuing to update the Mod Loader.
Since Mod Loader 7.2, after I install the Mod Loader the system always tells me the application was unable to start correctly (0xc000142), but I can start the game perfectly without the Mod Loader. My PC has an AMD Ryzen 7 5800X CPU with a GTX 3800 graphics card, which I suppose is above the minimum system requirements; I saw comments before that some old CPUs were not supported by older Mod Loader versions. Please kindly help. Thank you.
Answers:
username_1: I think 0xc000142 happens when a dependency is missing, did you install the vcredist?
username_0: Hi Xforce.
Thanks for the reply. Yes, I followed the instructions and installed the vcredist already, but it still shows the (0xc000142) error. Thanks. |
bcgov/entity | 497109055 | Title: Director Name Change - Output/PDF
Question:
username_0: ### Director Name Change - Output/PDF
## Description:
Implement output for free director name change filing. Includes display in filing history.
**Dependencies**
**Acceptance Criteria**
See epic.
**Validation Rules**
Ready to Build (DoR):
- [ ] Stakeholders have approved
- [ ] User story completed
- [ ] What are the dependencies
- [ ] Validation rules defined (UI, Data, Role-Action)
- [ ] Is a formal UAT required
Acceptance / DoD:
- [ ] Design / Solution accepted by Product Owner
- [ ] Acceptance criteria has been defined (happy path, known sad paths)
- [ ] Test coverage acceptable
- [ ] Peer Reviewed
- [ ] Accessibility reviewed and acceptable [checklist](https://github.com/bcgov/entity/docs/coding-standards/accessibility.md)
- [ ] UX Approved
- [ ] PR Accepted
- [ ] Production burn in completed<issue_closed>
Status: Issue closed |
sherlock-project/sherlock | 543364584 | Title: Add search "Windy"
Question:
username_0: ```
"windy": {
"errorType": "status_code",
"rank": 1948,
"url": "https://community.windy.com/user/{}",
"urlMain": "https://community.windy.com/",
"username_claimed": "blue",
"username_unclaimed": "noonewouldeverusethis7"
},
```<issue_closed>
Status: Issue closed |
NathanKloer/SciFly | 408623361 | Title: Submit Button in Cart is Failing
Question:
username_0: When I click on the Submit button in the cart, no further action happens. Clicking Submit in the cart does not result in the next step of confirmation.
STEPS to reproduce:
1. Login to the site
2. Add 1 product to cart
3. Do not change quantity
4. Click on Submit,
**Expected Result**: User moves on to Confirmation Page
**Actual Result:** Nothing happens.<issue_closed>
Status: Issue closed |
kubernetes/ingress-gce | 394592968 | Title: Feature Request - Configuration via ConfigMap
Question:
username_0: I would like to be able to inject the ingress configuration that is usually gathered from static annotations through a ConfigMap instead. I am happy to look into implementing this myself if this would be an acceptable addition. I was hoping that in the Ingress I could write:
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  configMapName: ingress
  ...
```
where the configmap is
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress
data:
  kubernetes.io/ingress.global-static-ip-name: "my-ip-name"
  kubernetes.io/ingress.allow-http: "false"
```
This would allow me to reuse the same declarative Gitops repo and simply terraform a new cluster and static IP, then have terraform inject the IP name as config.
Answers:
username_1: @username_0 I'm not sure your proposal makes sense. IIUC, this would require a change to the Ingress API to support the "configMapName" field? This is definitely a non-starter.
Status: Issue closed
username_0: @username_1 That's ok, we could have added this functionality in an annotation instead of in the `spec` field e.g. `gce.ingress/configmap: ingress`. I have decided to go with the NGINX ingress instead, it seems a little more mature and configurable and isn't leaking resources as we rebuild. Thank you for the reply :D |
ng-alain/ng-alain | 397209947 | Title: Performance issue on g2-bar
Question:
username_0: - ng-alain version: X.Y.Z
- Angular version: X.Y.Z
## Reproduction link
https://ng-alain.github.io/ng-alain/#/dashboard/analysis
## Steps to reproduce
1. Open https://ng-alain.github.io/ng-alain/#/dashboard/analysis
2. You will see the "Stores Sales Trend" bar chart
3. Click "Visits" to switch to "Visits Trend"; it takes more than 10 seconds to render the bar chart.
Answers:
username_1: You are right. Alternatively, you can use the [delay](https://ng-alain.com/chart/bar/en#g2-bar) property to delay rendering of the g2-bar.
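For example, roughly like this (just a sketch; keep your chart's existing inputs and only add `delay`, which is the documented property):
```
<!-- delay rendering of the chart by 100ms; other inputs omitted -->
<g2-bar [delay]="100"></g2-bar>
```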
Status: Issue closed
|
mori0091/cparsec3 | 818769424 | Title: [pug-lang] '!' (complement / not) operator causes "Type mismatch" error
Question:
username_0: **Describe the bug**
`! x` causes a "Type mismatch" error in some cases even if the type of `x` is `bool` or `int`.
**To Reproduce**
Both of the following code snippets cause a "Type mismatch" error:
~~~
// case (1)
(|x| !x) 1 ;
~~~
~~~
// case (2)
(|x| !x) true;
~~~
**Expected behavior**
In case (1), it shall be `-2` of type `int`.
In case (2), it shall be `false` of type `bool`.
**Development environment (please complete the following information):**
- OS and version: Ubuntu 20.04 on WSL
- Compiler and version: gcc 9.3.0
**Additional context**
None.<issue_closed>
Status: Issue closed |
WIPACrepo/pyglidein | 226294670 | Title: get glidein logs
Question:
username_0: We'd like to get the glidein logs:
- stdout/stderr of glidein
- condor startd, master
and upload them to a central server.
I'd like to use HTTP PUT, maybe with basic authentication (could use the pool password if you wanted). That is simple enough that it should always work, without requiring cvmfs or anything installed on the worker node.
Answers:
username_1: Potential Solution:
Run [Minio](https://minio.io/) in a Docker container at IceCube. Minio acts as an open source version of S3. A Glidein site would be provided with a key and secret that they could put into their configuration file.
When [submit](https://github.com/WIPACrepo/pyglidein/blob/master/submit.py) is run on the client a signed POST URL is generated using the [minio python bindings](https://docs.minio.io/docs/python-client-api-reference). This URL is shipped as input with the job.
The glidein bash script would have a trap [here](https://github.com/WIPACrepo/pyglidein/blob/2572bb5e79e6f592acfe4d42c07ec8d6182a87de/glidein_start.sh#L201) that would tar up the startd logs and upload them to the minio server at IceCube.
At IceCube a second process on the pyglidein server would be started called log_importer. This would use the minio python bindings to watch activity on buckets. When a new file arrives, the service would download the file and use the ElasticSearch python bindings to do a bulk import of the data. A good example of this in action is [here](http://unroutable.blogspot.co.uk/2015/03/quick-example-elasticsearch-bulk-index.html).
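Something along these lines should work for the presigned-URL step with the minio Python bindings (the endpoint, bucket, and object names below are just placeholders):
```
from datetime import timedelta
from minio import Minio

# Site credentials would come from the glidein site's configuration.
client = Minio('minio.example.org:9000',
               access_key='SITE_ACCESS_KEY',
               secret_key='SITE_SECRET_KEY',
               secure=True)

# Presigned PUT URL the glidein can use to upload its tarred startd logs;
# make it valid long enough to cover the glidein's lifetime.
put_url = client.presigned_put_object('startd-logs',
                                      'site-a/glidein-12345.tar.gz',
                                      expires=timedelta(days=2))
```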
username_0: Note that you don't technically need the ElasticSearch python bindings if they become hard to work with. As an example, here is the [inserter code](https://gist.github.com/username_0/e40e0e2deb82639333a2670fb5040798) from iceprod.
username_1:  [Send Pyglidein StartD Logs Back to IceCube](https://trello.com/c/1lhcJ7rE/6-send-pyglidein-startd-logs-back-to-icecube)
username_1: #112
username_1: The team talked this afternoon after reviewing the logging code. One idea that came up was to inject the URL of the uploaded log file into a classad that got shipped home.
username_0: Note for running multiple minio instances behind a reverse proxy:
https://github.com/krishnasrinivas/cookbook/blob/68b6dab51f557ed437449104970abcf3bacf4b7b/docs/multi-tenancy-in-minio.md
username_1: My first attempt at this uses presigned put and get S3 urls generated by the client process at each grid site. Each site would have to add a `[StartdLogging]` section to their configuration that includes three variables:
1. `send_startd_logs`: This can be set to True or False
2. `url`: The S3 endpoint URL. This can either be AWS or a Minio instance.
3. `bucket`: The name of the bucket that the log files should go to.
I added a new client flag called `--secrets` to the client command. It defaults to `.pyglidein_secrets` if not set by the user. The file is configured the same way as the config file, but should only contain secrets. The reason for pulling secrets out of the configs is to ensure users don't push secrets to the pyglidein repo. When StartdLogging is enabled the secrets file should also contain a `[StartdLogging]` section with these variables:
1. `access_key`: S3 Access Key
2. `secret_key`: S3 Secret Key
For each job the client submits to a cluster, it generates a presigned put and get url. These are passed as environment variables to the job. A `log_shipper` script is forked at job start time on the execute node that tars up the log directory and uploads the file to the S3 endpoint every five minutes. The glidein start script now respects SIGTERM and SIGINT. The condor process is killed and one more log shipment is run after receiving a SIGTERM or SIGINT from the scheduler.
A `PRESIGNED_GET_URL` classad is injected into each glidein startd using the `STARTD_ATTRS` expression. The classad can be accessed in the condor history file for debugging issues after a crash.
To create the IAM user, S3 Bucket, and Policy in AWS for shipping logs I created a cloudformation template that generates these resources. This template could be invoked for each site that wants to send logs. This ensures each site has its own set of credentials and permissions to write to a single S3 bucket in AWS. https://github.com/WIPACrepo/pyglidein/blob/logging/cloud_formations/logging_bucket.json The bucket life-cycle is set to delete files older than 90 days so the size of the bucket doesn't get out of control.
In the event of the site going away the entire cloudformation could be deleted causing all the resources that were created to be deleted as well.
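For illustration, the two files might look something like this (values are placeholders):
```
# cluster config
[StartdLogging]
send_startd_logs = True
url = https://s3.amazonaws.com
bucket = pyglidein-site-a-logs
```
```
# .pyglidein_secrets
[StartdLogging]
access_key = EXAMPLE_ACCESS_KEY
secret_key = EXAMPLE_SECRET_KEY
```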
Status: Issue closed
|
rathoresrikant/HacktoberFestContribute | 368762022 | Title: Detecting a loop in a linked list
Question:
username_0: Details
Programming languages : C, C++, Java, Python
Explanation : Yes
Directory : Data Structures
Please refrain from using plagiarised code. Happy coding !
Answers:
username_1: On it.
username_2: Can I take this?
username_3: resolved in c++ with explanation
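For reference, the usual approach is Floyd's tortoise-and-hare cycle detection; a rough C++ sketch:
```
struct Node { int data; Node* next; };

// Advance one pointer by 1 and another by 2; if they ever meet,
// the list contains a loop. O(n) time, O(1) extra space.
bool hasLoop(Node* head) {
    Node* slow = head;
    Node* fast = head;
    while (fast != nullptr && fast->next != nullptr) {
        slow = slow->next;
        fast = fast->next->next;
        if (slow == fast) return true;
    }
    return false;  // fast reached the end, so there is no loop
}
```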
Status: Issue closed
|
docksal/ci-agent | 449956964 | Title: Docksal CI Agent on GitLab issues
Question:
username_0: I used https://github.com/docksal/ci-agent/blob/develop/README.md as instructions to set up the Docksal CI Agent on GitLab. I have a sandbox server in DigitalOcean.
1. I set up `DOCKSAL_HOST_SSH_KEY` as a variable in the GitLab CI/CD Settings, which contains the *private* key for the sandbox server's `build` account. Is this correct? I have this:

2. There is no GitLab example in [Project configuration](https://github.com/docksal/ci-agent/blob/develop/README.md#project-configuration). I suggest adding such an example to the `README.md`.
I use the following `.gitlab-ci.yml`:
```
build_test:
image: docksal/ci-agent:php
script:
- source build-env # Initialize the agent configuration
- build-init # Initialize the remote sandbox environment
- build-exec "cd www && fin drush st" # Test performing within the build
when: manual
```
Answers:
username_1: @username_0 there is an open issue and a PR for the GitLab CI configuration example:
https://github.com/docksal/ci-agent/issues/14
https://github.com/docksal/ci-agent/pull/19
I don't have a GitLab instance in use, so that PR has been on hold for a while. Check it out and chime in on the implementation.
Regarding `DOCKSAL_HOST_SSH_KEY` - it has to be base64 encoded.
See https://github.com/docksal/ci-agent#required
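On Linux, for example, you could generate the value with something like this (the key path is just an example):
```
base64 -w 0 < ~/.ssh/docksal_build_key
```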
Status: Issue closed
username_1: GitLab config examples have been added - https://github.com/docksal/ci-agent/pull/58 |
BioPhoton/css-star-rating | 254210270 | Title: "Half stars" not working although set ".half" class
Question:
username_0: First of all, thanks for your library. It's awesome and I really like it.
But when I tried your demo as below, although the ".half" class is set, the rating value does not show a half star at the end.
```
<div class="rating large star-icon direction-rtl value-1 half color-default label-top">
<div class="label-value">1.5</div>
<div class="star-container">
<div class="star">
<i class="star-empty"></i>
<i class="star-half"></i>
<i class="star-filled"></i>
</div>
<div class="star">
<i class="star-empty"></i>
<i class="star-half"></i>
<i class="star-filled"></i>
</div>
<div class="star">
<i class="star-empty"></i>
<i class="star-half"></i>
<i class="star-filled"></i>
</div>
</div>
</div>
```
I also attach the output of the above HTML code.

Did I miss something? Please help me resolve it.
Thanks and best regards,
Status: Issue closed
Answers:
username_1: take a look at the css doc. I put it online now => https://biophoton.github.io/css-star-rating/section-1.html
Hope this helps.
best Michael
username_2: Same problem. I cannot figure out from the documentation how to solve it.
GoogleCloudPlatform/cloud-spanner-r2dbc | 442875227 | Title: Consider tracking transaction state in the SpannerConnection
Question:
username_0: The state of the active transaction can be tracked in the `SpannerConnection` if we find this necessary. Tracking is useful if we want to make repeated calls to `beginTransaction()` a no-op if a transaction is already active.
Tracking was achieved by the client library using following rules:
- `Commit() or Rollback() is called`: Transaction is in a terminal state
- `Exception is thrown in middle of transaction`: Transaction is in aborted state
- `beginTransaction() is called`: Transaction is in started state
Answers:
username_0: Going to close this; seems like it is ultimately unnecessary to achieve functionality of the driver.
Status: Issue closed
|
fastbuild/fastbuild | 168525340 | Title: .lib Node built with lib.exe mistakenly being detected as Exe node instead of Library node
Question:
username_0: We're working from the FastBuild for UE4 wiki (https://github.com/fastbuild/fastbuild/wiki/fastbuild-for-Unreal-Engine-4), which we've modified a bit, but are still struggling to make it work properly. We're on a branch of UE4 4.12.5.
We're getting this error:
```
29> BFF file 'D:\dev\UE4\UnrealEngine\Engine\Intermediate\Build\fbuild.bff' has changed (reparsing will occur).
29> D:\dev\UE4\UnrealEngine\Engine\Intermediate\Build\fbuild.bff(23,1): FASTBuild Error #1005 - DLL() - Unsupported node type in 'Libraries'. (Node: 'D:\dev\UE4\UnrealEngine\Engine\Intermediate\Build\Win64\UE4Editor\Development\UE4Editor-UnrealEd.lib', Type: 'Exe')
29> DLL('DLL-2')
```
And here's the related excerpt from the fbuild.bff file:
```
DLL('DLL-1')
{
.Linker = '$VSBasePath$/bin/amd64/lib.exe'
.LinkerOutput = 'D:\dev\UE4\UnrealEngine\Engine\Intermediate\Build\Win64\UE4Editor\Development\UE4Editor-UnrealEd.lib'
.Libraries = 'D:\dev\UE4\UnrealEngine\Engine\Intermediate\Build\Win64\UE4Editor\Development\UE4Editor-UnrealEd.lib.response'
.LinkerOptions = ' /NOLOGO /errorReport:prompt /MACHINE:x64 /SUBSYSTEM:WINDOWS /DEF /NAME:"UE4Editor-UnrealEd.dll" /LIBPATH:"ThirdParty/Ogg/libogg-1.2.2/lib/Win64/VS2015" /LIBPATH:"ThirdParty/Vorbis/libvorbis-1.3.2/Lib/win64/VS2015/" /LIBPATH:"ThirdParty/Vorbis/libvorbis-1.3.2/Lib/win64/VS2015/" /LIBPATH:"ThirdParty/Windows/DirectX/Lib/x64" /LIBPATH:"ThirdParty/HACD/HACD_1.0/lib/Win64/VS2015/" /LIBPATH:"ThirdParty/VHACD/lib/Win64/VS2015/" /LIBPATH:"ThirdParty/FBX/2016.1.1/lib/vs2015/x64/release/" /LIBPATH:"ThirdParty/FreeType2/FreeType2-2.6/Lib/Win64/VS2015" /LIBPATH:"ThirdParty/PhysX/PhysX-3.3/lib/Win64/VS2015" /LIBPATH:"ThirdParty/PhysX/APEX-1.3/lib/Win64/VS2015" /LIBPATH:"../Binaries/ThirdParty/libsndfile/Win64" /NODEFAULTLIB:"LIBCMT" /NODEFAULTLIB:"LIBCPMT" /NODEFAULTLIB:"LIBCMTD" /NODEFAULTLIB:"LIBCPMTD" /NODEFAULTLIB:"MSVCRTD" /NODEFAULTLIB:"MSVCPRTD" /NODEFAULTLIB:"LIBC" /NODEFAULTLIB:"LIBCP" /NODEFAULTLIB:"LIBCD" /NODEFAULTLIB:"LIBCPD" @"%1" /OUT:"%2"'
}
DLL('DLL-2')
{
.Linker = '$VSBasePath$/bin/amd64/link.exe'
.LinkerOutput = 'D:\dev\UE4\UnrealEngine\Engine\Binaries\Win64\UE4Editor-Engine.dll'
.Libraries = { 'DLL-1' }
.LinkerOptions = ' /MANIFEST:NO /NOLOGO /DEBUG /errorReport:prompt /MACHINE:x64 /SUBSYSTEM:WINDOWS /FIXED:No /NXCOMPAT /STACK:5000000 /DELAY:UNLOAD /DLL /PDBALTPATH:%_PDB% /OPT:NOREF /OPT:NOICF /INCREMENTAL:NO /ignore:4199 /ignore:4099 /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"PhysX3PROFILE_x64.dll" /DELAYLOAD:"PhysX3CookingPROFILE_x64.dll" /DELAYLOAD:"PhysX3CommonPROFILE_x64.dll" /DELAYLOAD:"nvToolsExt64_1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"APEXFrameworkPROFILE_x64.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"libogg_64.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"libvorbis_64.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"libvorbisfile_64.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /DELAYLOAD:"vulkan-1.dll" /LIBPATH:"ThirdParty/PhysX/PhysX-3.3/lib/Win64/VS2015" /LIBPATH:"ThirdParty/PhysX/APEX-1.3/lib/Win64/VS2015" /LIBPATH:"D:\dev\UE4\UnrealEngine\Engine\Source\ThirdParty\Box2D\Box2D_v2.3.1\build\vs2015\bin\x64\Release" /LIBPATH:"ThirdParty/Ogg/libogg-1.2.2/lib/Win64/VS2015" /LIBPATH:"ThirdParty/Vorbis/libvorbis-1.3.2/Lib/win64/VS2015/" /LIBPATH:"ThirdParty/Vorbis/libvorbis-1.3.2/Lib/win64/VS2015/" /LIBPATH:"ThirdParty/libOpus/opus-1.1/win32/VS2012/x64/Release/" /LIBPATH:"ThirdParty/DirectShow/DirectShow-1.0.0/Lib/Win64/vs2015/" /NODEFAULTLIB:"LIBCMT" /NODEFAULTLIB:"LIBCPMT" /NODEFAULTLIB:"LIBCMTD" /NODEFAULTLIB:"LIBCPMTD" /NODEFAULTLIB:"MSVCRTD" /NODEFAULTLIB:"MSVCPRTD" /NODEFAULTLIB:"LIBC" /NODEFAULTLIB:"LIBCP" /NODEFAULTLIB:"LIBCD" /NODEFAULTLIB:"LIBCPD" @"D:\dev\UE4\UnrealEngine\Engine\Intermediate\Build\Win64\UE4Editor\Development\UE4Editor-Engine.dll.response" /OUT:"%2" /IMPLIB:"D:\dev\UE4\UnrealEngine\Engine\Intermediate\Build\Win64\UE4Editor\Development\UE4Editor-Engine.lib" /PDB:"D:\dev\UE4\UnrealEngine\Engine\Binaries\Win64\UE4Editor-Engine.pdb" /ignore:4078 %1'
}
```
Is there a better way to go about this? It seems like a DLL node should be able to use a .lib/Library as a dependency. Unfortunately, to make DLL('DLL-1') into proper (e.g. Library('LIB-1')) looks like it would expect a compiler, and to actually compile the object files in the same FastBuild step as executing the librarian... instead, it seems to not like this because it uses lib.exe instead of link.exe + /DLL...
Answers:
username_1: FASTBuild annoyingly relies on output file extensions rather than simply respecting the wishes of the user. In general, its parsing of command lines and other "heuristic" behavior is the weakest part because it leads to confusion.
It's a great build system but it would be amazing if it was more general and only did what you tell it.
username_2: Do you have an example of this? In the case we're talking about here it doesn't seem to me that this is a factor at all. You can build a dll called thing.stuff if you want, but it has to be a DLL. You should also name the import lib whatever you want (/IMPORTLIB:) but if you omit the name, it will expect the default name as generated by the MSVC linker.
Is there some other case you've seen a problem where FASTBuild does rely on file extension in an unsafe or restrictive way? Perhaps that's a bug worthy of a new issue?
username_2: It seems the Unreal script (authored by cjsu) is making some bad assumptions about DLL() and how it functions (either due to lack of docs on my part, or perhaps some missing functionality). It seems like it needs to be changed to use Library() for static libraries.
I've raised an issue on cjsu's github to see if we can get to the bottom of this: https://github.com/cjsu/fastbuild-ue4/issues/1
username_3: Hi @username_0, I mentioned a workaround here (requires a small change to FBuild): https://github.com/cjsu/fastbuild-ue4/commit/37ccda5a5a0e3c22d71528e637ef43de32713c3e#commitcomment-15908076
That may be a temporary solution, the real issue is my misuse of DLL nodes in FBuild and not FBuild itself.
username_0: Thanks for the help @username_2 and @username_3! For now, using @username_3's modification for EXE nodes works well, though an updated version which does the right thing will be great!
One odd thing is that for single .cpp file changes, FastBuild is slower than UE4+VS. Even when skipping the UE4 build steps and passing the .bff file directly to FastBuild, it still takes considerably longer to build the .obj file. Linking seems equally fast, though.
In fact, in general, it seems like FastBuild builds objs considerably slower than the native approach, which is puzzling because the compiler toolchain is the same. I wonder whether it's some compiler options, but I haven't narrowed it down to anything specific yet.
Overall, for a full build, I'm seeing equal build times with FastBuild distribution and with UBT+VC14 native - I believe that if the object file build times weren't slower for FastBuild, it would be considerably faster than the native solution (especially with more nodes and caching). Does anyone have any thoughts here?
Thanks again!
username_2: This is strange. Certainly compiling with the same settings should yield at least the same performance. One thing I did notice in the UE4 script's documentation:
```C++
bUsePCHFiles = false; //temporary until compilation issue fixed
```
If this is indeed disabling the use of precompiled headers, that would likely have a very large negative impact on compile times.
username_3: Ah yes, PCH support is still not enabled in the version on GitHub. I have it working locally now so I'll include support for them in the version that adds proper Library usage. (Issue here: https://github.com/username_3/FASTBuild-UE4/issues/4).
If I remember correctly, enabling PCHs reduced our full rebuild time from 8 minutes to 5 minutes, but without FASTBuild/IncrediBuild, we had 45 minute build times.
I don't think that @username_0 's issue would be due to that though since I'm assuming she kept that option disabled for the non-fbuild test. I assume the slowness is on the first pass where the object gets built, could this be due to preprocessing and compiling from the preprocessed source being plain slower than compiling directly from cpp, at least in MSVC?
username_0: Actually, I have PCHs enabled for both FastBuild and non-FastBuild tests. UE4 doesn't build properly without PCHs (with or without FastBuild, and it was just too much work to get it to build properly, so we tried switching FastBuild to PCHs and that's what we're using :))
username_0: Currently, we completely disable FastBuild for single file rebuilds due to the slowness - we aren't using the cache yet but it is enabled, so I should also try disabling caching and distribution for single-file builds, just to see if that fixes it as you say, @username_3 :+1:
username_0: So, making the C# script check the number of actions and disable distribution and caching below a certain threshold seems to have fixed the issues with it taking a very long time to compile into object files. I've checked this into our mainline now so that we can always have FastBuild enabled without net slowdowns vs. native Unreal builds.
username_2: It would be nice to work out why you are getting worse performance in the single file rebuild case.
With the MS compiler, -dist and -cacheread have a small overhead, but mostly that shouldn't be noticeable.
-cachewrite (or -cache) on the other hand has a significant overhead with the MS compiler because it effectively disables the precompiled headers. This is because historically, sharing precompiled headers between computers was not possible. In newer VS versions this is possible, so this might be something that can be improved. This tradeoff is described here: http://fastbuild.org/docs/features/caching.html
With FASTBuild caching right now, with the MS compiler, the ideal case is to have your automated build system populate the cache (-cache), and your developers only read from the cache (-cachread).
Is it possible you are running in -cache or -cachewrite mode? You might find that instead of disabling the cache you can set it to -cacheread mode instead.
username_4: Hi!
After trying to narrow down this issue it looks like the performance loss might be coming from the fact that in the distributed build scenario (using -dist) the local compilation jobs are not using PCH and thus slower compared to non distributed builds.
Why?
Because the way we build the unreal bff in our Unreal FASTBuild integration (https://github.com/liamkf/Unreal_FASTBuild) is as follows:
ObjectList A
- Creates PCHHeader.pch
- Uses PCHHeader.pch
- Compiles: PCHHeader.cpp
- Outputs: PCHHeader.cpp.obj
ObjectList B
- Uses PCHHeader.pch
- Compiles: ObjectB.cpp
- Ouputs: ObjectB.cpp.obj
ObjectList C
- Uses PCHHeader.pch
- Compiles: ObjectC.cpp
- Ouputs: ObjectC.cpp.obj
In this setup, what happens in a distributed build is that ObjectB and ObjectC will not be considered as using a PCH (GetFlag( FLAG_USING_PCH ) == false), which means that in ObjectNode::DoBuildWithPreProcessor2() the following code
```
if ( GetFlag( FLAG_MSVC ) )
{
// If building a distributable/cacheable job locally, and
// we are not going to write it to the cache, then we should
// use the PCH as it will be much faster
if ( GetFlag( FLAG_USING_PCH ) &&
( FBuild::Get().GetOptions().m_UseCacheWrite == false ) )
{
usePreProcessedOutput = false;
}
}
```
Will result in usePreProcessedOutput == true, which will cause ObjectNode::BuildArgs() to strip the /Yu and /Fp args:
```
if ( pass == PASS_COMPILE_PREPROCESSED )
{
// Can't use the precompiled header when compiling the preprocessed output
// as this would prevent cacheing.
if ( StripTokenWithArg( "/Yu", token, i ) )
{
continue; // skip this token in both cases
}
if ( StripTokenWithArg( "/Fp", token, i ) )
{
continue; // skip this token in both cases
}
```
Here are some suggestions for potential fixes:
1) Group all objects using a specific PCH into one single ObjectList. This is how FASTBuild expects PCH usage, I believe.
So from our example above:
ObjectList A
- Creates PCHHeader.pch
- Uses PCHHeader.pch
- Compiles: PCHHeader.cpp, ObjectB.cpp, ObjectC.cpp
[Truncated]
usePreProcessedOutput = false;
}
}
```
I quickly tried option 2) which fixed our slow distributed build over 4 machines. We are building the UnrealTournament 4 code base.
A non-distributed build with PCHs was giving 12 minutes.
A distributed build **WITHOUT** the local PCH fix was giving 24 minutes.
A distributed build **WITH** the local PCH fix is giving around 12 minutes.
So it still doesn't seem that we are getting much benefits from the 3 remote workers.
Some potential explanations could be:
- The cost/benefit ratio of pre-processing/distribution is not positive in our case. The compilation happening on the remote workers is considerably slower than on the host (no pch, transfer in/out cost).
- It also looks like there might be some potential scheduling improvements that could be done to make work overlap better between workers. Maybe the fact that we are clearing the fdb cache (regenerating the bff) on every build and not using previous build metrics is not helping our case.
Finally, I'm attaching a snapshot of a visualizer we have been developing (integrated into Visual Studio). It allows us to see how work is spread out across the machines. I have added some comments to it as well.
We are working on finishing some last features + documentation before releasing it.

username_2: I've just merged some changes from the Perforce mainline (https://github.com/fastbuild/fastbuild/commit/9866da8d72ea0abd5f95227100263255eaf43cd8) which change the way that precompiled headers are managed when using the cache.
The changes mean that:
- Precompiled headers can now be cached
- Objects built using precompiled headers are now cached while still using the precompiled header (compiling faster and not adding link-time overhead)
- Running the cached in -cachewrite or -cache mode has no additional overhead compared to -cacheread mode
The end result of these changes is significantly improved compile and link times when using the cache (10%-30% in the cases I've seen)
These changes will be in the next version (v0.91) but if you build your own exe from the dev branch you can try them right away.
username_4: Nice work!
I have tried your changes and can confirm that PCHs and all OBJs using them are getting cached which speeds up the full rebuild (with no changes) quite a lot.
This is the monitor screenshot, with the pink timespans representing the cache hits; the light blue corresponds to the DLL linking actions.

I have also added a system graphs feature to the monitor which shows various performance stats.
It seems like when we are at 100% cache hits we are averaging at 50% of CPU time (I guess it's due to preprocessing work) and we are maxing out our IO (copying from the local cache to the build locations).
username_5: Wow, nice work and very nice monitor :)
Are you planning to release it at some point ?
username_4: @username_5 thanks! yes the plan is to release it soon as a VS extension. Hopefully it will be useful as a visualization/debugging/profiling tool for FASTBuild.
The required code hooks on the FASTBuild side should be minimal so it'll hopefully be an easy integration.
I'll start a new thread about this where we can discuss the integration details and get your feedback.
username_2: That visualizer is very nice! It's great to be able to see the scheduling across PCs. Would be great to spin up another ticket to talk about how to move forward on that as you suggested.
FASTBuild uses [LZ4](https://github.com/Cyan4973/lz4) for compression of the cache objects. It uses the default compression settings, which give around 2.1:1 compression. Depending on your use case, it could be interesting to expose a toggle that enables the High Compression mode. This is about 10% of the speed to compress, but decompresses at the same speed and provides around a ~2.7:1 compression ratio. I could see a case where a build machine is populating the cache and developers are consuming it that this might be an interesting trade-off. I'm also toying with the idea of adding a command line option to FASTBuild where it would iterate and re-compress the cache independently of any build as another way to approach this trade-off.
On the profiling front, just to make sure it's known if needed, if you are building your own FASTBuild executable, you can use the "Profile" build configuration to spit out a "profile.json" in the working dir which can be visualized in Chrome's "chrome://tracing" viewer. That can be interesting to make sure nothing is unexpectedly slow for a particular use-case. Here's an example of what that looks like:

PS - That Unreal link time is pretty unfortunate :(
username_0: @username_4 Awesome stuff!!! We've been planning to write our own visualizer, and by the looks of it, yours has pretty much everything we'd want, so it may just take that TODO off my task list if you do release it! :+1: If I can be of help along the way, please let me know!
username_0: @username_2 Thanks for the cache updates! Does this mean that it makes sense to have worker machines populating their own cache now, versus only enabling cachewrite on a build machine?
username_4: @username_0 Great to see that it would be useful to you.
A first version has been released already and I opened a ticket to discuss the integration with @username_2 here: https://github.com/fastbuild/fastbuild/issues/127
If you want to give it a shot you can find the vsix extension package and the setup instructions here: https://github.com/username_4/FASTBuildMonitor
username_2: @username_0
**> Does this mean that it makes sense to have worker machines populating their own cache now, versus only enabling cachewrite on a build machine?**
I guess it depends on the environment, but I tend to think having the build machines being the only writers to the cache is probably preferable, since local edits by users would generally be ephemeral and not that useful to put into the cache, so limiting bandwidth/disk-space usage of the cache is probably better than the cache improvement you may sometimes get.
This is how I have it setup.
(Note that FASTBuild does intelligently avoid caching some ephemeral object files already like those created if using UnityInputIsolateWritableFiles)
username_2: How does everyone feel about closing out this ticket? I think we've forked off new tickets for anything outstanding (like the Visualizer).
Status: Issue closed
|
UW-COSC-2030-FA-2017/hw1-fundamental-c-concept-EastonTuttle | 264425110 | Title: bool Collection<Type>::notContained(Type object) has slight logic bug
Question:
username_0: In the following code segment, as soon as the first `objects[i] == object` check occurs the function will return either true or false. The way it's currently written, you're only checking for the existence of an object in the first (0th) position. You simply need to remove the else block from here.
```
for (int i = 0; i <= size; i++) {
if (objects[i] == object){
return false;
}
else{
return true;
}
}
``` |
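For reference, a corrected version might look like this (just a sketch; it also tightens the loop bound on the assumption that `size` is the number of stored elements):
```
template <typename Type>
bool Collection<Type>::notContained(Type object) {
    // Scan the whole collection before concluding the object is absent.
    for (int i = 0; i < size; i++) {
        if (objects[i] == object) {
            return false;  // found a match, so it is contained
        }
    }
    return true;  // no element matched
}
```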
openthread/openthread | 156099277 | Title: astyle issue under ubuntu
Question:
username_0: When running 'make pretty' under Ubuntu, an error occurs, complaining as below:
Invalid option file options:
style="allman"
the environment I am using is
Artistic Style Version 2.04
Ubuntu 14.04.4 LTS
the command works after making a small modification to the file .astyle-opts:
--style="allman" ===> --style=allman
Answers:
username_1: Fixed with #77.
Status: Issue closed
|
spesmilo/electrum | 283691868 | Title: Command line electrum restore expects you to be connected to a network; this doesn't work for cold storage
Question:
username_0: using:
python3 electrum restore -w /var/cold/mywallet.dat --testnet xprv9y..... tries to access the network for blockchain headers
This doesn't work on a machine that is disconnected from all networks to be used for cold storage.
It should not need access to the network, especially because if this same operation is attempted through the UI, it works fine and a wallet file is created.
This seems to be a bug
Answers:
username_1: Try with the `--offline` flag.
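e.g., combining it with your original command:
```
python3 electrum restore -w /var/cold/mywallet.dat --testnet --offline xprv9y.....
```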
Status: Issue closed
username_0: Yep that did the trick. It would be great if that switch was in the help text: python3 electrum help
Thanks for the quick reply!
username_1: Not sure; maybe. It's only useful for some of the commands.
If you run `electrum help restore`, it will tell you about it. |
ex-aws/ex_aws | 276830691 | Title: list_endpoints_by_platform_application ?
Question:
username_0: I am trying to use:
ExAws.SNS.list_endpoints_by_platform_application
however, I get this error:
```
** (UndefinedFunctionError) function ExAws.SNS.list_endpoints_by_platform_application/1 is undefined or private. Did you mean one of:
* list_platform_applications/0
* list_platform_applications/1
```
isn't list_endpoints_by_platform_application implemented?
Also, how come I can't find the `sns.ex` file in the repo?!
Answers:
username_0: After a while, I could retrieve results as:
ExAws.SNS.list_endpoints_by_platform_application(platform_application_arn,[])
however, the User Data is not retrieved, which is shown in the AWS console as User Data:

Any idea please?
username_1: Please follow the
Status: Issue closed
|
telefonicaid/lwm2m-node-lib | 59488829 | Title: Complete error-related test cases
Question:
username_0: There are some test cases described in the test suite but not implemented (regarding errors in client-side information management and not-found resources). They should be implemented to strengthen the test suite.
madMAx43v3r/mmx-node | 1088985883 | Title: Windows Native Support/GUI Wallets
Question:
username_0: Will you add native support for Windows and a graphical wallet? I think this is a great thing for wider accessibility among people who don't want to/know how to use linux/command line stuff but still want to trade or farm.
Always good to see new coins popping up.
Answers:
username_1: Yeah there will be a GUI for sure, it'll take some time to develop though.
username_2: Hi @username_1, interesting project. I made a simple design for the MMX GUI. [Figma Link](https://www.figma.com/file/cqcqWPrfqlDPHYNqBTvOSD/mmx-gui?node-id=0%3A1).
If you think it is useful, I can keep going and try to finish the design.
I have some frontend experience; if time permits, I can also try to implement it via Electron.

username_3: @username_2 Wow, nice work buddy. Looking forward to seeing this GUI released. @username_1 the backend is heavy enough; will you consider supporting third-party GUIs at the architecture/programming level?
username_1: I have a built-in HTTP server with JSON API, it's not finished yet, but eventually that's what GUIs will and can use.
username_3: @username_1 Awesome, great to know. |
yWorks/yGuard | 695045414 | Title: Research additional obfuscation techniques
Question:
username_0: There are many possible ways of obfuscation. It would be good to create a table / chart in the documentation which ones exist and are supported by yGuard. Inspiration could be drawn from [this paper](https://www.sciencedirect.com/science/article/pii/S0167404816300529). The scope of this ticket is to create a new documentation page with mentioned details. |
aws-amplify/amplify-js | 509848924 | Title: ServiceWorker enablePush is not a function
Question:
username_0: Totally noob on SW here!
I'm trying, [as the docs say](https://aws-amplify.github.io/docs/js/service-workers), to register a service worker and enable push notifications to use it with the PinPoint service.
My setup is nuxt with aws-amplify installed (also tried aws-amplify/core as I read somewhere that there's no need to install the full library)
Library version is 1.2.2
Code is as simple as follows:
```
const serviceWorker = new ServiceWorker()
const registeredServiceWorker = await serviceWorker.register('mySW.js', '/')
registeredServiceWorker.enablePush('PinpointID')
```
and it's executed after the login happens.
Error I get:
```
index.js:174 Uncaught (in promise) TypeError: registeredServiceWorker.enablePush is not a function
at VueComponent._callee$ (index.js:174)
at tryCatch (commons.app.js:7744)
at Generator.invoke [as _invoke] (commons.app.js:7970)
at Generator.prototype.<computed> [as next] (commons.app.js:7796)
at asyncGeneratorStep (commons.app.js:73)
at _next (commons.app.js:95)
```
I'm absolutely stuck. Any ideas?
Thank you!
Answers:
username_1: @username_0, it looks like a bug in our documentation, `enablePush` should be called on `serviceWorker` and not on the registered object. can you try the following code and let me know if it works for you
```
const serviceWorker = new ServiceWorker()
const registeredServiceWorker = await serviceWorker.register('mySW.js', '/')
serviceWorker.enablePush('PinpointID')
```
username_0: Hi Amp,
Just found it by digging through the source code, finally. Thank you anyway. I'll try to make a PR to fix that.
Also, enablePush requires a public_key, but if you use the React Native version it uses the appId from Pinpoint (my guess), so I don't see how it matches with Pinpoint. I cannot see where to get that public key, so I can't see how to send push notifications to my PWA with Nuxt.
Any light would be appreciated. I feel like I'm falling into a rabbit hole...
Status: Issue closed
username_1: @username_0, I'm not sure I completely understand your question. You need an applicationServerKey, as mentioned in this [article](https://developers.google.com/web/fundamentals/codelabs/push-notifications#subscribe_the_user), to send push notifications from a server (the public key is assigned to applicationServerKey in the library)
Analytics(pinpoint) doesn't use this public_key. It just sends/records an analytic event whenever the service worker state is changed. |
OBOFoundry/pipeline-mp | 447697972 | Title: Long standing "Waiting for next available executor"
Question:
username_0: In the build records of the job on the Jenkins server, for builds that have been running for quite a while, I can see this message:
```
08:01:30 Still waiting to schedule task
08:01:30 Waiting for next available executor
```
Is this a resource limitation problem?
Answers:
username_1: Odd as there are two executors and nothing actually running.
username_1: Okay, I think this is cleared.
I am, however, unclear as to why the artifacts do not seem to be changing run to run. This may be a product of something I do not understand in the ODK, however.
I might suggest a uuid in ontology files as the date may not have the granularity to detect run changes.
Status: Issue closed
|
saltstack/salt | 102901959 | Title: compound match consisting of only nodegroup fails.
Question:
username_0: I'd expect all 3 of the salt commands below to reach the same minions, however the one consisting of a compound matcher with only a nodegroup fails to reach any minions. Master and all minions are on 2015.5.3.
```
<EMAIL>[salt]:~ $ sudo salt --out=yaml -C 'N@sto' test.ping
No minions matched the target. No command was sent, no jid was assigned.
{}
<EMAIL>[salt]:~ $ sudo salt --out=yaml -N sto test.ping
shss7: true
shss365: true
shss91: true
shss90: true
shss362: true
shss144: true
shss135: true
shss323: true
shss325: true
shss98: true
shss132: true
shss131: true
<EMAIL>[salt]:~ $ grep sto: /etc/salt/master.d/nodegroups.conf
sto: L@shss7,shss31,shss69,shss90,shss91,shss98,shss99,shss131,shss132,shss135,shss144,shss145,shss213,shss293,shss323,shss324,shss325,shss327,shss362,shss363,shss365
<EMAIL>[salt]:~ $ sudo salt --out=yaml -C 'L@shss7,shss31,shss69,shss90,shss91,shss98,shss99,shss131,shss132,shss135,shss144,shss145,shss213,shss293,shss323,shss324,shss325,shss327,shss362,shss363,shss365' test.ping
shss7: true
shss365: true
shss362: true
shss90: true
shss91: true
shss144: true
shss135: true
shss323: true
shss325: true
shss98: true
shss132: true
shss131: true
<EMAIL>@<EMAIL>[salt]:~ $ sudo salt --versions-report
Salt: 2015.5.3
Python: 2.6.6 (r266:84292, Jan 22 2014, 09:42:36)
Jinja2: 2.2.1
M2Crypto: 0.20.2
msgpack-python: 0.1.13
msgpack-pure: Not Installed
pycrypto: 2.0.1
libnacl: Not Installed
PyYAML: 3.10
ioflo: Not Installed
PyZMQ: 14.3.1
RAET: Not Installed
ZMQ: 4.0.4
Mako: Not Installed
Tornado: Not Installed
```
Answers:
username_0: My plan of using nodegroups with lists of hosts for role management kinda depends on this working... :)
username_1: @username_0, thanks for the report.
username_0: also, related to nodegroups, I have a complicated compound nodegroup that references a bunch of other nodegroups, which all eventually render down to lists of hostnames. When I do
```
salt -N complex_nodegroup test.ping
```
I get the expected result. However, when I define a pillar using
```
base:
complex_nodegroup:
- match: nodegroup
- example_data
```
and then query it using
```
salt -N complex_nodegroup saltutil.refresh_pillar
salt -N complex_nodegroup pillar.get example_data
```
The correct servers return nothing. Less complex nodegroups seem to work just fine.
Status: Issue closed
username_2: Closing per the last comment here.
username_0: Hmm, I expressed myself poorly. I've deleted the misleading comment. I can still reproduce this issue.
username_1: @username_0, testing for something like this would be complex to insert into our test runner, although it could be done. I added the label thinking that we should look at it to also possibly add to our manual tests. |
asmeurer/pytest-flakes | 703419390 | Title: error running a __init__.py only
Question:
username_0: I noticed there are a few cases involving `__init__.py` files and the flakes pytest plugin that result in the following error:
```
$ pytest --flakes my_repo/__init__.py
=============================== test session starts ================================
platform linux -- Python 3.8.0, pytest-6.0.2, py-1.9.0, pluggy-0.13.1
rootdir: /path/to/my_repo, configfile: pyproject.toml
plugins: flakes-4.0.1
collected 0 items / 1 error
====================================== ERRORS ======================================
ERROR collecting test session
.venv/lib/python3.8/site-packages/_pytest/runner.py:294: in from_call
result = func() # type: Optional[_T]
.venv/lib/python3.8/site-packages/_pytest/runner.py:324: in <lambda>
call = CallInfo.from_call(lambda: list(collector.collect()), "collect")
.venv/lib/python3.8/site-packages/_pytest/main.py:576: in collect
yield from self._collect(fspath, parts)
.venv/lib/python3.8/site-packages/_pytest/main.py:662: in _collect
yield next(iter(m[0].collect()))
.venv/lib/python3.8/site-packages/_pytest/nodes.py:457: in collect
raise NotImplementedError("abstract")
E NotImplementedError: abstract
============================= short test summary info ==============================
ERROR - NotImplementedError: abstract
!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!
================================= 1 error in 0.12s =================================
```
Status: Issue closed
Answers:
username_0: It seems to be a pytest error that only occurs with pytest + random plugin enabled + only testing `__init__.py`
username_1: It looks like it's probably a pyflakes bug. Check if `pyflakes my_repo/__init__.py` gives the same error, and also that you have the latest version of pyflakes.
username_1: Actually I didn't notice the error is from pytest, not pyflakes. Actually I can reproduce this for my own project too.
username_1: It could be a pytest bug. I don't see pytest-flakes in the traceback, so it seems likely.
username_0: Well, actually, after a bit of research I found that if `FlakesItem` implemented the `collect` method inherited from `Collector` (which is in the MRO), the problem would not occur; this can be counted as a clean fix:
```python
def collect(self):
return (self,)
```
Status: Issue closed
|
rniemeyer/knockout-jqAutocomplete | 25234314 | Title: TypeError: Unable to process binding "jqAuto..
Question:
username_0: Hi! Under AMD (with r.js optimizing) everything works. Now I'm migrating to a CJS-like environment (using webpack), and I get the following error:
```
Uncaught TypeError: Unable to process binding "jqAuto: function (){return {
value:placeTry,source:getAoids,dataValue:placeData,labelProp:"label",valueProp:"value",options:{minLength:1,delay:200}} }"
Message: object is not a function a4.min.js:17352
init a4.min.js:17352
(anonymous function) a4.min.js:12840
ko.dependencyDetection.ignore a4.min.js:11333
(anonymous function) a4.min.js:12839
ko.utils.arrayForEach a4.min.js:10355
applyBindingsToNodeInternal a4.min.js:12825
applyBindingsToNodeAndDescendantsInternal a4.min.js:12700
applyBindingsToDescendantsInternal a4.min.js:12682
applyBindingsToNodeAndDescendantsInternal a4.min.js:12709
applyBindingsToDescendantsInternal a4.min.js:12682
applyBindingsToNodeAndDescendantsInternal a4.min.js:12709
applyBindingsToDescendantsInternal a4.min.js:12682
applyBindingsToNodeAndDescendantsInternal a4.min.js:12709
applyBindingsToDescendantsInternal a4.min.js:12682
applyBindingsToNodeAndDescendantsInternal a4.min.js:12709
applyBindingsToDescendantsInternal a4.min.js:12682
applyBindingsToNodeAndDescendantsInternal a4.min.js:12709
ko.applyBindings
...
```
Chrome debugger stops at
```js
var widget = $(element).autocomplete(config).data("ui-autocomplete");
```
All other things in a project does work including ko itself, binding to jquery datepicker and knockout.punches. My koSetup.js file starts with
```js
'use strict';
require('jquery');
require('jquery.ui');
var ko = require('knockout');
require('knockout.punches');
require('knockout-jqAutocomplete');
ko.punches.interpolationMarkup.enable();
...
```
I'm new on the client side and may be missing something obvious :)
Answers:
username_1: If the problem is solved you should have pasted the solution so future users of webpack + this package would know what to do - but i will keep searching
@username_0
username_2: @username_0 did you get this working? I am trying to include this in a webpack and I am getting that it can't resolve jquery-ui/automcomplete. I was wondering how you got around that?
I have jquery-ui installed, but webpack doesn't seem to be able to make the leap to only look at /autocomplete |
aws-amplify/amplify-js | 463322531 | Title: MFA Option for U2F / Hardware Keys
Question:
username_0: **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here. |
flutter/flutter | 448407291 | Title: Flutter JavaDocs do not update the URL
Question:
username_0: Flutter's JavaDocs never change the URL when packages and classes are clicked on.
Repro Steps:
1. Open the JavaDocs: https://api.flutter.dev/javadoc/
2. Click on a package link, then click on a class link
3. Notice that the URL in the browser never changed
Answers:
username_0: CC @username_1 @username_2
username_1: Thanks, added to go/add-to-app-work
username_2: It's 'cause we're outputting standard Javadoc HTML. You have to click the "NO FRAMES" link on the class doc to see the individual page.
Arguably, those using our Java API docs are used to and expect Javadoc-style output?
username_0: I would expect Android specialists more so than generic Java developers, and those devs should be used to docs like: https://developer.android.com/reference/android/app/Activity |
cpe305/fall2016-project-johnnicholson | 181907342 | Title: Create PutPerson
Question:
username_0: Create a method to modify a person entity. This needs to check that the old password matches before changing passwords; otherwise, what they can change is specified in docs/3DPrinterQueue.md
Answers:
username_1: Not fully tested but implemented
Status: Issue closed
|
electron-userland/electron-builder | 529729203 | Title: Useless programmatic API docs
Question:
username_0: <!-- Which version of electron-builder are you using? -->
<!-- Please always try to use latest version before report. -->
* **Version**: 21.2.0
<!-- Which version of electron-updater are you using (if applicable)? -->
<!-- What target are you building for? -->
* **Target**: Windows & Linux
<!-- Enter your issue details below this comment. -->
Hello,
I have a [VueJS](https://github.com/vuejs/vue) app bundled with [ParcelJS](https://github.com/parcel-bundler/parcel/tree/1.x) and I'd like to build it programmatically.
However, the [programmatic API documentation](https://www.electron.build/api/electron-builder) doesn't explain anything about how to do anything.
Here are the details :
* Parcel bundler transpiles into `./dist`
* I'd like Electron Builder to compile `.appx` and `.appimage` files into `./release`
* 64-bit only
Thanks
<!-- If you want, you can donate to increase issue priority (https://www.electron.build/donate) -->
Answers:
username_0: up
username_0: up
username_0: up
username_0: up
username_0: up
username_0: Up
username_1: Yikes. I want to do something similar and the API says basically nothing. Does anyone know anything?
username_0: Unfortunately, I still got no answer so I'm stuck with `electron-packager` for now, which doesn't provide `.appx` nor `.appimage` builds. :/
username_0: The lack of answers from authors.
username_0: Up
username_1: Up
username_2: Why don't you just write a script in your packages.json that does the job for you? So a bit like this:
```
[parceljs command] && [electron builder command]
```
And you configure electron builder in such a way that it takes the dist output as input and you set the appx config correctly.
Then you can just invoke it from the command line and it will be built for you automatically. No need to use any internal APIs.
username_0: I prefer building from a [`build.js`](https://git.kaki87.net/username_0/template-electron-vue-parcel/src/branch/master/build.js) script and only have `"build": "node ./build.js"` in [`package.json`](https://git.kaki87.net/username_0/template-electron-vue-parcel/src/branch/master/package.json) for it to be easier to read.
username_3: Up
username_0: Up
username_4: Here's a preview of the latest generated API docs
https://jolly-roentgen-9c9aba.netlify.app/api/electron-builder.html
username_0: So I suppose those are generated from JSDoc, could you please add `@description` tags ?
Names of modules, methods, arguments and return values aren't really helpful here :/
username_4: That's a pretty big ask for just one contributor to add retroactively to all functions.
Perhaps you could request a few/several functions in particular that would be useful and what you're looking for exactly out of it?
I've never written tech docs before, so I'm somewhat interested in contracting someone who already has the expertise (and time) to do so.
I've also added an example from one of my project setups using the API.
https://jolly-roentgen-9c9aba.netlify.app/api/programmaticusage
username_3: Would it be useful to create an issue for the `@description` tags so that it can be tackled bit by bit if someone has the will to do it, or at least for some functions?
@username_4 thanks for the example, I was in the process of adding one myself.
username_4: I suppose so? It might be more constructive than the title of this issue... 😕
username_0: Sorry for that issue title.
Two years ago I was angry because I had spent a lot of time looking for answers to my question.
I asked so many communities on Reddit, Discord, Gitter, Slack, etc. and never got any answer, not a single one.
However, I think underestimating the value of API docs is a mistake; we might be only a few who asked for them, but I'm pretty sure many people like me are stuck with electron-packager because it's the one with usable API docs.
I'll have a look at the example though, thanks.
username_0: For that I would have to already know which ones I need, but I don't.
However you can have a look at my original message to see what I need.
username_0: I just had a look at the examples: they contain many callbacks and parameters which aren't really self-explanatory, and some contain calls to functions that aren't defined in the example itself 🤔
username_4: You sure you didn't read the configuration docs?
```
builder.build({
targets: Platform.LINUX.createTarget(),
config: options // Your electron-builder.js config object. Literally the same structure as electron-builder.js/electron-builder.yaml and in the package.json "build" entry
})
```
```
const options = {
files: [
"dist"
],
directories: {
output: "release"
},
win: {
target: {
target: 'appx',
arch: 'x64'
},
},
linux: {
target: {
target: 'AppImage',
arch: 'x64'
},
},
};
```
username_0: Thank you, but here's what I get :
```
(node:18826) UnhandledPromiseRejectionWarning: TypeError: types is not iterable
at processTargets (/home/user/Documents/project/node_modules/app-builder-lib/src/packager.ts:203:26)
at new Packager (/home/user/Documents/project/node_modules/app-builder-lib/src/packager.ts:219:7)
at Object.build (/home/user/Documents/project/node_modules/electron-builder/src/builder.ts:212:31)
at /home/user/Documents/project/build.js:14:27
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
```
username_0: @username_4 Please ?
username_4: You really should debug further on your own... or at least describe what you've tried already.
Spitballing here based on your error message of "not iterable" in `processTargets(...)`, convert `target` into an array, like below, for both OS configs.
```
target: [{
target: 'AppImage',
arch: 'x64'
}]
```
Status: Issue closed
username_0: I did what you told me to do :
```js
const electronBuilder = require('electron-builder');
await electronBuilder.build({
targets: electronBuilder.Platform.LINUX.createTarget(),
files: ['dist'],
directories: {
output: ['build']
},
win: {
target: [{
target: 'appx',
arch: 'x64'
}],
},
linux: {
target: [{
target: 'AppImage',
arch: 'x64'
}],
}
});
```
But I am still experiencing this error.
Thanks
username_3: @username_0 The structure should be:
```javascript
const electronBuilder = require('electron-builder');
await electronBuilder.build({
targets: electronBuilder.Platform.LINUX.createTarget(),
config: {
files: ['dist'],
directories: {
output: ['build']
},
win: {
target: [{
target: 'appx',
arch: 'x64'
}],
},
linux: {
target: [{
target: 'AppImage',
arch: 'x64'
}],
}
}
});
```
Notice the `config` property that you have omitted in your example.
username_0: *Mea culpa*. I ran into a few more errors after that, but I got almost everything working using the following code, and even (surprisingly) easily added Mac support.
```js
await electronBuilder.build({
targets: {
'linux': electronBuilder.Platform.LINUX.createTarget(),
'win32': electronBuilder.Platform.WINDOWS.createTarget(),
'darwin': electronBuilder.Platform.MAC.createTarget()
}[process.platform],
config: {
files: fs.readdirSync('.', { withFileTypes: true })
.filter(item => ![
'.git',
'.gitignore',
'build.js',
'README.md',
'yarn.lock',
...fs.readFileSync('./.gitignore', 'utf8')
.split('\n')
.filter(path => !['dist', 'node_modules'].includes(path))
].includes(item.name))
.map(item => `${item.name}${item.isDirectory() ? '/**' : ''}`),
directories: {
output: 'build'
},
win: {
target: [{
target: 'appx',
arch: 'x64'
}],
},
appx: {
applicationId: name.split('-').map(substring => `${substring[0].toUpperCase()}${substring.slice(1, substring.length).toLowerCase()}`).join('')
},
linux: {
target: [{
target: 'AppImage',
arch: 'x64'
}],
},
mac: {
target: [{
target: 'dmg'
}]
}
}
});
```
However, the `.appx` file seems unusable on Windows, even with [developer mode](https://www.tenforums.com/attachments/tutorials/294509d1598386508-turn-off-developer-mode-windows-10-a-turn_on_developer_mode_in_settings-1.jpg?s=266faac9aa5b00d4df57811c7025014f) enabled. Does that mean code signing is unavoidable? If so, what's the easiest way to do it programmatically? Thanks
username_3: @username_0 you should read the docs more carefully. [AppX Package Code Signing](https://www.electron.build/configuration/appx#appx-package-code-signing)
Personally I would suggest either submitting it in the store or switching to NSIS.
Anyway, this conversation adds no value to the original issue.
username_0: My current project is a developer tool for AWS Serverless, so not something that is meant to be on the Microsoft Store. |
nats-io/nats-docker | 153121529 | Title: Passing options or config overrides?
Question:
username_0: Trying to set some options on `docker run`. Tried a few variations like this:
`docker run --rm -p 8222:8222 -p 6222:6222 -p 4222:4222 --name nats-main nats nats --pass=<PASSWORD>`
And then running the second node, but I get authorization errors because I'm doing something wrong here. I see in the Docker Hub README that you can pass `--routes` like this, but can you not pass other flags too? Is there a better way? For example, can I have a local `.conf` file (on my docker host) and pass it in somehow to `-c`? That seems impossible from what I know of Docker, so I'm just wondering how the `docker run` example would look if you want to use a conf file.
Thanks for making a docker image regardless! Was really nice to have this.
Answers:
username_1: Yes you should be able to pass flags and override what is in the configuration file.
username_2: @username_0 You can pass command line parameters to gnatsd. PR #4 has been merged, which helps with that too.
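For readers who land here looking for the config-file variant: a minimal sketch is to bind-mount a config file from the host and point the server at it with `-c` (the file name and mount path below are illustrative, not from this thread):
```
docker run --rm -p 4222:4222 -p 6222:6222 -p 8222:8222 \
  -v $(pwd)/mynats.conf:/etc/nats/mynats.conf \
  --name nats-main nats -c /etc/nats/mynats.conf
```
Any extra flags (e.g. `--pass`) can likewise be appended once after the image name, since they are handed to the server binary via the image's entrypoint.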
Status: Issue closed
|
docker/compose | 159576270 | Title: Using Var for volumes or network names
Question:
username_0: Hello,
I would like to be able to use environment variables in my docker-compose.yml files in order to use my services in different instances.
Example:
```
services:
elasticsearch:
volumes:
- ${MACHINE_PREFIX}-elastic:/usr/share/elasticsearch/data
networks:
- net
- ${MACHINE_MASTER_NAME}1/consul_default
networks:
net:
driver: overlay
${MACHINE_MASTER_NAME}1/consul_default:
external: true
volumes:
${MACHINE_PREFIX}-elastic:
external: true
```
The variable is replaced in the services section, but in the networks and volumes sections it is not.
Would it be possible to add this feature?
Answers:
username_1: Issue grooming: Sorry nobody ever replied. This seems like a better fit for github.com/docker/app.
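A possible workaround in newer Compose file formats (3.5+), sketched below: keep the top-level keys fixed and interpolate only the `name:` field of the external resource. This is a sketch based on the documented `name` property, not something verified against the exact setup above:
```
networks:
  consul:
    external: true
    name: ${MACHINE_MASTER_NAME}1/consul_default
volumes:
  elastic:
    external: true
    name: ${MACHINE_PREFIX}-elastic
```
Services then reference the fixed keys (`consul`, `elastic`) while the resolved external names still vary per machine.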
Status: Issue closed
|
Tournamanager/webApp | 534993605 | Title: Tournament Match List Component
Question:
username_0: What has to be done:
Write a component that can display all the matches of the tournament.
Why it has to be done:
So that users can view matches that have finished and see matches coming up in the near future.
When the task is done:
When the user can see previous matches and view upcoming matches.<issue_closed>
Status: Issue closed |
helix-toolkit/helix-toolkit | 1100386019 | Title: Render order
Question:
username_0: I have two issues:
1. Boxes don't look right when rotated
2. RenderOrder has no effect
I have three boxes I created with MeshBuilder; they have different widths and heights but the same depth. All three boxes are located at the same point. With no rotation they look fine, but when I rotate the camera they don't.
I have tried using the MeshGeometryModel3D RenderOrder but this has no effect. I would expect that if the biggest box has the highest RenderOrder I would not see the smaller two boxes.


.
Answers:
username_1: Usually you want to avoid having overlapping surfaces, otherwise you get z-fighting due to floating point error. You can try to adjust the depth bias on MeshGeometryModel3D to reduce the z-fighting.
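For reference, a rough sketch of what that looks like in XAML, assuming the usual `hx` xmlns alias for HelixToolkit.Wpf.SharpDX; the bias values are illustrative and usually need tuning per scene:
```xml
<hx:MeshGeometryModel3D Geometry="{Binding BigBoxGeometry}"   Material="{Binding RedMaterial}"  DepthBias="0" />
<hx:MeshGeometryModel3D Geometry="{Binding SmallBoxGeometry}" Material="{Binding BlueMaterial}" DepthBias="-100" />
```
Giving the coplanar meshes slightly different `DepthBias` values nudges their depth comparisons apart so the rasterizer no longer flickers between them.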
username_0: depth bias fixed it thanks
Status: Issue closed
|
XX-net/XX-Net | 611599898 | Title: Wasn't it said that it would stop working in May without binding a card? Today I found it still works
Question:
username_0: From May 1st until today, my GCP has had no card bound, and username_4 can still be used.
I have checked on the console.cloud.google.com page and confirmed that it is my own appid, not the publicly announced appid.
Could it be that Google overlooked this, or that shutting it down is a gradual process????
Answers:
username_1: Maybe this rule only targets new users, and old users are not affected.
username_2: As of June 1st, creating a new project now prompts that a billing account is required.
username_3: I have already added a credit card; I'm just afraid it will be charged...
username_4: There is $300 of credit to use within the first year; after a year, supposedly you won't be charged as long as you don't enable the paid mode.
username_5: How do I set this up? I deployed 7 or 8 appids before, but only 3 can be bound to a billing account. Do the others need to be removed and redeployed? I really don't want to touch them, because if I do and then can't get online it will be a hassle.
username_6: I found that deploying a new id requires binding a billing account; old accounts that were deployed before are still in the free state.
username_4: Oh, that's not so bad then, the old ids can hold out for a while.
username_7: 

My appid is also bound to a bank card, but deploying it in xx-net shows an error. Are you deploying in some other way? Is there a tutorial? Preferably a foolproof one.
username_4: Right now the only option is to deploy with Google's official gcloud; there is no other way.
openembedded/meta-openembedded | 361663664 | Title: nodejs: no internationalization support
Question:
username_0: meta-oe's nodejs recipe currently builds node with `--without-intl`. That means there's no [internationalization](https://github.com/nodejs/node/blob/master/doc/api/intl.md) support.
One of the consequences is that the `inspector` module is not built at all. Recent (at least M71 onwards) Chromium releases need the inspector module as part of the build, so I wonder if the recipe can start depending on ICU and pass `--with-intl=system-icu` to `configure.py`.
Answers:
username_0: (Using `--with-intl=system-icu` might require checking if the ICU version provided in a branch matches the one Node expects; in the worst case, I guess `--with-intl=small-icu` should work)
username_1: Hey @username_0, you might take a look at [PACKAGECONFIG](https://www.yoctoproject.org/docs/2.4.2/ref-manual/ref-manual.html#var-PACKAGECONFIG) and create a pull request which allows one to select whether internationalization is built or not. That might get tricky when modelling the dependencies (``DEPENDS = "virtual/libintl"``); anyway, a better place to discuss this is the mailing list.
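For reference, the PACKAGECONFIG mechanism would look roughly like this in the recipe (a sketch only; the exact configure flags and default would need checking against the nodejs version in the layer):
```
PACKAGECONFIG ??= ""
PACKAGECONFIG[icu] = "--with-intl=system-icu,--without-intl,icu"
```
With that in place, a distro or image can opt in via `PACKAGECONFIG_append_pn-nodejs = " icu"` without forcing the ICU dependency on everyone.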
username_0: I don't see how virtual/libintl is related, but I've [sent an email to the list](http://lists.openembedded.org/pipermail/openembedded-devel/2018-September/120658.html) to discuss this.
Status: Issue closed
username_2: it's fixed in master d7d0cc5227d0dc7d3ff91ded9da841d65c3f3632
umijs/umi | 370009788 | Title: Extending routes via comments: changing the route query causes the route to re-render
Question:
username_0: @username_1 The online editor always fails to load, so here is a git repository address instead: `https://github.com/username_0/umi-test.git`. Thanks for your trouble~
Answers:
username_1: 
I tried the repo you provided and couldn't reproduce it either.
username_0: 
Status: Issue closed
username_0: The latest [email protected] version no longer has this problem
Qiskit/qiskit-ibmq-provider | 448057004 | Title: Revise WebSockets interaction with Jupyter Notebooks
Question:
username_0: <!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues to confirm this idea does not exist. -->
### What is the expected behavior?
Our current implementation of WebSockets is based on `asyncio`, which causes some issues with Jupyter Notebooks since the control of the event loop is directly managed by Jupyter (see https://github.com/jupyter/notebook/issues/3397).
We should revise the implementation deciding between:
* not attempting to start the event loop manually (probably the preferred solution), and instead try to add the task if it is already running
* use another package that allows nesting event loops (might have some implications in the complexity and side effects)
* as a last resort, verify that exceptions raised from the WebSockets module are handled gracefully, triggering the HTTP-only polling as a fallback
Answers:
username_1: Option 1 is impossible without `await`-ing. The user should explicitly `await` the result:
```python
result = await job.result()
```
This is not a trivial API change. First, it affects the specification, since `job.result()` will no longer be blocking. Second, `job.result()` cannot be overloaded, since it needs to be an `async` method, which forces a non-awaitable result into a `Future`.
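For illustration only, the kind of loop handling the first and third options imply looks roughly like this (not the provider's actual code):
```python
import asyncio

def run_or_schedule(coro):
    """Run a coroutine to completion, or schedule it if a loop is already running."""
    loop = asyncio.get_event_loop()
    if loop.is_running():
        # Inside Jupyter the loop is already running: we can only schedule the task,
        # which means the caller has to await (or poll) the returned future itself.
        return asyncio.ensure_future(coro, loop=loop)
    # Plain scripts: safe to drive the loop ourselves and return the result directly.
    return loop.run_until_complete(coro)
```
This makes the point above concrete: without an explicit `await`, there is no way to hand the finished result back synchronously once the loop is already running.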
Status: Issue closed
|
firebase/quickstart-android | 285183462 | Title: Error:Unknown host 'services.gradle.org'. You may need to adjust the proxy settings in Gradle. <a href="toggle.offline.mode">Enable Gradle 'offline mode' and sync project</a><br><a href="https://docs.gradle.org/current/userguide/userguide_single.html#sec:accessing_the_web_via_a_proxy">Learn about configuring HTTP proxies in Gradle</a>
Question:
username_0: After downloading and importing the quickstart-android project, Android Studio says "Failed to sync Gradle project 'quickstart-android-master'".
I know I'm making some mistake, but I don't know where. Please tell me how to fix it.
Status: Issue closed
Answers:
username_1: @username_0 this is not an issue with the quickstart but has something to do with your system network configuration. I would suggest consulting StackOverflow or another forum. |
flutter/flutter | 515058089 | Title: Remove AndroidBuilder from the tool
Question:
username_0: This was originally added to work around the limitation that Gradle couldn't easily be mocked without leaking the entire build process. Once https://github.com/flutter/flutter/pull/43479 is in, there's no need for this wrapper class since the build system is simpler and has a more defined surface area.
Answers:
username_1: @username_0 what is the status of this issue?
username_0: I haven't worked on this issue yet. Most of the work is around refactoring the tests. |
creativetimofficial/argon-dashboard | 371155963 | Title: sidebar component with subitem
Question:
username_0: Do we have an example sidebar component with sub-items that can toggle active?
Answers:
username_2: I'm adding a pull request with a proposal to add a feature like this style to the multi-level menu.
The CodePen:
https://codepen.io/username_2/pen/qJMPqM
username_3: Hello,
Thank you for your interest in our products and sorry for the late response.
We provide a multi-level menu in our pro version (https://demos.creative-tim.com/argon-dashboard-pro/pages/tables/datatables.html) under components -> multi level.
Hope it helps.
All the best,
Rares
Status: Issue closed
|
csrdelft/csrdelft.nl | 817278974 | Title: Create an account when creating a profile.
Question:
username_0: Right now the PubCie still has to click a button to create an account. This is cumbersome.
It might be useful to block accounts by default when they are created (and put a banner on the profile saying so) until someone takes action, so that it stays easy to create profiles that cannot log in yet.
phyloref/phyloref-ontology | 352333556 | Title: Support the OWL RL profile
Question:
username_0: Supporting the [OWL RL profile](https://www.w3.org/TR/owl2-profiles/#Reasoning_in_OWL_2_RL_and_RDF_Graphs_using_Rules) would allow us to use OWL RL reasoners like [RDFox](https://www.w3.org/2001/sw/wiki/RDFox) or [HyLAR](https://www.npmjs.com/package/hylar). This would remove our dependence on OWL DL reasoners and might allow us to reason faster over larger phylogenies.
Answers:
username_1: Don't forget about [Arachne](https://github.com/username_1/arachne) or [Whelk](https://github.com/username_1/whelk) (soon to be documented). :-) |
department-of-veterans-affairs/va.gov-team | 702247200 | Title: Rewrite TypeOfFacility tests using react testing library
Question:
username_0: We want to rewrite our Enzyme based tests with tests in RTL that do not test implementation details
Original test: `src/applications/vaos/tests/new-appointment/components/TypeOfFacilityPage.unit.spec.jsx`
General guidelines (a short illustrative sketch follows this list):
- Don't test component implementation details (class names, DOM structure, etc), favor checking for something a user would look for
- There are some exceptions to this, like if we want to verify that the correct icon is used, which can only be checked by class
- Try to mock as little Redux state as possible, in favor of calling helpers that set data using the actual page components
- Prefer longer tests to doing a single test per assertion. That model doesn't work very well with RTL or UI code (it's fine for plain js tests)
- Check code coverage to make sure major functionality is covered (`yarn test:unit --coverage --app-folder vaos`)<issue_closed>
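A sketch of what these guidelines look like in practice (the component import path, rendering setup, and text matchers below are illustrative, not the real VAOS test):
```jsx
import React from 'react';
import { expect } from 'chai';
import { render, screen, fireEvent } from '@testing-library/react';

import TypeOfFacilityPage from '../../new-appointment/components/TypeOfFacilityPage';

it('lets the user pick a facility type', async () => {
  render(<TypeOfFacilityPage />); // real tests would render through the app's Redux helpers

  // Query by what a user would see, not by class names or component internals
  const option = await screen.findByLabelText(/community care/i);
  fireEvent.click(option);

  expect(screen.getByRole('button', { name: /continue/i })).to.exist;
});
```
The assertions stay on visible text and roles, so refactoring the component's internal markup does not break the test.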
Status: Issue closed |
tokyoquantopian/quantopian-doc-ja | 619376755 | Title: Financial glossary
Question:
username_0: A glossary of financial terms that seem to need explanation
|Japanese term|English expression|Example explanation|
|---|---|---|
|流動性(が高い、低い)|(il)liquid||
|ロング|long||
|ショート|short||
|ドローダウン|drawdown||
|マーケットインパクト|market impact||
|リバランス|rebalance||
|ポートフォリオ|portfolio||
|ポジション|position||
|出来高|turnover||
|エクスポージャー|exposure||
Answers:
username_1: By the way, Sphinx has a directive called `.. glossary::`
https://www.sphinx-doc.org/ja/master/usage/restructuredtext/directives.html#glossary
Using it, you can generate a glossary.
Built example:
https://www.sphinx-doc.org/ja/master/glossary.html
Source:
https://www.sphinx-doc.org/ja/master/_sources/glossary.rst.txt
Let's discuss whether or not to use it around the next meeting.
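For reference, a minimal `.. glossary::` sketch (the terms and wording below are placeholders, not agreed translations):
```rst
.. glossary::

   long
      A position that profits when the price of the asset rises.

   short
      A position that profits when the price of the asset falls.
```
Terms defined this way can then be cross-referenced elsewhere in the docs with the `:term:` role.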
DoctorMcKay/node-globaloffensive | 540625695 | Title: server_id
Question:
username_0: What exactly is server_id in matchList (requestLiveGameForUser)? Should it be the server's SteamID? If so, it has the wrong value, or it's some kind of conversion problem:
`FULLY CONNECTED!!
[
{
roundstatsall: [],
matchid: '3386139258250068207',
matchtime: null,
watchablematchinfo: {
server_ip: null,
tv_port: null,
tv_spectators: 0,
tv_time: 1998,
tv_watch_password: <PASSWORD>,
cl_decryptdata_key: null,
cl_decryptdata_key_pub: null,
game_type: 32776,
game_mapgroup: 'mg_de_mirage',
game_map: 'de_mirage',
server_id: '9295711545907311873',
match_id: '3386139258250068207',
reservation_id: null
},
roundstats_legacy: {
kills: [Array],
assists: [Array],
deaths: [Array],
scores: [Array],
pings: [],
team_scores: [Array],
enemy_kills: [],
enemy_headshots: [],
enemy_3ks: [],
enemy_4ks: [],
enemy_5ks: [],
mvps: [Array],
enemy_kills_agg: [],
reservationid: null,
reservation: [Object],
map: null,
round: null,
round_result: null,
match_result: null,
confirm: null,
reservation_stage: null,
match_duration: 1909,
spectators_count: null,
spectators_count_tv: null,
spectators_count_lnk: null,
drop_info: null
}
}
]`
example server's SteamID:
`[A:1:1117543424:13807]
90131294278409216`
I'm just a beginner; I would be grateful if you could explain it to me.
Thanks!
Status: Issue closed
Answers:
username_1: Sorry, I don't know. |
annaSchugay/yiiblog | 323090342 | Title: Refactoring of the main page
Question:
username_0: 1. Split the page into two columns (~ 4 : 1)
2. In the larger column, display the list of articles + pagination.
3. In the smaller column, display the list of categories and the number of articles in each category
Rough mockup:
 |
jsettlers/settlers-remake | 71562684 | Title: Glitch: Ground Texture at extreme levels
Question:
username_0: The texture cannot decide whether it is rock, grass, or dirt ;)
I suppose that the upper marked glitch is not really a glitch, but rather a problem of zooming and level of detail?

Answers:
username_0: 
username_1: This is not a background problem, but a ground type problem: no flattened ground may be next to a river or a mountain (border). We should not allow building that close to rivers/mountains (or we should at least change the height and landscape when building there).
The upper marking in your first image is not a glitch, that is what a mountain border looks like when it is a straight line.
username_2: Michael is right, this needs to be fixed in the game logic when calculating the possible positions of a building. |
graphhopper/graphhopper | 234482378 | Title: Too little RAM in Graph Processing leads to weird Exception in LocationIndex
Question:
username_0: I've been playing around with GH for a while now and everything worked fine so far, until I tried building the [latest and greatest all new and shiny graph of all of Europe](http://download.geofabrik.de/europe-latest.osm.pbf) on my little 16 GB machine (Mac OS) with 14 GB allocated for the JVM, when it threw the following error
```
2017-06-07 13:42:10,958 [main] INFO com.graphhopper.reader.osm.GraphHopperOSM - edges: 69505872, nodes 54314626, there were 2119927 subnetworks. removed them => 2119847 less nodes
Exception in thread "main" java.lang.IllegalStateException: location index was opened with incorrect graph: 54314817 vs. 54314626
at com.graphhopper.storage.index.LocationIndexTree.loadExisting(LocationIndexTree.java:265)
at com.graphhopper.GraphHopper.createLocationIndex(GraphHopper.java:1134)
at com.graphhopper.GraphHopper.initLocationIndex(GraphHopper.java:1149)
at com.graphhopper.GraphHopper.postProcessing(GraphHopper.java:842)
at com.graphhopper.GraphHopper.process(GraphHopper.java:650)
at com.graphhopper.GraphHopper.importOrLoad(GraphHopper.java:619)
at com.graphhopper.tools.Import.main(Import.java:31)
```
The error occurs with GH versions 0.82 and 0.9. It did not appear when I ran the whole process on a 64 GB Ubuntu 16.04 server, on which it was using more than 26 GB.
Answers:
username_1: This indicates a previous OutOfMemory error. What are the steps to reproduce this?
username_0: 1. Download the most recent version of GH on a MBP 13" 2016 with 16 GB RAM
2. Download the latest Graph for all of Europe linked in the issue above.
3. Run `./graphhopper.sh import path/to/eruope.pbf`
4. There is no step 4
username_1: "europe" does not fit into the default settings, what JAVA_OPTS are you using?
username_0: -Xmx14g -Xms14g
username_1: And what config.properties? The default with CH, one vehicle profile (car) and without elevation?
username_0: Yes I didn't do any further changes because I thought at first that the mistake happened in my code so I used vanilla GH.
username_1: Ok, I'll try, have a 16g laptop (not a mac). Any other errors before in the log? Which JDK version?
username_1: I cannot reproduce this with the specified configuration and a current Europe file. Can you have a look at the logging output if there was an error before.
username_0: `## using java 1.8.0_131 (64bit) from /Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home
## using existing osm file /Users/pascalblunk/Coding/Calimoto/Graphen/Raw OSM/europe-latest.osm.pbf
## building graphhopper jar: tools/target/graphhopper-tools-0.10-SNAPSHOT-jar-with-dependencies.jar
## using maven at /usr/local/Cellar/maven/3.5.0/libexec
## now import. JAVA_OPTS=-Xmx14g -Xms14g
2017-06-07 10:22:31,309 [main] INFO com.graphhopper.reader.osm.GraphHopperOSM - version 0.10|2017-06-07T08:21:59Z (5,14,4,3,3,2)
2017-06-07 10:22:31,317 [main] INFO com.graphhopper.reader.osm.GraphHopperOSM - graph CH|car|RAM_STORE|2D|NoExt|,,,,, details:edges:0(0MB), nodes:0(0MB), name:(0MB), geo:0(0MB), bounds:1.7976931348623157E308,-1.7976931348623157E308,1.7976931348623157E308,-1.7976931348623157E308, CHGraph|fastest|car, shortcuts:0, nodesCH:(0MB)
2017-06-07 10:22:31,338 [main] INFO com.graphhopper.reader.osm.GraphHopperOSM - start creating graph from /Users/pascalblunk/Coding/Calimoto/Graphen/Raw OSM/europe-latest.osm.pbf
2017-06-07 10:22:31,338 [main] INFO com.graphhopper.reader.osm.GraphHopperOSM - using CH|car|RAM_STORE|2D|NoExt|,,,,, memory:totalMB:12757, usedMB:199
2017-06-07 10:39:46,331 [main] INFO com.graphhopper.reader.osm.OSMReader - 10 000 000 (preprocess), osmIdMap:82 815 455 (1013MB) totalMB:13032, usedMB:12586
2017-06-07 10:45:48,349 [main] INFO com.graphhopper.reader.osm.OSMReader - 20 000 000 (preprocess), osmIdMap:179 656 547 (2139MB) totalMB:13062, usedMB:9966
2017-06-07 11:01:04,271 [main] INFO com.graphhopper.reader.osm.OSMReader - 30 000 000 (preprocess), osmIdMap:273 499 595 (3235MB) totalMB:13069, usedMB:6349
2017-06-07 11:09:45,920 [main] INFO com.graphhopper.reader.osm.OSMReader - 100 000 (preprocess), osmWayMap:0 totalMB:13105, usedMB:10379
2017-06-07 11:09:48,611 [main] INFO com.graphhopper.reader.osm.OSMReader - 200 000 (preprocess), osmWayMap:0 totalMB:13146, usedMB:7442
2017-06-07 11:09:49,062 [main] INFO com.graphhopper.reader.osm.OSMReader - 300 000 (preprocess), osmWayMap:0 totalMB:13146, usedMB:8767
2017-06-07 11:09:49,539 [main] INFO com.graphhopper.reader.osm.OSMReader - 400 000 (preprocess), osmWayMap:0 totalMB:13146, usedMB:9885
2017-06-07 11:09:50,303 [main] INFO com.graphhopper.reader.osm.OSMReader - 500 000 (preprocess), osmWayMap:0 totalMB:13146, usedMB:11123
2017-06-07 11:09:50,889 [main] INFO com.graphhopper.reader.osm.OSMReader - 600 000 (preprocess), osmWayMap:0 totalMB:12997, usedMB:8088
2017-06-07 11:09:51,259 [main] INFO com.graphhopper.reader.osm.OSMReader - 700 000 (preprocess), osmWayMap:0 totalMB:12997, usedMB:9345
2017-06-07 11:09:51,506 [main] INFO com.graphhopper.reader.osm.OSMReader - 800 000 (preprocess), osmWayMap:0 totalMB:12997, usedMB:10475
2017-06-07 11:09:51,836 [main] INFO com.graphhopper.reader.osm.OSMReader - 900 000 (preprocess), osmWayMap:0 totalMB:13137, usedMB:7639
2017-06-07 11:09:52,077 [main] INFO com.graphhopper.reader.osm.OSMReader - 1 000 000 (preprocess), osmWayMap:0 totalMB:13137, usedMB:8770
2017-06-07 11:09:52,327 [main] INFO com.graphhopper.reader.osm.OSMReader - 1 100 000 (preprocess), osmWayMap:0 totalMB:13137, usedMB:9899
2017-06-07 11:09:52,573 [main] INFO com.graphhopper.reader.osm.OSMReader - 1 200 000 (preprocess), osmWayMap:0 totalMB:13137, usedMB:11027
2017-06-07 11:09:53,014 [main] INFO com.graphhopper.reader.osm.OSMReader - 1 300 000 (preprocess), osmWayMap:0 totalMB:12976, usedMB:8206
2017-06-07 11:09:53,262 [main] INFO com.graphhopper.reader.osm.OSMReader - 1 400 000 (preprocess), osmWayMap:0 totalMB:12976, usedMB:9260
2017-06-07 11:09:53,553 [main] INFO com.graphhopper.reader.osm.OSMReader - 1 500 000 (preprocess), osmWayMap:0 totalMB:12976, usedMB:10514
2017-06-07 11:09:53,831 [main] INFO com.graphhopper.reader.osm.OSMReader - 1 600 000 (preprocess), osmWayMap:0 totalMB:13142, usedMB:7532
2017-06-07 11:09:54,070 [main] INFO com.graphhopper.reader.osm.OSMReader - 1 700 000 (preprocess), osmWayMap:0 totalMB:13142, usedMB:8667
2017-06-07 11:09:54,334 [main] INFO com.graphhopper.reader.osm.OSMReader - 1 800 000 (preprocess), osmWayMap:0 totalMB:13142, usedMB:9842
2017-06-07 11:09:54,567 [main] INFO com.graphhopper.reader.osm.OSMReader - 1 900 000 (preprocess), osmWayMap:0 totalMB:13142, usedMB:10896
2017-06-07 11:09:54,840 [main] INFO com.graphhopper.reader.osm.OSMReader - 2 000 000 (preprocess), osmWayMap:0 totalMB:13138, usedMB:7898
2017-06-07 11:09:55,094 [main] INFO com.graphhopper.reader.osm.OSMReader - 2 100 000 (preprocess), osmWayMap:0 totalMB:13138, usedMB:9115
2017-06-07 11:09:55,283 [main] INFO com.graphhopper.reader.osm.OSMReader - 2 200 000 (preprocess), osmWayMap:0 totalMB:13138, usedMB:10169
2017-06-07 11:09:55,464 [main] INFO com.graphhopper.reader.osm.OSMReader - 2 300 000 (preprocess), osmWayMap:0 totalMB:13138, usedMB:11190
2017-06-07 11:09:55,652 [main] INFO com.graphhopper.reader.osm.OSMReader - 2 400 000 (preprocess), osmWayMap:0 totalMB:13150, usedMB:8188
2017-06-07 11:09:55,855 [main] INFO com.graphhopper.reader.osm.OSMReader - 2 500 000 (preprocess), osmWayMap:0 totalMB:13150, usedMB:9414
2017-06-07 11:09:56,033 [main] INFO com.graphhopper.reader.osm.OSMReader - 2 600 000 (preprocess), osmWayMap:0 totalMB:13150, usedMB:10435
2017-06-07 11:09:56,309 [main] INFO com.graphhopper.reader.osm.OSMReader - 2 700 000 (preprocess), osmWayMap:0 totalMB:13145, usedMB:7537
2017-06-07 11:09:56,486 [main] INFO com.graphhopper.reader.osm.OSMReader - 2 800 000 (preprocess), osmWayMap:0 totalMB:13145, usedMB:8515
2017-06-07 11:09:56,685 [main] INFO com.graphhopper.reader.osm.OSMReader - 2 900 000 (preprocess), osmWayMap:0 totalMB:13145, usedMB:9695
2017-06-07 11:09:56,886 [main] INFO com.graphhopper.reader.osm.OSMReader - 3 000 000 (preprocess), osmWayMap:0 totalMB:13145, usedMB:10672
2017-06-07 11:09:57,180 [main] INFO com.graphhopper.reader.osm.OSMReader - 3 100 000 (preprocess), osmWayMap:0 totalMB:13158, usedMB:7783
2017-06-07 11:09:57,422 [main] INFO com.graphhopper.reader.osm.OSMReader - 3 200 000 (preprocess), osmWayMap:0 totalMB:13158, usedMB:8852
2017-06-07 11:09:57,659 [main] INFO com.graphhopper.reader.osm.OSMReader - 3 300 000 (preprocess), osmWayMap:0 totalMB:13158, usedMB:9922
2017-06-07 11:09:58,013 [main] INFO com.graphhopper.reader.osm.OSMReader - 3 400 000 (preprocess), osmWayMap:0 totalMB:13158, usedMB:10949
2017-06-07 11:09:58,201 [main] INFO com.graphhopper.reader.osm.OSMReader - creating graph. Found nodes (pillar+tower):325 427 021, totalMB:13154, usedMB:7827
2017-06-07 11:12:48,123 [main] INFO com.graphhopper.reader.osm.OSMReader - 200 000 000, locs:60 681 557 (0) totalMB:13238, usedMB:9700
2017-06-07 11:16:20,480 [main] INFO com.graphhopper.reader.osm.OSMReader - 400 000 000, locs:88 297 366 (0) totalMB:13272, usedMB:12258
2017-06-07 11:20:31,789 [main] INFO com.graphhopper.reader.osm.OSMReader - 600 000 000, locs:125 156 105 (0) totalMB:13248, usedMB:8744
2017-06-07 11:23:36,788 [main] INFO com.graphhopper.reader.osm.OSMReader - 800 000 000, locs:168 166 030 (0) totalMB:13266, usedMB:12792
2017-06-07 11:27:00,237 [main] INFO com.graphhopper.reader.osm.OSMReader - 1 000 000 000, locs:208 332 380 (0) totalMB:13221, usedMB:8854
2017-06-07 11:28:26,072 [main] INFO com.graphhopper.reader.osm.OSMReader - 1 200 000 000, locs:231 384 252 (0) totalMB:13087, usedMB:8263
2017-06-07 11:30:01,446 [main] INFO com.graphhopper.reader.osm.OSMReader - 1 400 000 000, locs:258 767 837 (0) totalMB:13060, usedMB:7041
2017-06-07 11:31:34,710 [main] INFO com.graphhopper.reader.osm.OSMReader - 1 600 000 000, locs:285 242 521 (0) totalMB:13099, usedMB:8156
2017-06-07 11:33:41,029 [main] INFO com.graphhopper.reader.osm.OSMReader - 1 800 000 000, locs:310 953 253 (0) totalMB:13076, usedMB:8909
2017-06-07 11:34:39,871 [main] INFO com.graphhopper.reader.osm.OSMReader - 1 902 775 916, now parsing ways
2017-06-07 11:47:09,068 [main] INFO com.graphhopper.search.NameIndex - Way name is too long: «Москва — Малоярославец — Рославль до границы с Республикой Беларусь (на Бобруйск, Слуцк)» — Спас-Деменск — Ельня — Починок» — Бывалки — Ширково, 66Н-0830 truncated to «Москва — Малоярославец — Рославль до границы с Республикой Бела
2017-06-07 11:47:22,993 [main] INFO com.graphhopper.search.NameIndex - Way name is too long: «Москва — Малоярославец — Рославль до границы с Республикой Беларусь (на Бобруйск, Слуцк)» — Спас-Деменск — Ельня — Починок» — Взглядье — Ивано — Гудино — Добрушино, 66Н-0806 truncated to «Москва — Малоярославец — Рославль до границы с Республикой Бела
2017-06-07 11:47:28,367 [main] INFO com.graphhopper.search.NameIndex - Way name is too long: «Москва — Малоярославец — Рославль до границы с Республикой Беларусь (на Бобруйск, Слуцк)» — Спас-Деменск — Ельня — Починок» — Сельцо, 66Н-1407 truncated to «Москва — Малоярославец — Рославль до границы с Республикой Бела
[Truncated]
2017-06-07 12:50:37,362 [main] INFO com.graphhopper.search.NameIndex - Way name is too long: «Смоленск — Вязьма — Зубцов (участок Старой Смоленской дороги Смоленск — Вязьма)» — Тюшино — Нетризово — «Брянск — Смоленск до границы Республики Беларусь (через Рудню, на Витебск)» — Пересветово, 66Н-1049 truncated to «Смоленск — Вязьма — Зубцов (участок Старой Смоленской дороги См
2017-06-07 12:50:44,240 [main] INFO com.graphhopper.search.NameIndex - Way name is too long: «Москва — Малоярославец — Рославль до границы с Республикой Беларусь (на Бобруйск, Слуцк)» — Спас-Деменск — Ельня — Починок» — Березкино — Большое Тишово, 66Н-0413 truncated to «Москва — Малоярославец — Рославль до границы с Республикой Бела
2017-06-07 12:57:17,270 [main] INFO com.graphhopper.search.NameIndex - Way name is too long: «Смоленск — Вязьма — Зубцов (участок Старой Смоленской дороги Смоленск — Вязьма)» — Тюшино — Нетризово — «Брянск — Смоленск до границы Республики Беларусь (через Рудню, на Витебск)» — Пересветово, 66Н-1049 truncated to «Смоленск — Вязьма — Зубцов (участок Старой Смоленской дороги См
2017-06-07 13:07:07,205 [main] INFO com.graphhopper.search.NameIndex - Way name is too long: «Беларусь» — от Москвы до границы с Республикой Беларусь (на Минск, Брест)» — Смогири — Болдино — «Витязи — Духовщина — Белый — Нелидово», 66Н-1019 truncated to «Беларусь» — от Москвы до границы с Республикой Беларусь (на Мин
2017-06-07 13:20:05,042 [main] INFO com.graphhopper.reader.osm.OSMReader - 2 135 555 620, now parsing relations
2017-06-07 13:21:16,692 [main] INFO com.graphhopper.reader.osm.OSMReader - finished way processing. nodes: 56434473, osmIdMap.size:326015169, osmIdMap:3838MB, nodeFlagsMap.size:588148, relFlagsMap.size:0, zeroCounter:581081 totalMB:13264, usedMB:12730
2017-06-07 13:21:16,703 [main] INFO com.graphhopper.reader.osm.OSMReader - time pass1:2388s, pass2:7878s, total:10266s
2017-06-07 13:21:16,724 [main] INFO com.graphhopper.routing.subnetwork.PrepareRoutingSubnetworks - start finding subnetworks (min:200, min one way:200) totalMB:13264, usedMB:12730
2017-06-07 13:27:10,268 [main] INFO com.graphhopper.routing.subnetwork.PrepareRoutingSubnetworks - car findComponents time:353.51584, size:641653
2017-06-07 13:28:21,105 [main] INFO com.graphhopper.routing.subnetwork.PrepareRoutingSubnetworks - 2119927 subnetworks found for car, totalMB:12975, usedMB:8807
2017-06-07 13:28:27,798 [main] INFO com.graphhopper.routing.subnetwork.PrepareRoutingSubnetworks - optimize to remove subnetworks (2119927), unvisited-dead-end-nodes (1557739), maxEdges/node (17)
2017-06-07 13:42:10,958 [main] INFO com.graphhopper.reader.osm.GraphHopperOSM - edges: 69505872, nodes 54314626, there were 2119927 subnetworks. removed them => 2119847 less nodes
Exception in thread "main" java.lang.IllegalStateException: location index was opened with incorrect graph: 54314817 vs. 54314626
at com.graphhopper.storage.index.LocationIndexTree.loadExisting(LocationIndexTree.java:265)
at com.graphhopper.GraphHopper.createLocationIndex(GraphHopper.java:1134)
at com.graphhopper.GraphHopper.initLocationIndex(GraphHopper.java:1149)
at com.graphhopper.GraphHopper.postProcessing(GraphHopper.java:842)
at com.graphhopper.GraphHopper.process(GraphHopper.java:650)
at com.graphhopper.GraphHopper.importOrLoad(GraphHopper.java:619)
at com.graphhopper.tools.Import.main(Import.java:31)`
username_1: When you import can you remove the folder `europe-xy-gh`?
I'm not entirely sure why this can happen at all. The problem could be that an OutOfMem error could happen while CH preparation and the location index was created and flushed before, and then a new europe file is used with an existing but wrong location index.
username_0: I'll try tomorrow.
With best physicist greetings
username_0: Okay, apparently the error occurred because a previous run had failed (because of too little memory), and when I restarted with more memory it was building on the already existing but damaged data. I'm sorry for wasting your time.
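For anyone hitting the same thing, the recovery is simply to delete the partially written graph folder before re-importing with enough heap (the folder name and memory settings below are illustrative):
```
rm -rf europe-latest.osm-gh        # graph folder left behind by the failed run
export JAVA_OPTS="-Xmx14g -Xms14g"
./graphhopper.sh import europe-latest.osm.pbf
```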
Status: Issue closed
username_1: No problem. Probably we should make the removal atomic. |
ros2/rclpy | 549898745 | Title: add publish/take _serialized_message functions to rclpy
Question:
username_0: In the style of https://github.com/ros2/rclpy/pull/495 it would be helpful for further (rosbag2) tools to have the capability to publish and receive serialized ros messages.
Answers:
username_1: @username_0 is this still up for grabs? I'd like to work on this
username_2: @username_1 Great! I'm happy to review any pull requests adding these functions. Let me know if you need any guidance.
username_1: Looks like we need an `rclpy_publish_serialized` function in `src/rclpy/_rclpy.c` which calls `rcl_publish_serialized`. For receiving serialized messages, `rclpy_take_raw` is already implemented in `_rclpy.c`, it just has to be called. I'll submit a PR soon
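As a purely hypothetical usage sketch of what this could enable from Python (the helper names and whether `publish()` would accept bytes directly are assumptions here, not the final rclpy API):
```python
import rclpy
from rclpy.serialization import serialize_message
from std_msgs.msg import String

rclpy.init()
node = rclpy.create_node('raw_demo')
pub = node.create_publisher(String, 'chatter', 10)

raw = serialize_message(String(data='hello'))  # bytes in the middleware wire format (e.g. CDR)
pub.publish(raw)                               # raw path would end up in rcl_publish_serialized

node.destroy_node()
rclpy.shutdown()
```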
username_1: @username_2 How can I build rclpy and run the tests?
username_2: @username_1 To start, you'll need a version of the latest ROS 2 code by following [these instructions](https://index.ros.org/doc/ros2/Installation/Latest-Development-Setup/). Once you have the source code, you can build and test with colcon ([here's a tutorial](https://index.ros.org/doc/ros2/Tutorials/Colcon-Tutorial/)).
To specifically build rclpy and its dependencies:
colcon build --packages-up-to rclpy
And to test rclpy
colcon test --packages-select rclpy
username_3: I thought that:
when we have `publish_raw`, we can introduce `Serialization Tools` at the python application level. For example, we can use `Protobuffer` as the Type-System in our python application. After serializing it, we publish it with `publish_raw`.
What we have to consider is that:
if we just want to use `publish_raw`, a `raw` parameter may be needed in `create_publisher` to avoid the MsgType check. Alternatively, we can make a fake type. Conceptually, when we `publish_raw` no type is needed; a generic `RawType` for `raw` may be needed.
username_4: You might be misunderstanding what the C / C++ functions for raw publishing are doing. You can't provide arbitrary binary data. The underlying RMW implementation requires the passed data to be in the wire representation of the topic. For DDS based implementations that usually means CDR - not protobuf.
username_3: @username_4, thanks so much for your explanation. I get your point.
In my opinion, sometimes we can use DDS as the transport layer without the serialization function/ability.
That means we may need Publisher/Subscriber semantics without type-awareness (although `std_msgs::String` may serve as the fake/alternative type).
username_4: I think you will need to provide a very detailed rationale for why you would want to send this data as an opaque blob. With such an approach you lose a lot of the functionality in ROS (unable to introspect the data, loss of semantics, loss of typing of a topic) as well as DDS (unable to apply concepts like keys, content-based filtering, etc.). It almost sounds like an anti-pattern to me.
username_3: Sometimes we may use protobuf for its compatibility.
e.g.
1. We recorded some data in the early days and stored it in a database. The message's definition was something like:
```py
int32 x;
int32 y;
int32 w;
```
2. Now we want to replay it and do some simulation work with a new node, but the message has changed to:
```
int32 x;
int32 y;
```
With the help of protobuf, we can easily gain compatibility by publishing raw data.
Status: Issue closed
|
hinesboy/mavonEditor | 377320179 | Title: History issue
Question:
username_0: Scenario description:
I have two documents: A and B.
Click A; the editing area loads A's content.
Then
click B; the editing area loads B's content, and I modify B's content.
Then, ctrl+z, ctrl+z, ctrl+z, ctrl+z, ctrl+z .....
Eventually the editing area's content for B reverts to document A's content.
If I save at this point, document A's content gets written into document B, which is not the result I expect.
I expect that while editing B, ctrl+z only applies to document B; the content should not revert to document A.
Answers:
username_0: No response?
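One common workaround for this kind of shared-history problem (a sketch, not an official fix from the maintainers) is to key the editor by document id, so switching documents recreates the editor instance and its undo stack:
```html
<!-- activeDoc is whatever object holds the currently selected document -->
<mavon-editor :key="activeDoc.id" v-model="activeDoc.content" />
```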
forcedotcom/salesforcedx-vscode | 1054280929 | Title: SFDX commands not found
Question:
username_0: I had a refresh done on our dev org. The refresh caused my token to expire, so I needed to do a fresh 'authorize an org'. I logged out of all orgs, authorized the new org, and from there everything broke. All SFDX commands from the palette show the error "command xxxx not found"; this happens for all SFDX commands. If I run the commands from the terminal they just won't work. I tried uninstalling/reinstalling the extensions, the CLI, and VS Code itself and am still getting the same errors. I tried moving to a different project folder and creating a new project, same problem. I checked the environment PATH for Java and Apex and those all work.
At this point I am going to have to do a hard windows reinstall to get things working again.
Answers:
username_0: Leaving a screenshot with dev tools errors; maybe someone can make sense of it.

username_1: @username_0 I'll need some more information to go on:
* What version of the Salesforce extension for VisualStudio Code are you running?
* When you run (in a terminal) `sfdx --version`, what do you get?
* When you run (also in a terminal) `sfdx plugins`, what do you get? |
pre-commit/pre-commit | 556841231 | Title: Virtualenv dependency
Question:
username_0: I can see from `setup.cfg` that this project depends on `virtualenv`, but I don't see it imported anywhere from the code. Generally users of pre-commit will already be inside a virtualenv, so it seems like this doesn't belong as a dependency...
Answers:
username_1: E pre_commit.util.CalledProcessError: command: ('/home/username_1/workspace/pre-commit/venv/bin/python', '-mvirtualenv', '/tmp/pytest-of-username_1/pytest-10/test_python_hook0/0/.pre-commit/repofagd_7_9/py_env-python3', '-p', '/home/username_1/workspace/pre-commit/venv/bin/python')
E return code: 1
E expected return code: 0
E stdout: (none)
E stderr:
E /home/username_1/workspace/pre-commit/venv/bin/python: No module named virtualenv
pre_commit/util.py:126: CalledProcessError
----------------------------- Captured stdout call -----------------------------
[INFO] Initializing environment for file:///tmp/pytest-of-username_1/pytest-10/test_python_hook0/1.
[INFO] Installing environment for file:///tmp/pytest-of-username_1/pytest-10/test_python_hook0/1.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
------------------------------ Captured log call -------------------------------
INFO pre_commit:store.py:135 Initializing environment for file:///tmp/pytest-of-username_1/pytest-10/test_python_hook0/1.
INFO pre_commit:repository.py:69 Installing environment for file:///tmp/pytest-of-username_1/pytest-10/test_python_hook0/1.
INFO pre_commit:repository.py:70 Once installed this environment will be reused.
INFO pre_commit:repository.py:71 This may take a few minutes...
!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
====================== 1 failed, 538 deselected in 0.46s =======================
```
the way pre-commit works is it creates isolated environments to install the hooks into
Status: Issue closed
|
frienddl-io/frienddl.io-support | 890533153 | Title: Add high score keeper
Question:
username_0: ## Issue Type
* [ ] Question
* [ ] Comment
* [ ] Bug
* [x] Enhancement
* [ ] Vulnerability
## Short Overview
<!---
Describe why you're logging an issue in 1 sentence.
-->
I'm logging an issue because I want to keep track of my personal high scores.
## Description
<!---
Optional field to type in more if needed.
For bugs, please enter steps to reproduce.
-->
Should keep track of high scores for a range of dates.
## Version
<!---
Optional field mainly for bugs.
-->
**Browser:**
* [ ] N/A
* [ ] Chrome
* [ ] Firefox
* [x] Both
<!---
Instructions to find in Chrome:
1. Go to chrome://extensions/
2. Find frienddl.io and select details
3. Version should be listed as one of the fields
Instructions to find in Firefox:
1. Go to about:addons
2. Find frienddl.io and select it
3. Version should be listed as one of the fields
-->
**Version Number:** N/A<issue_closed>
Status: Issue closed |
BCDevOps/developer-experience | 1050405175 | Title: ArgoCD (Shared) App Baseline Monitoring / Alerts
Question:
username_0: **Describe the issue**
To ensure proper support and response to Platform Services application issues, baseline monitoring and alerting is necessary within Sysdig. This includes the following:
Saturation Monitoring (alerts > 80%):
- [ ] CPU Utilization
- [ ] Memory Utilization
- [ ] Storage Utilization
**Definition of done**
When the Platform Services application below has at least the baseline monitoring/alerting configurations set up.
- [ ] ArgoCD (Shared) |
guardian/prosemirror-typerighter | 529275730 | Title: Matches in the document can be misaligned if they appear after soft returns
Question:
username_0:  [Matches in the document can be misaligned if they appear after soft returns](https://trello.com/c/gGxpzDid/20-matches-in-the-document-can-be-misaligned-if-they-appear-after-soft-returns)
Answers:
username_0: Addressed in #61.
Status: Issue closed
|
mlpack/models | 637479394 | Title: Add convert function.
Question:
username_0: Recently we added support for parsing XML files in object-detection-type datasets. It would be nice to have a conversion script that converts CSV, object-detection-tf type, and JSON to XML and vice versa. A great example is roboflow.ai's convert feature.
Let me know if any clarification is needed.
Thanks.
Answers:
username_0: Keep Open.
username_1: Hey @username_0 I want to start working on this issue, I hope it's still open...
username_0: Hey @username_1, please feel free to pursue it, it's still open.
username_1: Can you just get me the link for that roboflow.ai's convert function?
username_0: Sure, Could select the [convert](https://roboflow.ai) option in the link.
username_1: @username_0 we need something like this https://github.com/xhallix/PyCsv2Xml right? But also for all the specified formats.
username_1: I will start working on converting csv to xml.
- I have a doubt: do I have to write this in C++ only?
- And where exactly do you wanna put this convert function?
username_0: We already use the Boost XML parser; refer to LoadObjectDetectionDataLoader in the DataLoader class.
username_1: @username_0 Can I get some example for the conversion? Like a dataset where we have both formats, CSV as well as XML, just for reference. Honestly I haven't worked with XML files in object detection, so I need to go through an example to understand how it works.
username_1: - I am done with the main logic for converting CSV to XML files.
- This is the [link](https://github.com/username_1/Dataset_Converter) to the code, and I have also included example input and output in the same repo.
- Some things are still remaining; for example, when we have the same file names it creates multiple XML files rather than appending to the same one. I will handle that and other cases.
For now I just need a check on the logic.
username_0: Looks good to me, maybe in the PR we could wrap it in a class and have a member function called CSVToXML or some better name.
username_1: - I wrapped it up in a class; I named the main function csvxmlHelper(), which is private.
- I made a public function convert. What I have in mind is that we can pass two parameters to this convert function (apart from the path), and based on these parameters (which will be the input file type and output file type) it will decide which helper function to call.
`convert(path, csv, xml)`
- I think this should be better. For now I am keeping the convert function simple, but as I keep adding other conversion types I will modify it accordingly, sounds good?
username_0: Sure, makes sense.
username_2: Hey @username_0, I wanted to start from somewhere and found this. Is it still open?
username_0: Hey @username_2, yes, it's open. @username_1 opened a promising PR #33 for this issue. Maybe you can build upon that or take inspiration from it.
username_2: @username_0 I have a doubt.
You were referring to [roboflow](https://roboflow.com/convert/coco-json-to-pascal-voc-xml); there the conversions are model specific, but the conversion that username_1 did is CSV to XML only.
So do we need to be model specific, or do we just need a conversion mechanism?
username_0: I think we should be able to accommodate both. We can have a static function that simply converts data from one format to another. Other than that, we can have model-/dataset-specific conversion. This can be done by having separate functions, or we can pass two strings and store a map that internally calls the correct function. Let me know what you think.
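A rough sketch of that map-based dispatch (class, method, and format names below are illustrative only, not an agreed design):
```cpp
#include <functional>
#include <map>
#include <stdexcept>
#include <string>
#include <utility>

class DatasetConverter
{
 public:
  // Dispatch on the (input, output) format pair supplied by the caller.
  void Convert(const std::string& path,
               const std::string& inputFormat,
               const std::string& outputFormat)
  {
    const auto it = converters.find(std::make_pair(inputFormat, outputFormat));
    if (it == converters.end())
      throw std::invalid_argument("Unsupported conversion: " + inputFormat + " -> " + outputFormat);
    it->second(path);
  }

 private:
  // Placeholder helpers; the real CSV/XML parsing and writing would live here.
  void CSVToXML(const std::string& /* path */) { /* parse CSV, emit XML */ }
  void XMLToCSV(const std::string& /* path */) { /* parse XML, emit CSV */ }

  std::map<std::pair<std::string, std::string>,
           std::function<void(const std::string&)>> converters{
    {{"csv", "xml"}, [this](const std::string& p) { CSVToXML(p); }},
    {{"xml", "csv"}, [this](const std::string& p) { XMLToCSV(p); }}
  };
};
```
New format pairs then only need a new helper plus one entry in the map.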
username_2: Yeah, this will be better, because it will keep things clean and we can then accommodate different models smoothly.
username_3: @username_0 Can I also work upon?
username_0: Sure. Feel free to pursue this.
username_3: Thanks!
I have also commented [here](https://github.com/mlpack/models/pull/33), please acknowledge it.
username_4: You need to ask @username_1 for that.
username_0: You need to ask @username_1 for that. If he gets a chance, he will reply to you on the thread.
username_3: Okay
username_3: Tell me one thing, if he acknowledges and I make appropriate changes, then will it be considered as a PR?
Actually I am new to open source.
username_1: @username_3 this is almost done, you can pick any new type of conversion by going through the issue.
If you have any doubts regarding how to implement you can ping me anytime... |
google/ml-metadata | 876444800 | Title: Add support to release linux aarch64 wheels
Question:
username_0: Problem
---------
On aarch64, pip install ml-metadata builds the wheel from source and then installs it. This requires the user to have a development environment installed on their system. It also takes more time to build the wheel than to download and extract a prebuilt wheel from PyPI.
Resolution
-----------
On aarch64, pip install ml-metadata should download the wheel from PyPI.
@username_1, please let me know your interest in releasing aarch64 wheels. I can help with this.
Answers:
username_1: Hi @username_0, our wheel release build environment is based on the pypa `manylinux2010` image: https://github.com/pypa/manylinux. It looks like we need to upgrade to `manylinux2014` in order to release `aarch64`.
Quick question: are you using MLMD with TFX? If so, consider opening a bug on tensorflow/tfx; currently all wheel images are shared between the TFX projects.
username_2: I'd appreciate it if it were possible to widen the platform support to ppc64le as well. The same requirement also applies to that arch, however, as it also needs `manylinux2014`.
username_0: @username_1, as per the issue comment https://github.com/tensorflow/tfx/issues/3829#issuecomment-885085906, ml-metadata has dropped its TF dependency, but here https://github.com/google/ml-metadata/blob/master/ml_metadata/tools/docker_build/Dockerfile.manylinux2010#L18 it looks like the build tooling still requires a TF image.
Please correct me if I am wrong or let me know if you are using a different wheel builder.
username_0: @username_3, can you please suggest on this comment: https://github.com/google/ml-metadata/issues/114#issuecomment-1058913600
username_3: Hi @username_0,
Yes, MLMD OSS has dropped its tf dependency. The image you pointed out above is just a wheel image that we share between tfx projects. Do you encounter any problem when using it?
username_0: Hi @username_3,
Thanks for a quick reply. I am working on Linux AArch64 wheel build support. Can you please let me know if you have any plan for upgrading this tfx image to manylinux2014 and releasing for Linux AArch64 as well?
username_3: Currently this is on our radar but not a high priority. We can easily update the image to manylinux2014, but since we test the built packages natively (without a container), we need to put some effort into figuring out a way to test aarch64 wheels in this setup (by running the tests inside the container image).
I wonder does [Building manually from source](https://github.com/tensorflow/tfx/issues/3829#issuecomment-854778922) work for you?
username_0: This is working fine, but the availability of the wheel will save a lot of time for Linux AArch64 users.
vuetifyjs/vuetify | 476327266 | Title: [Bug Report] Alert transition is not working
Question:
username_0: ### Environment
**Vuetify Version:** 2.0.4
**Last working version:** 1.5.16
**Vue Version:** 2.6.10
**Browsers:** Chrome 75.0.3770.142
**OS:** Linux x86_64
### Steps to reproduce
Just click on the toggle button; you will notice that the fade transition is not working.
### Expected Behavior
It should give a fade effect.
### Actual Behavior
fade effect
### Reproduction Link
<a href="https://codepen.io/anon/pen/LwzNZd?editors=1010" target="_blank">https://codepen.io/anon/pen/LwzNZd?editors=1010</a>
### Other comments
Please fix this.
<!-- generated by vuetify-issue-helper. DO NOT REMOVE -->
Answers:
username_1: This is a css ordering issue where in the minified css, the utility transitions are being placed above the components. In the case of v-sheet, it applies a box shadow transition. @vuetifyjs/core-team any thoughts on just making the transition on v-sheet a blanket `transition: $primary-transition`?
Workaround for now: https://codepen.io/johnjleider/pen/voWyVQ?editors=1010
username_0: @username_1: thanks for your response. I tried the temporary solution, adding `class="transition-swing"`, but it is still not working.
username_2: This workaround works for me.
Status: Issue closed
username_3: Is this still being tracked? I'm a little concerned, because I just wanted to follow this thread until it gets closed for good, so I can remove the workaround as soon as it gets fixed. It's sad the documentation's not updated to this issue, though..
username_0: Actually it is still not working. I updated Vuetify to the latest version, but no success.
I tried all the alternatives given in this email, but no success.
If you want, I can show you my code. I don't know whether it is working for others or not, but it is not working in mine.
username_4: Just encountered this bug today, can confirm still happening. I'm seeing it with a VSkeletonLoader, working pre-minified @username_1
username_5: @username_4 please open a new issue!
username_6: @username_4 no need to open an issue, there already is one that is open #9033 |
ONLYOFFICE/onlyoffice-nextcloud | 817677725 | Title: oc_onlyoffice_filekey is corrupted
Question:
username_0: git clone into nextcloud 21 /apps
chown ...
Enabling the app returns an error -> Base table or view not found: 1932 Table 'nextcloud.oc_onlyoffice_filekey' doesn't exist in engine
After analysis:
In the filesystem, the oc_onlyoffice_filekey .frm and .ibd files exist.
phpMyAdmin shows the table oc_onlyoffice_filekey, but viewing its details gives the same error #1932 - Table 'nextcloud.oc_onlyoffice_filekey' doesn't exist in engine
Answers:
username_1: #423
@username_2
This is the second issue.
username_2: For reference this is the trace from the other ticket
```
[settings] Error: Doctrine\DBAL\Exception\DriverException: An exception occurred while executing 'ALTER TABLE oc_onlyoffice_filekey ADD PRIMARY KEY (`id`)':
SQLSTATE[42S02]: Base table or view not found: 1932 Table 'nextcloud.oc_onlyoffice_filekey' doesn't exist in engine at <<closure>>
0. /config/www/nextcloud/3rdparty/doctrine/dbal/lib/Doctrine/DBAL/DBALException.php line 169
Doctrine\DBAL\Driver\AbstractMySQLDriver->convertException("An exception oc ... e", Doctrine\DBAL\Dr ... ]})
1. /config/www/nextcloud/3rdparty/doctrine/dbal/lib/Doctrine/DBAL/DBALException.php line 145
Doctrine\DBAL\DBALException::wrapException(Doctrine\DBAL\Driver\PDOMySql\Driver {}, Doctrine\DBAL\Dr ... ]}, "An exception oc ... e")
2. /config/www/nextcloud/3rdparty/doctrine/dbal/lib/Doctrine/DBAL/Connection.php line 1012
Doctrine\DBAL\DBALException::driverExceptionDuringQuery(Doctrine\DBAL\Driver\PDOMySql\Driver {}, Doctrine\DBAL\Dr ... ]}, "ALTER TABLE oc_ ... )")
3. /config/www/nextcloud/lib/private/DB/Migrator.php line 262
Doctrine\DBAL\Connection->query("ALTER TABLE oc_ ... )")
4. /config/www/nextcloud/lib/private/DB/Migrator.php line 85
OC\DB\Migrator->applySchema(Doctrine\DBAL\Schema\Schema {})
5. /config/www/nextcloud/lib/private/DB/MDB2SchemaManager.php line 124
OC\DB\Migrator->migrate(Doctrine\DBAL\Schema\Schema {})
6. /config/www/nextcloud/lib/private/legacy/OC_DB.php line 190
OC\DB\MDB2SchemaManager->updateDbFromStructure("*** sensitive parameters replaced ***")
7. /config/www/nextcloud/lib/private/Installer.php line 153
OC_DB::updateDbFromStructure("*** sensitive parameters replaced ***")
8. /config/www/nextcloud/apps/settings/lib/Controller/AppSettingsController.php line 447
OC\Installer->installApp("onlyoffice")
9. /config/www/nextcloud/lib/private/AppFramework/Http/Dispatcher.php line 170
OCA\Settings\Controller\AppSettingsController->enableApps(["onlyoffice"], [])
10. /config/www/nextcloud/lib/private/AppFramework/Http/Dispatcher.php line 100
OC\AppFramework\Http\Dispatcher->executeController(OCA\Settings\Con ... {}, "enableApps")
11. /config/www/nextcloud/lib/private/AppFramework/App.php line 137
OC\AppFramework\Http\Dispatcher->dispatch(OCA\Settings\Con ... {}, "enableApps")
12. /config/www/nextcloud/lib/private/AppFramework/Routing/RouteActionHandler.php line 47
OC\AppFramework\App::main("OCA\\Settings\\ ... r", "enableApps", OC\AppFramework\ ... {}, {_route: "settin ... "})
13. <<closure>>
OC\AppFramework\Routing\RouteActionHandler->__invoke({_route: "settin ... "})
14. /config/www/nextcloud/lib/private/Route/Router.php line 297
call_user_func(OC\AppFramework\ ... {}, {_route: "settin ... "})
15. /config/www/nextcloud/lib/base.php line 1012
OC\Route\Router->match("/settings/apps/enable")
16. /config/www/nextcloud/index.php line 37
OC::handleRequest()
POST /settings/apps/enable
from 192.168.1.104 by admin at 2021-01-13T12:22:18+00:00
```
@username_3 Any idea why that might occur even though the table exists on the database?
username_3: Not a lot ideas anymore about the old database.xml
Maybe an update to migrations solves it? |
atomix/atomix | 662825220 | Title: There are lower overhead solutions than Netty
Question:
username_0: **Is your feature request related to a problem? Please describe.**
Netty is not the most efficient of network libraries, so using it comes with some overhead.
**Describe the solution you'd like**
Is it possible to implement the network communication stack internally to get more performance?
Answers:
username_1: Can you provide profiling snapshots showing where Netty is a performance bottleneck?
username_0: Following libraries are more lightweight:
- https://activej.io/net.html
- https://github.com/OpenHFT/Chronicle-Network
Status: Issue closed
username_1: I didn't ask for suggestions on different libraries, if you can provide a profiling snapshot which clearly demonstrates Netty is a performance bottleneck we can investigate further. |
Joll59/d-ser-t | 457021708 | Title: Update ReadMe to latest bits
Question:
username_0: Currently readme has vestigial data and needs to be updated.
- Readme is inconsistent in how you start the package. `cli.js` vs `main.js`
- Update the naming convention to match the package conventions. ( e.g: `AUDIO_FOLDER_PATH` to `audio-directory` )<issue_closed>
Status: Issue closed |
JuliaLang/julia | 153591151 | Title: Indexing with `UInt64`
Question:
username_0: I got an issue when indexing a matrix with a range of `UInt64`.
```
Version 0.5.0-dev+3438 (2016-04-07 15:46 UTC)
Commit a92c7ff (29 days old master)
x86_64-linux-gnu
julia> A = zeros(3,3)
julia> A[:,0x1:0x2] # ok UInt8
julia> A[:,1:2] # ok Int64
julia> A[:,UnitRange{UInt}(0x1:0x2)] # not ok UInt64
ERROR: MethodError: no method matching Array{T,N}(::Tuple{Int64,UInt64}, ::Int64, ::Int64)
Closest candidates are:
(::Type{TypeError})(::Any, ::Any, ::Any, ::Any)
(::Type{Expr})(::ANY...)
(::Type{Core.Inference.Generator{I,F}})(::Any, ::Any, ::Any...)
...
[inlined code] from ./range.jl:341
in _unsafe_getindex(::Base.LinearFast, ::Array{Float64,2}, ::Colon, ::UnitRange{UInt64}) at ./multidimensional.jl:224
[inlined code] from ./multidimensional.jl:217
in getindex(::Array{Float64,2}, ::Colon, ::UnitRange{UInt64}) at ./abstractarray.jl:476
in eval(::Module, ::Any) at ./boot.jl:237
```
I cannot reproduce the issue with a vector
```
julia> x = zeros(3)
julia> x[UnitRange{UInt}(0x1:0x2)] # ok
```
I am using Julia v0.5 nightly on Ubuntu 14.04, but my version is still 29 days old (it seems there is no newer version for Ubuntu 14.04 in the PPA), so if this has been fixed recently, I apologize. Can someone reproduce it on the latest development version?<issue_closed>
Status: Issue closed |
CrunchyData/postgres-operator | 724802730 | Title: pgbouncer configuration
Question:
username_0: Hi
Could you please explain to me how connections are routed to the new master pod in case of a failover?
In the file pgbouncer.ini, the first line references the primary name:
= host={{.PG_PRIMARY_SERVICE_NAME}} port={{.PG_PORT}} auth_user=pgbouncer
During my testing, I observed that PG_PRIMARY_SERVICE_NAME is not re-evaluated in case of a failover.
How can Kubernetes switch to the correct endpoint?
I tested with Crunchy Data 4.3.2; does it work differently in 4.5.0?
Regards
Marie
Answers:
username_1: @username_0 Sorry that this took a long time to reply to -- I had thought I answered it awhile ago, but it turns out I was mistaken!
Using this:
`* = host={{.PG_PRIMARY_SERVICE_NAME}} port={{.PG_PORT}} auth_user=pgbouncer`
The default configuration that the Operator provides is for it to connect to the primary *Service*. In Kubernetes, [Services are stable](https://kubernetes.io/docs/concepts/services-networking/service/), but allow for the Pods they represent to change.
During a failover event, the Operator's HA system will change the identifying label for the primary Pod -- the Service `selector` will handle this. Once the Pod labels have changed, you only need to wait for Kubernetes to detect the change and, in turn, update the Service to point at that Pod.
With this configuration, pgBouncer does not need to perform any updates during a failover, since the Service is a stable endpoint it can point at. The Service will be pointed at the new Pod automatically, though the timeline depends on how quickly Kubernetes can refresh this change.
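Purely as an illustration of that mechanism (this is not Operator or pgBouncer code, and the service name `hippo` and namespace `pgo` are hypothetical), the official `kubernetes` Python client can be used to see how a Service's selector resolves to whichever Pod currently carries the primary label:
```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Hypothetical names; substitute your own cluster and namespace.
namespace = "pgo"
svc = core.read_namespaced_service("hippo", namespace)

# The selector is a plain label map, e.g. something like {"role": "master", ...}.
selector = ",".join(f"{k}={v}" for k, v in svc.spec.selector.items())

# Whichever Pod currently carries the primary labels is returned here --
# pgBouncer never needs to know, because it only talks to the Service name.
pods = core.list_namespaced_pod(namespace, label_selector=selector)
for pod in pods.items:
    print(pod.metadata.name)
```
After a failover, re-running the same lookup returns the newly promoted primary Pod, while the Service name that pgbouncer.ini points at never changes.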
Status: Issue closed
|
hyb1996-guest/AutoJsIssueReport | 239848794 | Title: java.lang.IllegalArgumentException: View=io.mattcarroll.hover.defaulthovermenu.HoverMenuView{c7c7aea V.E...... ......ID 0,0-480,818} not attached to window manager
Question:
username_0: Description:
---
java.lang.IllegalArgumentException: View=io.mattcarroll.hover.defaulthovermenu.HoverMenuView{c7c7aea V.E...... ......ID 0,0-480,818} not attached to window manager
at android.view.WindowManagerGlobal.findViewLocked(WindowManagerGlobal.java:448)
at android.view.WindowManagerGlobal.updateViewLayout(WindowManagerGlobal.java:345)
at android.view.WindowManagerImpl.updateViewLayout(WindowManagerImpl.java:91)
at io.mattcarroll.hover.defaulthovermenu.window.WindowViewController.makeUntouchable(WindowViewController.java:99)
at com.stardust.hover.WindowHoverMenu$1.onCollapsed(WindowHoverMenu.java:58)
at io.mattcarroll.hover.defaulthovermenu.HoverMenuView.onMenuCollapsed(HoverMenuView.java:776)
at io.mattcarroll.hover.defaulthovermenu.HoverMenuView.access$2400(HoverMenuView.java:65)
at io.mattcarroll.hover.defaulthovermenu.HoverMenuView$14.onPullToSideCompleted(HoverMenuView.java:931)
at io.mattcarroll.hover.defaulthovermenu.MagnetPositioner$2.onAnimationEnd(MagnetPositioner.java:56)
at android.animation.ValueAnimator.endAnimation(ValueAnimator.java:1239)
at android.animation.ValueAnimator$AnimationHandler.doAnimationFrame(ValueAnimator.java:766)
at android.animation.ValueAnimator$AnimationHandler$1.run(ValueAnimator.java:801)
at android.view.Choreographer$CallbackRecord.run(Choreographer.java:894)
at android.view.Choreographer.doCallbacks(Choreographer.java:696)
at android.view.Choreographer.doFrame(Choreographer.java:628)
at android.view.Choreographer$FrameDisplayEventReceiver.run(Choreographer.java:880)
at android.os.Handler.handleCallback(Handler.java:815)
at android.os.Handler.dispatchMessage(Handler.java:104)
at android.os.Looper.loop(Looper.java:207)
at android.app.ActivityThread.main(ActivityThread.java:5769)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:806)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:696)
Device info:
---
<table>
<tr><td>App version</td><td>2.0.12 Beta</td></tr>
<tr><td>App version code</td><td>137</td></tr>
<tr><td>Android build version</td><td>1480394801</td></tr>
<tr><td>Android release version</td><td>6.0</td></tr>
<tr><td>Android SDK version</td><td>23</td></tr>
<tr><td>Android build ID</td><td>H7571AN_HBT_P16_X7_QH_V1.0.0_user_r2380</td></tr>
<tr><td>Device brand</td><td>Skyhon</td></tr>
<tr><td>Device manufacturer</td><td>Skyhon</td></tr>
<tr><td>Device name</td><td>X7</td></tr>
<tr><td>Device model</td><td>X7</td></tr>
<tr><td>Device product name</td><td>X7</td></tr>
<tr><td>Device hardware name</td><td>mt6735</td></tr>
<tr><td>ABIs</td><td>[armeabi-v7a, armeabi]</td></tr>
<tr><td>ABIs (32bit)</td><td>[armeabi-v7a, armeabi]</td></tr>
<tr><td>ABIs (64bit)</td><td>[]</td></tr>
</table> |
sars1492/diplomovka | 189528879 | Title: Some plots would look better with the legend box placed on the left
Question:
username_0: The new plots are awesome :+1:, but in some cases it would be more convenient to move the legend box to the left side. For example, in the following plot:

Status: Issue closed
Answers:
username_0: This request for enhancement was implemented by pull request #4. |
clab/dynet | 234049073 | Title: Feature Request: Operation Caching
Question:
username_0: One of the major computational and memory efficiency bottlenecks in DyNet code is when the same operation is called multiple times within a for loop. This code can be sped up and made significantly more memory efficient by moving that repetitive call outside the for loop.
Unfortunately many people don't realize this. It would be nice if DyNet had the ability to detect when people were doing the same operation multiple times, and re-use the results if so. The only difficulty is how to make this caching fast enough that it doesn't impact people adversely when they are already using optimized code. Perhaps we can add it as a flag?
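To make the pattern concrete, here is a minimal sketch using the DyNet Python API (assuming a version where `Parameters` objects can be used directly in expressions -- otherwise wrap them with `dy.parameter` -- with made-up shapes and data):
```python
import dynet as dy

model = dy.ParameterCollection()
W = model.add_parameters((100, 200))

dy.renew_cg()
context = dy.inputVector([0.0] * 200)                      # fixed across the loop
words = [dy.inputVector([0.0] * 100) for _ in range(50)]   # varies per iteration

# Inefficient: `W * context` is identical on every iteration, but a new
# graph node is created each time through the loop.
scores_slow = [dy.dot_product(w, W * context) for w in words]

# Faster and lighter on memory: hoist the loop-invariant sub-expression.
projected = W * context
scores_fast = [dy.dot_product(w, projected) for w in words]
```
The automatic caching proposed here would, in effect, make the first form behave like the second without the user having to restructure their loop.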
Answers:
username_1: I think we can detect this case: we could just return an alias to the original node without warning the user, since the overhead would be minimal (well, hopefully minimal). One simple way to do this: in addition to a node's batching signature, each candidate node would need a "what am I computing?" signature, which would be computed recursively with a forward-like computation. We could even build in some logic for recognizing the identity of things like (x1 + x2) and (x2 + x1). Obviously there are limits: even something as simple as ((x1 + x3) - x2) vs ((x1 - x2) + x3) would be hard to detect.
We might be able to use this mechanism to handle parameter references (I think we have some special casing for that?).
username_0: Yeah, that's what I was thinking. I actually don't think we even need to calculate the signature recursively per se, as each node has access to the IDs of its arguments, which can be used as unique identifiers. The idea of identifying identities is a good one. Anyway, if we're not too worried about efficiency this should be easy to hack up, so I'll try to do it.
And re: parameter references, yes, the Python code has special casing for this. I think basically if we implement the generalized operation caching capability properly it should subsume the special casing for parameters.
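A rough sketch of the kind of cache being discussed (illustrative only, not DyNet internals; in DyNet itself this would live in the graph-construction code): key each requested operation on its name plus the IDs of its argument nodes, and return the existing node on a hit.
```python
# Illustrative memoization cache, keyed on (operation name, argument node ids).
_node_cache = {}

def get_or_create_node(op_name, arg_ids, build_node):
    """Return a cached node for (op_name, arg_ids), building it on first use."""
    key = (op_name, tuple(arg_ids))
    node = _node_cache.get(key)
    if node is None:
        node = build_node()        # actually add the node to the graph
        _node_cache[key] = node
    return node

def clear_node_cache():
    """Must be called whenever the computation graph is renewed."""
    _node_cache.clear()
```
Keyed this way, a second request for the same operation over the same argument nodes resolves to the already-built node, which is essentially the aliasing behaviour described above.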
username_2: In ACL Vancouver, I was talking to Yoav about this, and he just pointed out that there is an open issue about it. I would be very happy if dynet can implement this.
I would like to point out the importance of node or operation caching for inference-based training (beam search, dynamic programming, or other inference algorithms). I was doing my SQA project using inference-based training algorithms, and found that having auto-caching can be very useful. Sometimes it can be tedious to write all the caches externally and efficiently.
wso2/product-is | 955501097 | Title: GET application with `eq` filter invokes LIKE query
Question:
username_0: **Describe the issue:**
When trying out GET
https://localhost:9443/api/server/v1/applications?filter=name+eq+abc
the following two DB calls are executed each time:
1. `SELECT ID, APP_NAME, DESCRIPTION, UUID, IMAGE_URL, ACCESS_URL, USERNAME, USER_STORE, TENANT_ID FROM SP_APP WHERE TENANT_ID = -1234 AND APP_NAME LIKE 'My Account' AND APP_NAME != 'wso2carbon-local-sp' ORDER BY ID DESC OFFSET 0 ROWS FETCH NEXT 30 ROWS ONLY`
2. `SELECT COUNT(*) FROM SP_APP WHERE TENANT_ID = -1234 AND APP_NAME LIKE 'My Account' AND APP_NAME != 'wso2carbon-local-sp'`

According to IS, there can be only one application with the given name in a tenant, so we can improve this flow to use an equality query and remove the pagination from the query. This way, we can also add a cache layer.<issue_closed>
Status: Issue closed |
microsoft/botframework-solutions | 468465585 | Title: [POI Skill] Navigation/Routing can break the user state size limit of 2 MB in CosmosDB
Question:
username_0: #### What project is affected?
POI Skill
#### What language is this in?
C# (TS may be affected too)
#### What happens?
When requesting the navigation feature of the POI skill for a long route, the user/conversation state store overflows because of the 2 MB item limit in Cosmos DB.
#### What are the steps to reproduce this issue?
When requesting the navigation feature of the POI skill for a long route, the user/conversation state store overflows because of the 2 MB item limit in Cosmos DB.
#### What were you expecting to happen?
Regardless of the route I pick, there should be no error in the backend.
#### Can you share any logs, error output, etc.?
User State JSON file screenshot:

#### Any screenshots or additional context?
Answers:
username_1: This will be resolved in #1443 where the directions will not be returned.
username_2: Well, for a single route, it could be sent directly to the user/navigation app and need not be saved in user state. However, for multiple routes (*), they have to be saved for selection and for use in the next turn.
I am also wondering if we have any design doc/future plan for handling large user state, like 'developers are required to save them in another store like blob storage' or 'split the data across multiple Cosmos DB records'.
*: not possible yet, since maxAlternatives is not set.
Status: Issue closed
|
accounts-js/accounts | 534431166 | Title: How to disable createUser?
Question:
username_0: Is there a hook (or other method) to effectively disable the createUser mutation that comes with the graphql implementation?
I see CreateUserSuccess and one for errors, but those seem to happen after the user is created.
Answers:
username_0: Because I set up my apollo server like this:
```
// merge all of our Graphql type defs with the accountsjs type defs, before passing to apollo-server
const typeDefsWithAccounts = [typeDefs, accountsGraphQL.typeDefs];
// merge all our resolvers with the accountsjs resolvers, before passing to apollo-server
const resolvers = merge(accountsGraphQL.resolvers, CustomResolvers);
// Give apollo server its options object
const server = new ApolloServer({
resolvers,
typeDefs: typeDefsWithAccounts,
```
Any resolvers I pass in can overwrite the accountsjs resolvers.
So I have a `createUser` field in the `CustomResolvers` object that just does `createUser: () => null`.
Status: Issue closed
|