JuliaPlots/Plots.jl | 325398176 | Title: PGFPlots SVG error when displaying from the REPL
Question:
username_0: Hi, it seems Plots is having an issue displaying graphs when I use the pgfplots backend on the REPL. I can use the PGFPlots package by itself to generate pdfs and then manually convert them to svgs using pdf2svg. I can also display the figures on IJulia, but when I try displaying graphs from Julia's command line I get:
```
julia> using Plots
julia> pgfplots()
Plots.PGFPlotsBackend()
julia> x = [1,2,3]; y = [1,4,2];
julia> plot(x,y)
Error saving as SVG
Error showing value of type Plots.Plot{Plots.PGFPlotsBackend}:
ERROR: SystemError: opening file C:\Users\STFA5E~1.ELM\AppData\Local\Temp\jl_49F6.tmp.log: No such file or directory
Stacktrace:
[1] #systemerror#44 at .\error.jl:64 [inlined]
[2] systemerror(::String, ::Bool) at .\error.jl:64
[3] open(::String, ::Bool, ::Bool, ::Bool, ::Bool, ::Bool) at .\iostream.jl:104
[4] open(::Base.#readstring, ::String) at .\iostream.jl:150
[5] save(::TikzPictures.PDF, ::TikzPictures.TikzPicture) at C:\Users\<NAME>\.julia\v0.6\TikzPictures\src\TikzPictures.jl:201
[6] save(::TikzPictures.SVG, ::TikzPictures.TikzPicture) at C:\Users\<NAME>\.julia\v0.6\TikzPictures\src\TikzPictures.jl:262
[7] _display(::Plots.Plot{Plots.PGFPlotsBackend}) at C:\Users\<NAME>\.julia\v0.6\Plots\src\backends\pgfplots.jl:607
[8] display(::Base.REPL.REPLDisplay{Base.REPL.LineEditREPL}, ::MIME{Symbol("text/plain")}, ::Plots.Plot{Plots.PGFPlotsBackend}) at C:\Users\<NAME>\.julia\v0.6\Plots\src\output.jl:149
[9] display(::Base.REPL.REPLDisplay{Base.REPL.LineEditREPL}, ::Plots.Plot{Plots.PGFPlotsBackend}) at .\REPL.jl:125
[10] display(::Plots.Plot{Plots.PGFPlotsBackend}) at .\multimedia.jl:218
[11] eval(::Module, ::Any) at .\boot.jl:235
[12] print_response(::Base.Terminals.TTYTerminal, ::Any, ::Void, ::Bool, ::Bool, ::Void) at .\REPL.jl:144
[13] print_response(::Base.REPL.LineEditREPL, ::Any, ::Void, ::Bool, ::Bool) at .\REPL.jl:129
[14] (::Base.REPL.#do_respond#16{Bool,Base.REPL.##26#36{Base.REPL.LineEditREPL,Base.REPL.REPLHistoryProvider},Base.REPL.LineEditREPL,Base.LineEdit.Prompt})(::Base.LineEdit.MIState, ::Base.AbstractIOBuffer{Array{UInt8,1}}, ::Bool) at .\REPL.jl:646
```
Does anyone know how to fix this?
Answers:
username_1: +1 here on macOS
username_2: +1
username_1: Does anyone know how to correct this issue?
username_3: I ran into this earlier. A *very* hacky workaround that you can do:
```julia
julia> p = plot(rand(10));
julia> savefig(p, "myplot.svg")
```
And you can open the svg using your favorite svg viewer. On OSX I just run ``run(`open myplot.svg`)``.
username_4: This is caused by an issue in [TikzPictures.jl](https://github.com/JuliaTeX/TikzPictures.jl), fixed by this [pull request](https://github.com/JuliaTeX/TikzPictures.jl/pull/46) pending to be merged.
Status: Issue closed
ComplianceAsCode/content | 589714153 | Title: 5.5.1.4 Ensure inactive password lock is 30 days or less (Scored)
Answers:
username_1: The `INACTIVE` setting is checked by `account_disable_post_pw_expiration`.
But the actual list of users is not verified, i.e. whether any users whose passwords have been inactive for more than 30 days are actually disabled.
username_2: @username_1 good point. Seems like CIS should break that out into its own rule, but anyway, reading the rule from the CIS benchmark itself, I thought `chage -l user` can show inactivity.
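As a rough sketch of what such a per-user check could look like (a hypothetical helper, demonstrated on sample shadow-format lines rather than the real `/etc/shadow`):

```shell
# Hypothetical check: print accounts whose INACTIVE field (7th column of
# /etc/shadow) is unset or greater than 30 days.
check_inactive() {
  awk -F: '($7 == "" || $7 + 0 > 30) { print $1 }'
}

# Demonstrated on sample data; against a live system you would read
# /etc/shadow (as root) instead. Prints bob and carol.
printf '%s\n' \
  'alice:x:19000:0:99999:7:30::' \
  'bob:x:19000:0:99999:7:45::' \
  'carol:x:19000:0:99999:7:::' | check_inactive
```

A real check in the content would of course be expressed in OVAL/remediation form; this only illustrates the logic.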
username_1: I believe the check of existing users' passwords is covered by:
- `accounts_password_set_max_life_existing`
- `accounts_password_set_min_life_existing`
I'll add them to CIS control files soon.
pytorch/pytorch | 838593756 | Title: SyncBatchNorm crash when affine=False
Question:
username_0: ## 🐛 Bug
In nn.SyncBatchNorm:
There is a call to `weight.contiguous()` here:
https://github.com/pytorch/pytorch/blob/27048c1dfa80effabf17b8dca66cd2724dd502f8/torch/nn/modules/_functions.py#L12
But weight can be None if affine = False.
https://github.com/pytorch/pytorch/blob/27048c1dfa80effabf17b8dca66cd2724dd502f8/torch/nn/modules/batchnorm.py#L44
So it crashes.
Bug was introduced in:
https://github.com/pytorch/pytorch/commit/d30f4d1dfd5237d89834363ce2cff9de4ee92811
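For illustration, the shape of the guard that avoids the crash can be sketched in plain Python (stand-in names only, not PyTorch's actual source; the real fix belongs in `_functions.py`):

```python
# Illustrative sketch: FakeTensor stands in for torch.Tensor, and
# maybe_contiguous shows the None-guard needed when affine=False
# (in that case the norm layer has no weight/bias, so weight is None).
class FakeTensor:
    def contiguous(self):
        return self  # stand-in for torch.Tensor.contiguous()

def maybe_contiguous(tensor):
    """Call .contiguous() only when the tensor exists."""
    return tensor.contiguous() if tensor is not None else None

weight = FakeTensor()
assert maybe_contiguous(weight) is weight  # affine=True: passed through
assert maybe_contiguous(None) is None      # affine=False: no AttributeError
```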
Answers:
username_1: Yes, https://github.com/pytorch/pytorch/pull/46906 is related. Thanks for the report.
cc @ngimel
woocommerce/woocommerce | 553398349 | Title: Gallery Thumbnail Images Unspecified Dimensions
Question:
username_0: **Describe the bug**
Lately I noticed that the native thumbnails in the WooCommerce product gallery are loaded with unspecified dimensions. This came from a GTMetrix report. I have narrowed it down by only having the WC plugin and a default theme, and it shows that the gallery thumbnails are missing the height and width attributes.
**Screenshots**
GTMetrix report: http://prnt.sc/qr5bt1
**Expected behavior**
Gallery thumbnails need width and height attributes, so the browser doesn't have to calculate them, which benefits page load time.
**Isolating the problem:**
- [X] I have deactivated other plugins and confirmed this bug occurs when only WooCommerce plugin is active.
- [X] This bug happens with a default WooCommerce Storefront theme
- [X] I can reproduce this bug consistently using the steps above.
**WordPress Environment**
<details>
`
### WordPress Environment ###
WordPress address (URL): _private_
Site address (URL): _private_
WC Version: 3.9.0
REST API Version: ✔ 1.0.2
Log Directory Writable: ✔
WP Version: 5.3.2
WP Multisite: –
WP Memory Limit: 1 GB
WP Debug Mode: ✔
WP Cron: ✔
Language: nl_NL
External object cache: –
### Server Environment ###
Server Info: Apache/2
PHP Version: 7.2.25
PHP Post Max Size: 64 MB
PHP Time Limit: 30
PHP Max Input Vars: 1000
cURL Version: 7.67.0
OpenSSL/1.0.2k-fips
SUHOSIN Installed: –
MySQL Version: 5.5.5-10.2.29-MariaDB
Max Upload Size: 64 MB
Default Timezone is UTC: ✔
fsockopen/cURL: ✔
SoapClient: ✔
DOMDocument: ✔
GZip: ✔
Multibyte String: ✔
Remote Post: ✔
Remote Get: ✔
### Database ###
WC Database Version: 3.8.1
WC Database Prefix: wpws_
Total Database Size: 610.19MB
Database Data Size: 333.95MB
[Truncated]
Oldest: 2020-01-02 17:08:47 +0100
Newest: 2020-01-21 23:47:53 +0100
Pending: 1
Oldest: 2020-01-28 23:47:52 +0100
Newest: 2020-01-28 23:47:52 +0100
Canceled: 0
Oldest: –
Newest: –
In-progress: 0
Oldest: –
Newest: –
Failed: 82
Oldest: 2019-04-28 18:58:24 +0200
Newest: 2019-11-04 19:22:48 +0100
`
</details>
Answers:
username_1: Hi @username_0
Thank you for submitting the issue. However, I can’t reproduce the issue you reported using the steps you provided. Everything is working as expected on my end.
With just WooCommerce 3.9.0 active on the default shop page and using the same GTMetrix analysis tool, I don't get the warnings you saw in your screenshot.

Please provide us with more details about the issue which may help us to evaluate it further, such as the setup of the page you analysed: is it the default shop page, what is the name of the theme used, and have you used any WooCommerce blocks such as the All Products block?
username_0: Hi @username_1
Thanks for coming back. The product image gallery with thumbnails is part of the single product page, so its thumbnails: [http://prnt.sc/qsn9bg](http://prnt.sc/qsn9bg)
As mentioned in my description, I have only woocommerce enabled and the default storefront theme, nothing else.
I hope this helps.
username_0: Hello @username_1
Were you able to reproduce this? I have currently updated to 3.9.1 and it's still missing the attributes.
Thank you
username_2: Hi @username_0,
Thank you for getting back to us with more details and apologies for the delayed response. I can reproduce it on my end (using https://gtmetrix.com/):

We won’t be able to include this fix in the upcoming release due to the lower priority of this issue compared to others reported. We’re going to add it to our backlog so we can include it in our planning for one of our future releases.
username_3: Hi @username_2
the absence of the width and height attributes means that the layout cannot be defined prior to those images being loaded. This has severe consequences for Google's new web.dev metrics, particularly CLS.
You can see the impact this has on the Storefront Demos Pagespeed Insight report:
https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Fthemes.woocommerce.com%2Fstorefront%2Fproduct%2Fbuild-your-dslr%2F%23camera-body
The element it is affecting, listed under Avoid Large Layout Shifts, is:
`<ol class="flex-control-nav flex-control-thumbs">`
This element's height cannot be defined until the images are loaded.
The new web.dev metrics will become part of Google's ranking algorithm in 2021. It would be good to see this resolved before it has a negative effect on a site's ranking.
username_4: @username_1 @username_2 - I've done some drilling down on this and would like to report in with what I have so far:
1) Ultimately, it looks to me that this is a FlexSlider issue. The initial markup produced by WC for the thumbs (via wc_get_gallery_image_html()) has attributes for height and width (as well as sizes, srcset, and more). However, then FlexSlider strips that down to the bare minimum. Apparently too much so ;)
2) In a semi-related note, in looking for an available solution, I tried - via the filter 'woocommerce_single_product_carousel_options' - to use some of the other FlexSlider properties ([https://github.com/woocommerce/FlexSlider/wiki/FlexSlider-Properties](https://github.com/woocommerce/FlexSlider/wiki/FlexSlider-Properties)) and couldn't get itemWidth, itemMargin, or maxItems to work. (Note: I didn't try all of the advanced properties so there may be others in this bunch). On the other hand, adding 'randomize' true did randomize, as did changing 'animation' to 'fade' did as well. I mention 'randomize' and the change to 'animation' to verify that some filtering of those args did work. So it wasn't me. I don't think :)
3) Finally, spitballing, if itemWidth did work (read: it gets added to the <li> wrapper of each thumb), and an itemHeight was added, if the img width was set to 100% would that be sufficient for Google and other metrics? @username_3 @username_0, any idea? You can see itemWidth and itemMargin working here in the FS demo: [http://flexslider.woothemes.com/thumbnail-slider.html](http://flexslider.woothemes.com/thumbnail-slider.html)
username_5: Hi, is the issue of bad CLS for a product page still being handled?
As noted in this ticket: the gallery has no height/width and Google punishes this with a bad CLS score. Not a nice sight in Search Console when you have 1000+ product pages and a few "normal" pages. :-)
I've tested 4/5 sites via Lighthouse in the Chrome browser.
Same issue for every product page on every site.
So, is there more news around a specific solution in core?
username_6: Just chiming in here as @username_5 flagged this on Slack.
Re @username_4 point 3 - img width alone would not be sufficient for CLS (and other) CWV warnings, because the layout shift they are complaining about is vertical - the image loading would "push" the rest of the content lower (particularly on mobile).
Any element whose content loads after the first paint should have its height (and width) set, to avoid layout shifts. The width in this case is not so important - because it does not shift the layout.
@username_5 - as a workaround, you can set the height of the containing element, particularly useful if you know what this will always be (i.e. if your product pics / thumbs have a fixed height). You can do this using CSS. This does stop PageSpeed Insights from squeaking at you.
username_6: CLS is basically saying "does the height of this whole page change after FCP?" - and, assuming you load elements lazy or after FCP without height set, this is exactly what happens. The problem? Well, let's say you click an anchor link below one of these elements. You go to press a button, then an image up the page loads, pushing the button down. Uh oh. You missed. Try again! This is poor UX, and now a ranking factor in Google search.
username_1: Thanks all for the additional information. It requires further feedback from the WooCommerce Core team. I am adding the `needs developer feedback` label to this issue so that the Core team could take a look.
Please note it may take a few days for them to get to this issue. Thank you for your patience.
username_5: Thanks Robin for your fantastic feedback.
In Search Console we only got "orange" improvement notifications for the desktop-version; the mobile-version is mostly okay for the CLS.
As you've said, the height parameter is important here.
username_7: Looks like the required changes for this to work are needed [here](https://github.com/woocommerce/FlexSlider/blob/690832b7f972298e76e2965714657a2beec9e35c/jquery.flexslider.js#L246).
username_8: I have the same problem... I tried to find a solution for weeks and I surrendered. Is there any chance that it may be fixed? Bad CLS is dropping our Google rating because it's almost 0.15... Changing the image size makes no sense if you have horizontal + vertical images.
I think many people have a similar problem but they didn't realize it exists...
username_9: I'm way too lazy to create a PR for this, but here is a replacement for jquery.flexslider.js... which you can minify on your own to create jquery.flexslider.min.js. Anyway, two lines of code seems to fix this problem for me. The solution is to pull the img height/width from the original markup and bring it into the generated img tag. Easy cheese.
Should work as a drop in replacement
https://gist.github.com/username_9/160839d5324e30de9bfc95a808eb880e
* There is still an svg icon appearing without height/width info. Does anyone know where this is coming from? <img draggable="false" role="img" class="emoji" alt="🔍" src="https://s.w.org/images/core/emoji/13.0.1/svg/1f50d.svg">
username_10: Simply enqueue your own single-product.js:
```php
add_action( 'wp_enqueue_scripts', 'my_single_product_script' );
function my_single_product_script() {
    if ( is_product() ) {
        global $wp_scripts;
        $wp_scripts->registered['wc-single-product']->src = get_stylesheet_directory_uri() . '/js/my_woo_scripts/my-single-product.js';
    }
}
```
and in my-single-product.js you add:
```js
$(document.body).on('wc-product-gallery-after-init', function() {
    // add image dimensions if flex control nav is present
    if ($('.flex-control-nav').length) {
        console.log('present');
        $('.flex-control-nav li img').each(function() {
            $(this).attr('width', 65).attr('height', 65); // add your specific dimensions
        });
    }
});
```
This adds the required width and height attribute.
theo
username_8: hey @username_10
I would like to test your improvement but im not a professional and im afraid I can break my website. Could you describe with more details steps you wrote?
enqueue single-product.js
adding in my-single-product.js
I see single-product.js in the woocommerce folder, but should I remove it or delete part of it and put in the code you wrote? I will be grateful for a step by step instruction.
Thanks in advance mate!
Adrian
username_10: Hi Adrian
Ok, i’ll try.
The goal is to inject the missing attributes with jquery. That is, when the page has loaded.
This is done principally like so:
`$(selector).attr('width',value).attr('height',value);`
You can place the injection script in the standard js file of your child-theme.
(I hope you do have a child theme, otherwise i recommend to learn about this important feature.)
Open your standard.js (or whatever it is called) and add:
```js
if ($('.flex-control-nav').length) { // first check if the flex-control-nav element is present
    console.log('present'); // if so, you can check its presence in the console. Delete this line afterwards
    $('.flex-control-nav li img').each(function() { // the each function loops through all specified elements: .flex-control-nav li img
        $(this).attr('width', 65).attr('height', 65); // add the width and height attributes. In my case the value is 65. You will have to inspect what dimension your theme serves
    });
}
```
Load the script. Probably you will have to empty the cache. Watch out for any caching plugins that are active.
Check the console. If it says 'present', then the script has recognized the element in question.
Then inspect the .flex-control-nav li img and find the attributes width and height.
If it all works, then delete the console.log line and eventually minify the script.
Try this first. The method described above is a little more complicated. If you want to learn more, i can explain that too.
But for now, take the easy way.
Good luck
theo
username_8: hey @username_10
You can place the injection script in the standard js file of your child-theme.
(I hope you do have a child theme, otherwise i recommend to learn about this important feature.)
- yes, I have a child theme. Where should I look for this js? (woocommerce/assets/js/frontend/single-product.js)?
I also have the PHP Inserter plugin. Can I use it or just edit the woocommerce template files manually?
username_10: For now, leave all woocommerce files alone.
In the directory of your child-theme look for functions.php.
Look for all the js files that are enqueued. Look in the js folder of your child theme.
Eventually you will find a js file that holds basic functions.
Send me a link to your website.
username_3: @username_10 this method may eliminate the missing width/height attributes warning in Lighthouse, as I believe this is audited post-render - but I am not sure it will resolve the CLS problem.
To eliminate the layout shift the browser needs to be able to reserve space for those images. To do that effectively the attributes need to be present in the HTML Tree before it is passed to the browser Layout Engine. Or as near as damn it to the instance of first contentful paint.
Have you checked for Layout Shift on the single product with and without this script?
Alternative fix:
In most instances the missing width/height attributes can be ignored if they are not generating Layout Shift. In the event they do cause Layout Shift, and there is no direct way to add the attributes ( as in this instance ) the simplest fix is to use CSS to set a minimum height on the parent container like so:
```
.flex-control-nav {
min-height: 100px;
}
/* adjust height if necessary for mobile */
@media(max-width: 768px) {
.flex-control-nav {
min-height: 100px;
}
}
```
But ultimately the fix @username_9 provided [here](https://github.com/woocommerce/woocommerce/issues/25461#issuecomment-845673185) is the solution and something that should be merged into the plugin
username_10: @username_3 You are right. The layout shift remains. In my case, it is very small. Thanks for your input
username_9: Thanks @username_3 ! Yeah unfortunately there are no events, hooks or filters to bind to. So we're stuck just rewriting an existing script. I've replaced ours and will continue to do so manually until WooCommerce merges this change or revises the script in another way.
username_8: @username_10 @username_9 @username_3
gentlemen, thank you for contributing to the solution of the problem. css given by username_3 seems to solve the problem. This is not a good place to ask a question, but I see that I deal with specialists - maybe you can tell me how to solve another problem (I did not find the answer on forums, facebook groups, I wrote to the creator of the template, on stackoverflow) and nothing ...
I have a problem with the size of the photos on the product page for mobile devices. Despite the small target size, mobile phones are loaded with large sizes such as 1024w etc. The problem concerns the flex-viewport element. I have tried all possible solutions and spent tens of hours trying to fix the problem. Link to my product page that can be researched by GPSI (https://bit.ly/3wpbYa7). If you are able to help me or direct me to a solution, I will delete this post so as not to clutter the thread. Thanks in advance.
Adrian
username_10: Check out wordpress plugin «adaptive images».
username_8: @username_10 you mean this one? https://wordpress.org/plugins/adaptive-images/
I tried (not sure if this or similar one) and it didnt help.
username_10: yes, i use it since years.
username_8: @username_10 is there a risk to break some images or images quality? during tests of different optimizing plugins broke my website once and had to restore all data...
username_10: Sorry to hear that. I never had any problems with adaptive images. But anyway, always backup your working site.
username_3: @username_8 yeah - this isn't the right place for support - i would suggest speaking with your Theme author. Some pointers: The problem arises because of the src-set `sizes` your theme is outputting - if you inspect one of your images you will see:
`sizes="(max-width: 705px) 100vw, 705px"`
More info on this [see here](https://make.wordpress.org/core/2015/11/10/responsive-images-in-wordpress-4-4/)
Your theme is using WP's default sizes - it was set up this way with the good intentions of always ensuring an image would fill the space available to it... but it makes the assumption that on smaller screens the image will fill the width of the viewport - which it does not because of the container padding - and because of this the browser is left to its own devices and generally grabs a larger than required image.
However, the `sizes` attribute does have a filter hook: `wp_calculate_image_sizes` [see here](https://developer.wordpress.org/reference/hooks/wp_calculate_image_sizes/)
Here's a quick and dirty PHP snippet example of that - note that this may not be applicable with your theme:
```
function db_modify_srcset_sizes($sizes, $size) {
    return '(max-width: 420px) 300px, (min-width: 421px) 768px, (min-width: 769px) 1024px, 100vw';
}
add_filter('wp_calculate_image_sizes', 'db_modify_srcset_sizes', 10, 2);
```
Have a chat with your theme developer about applying this.
username_8: @username_3 Hey,
thank you for the code. It seems that the problem has been solved - no more warnings from GPSI.
Thank you very much mate and big beer for you!
Adrian
username_11: Additional report in 4171922-zen - customer noticed poor CLS score on single product pages on mobile and mentioned that "when you remove the [gallery] thumbnails (using a product with only one image), the CLS issue simply vanishes"
username_12: I tried to fix it with the above code, but I always get the console error:
"Uncaught TypeError: $ is not a function"
This is the content of my custom js file (I created a blank .js file with this content, uploaded it to my child theme's js folder and enqueued it):
```js
$(document.body).on('wc-product-gallery-after-init', function() {
    if ($('.flex-control-nav').length) {
        console.log('present');
        $('.flex-control-nav li img').each(function() {
            $(this).attr('width', 65).attr('height', 65);
        });
    }
});
```
Is there anything I have to place before that code? I have no clue about jQuery, so any help would be really appreciated.
username_2: I am changing the priority from `low` to `high` here given the number of comments we have on the issue. cc @username_14
username_10: Hi username_12
If you get this uncaught typeError: $ is not a function, then jQuery is not available.
For the script to work, the jQuery library must be loaded first.
How do you load the jQuery library?
I hope that helps to solve the error.
theo
username_13: I did a PR with the code of the modified version of @username_9 from https://github.com/woocommerce/woocommerce/issues/25461#issuecomment-845673185
With the hope that it is reviewed and merged, but the library's last commit is from 2019.
https://github.com/woocommerce/FlexSlider/pull/1799
username_14: @username_13 For better or worse, we forked flexslider before inclusion, so we can make this change in WC directly as well. Would you be interested in sending another PR targeting core? See the file here: https://github.com/woocommerce/woocommerce/blob/trunk/assets/js/flexslider/jquery.flexslider.js
username_13: Done https://github.com/woocommerce/woocommerce/pull/30648
I don't understand why keep a repository open to the world if the project is not developed there anymore. Maybe archive the repository to avoid confusion?
username_9: This is the way
username_13: I had to change your code @username_9 as it took the whole slider's height and width and not the real width of the image.
My version does some calculations and sets a better width/height, not just a filler to silence Lighthouse.
username_8: Hello,
@username_13
I'm using this function:
```php
function db_modify_srcset_sizes($sizes, $size) {
    return '(max-width: 420px) 300px, (min-width: 421px) 768px, (min-width: 769px) 1024px, 100vw';
}
add_filter('wp_calculate_image_sizes', 'db_modify_srcset_sizes', 10, 2);
```
Is it better to apply code from last posts in this conversation? When I disable code from above PSI immediately tells about wrong image size.
username_13: I don't think this affects that part, as the gallery thumbnails' HTML is generated by JS and not PHP.
username_8: @username_13 should I leave it as it is or try the improvements? I'm no professional
username_13: It depends - does Lighthouse report problems that this line fixes? I have no idea
username_9: Thanks for that. I didn't realize it was using the wrong dimensions. My intention wasn't to silence LH, but to actually fix it. So this is good stuff. I'll review these changes and most likely drop them into my websites that use this feature. :)
username_15: Sorry to join the discussion only now, but I want to share the fix because I guess I've already solved this issue in the past... Anyhow, this is only a part of the speed optimizations needed in 2021; another important change would be to fix the attribute that prevents lazyloading.
Basically, we wait for the image to load, then add width and height:
```
// jquery.flexslider.js#L244
item = $( '<a></a>' ).attr( 'href', '#' ).text( j );
if ( slider.vars.controlNav === "thumbnails" ) {
item = $('<img/>', {
load: function (el) {
el.currentTarget.width = el.currentTarget.naturalWidth;
el.currentTarget.height = el.currentTarget.naturalHeight;
},
src: slide.attr('data-thumb'),
alt: slide.attr('alt')
})
}
```
https://github.com/username_15/woocommerce/blob/dbc22d6f8968cf64909e89943764720eb7eed647/assets/js/flexslider/jquery.flexslider.js#L244
username_16: Can confirm this works.
Status: Issue closed
docker-java/docker-java | 458958376 | Title: Error to copy file to container
Question:
username_0: Is there any limitation for Windows?
code:
```java
public static void main(String[] args) throws IOException {
// Data
String imageId = "openjdk:8";
String containerName = "alpine";
String containerFile = "/opt/app.jar";
String hostFile = "api-0.0.1-SNAPSHOT.jar";
// Docker client
DockerClient dockerClient = DockerClientBuilder.getInstance("tcp://localhost:2375").build();
// Create container
try (CreateContainerCmd createContainer = dockerClient
.createContainerCmd(imageId).withName(containerName)) {
createContainer.withTty(true);
createContainer.exec();
}
// Start container
dockerClient.startContainerCmd(containerName).exec();
// Copy file to container
try (CopyArchiveToContainerCmd cmd =
dockerClient.copyArchiveToContainerCmd(containerName)
.withHostResource("C:\\Users\\jose.da.silva.neto\\Desktop\\netxio\\netx.io-api\\target\\api-0.0.1-SNAPSHOT.jar")
.withRemotePath(containerFile).withNoOverwriteDirNonDir(false)) {
cmd.exec();
}
// Stop container
//dockerClient.killContainerCmd(containerName).exec();
// Remove container
//dockerClient.removeContainerCmd(containerName).exec();
}
```
Error output:
```
21:58:20.540 [main] DEBUG com.github.dockerjava.jaxrs.JerseyDockerCmdExecFactory$1 - Connection released: [id: 1][route: {}->http://localhost:2375][total kept alive: 0; route allocated: 0 of 2; total allocated: 0 of 20]
21:58:20.541 [main] INFO org.apache.http.impl.execchain.RetryExec - I/O exception (java.net.SocketException) caught when processing request to {}->http://localhost:2375: Software caused connection abort: socket write error
21:58:20.543 [main] DEBUG org.apache.http.impl.execchain.RetryExec - Software caused connection abort: socket write error
java.net.SocketException: Software caused connection abort: socket write error
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
at org.apache.http.impl.conn.LoggingOutputStream.write(LoggingOutputStream.java:74)
at org.apache.http.impl.io.SessionOutputBufferImpl.streamWrite(SessionOutputBufferImpl.java:124)
at org.apache.http.impl.io.SessionOutputBufferImpl.flushBuffer(SessionOutputBufferImpl.java:136)
at org.apache.http.impl.io.SessionOutputBufferImpl.write(SessionOutputBufferImpl.java:167)
at org.apache.http.impl.io.SessionOutputBufferImpl.write(SessionOutputBufferImpl.java:179)
at org.apache.http.impl.io.SessionOutputBufferImpl.writeLine(SessionOutputBufferImpl.java:219)
at org.apache.http.impl.io.ChunkedOutputStream.flushCacheWithAppend(ChunkedOutputStream.java:123)
at org.apache.http.impl.io.ChunkedOutputStream.write(ChunkedOutputStream.java:179)
at org.glassfish.jersey.message.internal.CommittingOutputStream.write(CommittingOutputStream.java:224)
at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$UnCloseableOutputStream.write(WriterInterceptorExecutor.java:300)
at org.glassfish.jersey.message.internal.ReaderWriter.writeTo(ReaderWriter.java:117)
[Truncated]
at org.glassfish.jersey.message.internal.InputStreamProvider.writeTo(InputStreamProvider.java:61)
at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.invokeWriteTo(WriterInterceptorExecutor.java:266)
at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.aroundWriteTo(WriterInterceptorExecutor.java:251)
at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:163)
at com.github.dockerjava.jaxrs.filter.LoggingFilter.aroundWriteTo(LoggingFilter.java:300)
at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:163)
at org.glassfish.jersey.message.internal.MessageBodyFactory.writeTo(MessageBodyFactory.java:1135)
at org.glassfish.jersey.client.ClientRequest.doWriteEntity(ClientRequest.java:516)
at org.glassfish.jersey.client.ClientRequest.writeEntity(ClientRequest.java:498)
at org.glassfish.jersey.apache.connector.ApacheConnector$1.writeTo(ApacheConnector.java:598)
at org.apache.http.impl.execchain.RequestEntityProxy.writeTo(RequestEntityProxy.java:121)
at org.apache.http.impl.DefaultBHttpClientConnection.sendRequestEntity(DefaultBHttpClientConnection.java:156)
at org.apache.http.impl.conn.CPoolProxy.sendRequestEntity(CPoolProxy.java:152)
at org.apache.http.protocol.HttpRequestExecutor.doSendRequest(HttpRequestExecutor.java:238)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
... 19 more
```
Answers:
username_0: I found the problem
the remote path was the name of a file:
`String containerFile = "/opt/app.jar";`
I changed to:
`String containerFile = "/opt"` and it worked.
but how do I do something like this:
`ADD target/*.jar /opt/app.jar`
Status: Issue closed
rails/rails | 201433450 | Title: redirect behavior change with path: not documented
Question:
username_0: given this route:
```ruby
get "/user/sign-in", to: redirect("/sign-in")
```
`/user/sign-in?foo=bar` will redirect to `/sign-in`
given this route:
```ruby
get "/user/sign-in", to: redirect(path: "/sign-in")
```
`/user/sign-in?foo=bar` will redirect to `/sign-in?foo=bar`
[This test](https://github.com/rails/rails/blob/master/actionpack/test/dispatch/routing_test.rb#L291-L298) is the closest thing which might suggest this is the intended behavior in the `path:` case. Although the name suggests that it's testing something else and the params are an afterthought. In fact from reading that test one might imagine that params are only carried over by default when the host is being changed in the redirect.
What's the intended behavior? I would have expected those two routes to have identical behavior.
The fact that the `path:` case behaves as such doesn't seem to be documented ([ActionDispatch::Routing::Redirection](http://api.rubyonrails.org/classes/ActionDispatch/Routing/Redirection.html)). I'd like to contribute some documentation, but I don't know the rationale.
Thanks,
John
Answers:
username_1: The intended behavior is exactly what is happening now, per the commit message that introduced that test https://github.com/rails/rails/commit/0bda6f1ec664fcfd1b312492a6419e3d76d5baa7.
When you pass a hash, only the relevant parts of the URL change; in your case only the path changes, and the query string, the host, etc. are kept. Without the hash, everything is replaced by the URL passed as the argument.
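That difference can be sketched in plain Ruby (an illustration of the documented behavior, not Rails internals — the two helper names here are made up):

```ruby
require 'uri'

# String form, redirect("/sign-in"): the whole URL is replaced by the
# argument, so the incoming query string is dropped.
def redirect_string(request_url, target)
  uri = URI.parse(request_url)
  uri.path = target
  uri.query = nil   # everything not in the argument is discarded
  uri.to_s
end

# Hash form, redirect(path: "/sign-in"): only the parts named in the
# hash change; query string, host, etc. are carried over.
def redirect_hash(request_url, parts)
  uri = URI.parse(request_url)
  uri.path = parts[:path] if parts[:path]
  uri.to_s  # query string preserved
end

redirect_string("http://example.com/user/sign-in?foo=bar", "/sign-in")
# => "http://example.com/sign-in"
redirect_hash("http://example.com/user/sign-in?foo=bar", path: "/sign-in")
# => "http://example.com/sign-in?foo=bar"
```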
Can you work on improving the documentation to explain this difference in behavior?
username_2: Looks like @username_3 is working on a PR to fix this at #27730...
Status: Issue closed
username_0: 🆒 |
halfnelson/svelte-native-preprocessor | 1028074063 | Title: preprocessor is slow
Question:
username_0: within this project https://github.com/username_0/alpimaps
The webpack build time is 12s without the svelte-native preprocessor and 19s with it.
It seems to be dependent on the number of svelte files.
Any way to make it faster? It is really slow right now... |
burke-software/django-mass-edit | 1001335102 | Title: Add a confirmation page on Save
Question:
username_0: Currently, when a mass edit is being saved, the data is directly saved without asking for any confirmation. It would be nice to have a confirmation feature to show the changes that are going to be done, as well as the list of objects that will be affected. This is particularly useful when the model has lots of fields and may be confusing. |
Azure/azure-cli | 348018965 | Title: compute: "Microsoft.Compute/skus" should accept server filter
Question:
username_0: This api returns boat load of information including configurations across all compute resources with 3MB payload returned. Because of the volume, the information is hard to digest by CLI users.
Considering the importance of the api (the only place people can query zone list and vm size availability) I suggest, service end should expose server side filter including "resource type" and "vm size" so to retrieve better data with shorter latency.
Status: Issue closed
Answers:
username_1: Cleaning up old issues from 2016 - 2018. Please reopen this issue if it is still a concern. |
DigitalPlatform/dp2 | 314512475 | Title: "I Love the Library" WeChat Official Account and Web Test Worksheet / Library Introduction
Question:
username_0: 1. 未选择图书馆时,进入“图书馆介绍”界面时出现未选择图书馆提示,点击链接选择图书馆。选好图书馆后,自动转回“图书馆介绍”界面。
Web:√
微信:
2. “图书馆介绍”界面显示正常。图书馆介绍界面有栏目的概念,但显示是在同一个界面,且按栏目分组显示。
Web:√
微信:
3. 当前账号为pubilc或读者账号时,用户只能查看图书馆介绍界面。
Web:√
微信:
4. 当前账号为有权限的工作人员账号时,图书馆界面除了显示内容以外,增加“新发布信息”按钮。
Web:√
微信:
5. 点击需要编辑的介绍内容 --该区域显示显示为绿底色,且右边出现“编辑”和“删除”按钮--点击该内容区域,取消编辑状态:底色消失,“编辑”和“删除”按钮消失。
Web:√
微信:
6. 工作人员可以对当前介绍内容进行删除。
点击需要编辑的介绍内容区域--该区域显示显示为绿底色,且右边出现“编辑”和“删除”按钮--点击“删除”--系统弹出对话框提示是否要删除该内容--点“取消”--对话框关闭,回到介绍界面,且编辑区域关闭。
点击需要编辑的介绍内容区域--该区域显示显示为绿底色,且右边出现“编辑”和“删除”按钮--点击“删除”-系统弹出对话框提示是否要删除该内容--点“确定”--系统弹出对话框提示删除成功--点“确定”--对话框关闭,回到图书馆介绍界面,编辑区域关闭,且刚才删除的内容消失。
Web:√
微信:
7. 工作人员可以对当前介绍内容进行编辑。
点击需要编辑的内容区域--该区域显示显示为绿底色,且右边出现“编辑”和“删除”按钮--点击“编辑”--进入图书馆介绍编辑界面--下拉选择或自定义栏目,编辑标题和内容--点“取消”--对话框关闭,回到图书馆介绍界面,且编辑区域关闭。系统显示为修改之前的介绍内容。
点击需要编辑的内容区域--该区域显示显示为绿底色,且右边出现“编辑”和“删除”按钮--点击“编辑”--进入图书馆介绍编辑界面--下拉选择或自定义栏目,编辑标题和内容--点“保存”--系统弹出对话框提示保存成功--点“确定”--对话框关闭,回到图书馆介绍界面,且编辑区域关闭。系统显示为修改之后的介绍内容(含未修改的内容)。
Web:√
微信:
8. 工作人员增加介绍内容。
在图书馆介绍界面直接点击“新发布信息”按钮--进入图书馆介绍编辑界面--用户可以下拉选择或自定义栏目,编辑标题和内容,
1)设置后点击“取消”--回到“图书馆介绍”界面。系统显示增加之前的介绍内容。
2)设置后点击“确定-”-系统弹出对话框提示操作成功--点“确定”--对话框关闭,回到“图书馆介绍”界面,系统按照介绍先后顺序显示内容,当前新增内容按照编辑时栏目最前面的数字顺序显示在预定的位置。
Web:√
微信:
Answers:
username_0: This plan has been migrated to chord.
Status: Issue closed
|
thorrak/tiltbridge | 838000273 | Title: WiFi Connection Lost Repeatedly
Question:
username_0: Somewhere between every few hours to once per day the tilt bridge loses connection to the wifi (WPA2) for multiple hours. Other devices on the network do not seem to have the same issue. When I push the reboot button on the device it reconnects immediately upon reboot.
I have tried using the Arduino IDE serial port monitor, but it outputs gibberish characters. When I try to run the serial port monitor using Visual Studio Code with PlatformIO installed, I get the following error message.
Executing task: C:\Users\Maniac\.platformio\penv\Scripts\platformio.exe device monitor <
--- Available filters and text transformations: colorize, debug, default, direct, esp32_exception_decoder, hexlify, log2file, nocontrol, printable, send_on_enter, time
--- More details at http://bit.ly/pio-monitor-filters
--- forcing DTR active
--- forcing RTS active
Error: [WinError 2] The system cannot find the file specified
The terminal process "C:\Users\you\.platformio\penv\Scripts\platformio.exe 'device', 'monitor'" terminated with exit code: 1.
Is there a guide somewhere I can follow to get the serial port monitor in Visual Studio/Platform IO working or another, correct way to diagnose what is going on?
The Til Bridge has lost connection to your WiFi.
Attempting to reconnect..

using TTGO USB-C with v1.0.2 from brewflasher (web interface about page says 1.0.1)
Answers:
username_1: The error you are receiving in PIO looks to be because of a missing driver. You might search for that. The baud rate is 115200, which may address your gibberish.
Other than that - all I can say is the antennas on these are not optimized. If you reset the wifi (the top button) and then pay attention to the signal strength when you re-set it, it may give you a clue.
username_0: Thanks for the tips. The baud rate fixed the gibberish. I'll update with the monitor log after it loses connection next.
username_0: Here we have it. Thanks for taking a look.
[tiltbridge-log.txt](https://github.com/thorrak/tiltbridge/files/6186060/tiltbridge-log.txt)
username_1: Yeah, the more you do the more memory it uses. That said, I am doing exactly the same thing without this issue.
Did you try resetting the WiFi credentials, going through the connection workflow, and seeing what the dB rating is for your access point?
username_0: I just tried resetting the WiFi credentials while connected to the serial monitor. I don't see where the dB rating is shown for the access point, but the TiltBridge is very close to the access point, and this graphical representation of signal strength is shown for my network and a soft network I set up for testing before choosing one and connecting.

username_1: Yes, that's it. -42 is very good.
Is it possible that something weird like cooling starting on the fridge is knocking it out?
username_0: During the test for the attached log above, the Tilt was in a bucket of water next to the bridge, away from any fridge.
I thought it might be related to my AP having spaces in the password, but testing with the soft AP with no spaces in the password had a similar result. Would the duplicate APs from the mesh WiFi system cause trouble for the bridge when picking one to reconnect to?
username_0: Here's another log from after clearing the wifi settings and reconnecting in the post above.
[new-log.txt](https://github.com/thorrak/tiltbridge/files/6190692/new-log.txt)
username_1: I am reasonably sure this is just more silliness between WiFi and Bluetooth - which use the same radio. Unfortunately this is all either in 3rd party libs or (worse yet) in the core.
I had been looking at a little more aggressive reboots in teh case of issues like this, along with a generic 24 hour reboot just for everything else. I'll look into that.
username_1: The change that reboots the device when it is unable to rejoin the network, as well as a general-purpose 24-hour reboot, has been pushed to devel. There are a few other changes pending in devel, so this will get done when @thorrak gets around to it next.
username_0: Thanks for loading the dev branch into the alpha label in brewflasher. I loaded the TTGO alpha, and the serial monitor log is much less verbose than the main branch was, but the reboot-on-disconnect seems to be a functioning band-aid for now. It is now rebooting a lot rather than disconnecting a lot. Checking the about page in the local web interface periodically, I haven't seen uptime get over 15 minutes.
[ttgoalphalog.txt](https://github.com/thorrak/tiltbridge/files/6245863/ttgoalphalog.txt)
username_1: Well that's a crapton more than I was getting. I think John is turning off the logging because I thought I left it on.
What targets/configuration are you using? Maybe there's a combination that works especially poorly.
username_0: After loading the beta build, the only things I did were connect to WiFi, change the offset to -4, and set up a link to a Google Sheet to see if the reboot solution enabled a continuous stream of data points in between reboots.
username_1: Well, here's what I have:

The commit will not be the same but it should be the same code. That suggests your WiFi coverage there is really spotty, or there's something else interfering. Do you have the ability to capture any logs on your AP?
username_0: The TTGO board is still near the main router of a mesh network. Here's the relevant log entries from the router:
Apr 1 14:09:17 syslog: wlceventd_proc_event(490): eth1: Deauth_ind tiltbridge-alpha-mac, status: 0, reason: Deauthenticated because sending station is leaving (or has left) IBSS or ESS (3), rssi:0
Apr 1 14:09:18 syslog: wlceventd_proc_event(490): eth1: Deauth_ind tiltbridge-alpha-mac, status: 0, reason: Class 3 frame received from nonassociated station (7), rssi:0
Apr 1 14:09:20 syslog: wlceventd_proc_event(490): eth1: Deauth_ind tiltbridge-alpha-mac, status: 0, reason: Class 3 frame received from nonassociated station (7), rssi:0
Apr 1 15:37:43 syslog: wlceventd_proc_event(526): eth1: Auth tiltbridge-alpha-mac, status: Successful (0), rssi:0
Apr 1 15:37:43 syslog: wlceventd_proc_event(555): eth1: Assoc tiltbridge-alpha-mac, status: Successful (0), rssi:0
Apr 1 15:41:37 syslog: wlceventd_proc_event(490): eth1: Deauth_ind tiltbridge-alpha-mac, status: 0, reason: Deauthenticated because sending station is leaving (or has left) IBSS or ESS (3), rssi:0
Apr 1 15:42:01 syslog: wlceventd_proc_event(507): eth1: Disassoc tiltbridge-alpha-mac, status: 0, reason: Disassociated because sending station is leaving (or has left) BSS (8), rssi:0
username_1: That's weird ... you don't have someone firing off a deauther nearby do you? :P
How about the power supply. It's a longshot but have you tried a different one?
username_0: I don't have my deauther powered up. I'm not aware of anyone in the area who knows what that is. It also seems unlikely to only be deauthing these two TTGO boards and no other devices.
I've had the same result with a few different power supplies, including powering the board off a pc usb port in order to capture the logs.
username_1: This is generally where I throw my hands up and page out my network engineers to run a sniffer. It's weird.
I can see about putting a debug version on BrewFlasher with more logging enabled. *Maybe* that will help diagnose the issue. I'd probably have to enable core logging and that would be VERY chatty.
username_1: I put an Alpha out there with *significantly* increased logging:

There will be a metric crapton of BLE debug. Unfortunately, the lib does not respect its own logging levels and if I turn on core, it turns that on. Easy enough to filter out the ones that contain "NimBLEScan:" though.
You should see some correlation at the drops between your AP and the core WiFi lib debug. Hopefully. I've got it set for WARN level, hopefully that's enough.
username_0: I had a feeling the mesh nodes broadcasting overlapping signals with the same network name were confusing the ESP32, even though I haven't seen any other device have such problems, including a couple of ESP8266s running different bits of code and this same ESP32 previously running Tilt Simulator. I created a test guest network that does not replicate across the nodes. The log below shows the output of the main branch serial monitor, since the dev branch does not show as much. I now have one board each connected to the test network, on main and dev.
18:34:48.633 -> *WM: [1] 10 networks found
18:34:48.633 -> *WM: [2] DUP AP: main-network-node
18:34:48.633 -> *WM: [2] DUP AP: main-network-node
18:34:48.633 -> *WM: [2] DUP AP: main-network-node
18:34:48.633 -> *WM: [2] DUP AP: main-network-node
18:34:48.633 -> *WM: [2] AP: -45 main-network
18:34:48.633 -> *WM: [2] AP: -45 test-network-singular
It appears they are no longer getting disconnected as much/at all. I have a feeling it is confusion between the nodes.


I'll give the extra logging alpha a go next.
username_0: There are a couple of reboots in there at lines 1254 & 1517. I removed all the log entries containing NimBLEScan using the regular expression search ^.*\b(NimBLEScan)\b.*$\r?\n in find-and-replace with an empty replacement.
[chatty-log.txt](https://github.com/thorrak/tiltbridge/files/6247157/chatty-log.txt)
username_1: This one:
```
21:01:16.933 -> 4041978 V: Serving reset reason.
21:01:16.979 -> abort() was called at PC 0x400863b9 on core 0
21:01:16.979 ->
21:01:16.979 -> ELF file SHA256: 0000000000000000
21:01:16.979 ->
21:01:16.979 -> Backtrace: 0x4008f710:0x3ffbf620 0x4008f98d:0x3ffbf640 0x400863b9:0x3ffbf660 0x400864e5:0x3ffbf690 0x401132af:0x3ffbf6b0 0x4010b075:0x3ffbf970 0x4010afd1:0x3ffbf9c0 0x400e6f0f:0x3ffbf9f0 0x401c276a:0x3ffbfa20 0x401b3dde:0x3ffbfa40 0x4001791d:0x3ffbfa60 0x401b4585:0x3ffbfa80 0x401b4662:0x3ffbfac0 0x400188f5:0x3ffbfaf0 0x401b4017:0x3ffbfb30 0x400175ee:0x3ffbfb50 0x40017688:0x3ffbfb70 0x4008ee92:0x3ffbfba0 0x4008f067:0x3ffbfbc0 0x401b3d8b:0x3ffbfbe0 0x4008f084:0x3ffbfc00 0x40083759:0x3ffbfc20 0x400885b7:0x3ffb9d00 0x4008ee39:0x3ffb9d20 0x401c2909:0x3ffb9d40 0x401a46fc:0x3ffb9d60 0x400e6d05:0x3ffb9d80 0x40082283:0x3ffb9da0 0x4009098e:0x3ffb9dc0
21:01:16.979 ->
21:01:16.979 -> Rebooting...
```
... is because you left it on the "About" screen. Yes, ironically enough, that screen highlights an issue with WiFi and BLE. The TTGO is better in this respect than the larger TFT, but it still happens.
This one:
```
21:07:57.419 -> [D][WiFiGeneric.cpp:374] _eventCallback(): Event: 5 - STA_DISCONNECTED
21:07:57.419 -> [W][WiFiGeneric.cpp:391] _eventCallback(): Reason: 7 - NOT_ASSOCED
21:08:02.839 -> E (406135) task_wdt: Task watchdog got triggered. The following tasks did not reset the watchdog in time:
21:08:02.839 -> E (406135) task_wdt: - IDLE0 (CPU 0)
21:08:02.839 -> E (406135) task_wdt: Tasks currently running:
21:08:02.839 -> E (406135) task_wdt: CPU 0: esp_timer
21:08:02.839 -> E (406135) task_wdt: CPU 1: loopTask
21:08:02.839 -> E (406135) task_wdt: Aborting.
21:08:02.839 -> abort() was called at PC 0x4011b53c on core 0
21:08:02.839 ->
21:08:02.839 -> ELF file SHA256: 0000000000000000
21:08:02.839 ->
21:08:02.839 -> Backtrace: 0x4008f710:0x3ffbfbc0 0x4008f98d:0x3ffbfbe0 0x4011b53c:0x3ffbfc00 0x40083759:0x3ffbfc20 0x40102746:0x3ffb77a0 0x4010328f:0x3ffb77c0 0x401c4dbd:0x3ffb77f0 0x400d2e98:0x3ffb7810 0x400d2fe1:0x3ffb7830 0x400d3021:0x3ffb7850 0x400dcc37:0x3ffb7870 0x4011d88e:0x3ffb7890 0x4011d903:0x3ffb78b0 0x4009098e:0x3ffb78d0
21:08:02.839 ->
21:08:02.839 -> Rebooting...
```
Seems more like the de-auth issue. Does that correlate with what you see in your AP logs?
username_0: The AP log doesn't persist long enough to go back to that point. Here it is running again with both logs from the time STA_DISCONNECTED shows up in the tiltbridge log.
AP Log
```
Apr 2 11:02:59 syslog: wlceventd_proc_event(490): eth1: Deauth_ind tiltbridge-mac, status: 0, reason: Deauthenticated because sending station is leaving (or has left) IBSS or ESS (3), rssi:0
Apr 2 11:03:00 syslog: wlceventd_proc_event(490): eth1: Deauth_ind tiltbridge-mac, status: 0, reason: Class 3 frame received from nonassociated station (7), rssi:0
Apr 2 11:03:00 syslog: wlceventd_proc_event(490): eth1: Deauth_ind tiltbridge-mac, status: 0, reason: Class 3 frame received from nonassociated station (7), rssi:0
Apr 2 11:03:00 syslog: wlceventd_proc_event(490): eth1: Deauth_ind tiltbridge-mac, status: 0, reason: Class 3 frame received from nonassociated station (7), rssi:0
Apr 2 11:03:00 syslog: wlceventd_proc_event(490): eth1: Deauth_ind tiltbridge-mac, status: 0, reason: Class 3 frame received from nonassociated station (7), rssi:0
Apr 2 11:03:00 syslog: wlceventd_proc_event(490): eth1: Deauth_ind tiltbridge-mac, status: 0, reason: Class 3 frame received from nonassociated station (7), rssi:0
Apr 2 11:03:00 syslog: wlceventd_proc_event(490): eth1: Deauth_ind tiltbridge-mac, status: 0, reason: Class 3 frame received from nonassociated station (7), rssi:0
```
tiltbridge-chatty log
```
11:03:03.062 -> 6726455 V: Free Heap: 135728, Largest contiguous block: 51752, Frag: 62%
11:03:06.379 -> [D][WiFiGeneric.cpp:374] _eventCallback(): Event: 5 - STA_DISCONNECTED
11:03:06.379 -> [W][WiFiGeneric.cpp:391] _eventCallback(): Reason: 7 - NOT_ASSOCED
11:03:26.480 -> .... 6750219 E: Unable to reconnect WiFI, restarting.
11:03:27.817 -> ets Jul 29 2019 12:21:46
11:03:27.817 ->
11:03:27.817 -> rst:0xc (SW_CPU_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)
11:03:27.817 -> configsip: 0, SPIWP:0xee
11:03:27.817 -> clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00
11:03:27.817 -> mode:DIO, clock div:2
11:03:27.817 -> load:0x3fff0018,len:4
11:03:27.817 -> load:0x3fff001c,len:1044
11:03:27.817 -> load:0x40078000,len:10044
11:03:27.817 -> load:0x40080400,len:5872
11:03:27.817 -> entry 0x400806ac
11:03:28.423 ->
11:03:28.423 -> 31 N: Serial logging started at 115200.
11:03:28.423 -> 33 V: Loading config.
11:03:28.518 -> [W][SPIFFS.cpp:71] begin(): SPIFFS Already Mounted!
11:03:28.618 -> 217 V: Initializing LCD.
11:03:29.173 -> 792 V: Initializing WiFi.
11:03:29.173 -> *wm:[2] Added Parameter: mdns
11:03:29.173 -> *wm:[1] AutoConnect
11:03:29.312 -> [D][WiFiGeneric.cpp:374] _eventCallback(): Event: 0 - WIFI_READY
11:03:29.312 -> *wm:[D][WiFiGeneric.cpp:374] _eventCallback(): Event: 2 - STA_START
11:03:29.312 -> [2] WiFiSetCountry to US
11:03:29.312 -> *wm:[2] [OK] esp_wifi_set_country: US
11:03:29.312 -> *wm:[2] ESP32 event handler enabled
11:03:29.546 -> *wm:[2] Connecting as wifi client...
11:03:29.546 -> *wm:[2] setSTAConfig static ip not set, skipping
11:03:29.546 -> *wm:[1] Connecting to SAVED AP: MeshAP
11:03:30.015 -> *wm:[1] connectTimeout not set, ESP waitForConnectResult...
11:03:31.418 -> [D][WiFiGeneric.cpp:374] _eventCallback(): Event: 4 - STA_CONNECTED
11:03:31.467 -> [D][WiFiGeneric.cpp:374] _eventCallback(): Event: 7 - STA_GOT_IP
11:03:31.467 -> [D][WiFiGeneric.cpp:419] _eventCallback(): STA IP: 000.000.000.000, MASK: 000.000.000.000, GW: 000.000.000.000
11:03:31.512 -> *wm:[2] Connection result: WL_CONNECTED
11:03:31.512 -> *wm:[1] AutoConnect: SUCCESS
11:03:31.512 -> *wm:[1] STA IP Address: 000.000.000.000
11:03:31.558 -> 3189 V: Initializing scanner.
11:03:31.745 -> I NimBLEDevice: "BLE Host Task Started"
11:03:31.791 -> I NimBLEDevice: "NimBle host synced."
11:03:34.885 -> 6506 N: HTTP server started. Open: http://tiltbridge-chatty.local/ to view application.
```
Back to AP Log
```
Apr 2 11:03:30 syslog: wlceventd_proc_event(526): eth1: Auth tiltbridge-mac, status: Successful (0), rssi:0
Apr 2 11:03:33 syslog: wlceventd_proc_event(555): eth1: Assoc 3tiltbridge-mac, status: Successful (0), rssi:0
```
username_1: ```
11:03:06.379 -> [W][WiFiGeneric.cpp:391] _eventCallback(): Reason: 7 - NOT_ASSOCED
```
I'll have to see if I can get a callback on that one.
username_0: While that first set of logs in my last post shows the tiltbridge-chatty bouncing between AP nodes and not reauthenticating the other board on the last main branch release has been sitting on a non-mesh SSID for longer than I have seen before.

username_0: Here's another instance
```
Apr 2 12:17:49 syslog: wlceventd_proc_event(490): eth1: Deauth_ind tiltbridge-chatty-mac, status: 0, reason: Class 3 frame received from nonassociated station (7), rssi:0
Apr 2 12:17:49 syslog: wlceventd_proc_event(490): eth1: Deauth_ind tiltbridge-chatty-mac, status: 0, reason: Class 3 frame received from nonassociated station (7), rssi:0
Apr 2 12:17:49 syslog: wlceventd_proc_event(490): eth1: Deauth_ind tiltbridge-chatty-mac, status: 0, reason: Class 3 frame received from nonassociated station (7), rssi:0
Apr 2 12:17:49 syslog: wlceventd_proc_event(490): eth1: Deauth_ind tiltbridge-chatty-mac, status: 0, reason: Class 3 frame received from nonassociated station (7), rssi:0
Apr 2 12:18:09 syslog: wlceventd_proc_event(490): eth1: Deauth_ind tiltbridge-chatty-mac, status: 0, reason: Class 3 frame received from nonassociated station (7), rssi:0
Apr 2 12:18:09 syslog: wlceventd_proc_event(490): eth1: Deauth_ind tiltbridge-chatty-mac, status: 0, reason: Class 3 frame received from nonassociated station (7), rssi:0
Apr 2 12:18:09 syslog: wlceventd_proc_event(490): eth1: Deauth_ind tiltbridge-chatty-mac, status: 0, reason: Class 3 frame received from nonassociated station (7), rssi:0
Apr 2 12:18:09 syslog: wlceventd_proc_event(490): eth1: Deauth_ind tiltbridge-chatty-mac, status: 0, reason: Class 3 frame received from nonassociated station (7), rssi:0
Apr 2 12:18:09 syslog: wlceventd_proc_event(490): eth1: Deauth_ind tiltbridge-chatty-mac, status: 0, reason: Class 3 frame received from nonassociated station (7), rssi:0
Apr 2 12:24:16 syslog: wlceventd_proc_event(526): eth1: Auth tiltbridge-chatty-mac, status: Successful (0), rssi:0
Apr 2 12:24:16 syslog: wlceventd_proc_event(555): eth1: Assoc tiltbridge-chatty-mac, status: Successful (0), rssi:0
12:18:04.992 -> 4476509 V: Free Heap: 137060, Largest contiguous block: 52368, Frag: 62%
12:18:10.752 -> [D][WiFiGeneric.cpp:374] _eventCallback(): Event: 5 - STA_DISCONNECTED
12:18:10.752 -> [W][WiFiGeneric.cpp:391] _eventCallback(): Reason: 7 - NOT_ASSOCED
12:18:31.732 -> . 4503275 E: Unable to reconnect WiFI, restarting.
12:18:32.810 -> ets Jul 29 2019 12:21:46
12:18:32.810 ->
12:18:32.810 -> rst:0xc (SW_CPU_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)
12:18:32.810 -> configsip: 0, SPIWP:0xee
12:18:32.810 -> clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00
12:18:32.810 -> mode:DIO, clock div:2
12:18:32.810 -> load:0x3fff0018,len:4
12:18:32.810 -> load:0x3fff001c,len:1044
12:18:32.810 -> load:0x40078000,len:10044
12:18:32.810 -> load:0x40080400,len:5872
12:18:32.810 -> entry 0x400806ac
12:18:33.374 ->
12:18:33.422 -> 31 N: Serial logging started at 115200.
12:18:33.422 -> 33 V: Loading config.
12:18:33.469 -> [W][SPIFFS.cpp:71] begin(): SPIFFS Already Mounted!
12:18:33.562 -> 217 V: Initializing LCD.
12:18:34.174 -> 792 V: Initializing WiFi.
12:18:34.174 -> *wm:[2] Added Parameter: mdns
12:18:34.174 -> *wm:[1] AutoConnect
12:18:34.316 -> [D][WiFiGeneric.cpp:374] _eventCallback(): Event: 0 - WIFI_READY
12:18:34.316 -> *wm:[D][WiFiGeneric.cpp:374] _eventCallback(): Event: 2 - STA_START
12:18:34.316 -> [2] WiFiSetCountry to US
12:18:34.316 -> *wm:[2] [OK] esp_wifi_set_country: US
12:18:34.316 -> *wm:[2] ESP32 event handler enabled
12:18:34.504 -> *wm:[2] Connecting as wifi client...
12:18:34.504 -> *wm:[2] setSTAConfig static ip not set, skipping
12:18:34.504 -> *wm:[1] Connecting to SAVED AP: MeshAP
12:18:35.018 -> *wm:[1] connectTimeout not set, ESP waitForConnectResult...
12:18:36.559 -> [D][WiFiGeneric.cpp:374] _eventCallback(): Event: 4 - STA_CONNECTED
12:18:36.559 -> [D][WiFiGeneric.cpp:374] _eventCallback(): Event: 7 - STA_GOT_IP
12:18:36.559 -> [D][WiFiGeneric.cpp:419] _eventCallback(): STA IP: 000.000.000.000, MASK: 000.000.000.000, GW: 000.000.000.000
12:18:36.605 -> *wm:[2] Connection result: WL_CONNECTED
12:18:36.605 -> *wm:[1] AutoConnect: SUCCESS
12:23:39.980 -> 306615 V: Free Heap: 138420, Largest contiguous block: 97392, Frag: 30%
12:23:51.936 -> [D][WiFiGeneric.cpp:374] _eventCallback(): Event: 5 - STA_DISCONNECTED
12:23:51.936 -> [W][WiFiGeneric.cpp:391] _eventCallback(): Reason: 7 - NOT_ASSOCED
12:24:12.618 -> .. 339377 E: Unable to reconnect WiFI, restarting.
12:24:13.788 -> ets Jul 29 2019 12:21:46
12:24:13.788 ->
12:24:13.788 -> rst:0xc (SW_CPU_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)
[Truncated]
12:24:14.443 -> [W][SPIFFS.cpp:71] begin(): SPIFFS Already Mounted!
12:24:14.536 -> 217 V: Initializing LCD.
12:24:15.142 -> 792 V: Initializing WiFi.
12:24:15.142 -> *wm:[2] Added Parameter: mdns
12:24:15.142 -> *wm:[1] AutoConnect
12:24:15.282 -> [D][WiFiGeneric.cpp:374] _eventCallback(): Event: 0 - WIFI_READY
12:24:15.282 -> *wm:[D][WiFiGeneric.cpp:374] _eventCallback(): Event: 2 - STA_START
12:24:15.282 -> [2] WiFiSetCountry to US
12:24:15.282 -> *wm:[2] [OK] esp_wifi_set_country: US
12:24:15.282 -> *wm:[2] ESP32 event handler enabled
12:24:15.470 -> *wm:[2] Connecting as wifi client...
12:24:15.470 -> *wm:[2] setSTAConfig static ip not set, skipping
12:24:15.470 -> *wm:[1] Connecting to SAVED AP: MeshAP
12:24:15.983 -> *wm:[1] connectTimeout not set, ESP waitForConnectResult...
12:24:17.484 -> [D][WiFiGeneric.cpp:374] _eventCallback(): Event: 4 - STA_CONNECTED
12:24:17.577 -> [D][WiFiGeneric.cpp:374] _eventCallback(): Event: 7 - STA_GOT_IP
12:24:17.577 -> [D][WiFiGeneric.cpp:419] _eventCallback(): STA IP: 000.000.000.000, MASK: 000.000.000.000, GW: 000.000.000.000
12:24:17.577 -> *wm:[2] Connection result: WL_CONNECTED
12:24:17.577 -> *wm:[1] AutoConnect: SUCCESS
```
username_1: Just dropping notes here:
Can set up an event handler with `WiFi.onEvent(WiFiEvent);` Then, in the event handler I can scan for which `WiFiEvent_t` (which in `WiFiType.h` points to `system_event_id_t` in `esp_event_legacy.h`):
https://github.com/espressif/esp-idf/blob/1cb31e50943bb757966ca91ed7f4852692a5b0ed/components/esp_event/include/esp_event_legacy.h#L29-L62
There is no status for deauthed; the best we can do is `SYSTEM_EVENT_STA_DISCONNECTED`, which is no better than what we are doing now, checking for a disconnect via timer.
Long story short: Need to figure out why `wifisetup.cpp::reconnectWiFi()` is not working when we get deauthed.
username_2: You are relying on Arduino's auto-reconnect, which is only triggered in certain cases, and those do not include Reason: 7 - NOT_ASSOCED.
https://github.com/espressif/arduino-esp32/blob/5502879a5b25e5fff84a7058f448be481c0a1f73/libraries/WiFi/src/WiFiGeneric.cpp#L829-L835
You should request changes to arduino-esp32 or use a callback to handle the event separately
username_0: @username_1 @thorrak I am looking at the [new issue form](https://github.com/espressif/arduino-esp32/issues/new?assignees=&labels=&template=bug_report.md&title=) for the file linked by username_2 and want to make sure that I get the details right in requesting Reason: 7 - NOT_ASSOCED be added as a trigger for autoreconnect. What are the correct values for the categories below?
Board: ttgo_oled
Core Installation version: ?1.0.0? ?1.0.1-rc4? ?1.0.1? ?1.0.1-git? ?1.0.2? ?1.0.3?
IDE name: Platform.io
Flash Frequency: ?40Mhz?
PSRAM enabled: ?no? ?yes?
Upload Speed: 115200
Computer OS: Windows 10
username_1: @username_0 I installed a callback that might work, but even if it does not there's a fair chance the additional debug should give me an idea which way to go. Sorry to keep experimenting on you but I am unaware of how to replicate this (unless I flash a deauther? That will kill my house though.)
Go ahead and flash the same named firmware and you should pick it up.
username_0: I appreciate all the help. I'm glad to test possible solutions. Speaking of deauthers, if you have an extra esp8266 around you can use [the web interface](https://github.com/SpacehuhnTech/esp8266_deauther/wiki/Web) to target a specific network device, in this case a testing tiltbridge.
I just reflashed the TTGO alpha from brewflasher and connected it back to the main mesh network with the serial port monitor running.
username_1: I didn't know I could target a single device; good to know.
I knelt have 30-40 extra controllers laying about. Should be able to find one. 😃
username_1: Well, I mean it's not fixed but I'm poking the right scab. :)
I won't be able to look at this again till the morning. I have some ideas though.
username_2: Something else to be aware of is that the default method, WIFI_FAST_SCAN, will connect to the first matching AP, not necessarily the one with the best signal.
username_1: @username_2 unless I'm *really* dreaming you had a note about changing the default behavior from `WIFI_FAST_SCAN` to `WIFI_ALL_CHANNEL_SCAN`. Have I gone insane or did you post and then remove that? I was digging into that area without a ton of luck since it appears to be handled in the upstream lib (WiFiManager.)
username_1: Okay, @username_0 go ahead and flash again if you would - the desired effect is that it will recognize the event and reconnect. The actual effect ... you tell me. :)
username_0: I'm having trouble using brewflasher to get the alpha flashed this time for some reason. I've tried now on a windows 10 computer and a mac. The v1.0.2 flashes as expected (albeit with a significantly longer time on the downloading firmware step), but the alpha just gets to this point and nothing else happens.
```
Downloading firmware...
Downloaded successfully!
Command: esptool.py --port...
```
username_1: I probably screwed up the entry in brewflasher. If you want to pull that branch and install manually that will work, or I can fix it when I get home.
username_2: @username_1 sorry, the default method is WIFI_FAST_SCAN in esp-idf but WIFI_ALL_CHANNEL_SCAN in Arduino, so my note was wrong and I deleted it.
username_1: Hey no worries - I thought I had gone insane. :)
username_0: Is this how I would flash it manually using the bin files in the dev branch? Are the addresses for each bin file shown there correct for the tiltbridge files too?
http://esprtk.wap.sh/tt/t3/flash_bin_file_to_esp32.html
| Type | File name | Address |
| --- | --- | --- |
| Bootloader | _a_0x1000.bin | 0x1000 |
| Partition Table | _b_0x8000.bin | 0x8000 |
| Main File | _c_0x10000.bin | 0x10000 |
| SPIFFS | _d_0x225000.bin | 0x225000 |
username_1: Want to try again? He may have fixed it.
username_0: Thanks for trying, but brewflasher still stops at the esptool.py command line.
username_1: Okay: Fresh version uploaded, tested flashing from Windows:

username_0: I got the new version running and logging. 🪵
username_0: tiltbridge
```
12:01:06.020 -> 4776489 V: Free Heap: 138528, Largest contiguous block: 96360, Frag: 31%
12:01:10.588 -> [D][WiFiGeneric.cpp:374] _eventCallback(): Event: 5 - STA_DISCONNECTED
12:01:10.588 -> [W][WiFiGeneric.cpp:391] _eventCallback(): Reason: 7 - NOT_ASSOCED
12:01:10.588 -> 4781009 V: WiFi event callback: reason code: 7.
12:01:10.588 -> 4781021 E: ERROR: Lost Association 4781026 V: DEBUG: WiFiEvent() received: SYSTEM_EVENT_STA_DISCONNECTED
12:01:21.631 -> D N.imBLEScan: "erase device: 7b:92:c6:69:9d:b8"
12:01:30.655 -> 4801124 E: Unable to reconnect WiFI, restarting.
12:01:31.729 -> ets Jul 29 2019 12:21:46
12:01:31.729 ->
12:01:31.729 -> rst:0xc (SW_CPU_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)
```
AP
```
Apr 10 12:01:38 syslog: wlceventd_proc_event(527): eth1: Auth 00:00:00:00:00:00, status: Successful (0), rssi:0
Apr 10 12:01:38 syslog: wlceventd_proc_event(556): eth1: Assoc 00:00:00:00:00:00, status: Successful (0), rssi:0
```
```
12:16:59.096 -> 6996 N: HTTP server started. Open: http://tiltbridge-gamma.local/ to view application.
12:17:24.499 -> [D][WiFiGeneric.cpp:374] _eventCallback(): Event: 5 - STA_DISCONNECTED
12:17:24.499 -> [W][WiFiGeneric.cpp:391] _eventCallback(): Reason: 7 - NOT_ASSOCED
12:17:24.499 -> 32386 V: WiFi event callback: reason code: 7.
12:17:24.499 -> 32398 E: ERROR: Lost Association 32402 V: DEBUG: WiFiEvent() received: SYSTEM_EVENT_STA_DISCONNECTED
12:17:28.941 -> .. 36999 V: Free Heap: 139272, Largest contiguous block: 97392, Frag: 31%
12:17:44.617 -> D NimBLES can: "erase device: 4d:b2:2d:7c:cb:46"
12:17:44.617 -> E: Unable to reconnect WiFI, restarting.
12:17:45.929 -> ets Jul 29 2019 12:21:46
12:17:45.929 ->
12:17:45.929 -> rst:0xc (SW_CPU_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)
```
```
Apr 10 12:16:28 syslog: wlceventd_proc_event(491): eth1: Deauth_ind 00:00:00:00:00:00, status: 0, reason: Deauthenticated because sending station is leaving (or has left) IBSS or ESS (3), rssi:0
Apr 10 12:16:31 syslog: wlceventd_proc_event(508): eth1: Disassoc 00:00:00:00:00:00, status: 0, reason: Disassociated because sending station is leaving (or has left) BSS (8), rssi:0
```
The following instances have no correlated mention of the tiltbridge MAC address in that log:
```
12:29:23.608 -> 697096 V: Free Heap: 139276, Largest contiguous block: 97392, Frag: 31%
12:29:47.526 -> [D][WiFiGeneric.cpp:374] _eventCallback(): Event: 5 - STA_DISCONNECTED
12:29:47.526 -> [W][WiFiGeneric.cpp:391] _eventCallback(): Reason: 7 - NOT_ASSOCED
12:29:47.526 -> 721006 V: WiFi event callback: reason code: 7.
12:29:47.526 -> 721019 E: ERROR: Lost Association 721023 V: DEBUG: WiFiEvent() received: SYSTEM_EVENT_STA_DISCONNECTED
12:29:53.361 -> ... 727096 V: Free Heap: 139440, Largest contiguous block: 97392, Frag: 31%
12:30:07.286 -> .... 741120 E: Unable to reconnect WiFI, restarting.
```
```
12:37:47.405 -> [D][WiFiGeneric.cpp:374] _eventCallback(): Event: 5 - STA_DISCONNECTED
12:37:47.405 -> [W][WiFiGeneric.cpp:391] _eventCallback(): Reason: 7 - NOT_ASSOCED
12:37:47.405 -> 143897 V: WiFi event callback: reason code: 7.
12:37:47.405 -> 143909 E: ERROR: Lost Association 143914 V: DEBUG: WiFiEvent() received: SYSTEM_EVENT_STA_DISCONNECTED
12:38:00.081 -> .... 156874 V: Free Heap: 139088, Largest contiguous block: 97392, Frag: 30%
12:38:07.484 -> . 164012 E: Unable to reconnect WiFI, restarting.
12:38:08.608 -> ets Jul 29 2019 12:21:46
[Truncated]
14:10:27.688 -> [D][WiFiGeneric.cpp:374] _eventCallback(): Event: 3 - STA_STOP
14:10:27.688 -> [D][WiFiGeneric.cpp:374] _eventCallback(): Event: 3 - STA_STOP
14:10:28.721 -> *wm:[2] Disabling STA
14:10:28.721 -> *wm:[2] Enabling AP
14:10:28.721 -> *wm:[1] StartAP with SSID: TiltBridgeAP
14:10:28.721 -> *wm:[2] Starting AP on channel: 1
14:10:28.767 -> [D][WiFiGeneric.cpp:374] _eventCallback(): Event: 0 - WIFI_READY
14:10:28.767 -> [D][WiFiGeneric.cpp:374] _eventCallback(): Event: 14 - AP_START
14:10:29.281 -> *wm:[1] AP IP address: 192.168.4.1
14:10:29.281 -> *wm:[2] setting softAP Hostname: tiltbridge-gamma
14:10:29.281 -> *wm:[2] WiFiSetCountry to US
14:10:29.281 -> *wm:[2] [OK] esp_wifi_set_country: US
14:10:29.281 -> 13203 V: Entered config mode: SSID: TiltBridgeAP, IP: 192.168.4.1
14:10:29.328 -> *wm:[1] Starting Web Portal
14:10:29.328 -> *wm:[2] HTTP server started
14:10:29.328 -> *wm:[2] Config Portal Running, blocking, waiting for clients...
14:10:29.328 -> *wm:[2] Portal Timeout In 300 seconds
14:10:46.048 -> *wm:[2] Portal Timeout In 282 seconds
14:11:16.033 -> *wm:[2] Portal Timeout In 252 seconds
```
username_1: Looks like I need to do some more dancing in order to (re)connect, since it's actually not reconnecting; it's connecting.
username_1: Okay, @username_0 - you can test again at your leisure. With luck, this one will get it.
username_0: ```
Apr 11 18:56:58 kernel: tiltbridgemac not mesh client, can't update it's ip
```
```
18:56:39.395 -> 930960 V: Serving Tilt JSON.
18:56:44.918 -> 936496 V: Free Heap: 131216, Largest contiguous block: 89544, Frag: 32%
18:56:53.450 -> [D][WiFiGeneric.cpp:374] _eventCallback(): Event: 5 - STA_DISCONNECTED
18:56:53.450 -> [W][WiFiGeneric.cpp:391] _eventCallback(): Reason: 16 - GROUP_KEY_UPDATE_TIMEOUT
18:56:53.450 -> 945025 V: WiFi event callback: reason code: 16.
18:56:53.450 -> 945037 V: DEBUG: WiFiEvent() received: SYSTEM_EVENT_STA_DISCONNECTED
18:56:53.450 -> 945041 V: Attempting rec onnect.
18:56:54.523 -> *wm:[1] AutoConnect
18:56:54.523 -> *wm:[2] ESP32 event handler enabled
18:56:54.523 -> *wm:[2] Connecting as wifi client...
18:56:54.523 -> *wm:[2] setSTAConfig static ip not set, skipping
18:56:54.523 -> *wm:[1] Connecting to SAVED AP: MeshAP
18:56:55.034 -> *wm:[1] connectTimeout not set, ESP waitForConnectResult...
18:56:59.737 -> 951334 V: Serving Tilt JSON.
18:56:59.921 -> V: Serving Tilt JSON.
18:56:59.921 -> [W][AsyncTCP.cpp:1014] _poll(): pcb is NULL
18:57:05.022 -> *wm:[2] Connection result: WL_DISCONNECTED
18:57:05.022 -> *wm:[1] AutoConnect: FAILED
18:57:05.022 -> *wm:[2] Starting Config Portal
18:57:05.022 -> *wm:[2] AccessPoint set password is VALID
18:57:05.162 -> *wm:[2] Disabling STA
18:57:05.162 -> *wm:[2] Enabling AP
18:57:05.162 -> *wm:[1] StartAP with SSID: <PASSWORD>
18:57:05.209 -> [D][WiFiGeneric.cpp:374] _eventCallback(): Event: 0 - WIFI_READY
18:57:05.209 -> 956769 V: WiFi event callback: reason code: 0.
18:57:05.209 -> 956776 V: DEBUG: WiFiEvent() received unknown event: 0
18:57:05.718 -> *wm:[1] AP IP address: 192.168.4.1
18:57:05.718 -> *wm:[1] Starting Web Portal
18:57:05.718 -> *wm:[2] HTTP server started
18:57:05.718 -> *wm:[2] Config Portal Running, blocking, waiting for clients...
18:57:05.718 -> *wm:[2] NUM CLIENTS: 0
18:57:14.930 -> 966496 V: Free Heap: 124608, Largest contiguous block: 85396, Frag: 32%
18:57:35.685 -> *wm:[2] NUM CLIENTS: 0
18:57:44.898 -> 996496 V: Free Heap: 124608, Largest contiguous block: 85396, Frag: 32%
18:58:05.688 -> *wm:[2] NUM CLIENTS: 0
18:58:14.905 -> 1026496 V: Free Heap: 124608, Largest contiguous block: 85396, Frag: 32%
18:58:35.700 -> *wm:[2] NUM CLIENTS: 0
18:58:44.936 -> 1056496 V: Free Heap: 124608, Largest contiguous block: 85396, Frag: 32%
18:59:05.727 -> *wm:[2] NUM CLIENTS: 0
18:59:14.929 -> 1086496 V: Free Heap: 124616, Largest contiguous block: 85396, Frag: 32%
18:59:35.716 -> *wm:[2] NUM CLIENTS: 0
18:59:44.934 -> 1116496 V: Free Heap: 124616, Largest contiguous block: 85396, Frag: 32%
19:00:05.721 -> *wm:[2] NUM CLIENTS: 0
19:00:14.926 -> 1146496 V: Free Heap: 124616, Largest contiguous block: 85396, Frag: 32%
19:00:35.700 -> *wm:[2] NUM CLIENTS: 0
19:00:44.924 -> 1176496 V: Free Heap: 124616, Largest contiguous block: 85396, Frag: 32%
19:01:05.725 -> *wm:[2] NUM CLIENTS: 0
19:01:14.916 -> 1206496 V: Free Heap: 124616, Largest contiguous block: 85396, Frag: 32%
19:01:35.713 -> *wm:[2] NUM CLIENTS: 0
19:01:44.933 -> 1236496 V: Free Heap: 124616, Largest contiguous block: 85396, Frag: 32%
19:02:05.724 -> *wm:[2] NUM CLIENTS: 0
19:02:14.939 -> 1266496 V: Free Heap: 124616, Largest contiguous block: 85396, Frag: 32%
19:02:35.738 -> *wm:[2] NUM CLIENTS: 0
19:02:44.952 -> 1296496 V: Free Heap: 124616, Largest contiguous block: 85396, Frag: 32%
19:03:05.739 -> *wm:[2] NUM CLIENTS: 0
[Truncated]
19:08:14.918 -> 1626496 V: Free Heap: 124616, Largest contiguous block: 85396, Frag: 32%
19:08:35.733 -> *wm:[2] NUM CLIENTS: 0
19:08:44.932 -> 1656496 V: Free Heap: 124616, Largest contiguous block: 85396, Frag: 32%
19:09:05.756 -> *wm:[2] NUM CLIENTS: 0
19:09:14.947 -> 1686496 V: Free Heap: 124616, Largest contiguous block: 85396, Frag: 32%
19:09:35.734 -> *wm:[2] NUM CLIENTS: 0
19:09:44.925 -> 1716496 V: Free Heap: 124616, Largest contiguous block: 85396, Frag: 32%
19:10:05.773 -> *wm:[2] NUM CLIENTS: 0
19:10:14.930 -> 1746496 V: Free Heap: 124616, Largest contiguous block: 85396, Frag: 32%
19:10:35.749 -> *wm:[2] NUM CLIENTS: 0
19:10:44.957 -> 1776496 V: Free Heap: 124616, Largest contiguous block: 85396, Frag: 32%
19:11:05.744 -> *wm:[2] NUM CLIENTS: 0
19:11:14.933 -> 1806496 V: Free Heap: 124616, Largest contiguous block: 85396, Frag: 32%
19:11:35.744 -> *wm:[2] NUM CLIENTS: 0
19:11:44.945 -> 1836496 V: Free Heap: 124616, Largest contiguous block: 85396, Frag: 32%
19:12:05.743 -> *wm:[2] NUM CLIENTS: 0
```
At this point the TTGO display still shows Temp and Gravity, but is no longer accessible over the local network and has started the wifi config AP. Connecting to the TiltBridgeAP produces the following view where it would normally have the wifi config screens.

username_1: Well, that didn't quite work as expected. This part though:
```
18:56:54.523 -> *wm:[2] setSTAConfig static ip not set, skipping
18:56:54.523 -> *wm:[1] Connecting to SAVED AP: MeshAP
18:56:55.034 -> *wm:[1] connectTimeout not set, ESP waitForConnectResult...
18:57:05.022 -> *wm:[2] Connection result: WL_DISCONNECTED
18:57:05.022 -> *wm:[1] AutoConnect: FAILED
```
Popping that portal was not what I expected - but then again, it should just have connected, and as you can see above, it failed. I wonder if there isn't a bigger thing in play here.
I briefly considered allowing the device to participate in the mesh ... but then sanity took hold.
username_0: When you say allowing the device to participate in the mesh, do you mean connecting to the node with the best signal?
username_1: No, the ESP32 libs actually have the capability to participate as a mesh node. I'm officially whacking that one as scope creep. :) |
PaloAltoNetworks/iron-skillet | 432698290 | Title: Predefined External Dynamic Lists not included in security policies
Question:
username_0: Regardless of the "INCLUDE_PAN_EDL" setting, the PAN EDLs are not included in the security policies.
Answers:
username_1: Can you confirm the software version and what method/tools you used to set the value? I had found this error a few weeks ago and am checking whether anything was missed in the fix.
username_0: I am running the version shown below:
```
git describe --tags
v1.0.3-137-g25a449c
```
To set the value, in tools/config_variables.yaml:
```
- name: INCLUDE_PAN_EDL
description: include the predefined Palo Alto Networks external lists
value: True
```
username_1: Fixed for 8.1, and updating 8.0/9.0. This also now requires using 'yes' as the value (with quotes), due to an issue with reserved words and boolean vs. text string handling.
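That quoting requirement comes from YAML 1.1 implicit typing, where a bare `yes` parses as a boolean rather than a string. A quick PyYAML sketch (illustrative only; the skillet tooling may load YAML differently):

```python
import yaml

# YAML 1.1 implicit typing: bare yes/no/true/false are resolved to booleans.
unquoted = yaml.safe_load("INCLUDE_PAN_EDL: yes")
quoted = yaml.safe_load("INCLUDE_PAN_EDL: 'yes'")

print(type(unquoted["INCLUDE_PAN_EDL"]).__name__)  # bool
print(type(quoted["INCLUDE_PAN_EDL"]).__name__)    # str
```

A template comparing the variable against the text 'yes' will never match when the loader has already turned the value into the boolean `True`.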
Status: Issue closed
username_1: An issue with a mix of boolean and text values was not picking up the yes/no toggle. It should be resolved for all skillets in 8.0, 8.1, and 9.0 |
Azure/monaco-kusto | 402232255 | Title: Intellisense v2 doesn't work
Question:
username_0: ### Description
Enabling `useIntellisenseV2` throws an exception:
```
Uncaught (in promise)
{
HResult: -2147467261
InnerException: null
Message: "Value cannot be null.↵Parameter name: type"
ParamName: "type"
StackTrace: "Error: Value cannot be null.
at $ctor1.ctor (webpack://monaco/./node_modules/@kusto/language-service/bridge.js?:5436:31)
at $ctor1 (webpack://monaco/./node_modules/@kusto/language-service/bridge.js?:5507:39)
at $ctor1.$ctor3 (webpack://monaco/./node_modules/@kusto/language-service/bridge.js?:32303:47)
at new $ctor1 (webpack://monaco/./node_modules/@kusto/language-service/bridge.js?:32322:49)
at eval (webpack://monaco/./node_modules/@kusto/language-service-next/Kusto.Language.Bridge.js?:21774:27)
at new ctor (webpack://monaco/./node_modules/@kusto/language-service-next/Kusto.Language.Bridge.js?:21775:18)
at createColumnSymbol (webpack://monaco/./src/languageService/kustoLanguageService.ts?:849:20)
at eval (webpack://monaco/./src/languageService/kustoLanguageService.ts?:868:73)
at Array.map (<anonymous>)
at createTableSymbol (webpack://monaco/./src/languageService/kustoLanguageService.ts?:868:45)"
}
```
This error is caused by incorrect schema normalization:
https://github.com/Azure/monaco-kusto/blob/master/src/languageService/kustoLanguageService.ts#L619
```ts
columns: OrderedColumns.map(({Name, Type, CslType}: s.showSchema.Column) => ({
name: Name,
type: CslType,
}))
```
The code above produces columns with an undefined `type`.
I guess it may be fixed by assigning the proper value to `type`:
```ts
columns: OrderedColumns.map(({Name, Type, CslType}: s.showSchema.Column) => ({
name: Name,
type: Type,
cslType: CslType,
}))
```
### Steps to Reproduce
Enable `useIntellisenseV2` for kusto:
```ts
monaco.languages['kusto'].kustoDefaults.setLanguageSettings({
includeControlCommands: true,
newlineAfterPipe: true,
useIntellisenseV2: true,
useSemanticColorization: true,
});
```
### System configuration
monaco-editor version: 0.2.2 (master)
Browser: Chrome
OS: Linux
Answers:
username_1: 0.2.2 is an old version of monaco editor. This might be the reason.
Can you update to the latest version and see if you still get the error?
username_0: @username_1 I use the master branch from this repo. How can I update it?
username_0: @username_1 sorry, `0.2.2` is the version of monaco-kusto; we use monaco `0.15.6`.
username_1: I don't understand why you are getting an empty CslType.
When calling .show schema as json (for example here):
https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAA9MrzsgvVyhOzkjNTVRILFbIKs7PAwCbRYavFAAAAA==
we're getting CslType as part of the response.
Can you show the request you're performing to get the schema and the relevant response?
username_1: Looks like you are not using Kusto directly – you're using a middle-tier service that has a different protocol.
So you’d have to massage the data a little bit before you feed it to Monaco-kusto.
username_0: Ok, got it.
Status: Issue closed
|
PaddlePaddle/VisualDL | 1132612449 | Title: Only 10 image samples are shown; how can I see more?
Question:
username_0: 
The image component only shows 10 image samples; how can I see more?
Status: Issue closed
Answers:
username_0: I saw in the **FAQ**: "The Image, Audio, and Text components only display 10 samples"
https://github.com/PaddlePaddle/VisualDL/blob/develop/docs/faq_CN.md#%E4%BD%BF%E7%94%A8imageaudiotext%E7%BB%84%E4%BB%B6%E4%BB%85%E6%98%BE%E7%A4%BA10%E4%B8%AA%E6%A0%B7%E6%9C%AC
The problem is solved. |
mscdex/node-imap | 40434899 | Title: Decoding mime encoded name in envelope
Question:
username_0: Hi,
I may have notice a decoding issue with NAME field on TO/FROM/CC/... in the ENVELOPE. It seems to be the same kind of issue than #318.
Here, part of the result of a FETCH:
```
"envelope": {
"date": "2014-07-03T12:06:50.000Z",
"subject": "Test",
"from": [
{
"name": "=?utf-8?Q?Timoth=C3=A9e_Eid?=",
"mailbox": "timothee.eid",
"host": "erizo.fr"
}
]
```
I think the solution may be to add this in Parser.js (function: `parseEnvelopeAddresses`):
```
name: decodeWords(addr[0], {buffer: undefined, encoding: undefined, consecutive: false, replaces: undefined, curReplace: undefined}),
```
instead of:
```
name: addr[0],
```
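For reference, the expected result of decoding that RFC 2047 encoded-word can be checked with Python's standard library (just to show the target output; the actual fix belongs in `decodeWords` as suggested above):

```python
from email.header import decode_header, make_header

raw = "=?utf-8?Q?Timoth=C3=A9e_Eid?="
# decode_header splits the encoded-words; make_header reassembles them
# into a readable string (Q-encoding maps "_" to a space).
decoded = str(make_header(decode_header(raw)))
print(decoded)  # Timothée Eid
```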
Answers:
username_1: Same issue for attachment filenames |
Realm667/WolfenDoom | 1071239250 | Title: tesla shield
Question:
username_0: If you want to buy the tesla shield, it says [100%]. Shouldn't it say [+100%]? It currently adds +100% to the current shield health.

Answers:
username_1: Fixed
Status: Issue closed
|
websocket-client/websocket-client | 1123895151 | Title: Clarify documentation for teardown return from run_forever()
Question:
username_0: The [doc-comment](https://github.com/websocket-client/websocket-client/blob/master/websocket/_app.py#L260) in `run_forever` says:
```
teardown: bool
False if caught KeyboardInterrupt, True if other exception was raised during a loop
```
This doesn't cover the cases where `run_forever` returns `None`.
One case is on signal `SIGTERM`, which causes `keep_running` to become `False` and therefore [SSLDispatcher.read](https://github.com/websocket-client/websocket-client/blob/master/websocket/_app.py#L66) returns `None`. Pressing the stop button in PyCharm will trigger this.
I _think_ the intention here is to allow the caller to differentiate between a graceful and a non-graceful exit. Given that, it may be sensible to have `run_forever` always return either `False` or `True`, and have both `SIGINT` (i.e. the `KeyboardInterrupt` exception) and other non-exceptional exits (i.e. `SIGTERM`) return `False`.
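The `SIGINT`/`SIGTERM` asymmetry behind this can be shown without any websocket at all. On POSIX, Python turns `SIGINT` into a `KeyboardInterrupt` (which `run_forever` catches), while `SIGTERM` only runs a handler and lets the loop end cleanly, so no `except` branch ever fires. A standalone sketch (not websocket-client code):

```python
import os
import signal
import time

# SIGINT: Python's default handler raises KeyboardInterrupt, the one case
# run_forever() currently reports as teardown=False.
interrupted = False
try:
    os.kill(os.getpid(), signal.SIGINT)
    time.sleep(0.2)  # the pending signal interrupts the sleep
except KeyboardInterrupt:
    interrupted = True

# SIGTERM: no exception is raised; a handler (like one calling ws.close())
# just runs, the read loop ends cleanly, and run_forever() falls through
# without hitting any except branch, hence the undocumented None.
terminated = {"flag": False}
signal.signal(signal.SIGTERM, lambda signum, frame: terminated.__setitem__("flag", True))
os.kill(os.getpid(), signal.SIGTERM)
time.sleep(0.2)  # PEP 475: sleep resumes after the handler returns
print(interrupted, terminated["flag"])
```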
Answers:
username_1: If I am understanding correctly, there is an edge case that is triggered when pressing the stop button in an IDE. I haven't used PyCharm lately so this could be an error that I never encountered. Could you try drafting a PR so that "False" is returned instead of "None" in this edge case? It should help me understand better how the edge case happens.
A different low effort idea is to edit the docstring to mention that "None" is return in some edge cases, so that developers are aware of this edge case.
username_0: @username_1 my explanation above was not very clear, my apologies. Here is a minimal example to demonstrate `WebSocketApp.run_forever()` returning `None`:
```py
import signal
import websocket
if __name__ == '__main__':
def shutdown(*args):
print("close")
ws.close()
signal.signal(signal.SIGINT, shutdown)
ws = websocket.WebSocketApp("wss://example.com")
teardown = ws.run_forever()
print(teardown)
```
If you pass `SIGINT` to the `python` process (i.e. run `kill -s INT <pid>` or otherwise) the program will output `close` and then `None`.
The core issue here is the assumption that `dispatcher.read(self.sock.sock, read, check)` will only ever exit with an Exception, which is not true when the driving application is performing signal handling and explicitly closing the `WebSocketApp` with `close()`. In this case, as no exception occurs, the special handling for `KeyboardInterrupt` (i.e. `SIGINT`) does not occur and `None` is returned.
I've raised a PR (#788) to have `run_forever()` return `False` unconditionally in the event of a clean (no exception) exit and clarified the documentation of `teardown`.
The intended meaning of the name `teardown` is still a little unclear to me, and it feels odd that returning `False` is the "happy path" case. However, given this minimal change is (arguably) not a breaking change (since it removes an undocumented edge case), I think it is the lowest-impact way of proceeding. I flagged it as a breaking change in the draft PR.
Status: Issue closed
|
pantheon-systems/wp_launch_check | 64113290 | Title: Best Practice:
Question:
username_0: wordfence scans should not be running in production environments. We should flag those cron jobs.
Answers:
username_1: Putting this back in the hopper while we consider a more general solution for banned plugins.
username_2: For now, wouldn't this just be flagged in the Known Plugins list? I believe we do that for say 'Devel' on the Drupal side. |
ndiquattro/chooseeats | 82969268 | Title: Verify MySQL switch is working
Question:
username_0: There are still some errors involving disconnects from the SQL server.
The PythonAnywhere help says to use this:
```python
engine = create_engine('mysql+mysqldb://...', pool_recycle=280)
```
Need to figure out engines
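A minimal sketch of what that engine line does. The in-memory SQLite URL here is only so the snippet runs anywhere; the `pool_recycle=280` keyword is the part that matters for MySQL, since it recycles pooled connections before the server's idle timeout drops them:

```python
from sqlalchemy import create_engine, text

# pool_recycle closes and reopens pooled connections older than N seconds,
# avoiding "MySQL server has gone away" errors after idle disconnects.
engine = create_engine("sqlite://", pool_recycle=280)

with engine.connect() as conn:
    value = conn.execute(text("SELECT 1")).scalar()
print(value)  # 1
```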
Answers:
username_0: just had to change the link!
Status: Issue closed
|
EthanRosenthal/alignimation | 1079950914 | Title: What if keypoint names are not given?
Question:
username_0: Thank you for sharing such an amazing work of yours!
I'm studying image registration and have always wondered how I can replace the optimization part with PyTorch.
Your work is exactly what I've been looking for.
I have a question regarding a mapping process.
Using a segmentation model, you obtain keypoints and use them for mapping.
**What if keypoint names, such as left ear, are not provided?**
Let's say we're using all keypoints from the segmentation and trying to register the image.
Does the algorithm work fine or is there any other approach?
(I'm currently working on a case in which no keypoint name information is given, but only arbitrary keypoints are given.)
Answers:
username_1: If I understand your question correctly, you only have a set of keypoints for each image, but you don't know if keypoint 1 for image A is referencing the same location as key point 2 for image B?
If that's the case, I think that you can still do the image registration, but you will have to look out for one thing. In my code, I create a Gaussian mask around each keypoint. Each keypoint gets its own index in the torch tensor. I know that the keypoint at index 2 in image A corresponds to the same location of the body as the keypoint in index 2 in image B. I then create Gaussian masks centered at each keypoint, [multiply these masks](https://github.com/username_1/alignimation/blob/f270d54b432dc006a83093dad55fa97ef26b31ca/src/alignimation/base.py#L370-L375) by the body "segmentation mask", and then perform image registration on the resulting mask. Crucially, that resulting mask has an index for each keypoint.
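That per-keypoint mask construction can be sketched in NumPy (not the alignimation code itself; the coordinates and sigma below are made up). Each keypoint occupies its own slice along a leading index, and when the indices can't be matched across images, that index can be collapsed with a sum:

```python
import numpy as np

def gaussian_mask(h, w, cy, cx, sigma=3.0):
    # Isotropic Gaussian bump centered at (cy, cx).
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma**2))

h, w = 32, 32
keypoints = [(8, 8), (20, 24)]  # arbitrary, unnamed keypoints
masks = np.stack([gaussian_mask(h, w, cy, cx) for cy, cx in keypoints])

print(masks.shape)            # (2, 32, 32): one slice per keypoint
combined = masks.sum(axis=0)  # single mask when indices can't be matched
print(combined.shape)         # (32, 32)
```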
In your case, you could take the sum along the keypoint index. You would end up with a single mask that contains all of the different keypoint Gaussian masks and then perform image registration on this resulting mask. While this _can_ work, the danger could be that the image registration algorithm ends up registering the wrong keypoints on each other if that happens to provide the optimal solution. You won't know if that's an issue until you try it, though! |
couchbaselabs/mobile-testkit | 230052683 | Title: Reenable "test_rebalance_sanity"
Question:
username_0: Only if SG version >= 1.5
Since we are now using gocb, this should be fixed
Answers:
username_0: This is in https://github.com/couchbaselabs/mobile-testkit/pull/1347
- [ ] Run the reenabled tests locally and make sure they work before merging. The rebalance test is currently failing due to a test issue.
username_1: This test is re-enabled after adding multiple URL support. It is in a branch now; I sent a pull request to Raghu.
Status: Issue closed
|
material-components/material-components-web | 301475007 | Title: Reset MDC-Select?
Question:
username_0: I have two selects that are interdependent.
The content of the second select depends on the selection of the first select.
If I choose a different option in the first select than before, the second select must be emptied.
In short, how can I reset a select?
Answers:
username_1: @username_0 You can clear the select by setting the `selectedIndex` to -1. Right now, there's a bug where you also have to clear the label float above class, but that will be fixed with the next release.
Codepen
https://codepen.io/username_1/pen/LQaENr
Status: Issue closed
|
dotnet/roslyn | 85990598 | Title: "Quick Actions..." in Annotate view causes error dialog.
Question:
username_0: 
Answers:
username_1: Can you take a look and see if this is us or the editor?
username_2: Microsoft.VisualStudio.CoreUtility.dll!Microsoft.VisualStudio.Utilities.PropertyCollection.GetProperty(object key) Line 152 C#
Microsoft.VisualStudio.Platform.VSEditor.dll!Microsoft.VisualStudio.Language.Intellisense.Implementation.LightBulbController.GetController(Microsoft.VisualStudio.Text.Editor.ITextView textView) Line 49 C#
Microsoft.VisualStudio.Platform.VSEditor.dll!Microsoft.VisualStudio.Language.Intellisense.Implementation.LightBulbBroker.TryExpandSession(Microsoft.VisualStudio.Language.Intellisense.ISuggestedActionCategorySet requestedActionCategories, Microsoft.VisualStudio.Text.Editor.ITextView textView, Microsoft.VisualStudio.Text.ITrackingPoint triggerPoint, Microsoft.VisualStudio.Text.ITrackingSpan triggerSpan, bool trackMouse) Line 307 C#
Microsoft.VisualStudio.Platform.VSEditor.dll!Microsoft.VisualStudio.Language.Intellisense.Implementation.LightBulbBroker.TryExpandSession(Microsoft.VisualStudio.Language.Intellisense.ISuggestedActionCategorySet requestedActionCategories, Microsoft.VisualStudio.Text.Editor.ITextView textView) Line 253 C#
Microsoft.VisualStudio.Editor.Implementation.dll!Microsoft.VisualStudio.Editor.Implementation.IntellisenseCommandFilter.TryShowSuggestedActionsAtCaret(Microsoft.VisualStudio.Language.Intellisense.ISuggestedActionCategorySet requestedCategorySet, bool specifyTriggerSpan) Line 558 C#
Microsoft.VisualStudio.Editor.Implementation.dll!Microsoft.VisualStudio.Editor.Implementation.IntellisenseCommandFilter.Exec(ref System.Guid pguidCmdGroup = {System.Guid}, uint nCmdID = 1, uint nCmdexecopt = 0, System.IntPtr pvaIn = {System.IntPtr}, System.IntPtr pvaOut = {System.IntPtr}) Line 186 C#
Microsoft.VisualStudio.Editor.Implementation.dll!Microsoft.VisualStudio.Editor.Implementation.CommandChainNode.InnerExec(ref System.Guid pguidCmdGroup, uint nCmdID, uint nCmdexecopt, System.IntPtr pvaIn, System.IntPtr pvaOut) Line 67 C#
Microsoft.VisualStudio.Editor.Implementation.dll!Microsoft.VisualStudio.Editor.Implementation.CommandChainNode.Exec(ref System.Guid pguidCmdGroup, uint nCmdID, uint nCmdexecopt, System.IntPtr pvaIn, System.IntPtr pvaOut) Line 51 C#
Microsoft.VisualStudio.Editor.Implementation.dll!Microsoft.VisualStudio.Editor.Implementation.BraceCompletionCommandFilter.Exec(ref System.Guid pguidCmdGroup, uint nCmdID, uint nCmdexecopt, System.IntPtr pvaIn, System.IntPtr pvaOut) Line 173 C#
Microsoft.VisualStudio.Editor.Implementation.dll!Microsoft.VisualStudio.Editor.Implementation.CommandChainNode.InnerExec(ref System.Guid pguidCmdGroup, uint nCmdID, uint nCmdexecopt, System.IntPtr pvaIn, System.IntPtr pvaOut) Line 67 C#
Microsoft.VisualStudio.Editor.Implementation.dll!Microsoft.VisualStudio.Editor.Implementation.SimpleTextViewWindow.Exec(ref System.Guid pguidCmdGroup = {System.Guid}, uint nCmdID, uint nCmdexecopt, System.IntPtr pvaIn = {System.IntPtr}, System.IntPtr pvaOut) Line 145 C#
Microsoft.VisualStudio.TeamFoundation.VersionControl.dll!Microsoft.VisualStudio.TeamFoundation.VersionControl.ToolWindowAnnotate.IOleCommandTargetExec(ref System.Guid pguidCmdGroup, uint nCmdID, uint nCmdexecopt, System.IntPtr pvaIn, System.IntPtr pvaOut) Unknown
Microsoft.VisualStudio.TeamFoundation.dll!Microsoft.VisualStudio.TeamFoundation.ToolWindowBase.Microsoft.VisualStudio.OLE.Interop.IOleCommandTarget.Exec(ref System.Guid pguidCmdGroup, uint nCmdID, uint nCmdexecopt, System.IntPtr pvaIn, System.IntPtr pvaOut) Unknown
Microsoft.VisualStudio.Platform.WindowManagement.dll!Microsoft.VisualStudio.Platform.WindowManagement.DocumentObjectSite.Exec(ref System.Guid pguidCmdGroup = {System.Guid}, uint nCmdID = 1, uint nCmdexecopt = 0, System.IntPtr pvaIn = {System.IntPtr}, System.IntPtr pvaOut = {System.IntPtr}) Line 740 C#
Microsoft.VisualStudio.Platform.WindowManagement.dll!Microsoft.VisualStudio.Platform.WindowManagement.WindowFrame.Exec(ref System.Guid pguidCmdGroup, uint nCmdID, uint nCmdexecopt, System.IntPtr pvaIn, System.IntPtr pvaOut) Line 8705 C#
username_1: Can you file an internal bug and route to them in that case?
Status: Issue closed
username_2: filed internal bug 1185801 |
prettier/prettier | 233071798 | Title: Typescript: 'export =' syntax gets replaced with 'export default'
Question:
username_0: [Reproduction](https://prettier.github.io/prettier/#%7B%22content%22%3A%22declare%20module%20%5C%22hello%5C%22%20%7B%5Cn%20%20%20%20export%20%3D%20Hello%3B%5Cn%7D%22%2C%22options%22%3A%7B%22printWidth%22%3A80%2C%22tabWidth%22%3A2%2C%22singleQuote%22%3Afalse%2C%22trailingComma%22%3A%22none%22%2C%22bracketSpacing%22%3Atrue%2C%22jsxBracketSameLine%22%3Afalse%2C%22parser%22%3A%22typescript%22%2C%22semi%22%3Atrue%2C%22useTabs%22%3Afalse%2C%22doc%22%3Afalse%7D%7D)
Input:
```ts
declare module "hello" {
export = Hello;
}
```
Output:
```ts
declare module "hello" {
export default Hello;
}
```
Answers:
username_1: Just opened https://github.com/eslint/typescript-eslint-parser/issues/304
Keep up with the awesome bug reports!
Status: Issue closed
|
hackfiu/peach | 384184133 | Title: Set now.json file to deploy server correctly
Question:
username_0: At the moment, now checks fail continuously. In the past it was due to env files not being configured, but it is now breaking due to this:
`Error! The alias peach is a deployment URL or it's in use by a different team.`
I've been working on this but if anyone else can take a look at this, that would be awesome.
Answers:
username_0: Just had to change the alias, there was no other workaround.
Status: Issue closed
|
suchipi/domdom-go | 61737293 | Title: Application attempts to unzip non-zip files
Question:
username_0: Single-part downloads are often not zip files. The application should detect this case and not attempt to unzip them.
Status: Issue closed
Answers:
username_0: Fixed in https://github.com/username_0/domdom-go/commit/a75fe53720e158d52176dadeb9bbd8991489b6fe |
newrelic/node-newrelic | 892954147 | Title: Fastify add prefix 'Fastify' to all the transactions on the UI
Question:
username_0: ## Description
After enabling the `fastify_instrumentation` feature flag, I can see the routes correctly in the APM transactions view, but all the transactions are prefixed with the word 'Fastify/...'.
## Expected Behavior
While Fastify should exist in the name, it should not be shown by the UI in the raw display (but rather shown as the framework in certain displays). I think this is because the raw name appears to contain both "webtransaction" and "webframeworkuri", while typically we'd only have one of those.
## Steps to Reproduce
Enable the `fastify_instrumentation` flag and check the APM transactions in the UI.
## Your Environment
* Browser name and version: Chrome v90
* Node version: 14.16
* Operating System and version: macOS
* New Relic Node agent version: 7.4.0
Answers:
username_1: This happens because all other web frameworks use the "legacy naming" scheme, which does not prepend the web framework prefix to the already-created name. This is fixed by doing:
```diff
--- a/lib/transaction/name-state.js
+++ b/lib/transaction/name-state.js
@@ -17,7 +17,8 @@ var LEGACY_NAMING = {
Expressjs: true,
Hapi: true,
Nodejs: true,
- Restify: true
+ Restify: true,
+ Fastify: true
}
```
but it feels like a refactor should occur to prevent future frameworks from having this same issue.
username_2: Looks like this is the result of some intentional naming (likely specific to the Node agent) that is not supported in the UI.
The naming was introduced to avoid special casing frameworks in the old UI, for the Node agent, which has many frameworks built on top of an instrumented 'http' module. https://source.datanerd.us/APM/rpm_site/pull/25862
It also appears the 'category' field of the segment breakdown is not working for Node segments in the new UI compared to the old. It doesn't appear to be parsing Nodejs/Middleware/Expressjs into a category of 'Expressjs' anymore. |
spacemeshos/SMIPS | 1099708892 | Title: Atomic transaction batches (placeholder)
Question:
username_0: ## Overview
Provides an alternative to nonces to specify a set of transactions that must be executed together, in a specific order, or not at all.
## Scope
<!-- Please briefly state the scope of the proposal, including things that are explicitly not included (if relevant). -->
## Goals and motivation
<!-- Explain the background, motivation, and goals of the proposal. -->
## High-level design
<!-- Explain the high-level design being proposed. -->
## Prior art
<!-- Explain how other projects or protocols have solved this problem, or relevant prior work in Spacemesh. -->
## Specification
<!-- The complete proposed design. -->
## Implementation plan
<!-- Provide more details about the proposed implementation plan such as roadmap and milestones, if relevant. -->
## Questions
<!-- List any unanswered questions, or questions to be discussed. -->
## Dependencies and interactions
<!-- List which applications, elements of infrastructure, and/or parts of the code that are impacted by this proposal. -->
## Stakeholders and reviewers
<!-- Who should be involved in the design, implementation, and review process? -->
## Testing and performance
<!-- How do you intend to test the changes? --> |
zulip/zulip | 416427979 | Title: Realm Deactivated Page does not refresh
Question:
username_0: As part of doing a backup before upgrading to 2.0, I followed the 1.9 backup instructions which recommend deactivating the realm.
Every logged in user now has a giant "Realm Deactivated" screen, telling them to email me, even though the realm has been back up for an hour.
This is extremely unideal, as some of those users will just assume the realm is dead forever. For every other non-technical user, I have to email them that the realm is actually back up and to ignore the page and refresh.
Technical users will likely just refresh the page, and aren't an issue.
At the very least, I recommend the wording of this page be modified to recommend refreshing. Ideally, this would attempt refreshes automatically. Users know what to do when prompted with a sign in page, but they don't even know what a realm is.
Answers:
username_1: Yeah, adding a refresh for this is worth doing for folks who upgrade after using the data export tool for easily readable backups, even though the intent of the instruction on deactivating the realm was for the case that you're definitely migrating hardware.
username_1: We have one on our 500 page `static/html/5xx.html`, we may want to use similar settings here.
username_2: @zulipbot claim
Status: Issue closed
|
PrefectHQ/prefect | 652675305 | Title: Keep task states as futures when passing arguments to downstream tasks
Question:
username_0: Currently when the `FlowRunner` has to wait on an upstream task before computing a downstream task (say when submitting a mapped task), the arguments from the upstream tasks are sent as fully realized `State` objects, not `Future`s to `State` objects. This can cause problems when the arguments are large in size, as the argument is inlined into the task graph.
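The cost of inlining can be sketched with stdlib pieces only (`ThreadPoolExecutor` stands in for the Dask client here, and the string key is an illustrative stand-in for a future's handle; this is not Prefect's actual API):

```python
import pickle
from concurrent.futures import ThreadPoolExecutor

def upstream():
    # Stand-in for a task whose realized State holds a large result.
    return list(range(100_000))

def downstream(data):
    return len(data)

with ThreadPoolExecutor() as pool:
    fut = pool.submit(upstream)

    # Realizing the State and inlining it: the full result gets
    # serialized into every downstream task's arguments.
    realized = fut.result()
    inlined_bytes = len(pickle.dumps(realized))

    # Keeping a future instead: only a tiny handle travels with the
    # task graph, and the worker resolves it locally.
    handle = "upstream-task-key"  # illustrative stand-in for a future
    handle_bytes = len(pickle.dumps(handle))

    result = downstream(realized)
```

`inlined_bytes` here is orders of magnitude larger than `handle_bytes`; that serialization overhead is what keeping futures around avoids.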
A better way would be to keep the future around to use for submitting downstream tasks. This avoids re-serializing the arguments around, and should make prefect more performant when running on a `DaskExecutor`. |
symfony/flex | 485566497 | Title: Flex removes recipes on composer update
Question:
username_0: Composer version 1.9.0 2019-08-02 20:55:32
```
I am having the same issue with #522, what was the fix in the previous issue?
Answers:
username_1: Unconfiguring removed packages is not a bug, but a feature. Your `composer update` is asking to uninstall these packages.
username_1: Well, I don't think Flex has anything to do here. Why would these packages be part of the project if they are not required ? This looks like a messed up composer.lock state instead.
username_1: then, why would `composer update` remove them, and why would `composer require` fix that ? The effect of `composer require` is to add the requirement, so if it fixes it, it tells me that your composer.json was *not* requiring them.
username_1: please share with us your composer.json and composer.lock before the update causing the issue, to help reproducing the issue.
username_0: I am not sure that whether a bug or a feature, I resolved the issue by manually run the command
`composer require symfony/translation symfony/swiftmailer-bundle friendsofsymfony/user-bundle doctrine/doctrine-migrations-bundle` before `composer update`, it could be a compatibility issue or a conflict with flex.
username_0: Those packages are required in my project, I was using flex to install them, so it can run the recipe and create config files for me
username_0: composer.json
```
{
"type": "project",
"license": "proprietary",
"require": {
"php": "^7.1.3",
"ext-ctype": "*",
"ext-iconv": "*",
"friendsofsymfony/rest-bundle": "^2.5",
"gesdinet/jwt-refresh-token-bundle": "^0.7.0",
"jms/serializer-bundle": "^3.4",
"lexik/jwt-authentication-bundle": "^2.6",
"nelmio/cors-bundle": "^1.5",
"sensio/framework-extra-bundle": "^5.3",
"username_1/doctrine-extensions-bundle": "^1.3",
"symfony/console": "4.2.*",
"symfony/dotenv": "4.2.*",
"symfony/flex": "^1.4",
"symfony/framework-bundle": "4.2.*",
"symfony/profiler-pack": "^1.0",
"symfony/web-server-bundle": "4.2.*",
"symfony/yaml": "4.2.*"
},
"config": {
"preferred-install": {
"*": "dist"
},
"sort-packages": true
},
"autoload": {
"psr-4": {
"App\\": "src/"
}
},
"autoload-dev": {
"psr-4": {
"App\\Tests\\": "tests/"
}
},
"replace": {
"paragonie/random_compat": "2.*",
"symfony/polyfill-ctype": "*",
"symfony/polyfill-iconv": "*",
"symfony/polyfill-php71": "*",
"symfony/polyfill-php70": "*",
"symfony/polyfill-php56": "*"
},
"scripts": {
"auto-scripts": {
"cache:clear": "symfony-cmd",
"assets:install %PUBLIC_DIR%": "symfony-cmd"
},
"post-install-cmd": [
"@auto-scripts"
],
"post-update-cmd": [
"@auto-scripts"
]
},
[Truncated]
"eventmanager",
"events",
"zf2"
],
"time": "2018-04-25T15:33:34+00:00"
}
],
"aliases": [],
"minimum-stability": "stable",
"stability-flags": [],
"prefer-stable": false,
"prefer-lowest": false,
"platform": {
"php": "^7.1.3",
"ext-ctype": "*",
"ext-iconv": "*"
},
"platform-dev": []
}
```
username_0: symfony.lock
```
{
"behat/transliterator": {
"version": "v1.2.0"
},
"doctrine/annotations": {
"version": "1.0",
"recipe": {
"repo": "github.com/symfony/recipes",
"branch": "master",
"version": "1.0",
"ref": "cb4152ebcadbe620ea2261da1a1c5a9b8cea7672"
},
"files": [
"./config/routes/annotations.yaml"
]
},
"doctrine/cache": {
"version": "v1.8.0"
},
"doctrine/collections": {
"version": "v1.6.2"
},
"doctrine/common": {
"version": "v2.10.0"
},
"doctrine/data-fixtures": {
"version": "v1.3.2"
},
"doctrine/dbal": {
"version": "v2.9.2"
},
"doctrine/doctrine-bundle": {
"version": "1.6",
"recipe": {
"repo": "github.com/symfony/recipes",
"branch": "master",
"version": "1.6",
"ref": "02bc9e7994b70f4fda004131a0c78b7b1bf09789"
},
"files": [
"./config/packages/doctrine.yaml",
"./config/packages/prod/doctrine.yaml",
"./src/Entity/.gitignore",
"./src/Repository/.gitignore"
]
},
"doctrine/doctrine-cache-bundle": {
"version": "1.3.5"
},
"doctrine/doctrine-fixtures-bundle": {
"version": "3.0",
"recipe": {
"repo": "github.com/symfony/recipes",
"branch": "master",
"version": "3.0",
"ref": "fc52d86631a6dfd9fdf3381d0b7e3df2069e51b3"
},
"files": [
[Truncated]
"symfony/yaml": {
"version": "v4.2.8"
},
"twig/twig": {
"version": "v2.11.3"
},
"willdurand/jsonp-callback-validator": {
"version": "v1.1.0"
},
"willdurand/negotiation": {
"version": "v2.3.1"
},
"zendframework/zend-code": {
"version": "3.3.1"
},
"zendframework/zend-eventmanager": {
"version": "3.2.1"
}
}
```
username_2: I don't see any of the packages in question listed in the `require` section of the `composer.json` file.
username_0: I was using the short name of the package to install `e.g. >composer require serializer`, then it will be handled by Flex I guess
username_2: I am sorry, but I fail to understand how these packages are related to installing the serializer.
username_0: Sorry, I did not explain it well. It is really interesting that I cannot replicate the issue anymore today, I was trying to create a new project by executing `composer create-project symfony/skeleton my_project_name`, then run `composer require friendsofsymfony/user-bundle "~2.0"`, it used to ask me that would like to execute the recipe, so it will create `fos_user.yaml` and other files automatically, but it doesn't show this time, then I found out [the recipe for FOSUserBundle](https://github.com/symfony/recipes-contrib/commit/3ba633bf1885dd83d4e925e38f45ec9bdff54355#diff-d52a88cbdc863dc4928bdc58ccfd9317) was removed somehow. Is Flex depending on `symfony/recipes-contrib`?
username_3: @username_0 it seems like `bin/console cache:clear` (which is run as composer post-update script) failed and your `composer.json` was reverted. In such case packages installed by flex are not reverted - they stay in the vendor directory until the next `composer install` / `composer update` which is desired behaviour (they are removed by composer because they are not listed in `composer.json`).
Status: Issue closed
|
long-war-2/lwotc | 473905539 | Title: Launcher Wont Identify the mod
Question:
username_0: Hi guys, not sure if this is the right place for this, but I could use some help. I've unzipped the mod files into my xcom2 game directory E:\Steam\steamapps\common\XCOM 2/Xcom2-WarOfTheChosen/Mods (xcom 2 runs on my second drive). I see the folders for both LWOTC and Community highlander in that folder, but whenever I launch my game, my mod launcher doesn't display the mods as launchable. I have tried uninstalling and reinstalling Xcom2, as well as removing any downloaded mods from steam workshop. Any help is appreciated.
Answers:
username_1: I also have this problem :(
username_2: Under settings in the launcher, you have to create a path to that mod directory.
username_1: Thank you for answering! May I ask where the launcher settings are? Which launcher are we even talking about?
username_0: I figured it out, I'm dumb (this is what I get for trying something at 2am).
The mods folder needed to go in file path e:steam/steampps/common/xcom2/Xcom2-war of the chosen/Xcom game/mods.
I didn't realize they needed to be in Xcom game. Ofc, when I clicked on Xcom game and there was already a mods folder in there, I was like ohhhhh.... I'm stupid.
From there, as soon as you launch Xcom 2, it will detect the mods as launchable.
username_2: https://github.com/X2CommunityCore/xcom2-launcher
I guess it isn't necessary, but it is very useful.
Status: Issue closed
username_3: closing this as OP has resolved issue. |
swapmyvote/swapmyvote | 531141469 | Title: [Airbrake] [Production] 403 Forbidden
Question:
username_0: **Airbrake error:** [#1977](https://herokuapp32817511herokucom.airbrake.io/projects/110137/groups/2625555991502231977)
**Airbrake project:** swapmyvote
**Error type:** `OAuth::Unauthorized`
**Error message:** `` 403 Forbidden ``
**Where:** `<no information>`
**Occurred at:** Dec 02, 2019 13:51:01 UTC
**First seen at:** Dec 02, 2019 13:51:01 UTC
**Occurrences:** 0 (0 since last deploy on Dec 02, 2019 13:35:01 UTC)
**Severity:** `error`
**URL:** [http://www.swapmyvote.uk/auth/twitter](http://www.swapmyvote.uk/auth/twitter)
**File:** `/GEM_ROOT/gems/oauth-0.5.4/lib/oauth/consumer.rb`
**Backtrace:**
```
/GEM_ROOT/gems/oauth-0.5.4/lib/oauth/consumer.rb:236:in token_request
/GEM_ROOT/gems/oauth-0.5.4/lib/oauth/consumer.rb:155:in get_request_token
/GEM_ROOT/gems/omniauth-oauth-1.1.0/lib/omniauth/strategies/oauth.rb:28:in request_phase
/GEM_ROOT/gems/omniauth-twitter-1.4.0/lib/omniauth/strategies/twitter.rb:61:in request_phase
```
Answers:
username_0: Can't even find this in airbrake now!
Status: Issue closed
|
danrugeles/EBNF_visualization | 237786548 | Title: Lack documentation..
Question:
username_0: Hi, I think this project is great but lacks instructions on how to use it..
I opened the html/index.html file in my browser, but it doesn't work. My browser keeps prompting dialogs.


And finally, a blank page with unresponsive buttons..

Thanks..
Answers:
username_1: Dear username_0,
The code was not running due to a broken dependency on the jQuery library.
I have updated the code and the repo so that everything should be clear.
Status: Issue closed
username_0: Thanks, appreciate it! |
opencontainers/runc | 929909127 | Title: replace runc
Question:
username_0: Do you need to stop and restart Docker? I'm going to replace runc and execute "cp ./runc.amd64 /usr/sbin/runc". Do I need to restart?
Answers:
username_1: No, but note that your existing containers won't change after you update (so if there's some new runc feature you need you'd need to restart them to get it).
Status: Issue closed
|
PaddlePaddle/PaddleDetection | 927800383 | Title: Training ends after only one epoch
Question:
username_0: Following the tutorial, I ran `python tools/train.py -c configs/yolov3/yolov3_mobilenet_v1_roadsign.yml`, but training ended after just one epoch with no error at all. Environment: Python 3.7, PaddleDetection cloned from the 2.1 GPU version, and epochs left at the default of 12.

Answers:
username_1: Please post your Paddle version and PaddleDetection version; I'll find a Windows machine tomorrow and try to reproduce this.
username_0: The version is 2.1.0

This machine has previously run YOLOv4 with darknet, so the GPU configuration and driver should be fine. When training with Paddle I set GPU training, but GPU utilization is only around 2%, and after the first epoch finishes, training ends before anything is saved (I set 12 epochs, saving once per epoch).
username_1: We haven't run into this problem on Linux; I'll find a Windows machine and try it in a bit.
username_0: Sorry for the trouble, and thanks - ̗̀(๑ᵔ⌔ᵔ๑)
username_1: A colleague of mine looked into this; there is no definitive conclusion yet, but the problem does not occur when using a plain Python environment directly instead of Anaconda.
username_0: OK, I'll try a plain Python environment. I'd also like to ask: can PaddleDetection's Mask R-CNN or SOLOv2 be deployed in C++?
username_1: Mask R-CNN can; SOLOv2 will be supported later.
username_0: I switched to PyCharm, and it still stops after one epoch of training; nothing is saved in `output`. I've seen others asking about this too; is it returning early somewhere? I switched to CPU training, and it also trains only one epoch with nothing saved in `output`.


username_2: I ran into the same problem with Faster R-CNN
python tools/train.py -c configs/faster_rcnn/faster_rcnn_r50_1x.yml --use_vdl=true --vdl_log_dir=vdl_dir/scalar
username_0: @username_1 This really is a problem... have you located the cause?
username_1: @username_0 The problem does not occur in environments not created with Anaconda; didn't I already say to train in a native Python environment?
username_3: Have you solved this problem? I can also only train for one iteration before it exits. What could the problem be? (I tried both an Anaconda environment and a native Python environment, and the problem appears in both.)

username_4: Ran into the same problem; I also tried both Anaconda and a native Python environment, and neither solved it.
username_0: My problem still isn't solved; I tried a Python environment created with PyCharm, and that didn't work either.
username_0: @username_1 Have you located it? Quite a lot of people really are running into this...
username_5: I ran into this problem too. There's no issue on Ubuntu, but on Windows training exits after one epoch and doesn't save the model.
username_6: 
I'm running into this situation too; has it been solved?
username_7: ### Mine is faster_rcnn_r50_fpn_1x_coco.yml; it exited after training to the 11th epoch, and the models for the first ten were saved
username_8: I'm running into this problem as well; has it not been solved yet?
username_8: PaddleDetection version 2.2 solved the problem above, thanks!
username_9: I'm using the latest 2.3 version and this problem also appears. What exactly is causing it?
DBMS-Consulting/CQT2 | 266359312 | Title: Release 2 : Priority 1: SMQ filtering should be reflected in the exports from all applicable areas of the application
Question:
username_0: In MQ Detailed Report of (Create, Update, Copy and Browse & Search modules) and Export in the IA module should export PTs (associated to SMQ) based on the filter selected in the UI.
SMQ Scope codes:
1) 1 : Broad : Export PTs where scope is 1
2) 2 : Narrow : Export PTs where scope is 2
3) 4 : Full (Broad + Narrow) : Export PTs where scope is 1 or 2.
Answers:
username_0: PS: In case of a hierarchical SMQ, the PTs from the associated child SMQs should also be filtered based on the scope selected for the parent or grandparent SMQ.
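A minimal sketch of the intended filtering over toy data (the data layout and names here are illustrative, not the actual CQT schema):

```python
# Toy hierarchical SMQ data; structure and names are illustrative only.
# Each PT is paired with its scope code.
SMQS = {
    "parent_smq": {"children": ["child_smq"], "pts": [("PT1", 1), ("PT2", 2)]},
    "child_smq": {"children": [], "pts": [("PT3", 1), ("PT4", 2)]},
}

def export_pts(smq_name, scope_code):
    """Collect the PTs of an SMQ, filtered by scope code.

    Scope codes: 1 = Broad, 2 = Narrow, 4 = Full (Broad + Narrow).
    The selected scope is applied recursively to child SMQs as well.
    """
    wanted_scopes = {1, 2} if scope_code == 4 else {scope_code}
    node = SMQS[smq_name]
    pts = [pt for pt, scope in node["pts"] if scope in wanted_scopes]
    for child in node["children"]:
        pts.extend(export_pts(child, scope_code))
    return pts
```

For example, `export_pts("parent_smq", 2)` returns only the narrow-scope PTs, `['PT2', 'PT4']`, from the parent and its child.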
Status: Issue closed
username_0: Fixed in November 2017 release. |
Altinn/altinn-studio | 449188801 | Title: Create an instantiation application
Question:
username_0: ## Considerations
Input (beyond tasks) on how the user story should be solved can be put here.
## Acceptance criteria
- Should be able to instantiate an app without any checks
- Should start runtime application
## Tasks
- [ ] Backend
- [ ] Make method for authentication (level 0) for fetching necessary info (Only placeholder for the authentication that will come later - should return true)
- [ ] Make API that returns instanceOwnerId (logged in user)
- [ ] Rewrite StartService() in InstanceController
- [ ] Make API for creating an instance
- [ ] Frontend
- [ ] Create a new react app
- [ ] Call the API that returns instanceOwnerId
- [ ] Call the API that instantiates the application and redirects to runtime with instance ID upon completion
- [ ] Make jest tests
- [ ] Documentation (if relevant)
- [ ] Test / QA (if relevant)
Answers:
username_0: Moving this to development because the structure of the react application have been discussed and the backend tasks have been reviewed by Rune.
username_1: Closing this issue as tested using steps outlined by @username_0 above. Outstanding issue opened as bug (#1935), testcafe test updated to directly navigate to the runtime links.
Status: Issue closed
|
kyma-project/cli | 506669931 | Title: I am unable to install Kyma using latest Kyma CLI on MacOS
Question:
username_0: **Steps to reproduce**
<!-- List the steps to follow to reproduce the bug. Attach any files, links, code samples, or screenshots that could help in investigating the problem. -->
1. Get the newest version of kyma-cli from master branch
2. `kyma provision minikube`
3. `kyma-install`
I got the following error:
```
X Adding domains to /etc/hosts
Error: Executing the 'minikube --profile ssh sudo /bin/sh -c 'echo "127.0.0.1 apiserver.kyma.local minio.kyma.local console.kyma.local console-backend.kyma.local core-ui.kyma.local docs.kyma.local lambdas-ui.kyma.local dex.kyma.local addons.kyma.local configurations-generator.kyma.local oauth2-admin.kyma.local oauth2.kyma.local brokers.kyma.local catalog.kyma.local instances.kyma.local" >> /etc/hosts'' command with output 'Error: unknown command "sudo /bin/sh -c 'echo \"127.0.0.1 apiserver.kyma.local minio.kyma.local console.kyma.local console-backend.kyma.local core-ui.kyma.local docs.kyma.local lambdas-ui.kyma.local dex.kyma.local addons.kyma.local configurations-generator.kyma.local oauth2-admin.kyma.local oauth2.kyma.local brokers.kyma.local catalog.kyma.local instances.kyma.local\" >> /etc/hosts'" for "minikube"
Run 'minikube --help' for usage.
' and error message 'exit status 64' failed
```
In `kyma/install/cmd.go` there is the following code:
```
if VMDriver != "none" {
_, err := minikube.RunCmd(cmd.opts.Verbose, "ssh", "sudo /bin/sh -c 'echo \""+hostAlias+"\" >> /etc/hosts'")
if err != nil {
return err
}
}
```
It looks like `ssh` is being used as the profile, which is incorrect.
Answers:
username_1: Thanks for the feedback, indeed we introduced that bug just recently and the CI is not detecting it as it is not using a VMDriver.
Have fixed it
Status: Issue closed
|
aws/aws-toolkit-vscode | 713647199 | Title: Error: DotnetCliPackageBuilder:RunPackageAction - [WinError 267] The directory name is invalid
Question:
username_0: Type: AWS::Serverless::Function
Properties:
CodeUri: lambda/check-domain-available/checkdomainavailable.zip
Handler: checkdomainavailable::checkdomainavailable.Bootstrap::ExecuteFunction
Runtime: dotnetcore3.1
Role: !GetAtt OnBoardingR53LambdaRole.Arn
Events:
APIEvent:
Type: Api
Properties:
Path: /checkdomainavailable
Method: post
RestApiId: !Ref OnBoardingAPI
* Use the AWS: Deploy SAM Application
Deployment will fail with the following output:
``
An error occurred while deploying a SAM Application. Check the logs for more information by running the "View AWS Toolkit Logs" command from the Command Palette.
Starting SAM Application deployment...
Building SAM Application...
Error with child process: Building function 'StartPaymentFunction'
,Running NodejsNpmBuilder:NpmPack
,Running NodejsNpmBuilder:CopyNpmrc
,Running NodejsNpmBuilder:CopySource
,Running NodejsNpmBuilder:NpmInstall
,Running NodejsNpmBuilder:CleanUpNpmrc
,Building function 'CheckDomainAvailableFunction'
,Running DotnetCliPackageBuilder:GlobalToolInstall
,
,Tool 'amazon.lambda.tools' was reinstalled with the latest stable version (version '4.1.0').
,Running DotnetCliPackageBuilder:RunPackageAction
,Error: DotnetCliPackageBuilder:RunPackageAction - [WinError 267] The directory name is invalid
``
**Expected behavior**
The function should be deployed to Lambda
**Desktop (please complete the following information):**
<!-- Tip: Use the 'About AWS Toolkit' option from the toolkit dropdown menu
or 'AWS: About AWS Toolkit' in the Command Palette. -->
- OS: Windows_NT x64 10.0.18362
- Visual Studio Code Version: 1.49.2
- AWS Toolkit Version: 1.14.0
**Additional context**
In the template an additional nodejs function, as well as API and IAM roles are also deployed. This worked fine before I added the dotnetcore function.
**Testing performed**
I already tried moving the generated zip file to other directories and a different runtime (dotnetcore2.1).
Answers:
username_1: `AWS: Deploy SAM Application` is a wrapper around the `sam deploy` CLI command. Can you try running `sam deploy` directly in your terminal and see what that yields?
The particular `sam deploy ...` command that the Toolkit tried, should be in the logs somewhere. If you can't find it in the logs, you can `cd` to your project directory where the `template.yaml` lives and try this command (replace "mystack" and "us-east-1" as appropriate):
sam deploy --template-file template.yaml --stack-name mystack --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM --region us-east-1
username_0: Hi Justin,
sam version:
``
SAM CLI, version 1.2.0
``
I tried the sam deploy command manually and it was successful, so that fixes my problem for now, but I still would like to use the vscode toolkit.
By the way, I used a simplified version of my template for this:
``
Resources:
  CheckDomainAvailableFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: C:\AzureRepos\reponame\WebAppPS\checkdomainavailable.zip
      Handler: checkdomainavailable::checkdomainavailable.Bootstrap::ExecuteFunction
      Runtime: dotnetcore2.1
``
I will test later with a full version, but it seems to be in whatever is wrapping it.
BTW, I did see the JetBrains link, so I already tried moving the file to a different location. Maybe to clarify: what directory would sam build consider the root of the project? I assumed it was the directory with the template.yaml file. Is that correct?
username_1: Looks like `C:\AzureRepos\reponame\WebAppPS` would be the project root in this case.
So this would be your template.yaml (can you try this?):
```
Resources:
  CheckDomainAvailableFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: C:\AzureRepos\reponame\WebAppPS
      Handler: checkdomainavailable::checkdomainavailable.Bootstrap::ExecuteFunction
      Runtime: dotnetcore2.1
```
username_0: Hi, sorry it took a while, but I tried a full test. If I use sam deploy from commandline everything works as expected, under all circumstances.
Full command:
``
sam deploy --template-file template.yaml --stack-name sam-pstestfull --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM --region eu-west-1 --s3-bucket s3bucketname --profile profilename
``
This is the relevant template.yaml:
``
CheckDomainAvailableFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: lambda/check-domain-available/checkdomainavailable.zip
    Handler: checkdomainavailable::checkdomainavailable.Bootstrap::ExecuteFunction
    Runtime: dotnetcore2.1
    Role: !GetAtt OnBoardingR53LambdaRole.Arn
    Events:
      APIEvent:
        Type: Api
        Properties:
          Path: /checkdomainavailable
          Method: post
          RestApiId: !Ref OnBoardingAPI
``
I already tried ``C:\AzureRepos\reponame\WebAppPS`` and ``C:\AzureRepos\reponame`` and both fail using vscode AWS Toolkit.
username_1: ### Status
- This is a `sam build` [issue](https://github.com/aws/aws-toolkit-vscode/issues/1332#issuecomment-703136989).
- Toolkit could avoid this problem by skipping `sam build` when deploying a `CodeUri: ...zip` |
daffychuy/JMdict_e-Kanjidic-JSON | 643344693 | Title: Incorrect appliesToKanji and appliesToKana field
Question:
username_0: The script is replacing a list with a singular character texts causing ones with multiple item in appliesToKanji and appliesToKana be removed
Answers:
username_0: Fixed on v1.0 release.
https://github.com/username_0/JMdict_e-Kanjidic-JSON/releases/tag/v1.0
Status: Issue closed
|
rancher/rancher | 443381219 | Title: Pipeline applyyaml CRD
Question:
username_0: <!--
Please search for existing issues first, then read https://rancher.com/docs/rancher/v2.x/en/contributing/#bugs-issues-or-questions to see what we expect in an issue
For security issues, please email <EMAIL> instead of posting a public issue in GitHub. You may (but are not required to) use the GPG key located on Keybase.
-->
**What kind of request is this (question/bug/enhancement/feature request):**
Bug
**Steps to reproduce (least amount of steps as possible):**
* Install kube-prometheus operator in cluster
* Create servicemonitor.yaml in git repository:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: app-metrics
spec:
  selector:
    matchLabels:
      app: myapp
  endpoints:
    - port: metrics-port
* Create .rancher.pipeline.yml in git repository:
stages:
  - name: Deploy
    steps:
      - applyYamlConfig:
          path: ./servicemonitor.yaml
timeout: 60
notification: {}
* Setup and execute pipeline
**Result:**
09:53:51 from server for: “./servicemonitor.yaml”: servicemonitors.monitoring.coreos.com “app-metrics” is forbidden: User “system:serviceaccount:p-9h7jb-pipeline:jenkins” cannot get servicemonitors.monitoring.coreos.com in the namespace “p-9h7jb-pipeline”
**Other details that may be helpful:**
Running from command line works fine:
$ kubectl --as=jenkins -n p-9h7jb-pipeline apply -f servicemonitor.yaml
servicemonitor.monitoring.coreos.com/app-metrics created
**Environment information**
- Rancher version (`rancher/rancher`/`rancher/server` image tag or shown bottom left in the UI): rancher 2.1.5, rancher 2.2.2
- Installation option (single install/HA): single
<!--
If the reported issue is regarding a created cluster, please provide requested info below
-->
**Cluster information**
- Cluster type (Hosted/Infrastructure Provider/Custom/Imported): Custom
- Machine type (cloud/VM/metal) and specifications (CPU/memory): vSphere, RancherOS
- Kubernetes version (use `kubectl version`): 1.11.5
- Docker version (use `docker version`): 17.3.2
Answers:
username_1: The build pods are using `jenkins` serviceaccount in the pipeline dedicated namespace. It has `edit` permission inside the project. That's a clusterrole for most native resources generated by kubernetes but it doesn't include CRD resources.
If you want to deploy stuff beyond the default scope, you can grant additional permissions to the `jenkins` service account, for example:
```
kubectl create clusterrolebinding --clusterrole=cluster-owner --serviceaccount=p-sxvhk-pipeline:jenkins custom-crb
```
**NB**: The above is giving `*` permission for `*` resources to the SA. You probably want to use fine-grained roles/rolebindings according to your needs.
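For example, a more fine-grained grant scoped to just the CRD from the error above could look like this (a sketch; the verb list is an assumption about what the apply step needs):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: servicemonitor-editor
  namespace: p-9h7jb-pipeline
rules:
  - apiGroups: ["monitoring.coreos.com"]
    resources: ["servicemonitors"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: servicemonitor-editor
  namespace: p-9h7jb-pipeline
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: servicemonitor-editor
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: p-9h7jb-pipeline
```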
username_1: We probably don't want to grant `*` permissions to the SA by default. Privilege escalation happens where project-members can access roles/rolebindings in the project although they should not.
Status: Issue closed
username_0: Thanks for suggesting. Granting permissions (less that cluster-owner) works. |
Coding-Coach/find-a-mentor | 481311783 | Title: As a mentor it is possible to enter a 'Title' that is longer than can be comfortably rendered
Question:
username_0: **Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'edit profile'
2. Enter a 'Title' of 50 or more chars
3. Press 'save'
4. look at you profile
**Expected behavior**
A limit on the 'Title' length so that it is not erroneously rendered in the profile
**Screenshots**

Status: Issue closed
Answers:
username_0: Verified. Working correctly now. |
ohager/burst-autoplotter | 245097713 | Title: Introduce an advanced mode
Question:
username_0: According to Blago, mem settings are an advanced feature - after a look at the XPlotter code I saw that the mem setting is set automatically. So, it could be shifted towards an advanced setup. Same for CPU. Maybe in easy mode CPU/threads could be set to (available threads - 1) by default.
With all of this, more detailed output should be written - especially from XPlotter's startup (reading initial settings from stdio).
Answers:
username_0: Using `-e` as an optional argument makes autoplotter run in `extended mode`, so that threads and memory can additionally be configured.
Status: Issue closed
|
sixbank/Magento2-Plugin | 522325221 | Title: Phone field on checkout has no validation or input mask
Question:
username_0: 

Status: Issue closed
Answers:
username_0: 

Status: Issue closed
|
NicoHood/HID | 443594408 | Title: platform.h no such file or directory
Question:
username_0: compilation terminated.
exit status 1
Error compiling for the board Arduino/Genuino Uno.
I put the HID-master without "-master" into C:\Program Files (x86)\Arduino\libraries and have no idea what to do now... I just want to use the arduino to emulate a keyboard.
sorry if this is a super basic problem, but I have no idea where to look...
thanks!!
Answers:
username_1: Please install via the library manager.
Status: Issue closed
username_1: https://github.com/username_1/HID/wiki/Installation |
facebookresearch/pytorch3d | 588573276 | Title: visibility map?
Question:
username_0: ## ❓ Questions on how to use PyTorch3D
<!-- A clear and concise description of the question you need help with. -->
Hi, I'd like to know whether PyTorch3D can produce a visibility map for each vertex when rendering?
Answers:
username_1: @username_0 can you explain what the format of the output should be? Do you mean a boolean indicator if the vertex is visible?
username_0: @username_1 Thank you for your quick response.
Your explanation is accurate.
username_1: @username_0 you can infer this information from the output of rasterization - we return a named tuple called `fragments` which contains a tensor called `pix_to_face`. This is an `(N, H, W, K)` dimensional tensor which gives the indices of the closest K faces (ordered by Z distance) for each pixel.
NOTE: the indices in `pix_to_face` refer to the packed faces of `meshes` i.e. the flattened list of faces across all meshes in the batch (which can be retrieved using `meshes.faces_packed()`
You can set K=1 (`faces_per_pixel = 1` in the `RasterizationSettings`) and then use the indices to create a visibility map. For example, you could do the following:
```
raster_settings = RasterizationSettings(
image_size=256,
blur_radius=0.0,
faces_per_pixel=1,
)
rasterizer = MeshRasterizer(
cameras=cameras,
raster_settings=raster_settings
)
# Get the output from rasterization
fragments = rasterizer(meshes)
# pix_to_face is of shape (N, H, W, 1)
pix_to_face = fragments.pix_to_face
# (F, 3) where F is the total number of faces across all the meshes in the batch
packed_faces = meshes.faces_packed()
# (V, 3) where V is the total number of verts across all the meshes in the batch
packed_verts = meshes.verts_packed()
vertex_visibility_map = torch.zeros(packed_verts.shape[0]) # (V,)
# Indices of unique visible faces (pix_to_face is -1 for background pixels)
visible_faces = pix_to_face.unique() # (num_visible_faces,)
visible_faces = visible_faces[visible_faces >= 0]
# Get indices of unique visible verts using the vertex indices in the faces
visible_verts_idx = packed_faces[visible_faces] # (num_visible_faces, 3)
unique_visible_verts_idx = torch.unique(visible_verts_idx) # (num_visible_verts,)
# Update visibility indicator to 1 for all visible vertices
vertex_visibility_map[unique_visible_verts_idx] = 1.0
```
Note: also be aware that the visibility map will be affected by the setting for `blur_radius` in the `RasterizationSettings` as this increases the boundary region for each face.
Status: Issue closed
username_0: @username_1 It works for me. |
Project-OSRM/osrm-backend | 189239617 | Title: Refactor Graph Loader
Question:
username_0: https://github.com/Project-OSRM/osrm-backend/blob/538bbd47d1ee169764b66df3ef67ecfe131cdfec/include/util/graph_loader.hpp
- [ ] All of graph_loader.hpp's definitions have to be inline, otherwise we violate the One-Definition Rule.
- [ ] Most if not all of the features there are re-implementations of what's already in utils/io.hpp<issue_closed>
Status: Issue closed |
saltstack/salt | 118785114 | Title: Suggestion: Nightly build packages
Question:
username_0: In my experience, Salt bugs appear often but are fixed really fast - but your release cycle is fairly slow, with weeks between each point release. Unless you're comfortable running salt from directly from the source doing a git pull whenever you need to, or building the salt package yourself, you can end up waiting a long time for critical bugs to be fixed.
I understand salt now has an automated build process. Would it then not be possible to generate nightly builds so that people can get, for example, a package of the latest commit in the 2015.8 branch?
For example, in [your Debian repository](http://repo.saltstack.com/apt/debian/latest/), in addition to `salt-minion`, there'd be a `salt-minion-2015.8-nightly` package.
Answers:
username_1: @username_0, thanks for the suggestion. This is indeed one of the main goals of our new automated packaging setup. Package automation has taken longer than we anticipated, but we are still making progress, since, as you can suspect, there are many complexities.
@UtahDave, @dmurphy18 |
cdnjs/cdnjs | 121919777 | Title: add auto-update config for pegjs
Question:
username_0: https://cdnjs.com/libraries/pegjs
npm package: pegjs
PS: I'm not sure if it's suitable.
Answers:
username_1: I think we could use this as the source of auto-update.
https://github.com/pegjs/bower
username_0: What about npm?
username_1: The files on npm need to be built.
username_0: Should ask the author if the filename can be changed.
Status: Issue closed
|
laravel-idea/plugin | 782629158 | Title: Eloquent helper file is broken if the method definition is too long
Question:
username_0: I generated the eloquent helper code and everything looked normal except the paginate method:

So I checked the generated `vendor/_ide_helper_models.php` file and I noticed that long lines were wrapped so phpstorm is unable to resolve the method:

Other examples:


Answers:
username_1: Thank you. Will be fixed.
username_2: This also breaks using the file as a stubFile in PHPStan/LaraStan (https://github.com/nunomaduro/larastan/issues/760#issuecomment-760896034)
Status: Issue closed
username_1: I removed code formatting in the 4.0 version. |
tidyverse/dplyr | 470820906 | Title: sample_frac function of dplyr is giving back an error
Question:
username_0: x
1. +-dplyr::sample_frac(ecom, size = 0.7)
2. +-dplyr:::sample_frac.data.frame(ecom, size = 0.7)
3. | +-dplyr::slice(...)
4. | \-dplyr:::slice.tbl_df(...)
5. | \-dplyr:::slice_impl(.data, quo)
6. +-sample.int(...)
7. +-base::sample.int(...)
8. \-dplyr::n()
9. \-dplyr:::from_context("..group_size")
10. \-`%||%`(...)
- Session info
#> R version 3.5.1 (2018-07-02)
#> Platform: x86_64-w64-mingw32/x64 (64-bit)
#> Running under: Windows 10 x64 (build 17134)
#>
#> Matrix products: default
#>
#> locale:
#> [1] LC_COLLATE=German_Germany.1252 LC_CTYPE=German_Germany.1252
#> [3] LC_MONETARY=German_Germany.1252 LC_NUMERIC=C
#> [5] LC_TIME=German_Germany.1252
#>
#> attached base packages:
#> [1] stats graphics grDevices utils datasets methods base
#>
#> other attached packages:
#> [1] readr_1.3.1 dplyr_0.8.3
#>
#>
#> loaded via a namespace (and not attached):
#> [1] compiler_3.5.1 magrittr_1.5 tools_3.5.1 htmltools_0.3.6
#> [5] yaml_2.2.0 Rcpp_1.0.1 stringi_1.4.3 rmarkdown_1.14
#> [9] highr_0.8 knitr_1.23 stringr_1.4.0 xfun_0.8
#> [13] digest_0.6.20 evaluate_0.14
Answers:
username_0: > typeof(ecom)
[1] "list"
username_0: referrer device n_visit n_pages duration purchase order_items order_value
<fct> <fct> <dbl> <dbl> <dbl> <lgl> <dbl> <dbl>
1 google tablet 10 1 905 FALSE 0 0
2 direct tablet 10 18 324 TRUE 10 1497
3 google tablet 1 1 73 FALSE 0 0
4 google laptop 0 3 45 FALSE 0 0
5 yahoo mobile 8 4 112 FALSE 7 621
6 social mobile 10 17 459 FALSE 0 0
7 bing tablet 3 1 134 FALSE 0 0
8 bing tablet 0 3 42 FALSE 0 0
9 google tablet 9 1 382 FALSE 0 0
10 google laptop 5 9 261 TRUE 6 2086
Status: Issue closed
|
facebook/relay | 811440355 | Title: GraphQLTaggedNode params is missing in ReaderFragment
Question:
username_0: Im getting this flow error on fragments
**Cannot get `macroQuery.params` because property `params` is missing in `ReaderFragment` [1].Flow(prop-missing)
GraphQLTag.js.flow(42, 43): [1] `ReaderFragment`**
But it's there when I log it out:

"relay-compiler": "^10.1.0",
"relay-compiler-language-js-flow-uncommented": "^2.0.0",
"relay-config": "^10.1.0",
"relay-runtime": "^10.1.3",
 |
Open-Systems-Pharmacology/Qualification-text-modules | 1027241235 | Title: Error in PK-Sim_PBPK_generic_model_scheme: the splenic vein should drain into the portal vein
Question:
username_0: The splenic vein should drain into the portal vein and not in to the venous blood.
Same applies for the pancreatic vein (which is completely missing, see #10)
(It is correctly implemented in PK-Sim.)
Applies to:
images/PK-Sim_PBPK_generic_model_scheme.png
Status: Issue closed |
boto/botocore | 674609168 | Title: Cap on docutils < 0.16 prevents co-installability
Question:
username_0: **Describe the bug**
I observe the following dependency conflict:
botocore 1.17.37 requires docutils<0.16,>=0.10, but you'll have docutils 0.16 which is incompatible.
Strangely, pipdeptree doesn't show constraints on docutils that prevent it from going to 0.15.
But for some reason the constraints solver keeps docutils at 0.16.0 (current release on pip)
**Expected behavior**
Is the constraint to <0.16 arbitrary? It seems this constraint was placed there by:
https://github.com/username_0/botocore/commit/a8bbf60b665efee53f5a5d2cbbeb6e5e9e0838f2
And constraining docutils was problematic before.
Answers:
username_0: Associated pull request - https://github.com/boto/botocore/pull/2121
No testing was performed.
username_1: This comment is related: https://github.com/boto/botocore/pull/2121#issuecomment-686762476
username_2: This issue is related to issue https://github.com/boto/botocore/issues/1942
username_3: This issue should now be resolved in Botocore 1.18.0.
Status: Issue closed
|
thunks/thunk-disque | 129175179 | Title: Client is not attempting reconnect
Question:
username_0: The disque client looks to have reconnect functionality built in. However, I am having trouble getting the client to reconnect to the disque server.
My test scenario is that I first create a disque client and shortly afterwards shutdown the disque server. I expected that when I brought the server back up, the client would eventually reconnect. From my test, the client remains in the `ended` state and will not reconnect. If I trace through what the client is doing, it looks to be skipping the reconnect attempts because it [returns due to ended being true](https://github.com/thunks/thunk-disque/blob/05146eab91614f591414cf9d5f230cb9df52d186/lib/connection.js#L135).
Between 269076683ad1cb5f43730b2854413ea48df3433b...05146eab91614f591414cf9d5f230cb9df52d186 the `ended` property is now set to true when the socket's `close` event is fired due to the call to `tryRemove`. Any pointers for how to get this working would be very much appreciated.
Answers:
username_1: Hi, the disque server sends a close signal to the client when you shut it down; it is not an unexpected disconnect. In my application, I listen for the `close` and `error` events; when one occurs, the event handler throws an error and the application is restarted.
username_0: Thanks for taking a look. Maybe I am not thinking about this the right way. However, even when I `kill -9 <disque-server-process-id>`, the [end handler](https://github.com/thunks/thunk-disque/blob/05146eab91614f591414cf9d5f230cb9df52d186/lib/connection.js#L123) is being called. Is this intended?
I am wondering how the client will attempt a reconnect once it is ended. Do you have an example of how this works? It would be very helpful.
Thanks.
Status: Issue closed
username_1: @username_0 I fixed this case, released v0.3.0. Here is example: https://github.com/thunks/thunk-disque/blob/master/examples/reconnect-forever.js
username_0: Great! And thank you for the example. |
openaddresses/openaddresses | 222491674 | Title: Barbados Address Points
Question:
username_0: email sent: <EMAIL>
http://www.townplanning.gov.bb
Answers:
username_1: @username_0 Did you ever receive a response?
username_0: No, I just e-mailed them again. Some countries' agencies are really bad at e-mail responses, but it could have gone to their junk folder. A call might be best if this doesn't work.
Status: Issue closed
username_1: @username_0 I'm curious, did they actually respond?
username_0: No. If any agencies respond, I put it in the tickets. Caribbean nations rarely do. I believe the best way to contact them is in person or by phone. |
superolelli/Soma | 303561157 | Title: Tooltips for abilities
Question:
username_0: When hovering with the mouse, information about the abilities should be displayed.
Answers:
username_0: A first version is implemented. The colors may still need improvement, and it should be shown who the ability can be used on.
username_0: Improvement ideas: text more centered in the box, more padding, rounded corners, alpha blend toward the outside.
Status: Issue closed
|
NSS-Day-Cohort-49/tabloid-mvc-streamers | 1008233249 | Title: Create a Post
Question:
username_0: As an author, I would like to be able to create Posts so I can share my thoughts with the world.
**Given** a user is in the app
**When** they select the `New Post` menu option
**Then** they should be directed to a page with a form for creating a new post
**Given** the user has entered the relevant information for a Post
**When** they click the `Save` button
**Then** the Post should be saved to the database
**And** the creation datetime should be automatically set to the current date and time
**And** the post should automatically be approved
**And** the user should be recorded as the author of the Post
**And** the user should be redirected to the new Post's details page
The "relevant information" for a Post is
* Title
* Content
* Category
* Header Image URL (optional)
* Publication Date (optional)
Answers:
username_1: Ticket appears to be done.
Able to create new post.
Confirmed data is saved to database.
"IsApproved" is set to true on all new posts.
Shows who wrote post
User is routed to detail page |
WebPlatformForEmbedded/meta-wpe | 538878293 | Title: webkit 2.22(WPEWebProcess) crashing while setting rectangle
Question:
username_0: There is a crash in WebProcess, in the environment given below, when YouTube starts playing the next video after the loaded URL (autoplay enabled). I noticed that WebKit keeps setting the video rectangle on westeros-sink and then crashes, reporting a broken pipe error. Please find the logs below.
Environment details:
Platform - RPI3 32 bit (vc4 graphics)
distro - yoe distro (yocto build)
meta-wpe - master branch
westeros - westeros-sink is used (v4l2 decoder + kms)
wpewebkit 2.22 (westeros-sink is enabled with punch hole), SHA- 2c1e49c291a3a67c25b4a508c59a5c3e52c89421
POSITION: 0:00:01.096333333gst_westeros_sink_set_property set window rect (107,80,1280,536)
gst_westeros_sink_set_property set window rect (107,80,1280,536)
gst_westeros_sink_set_property set window rect (107,80,1280,536)
gst_westeros_sink_set_property set window rect (107,80,1280,536)
POSITION: 0:00:01.330250000gst_westeros_sink_set_property set window rect (107,80,1280,536)
gst_westeros_sink_set_property set window rect (107,80,1280,536)
POSITION: 0:00:01.528083333gst_westeros_sink_set_property set window rect (107,80,1280,536)
Memory pressure relief: Total: res = 191004672/157835264/-33169408, res+swap = 188981248/188981248/0
POSITION: 0:00:03.545250000POSITION: 0:00:03.731916666POSITION: 0:00:03.760000000gst_westeros_sink_set_property set window rect (107,80,1280,536)
Error sending request: Broken pipe
Westeros Debug: wstDestroySurfaceCallback resource 0x73a83cf8 free surfaceInfo
Westeros Debug: wstSurfaceDestroy: surface 0x73a83c08 refCount 1
Westeros Info: display wayland-0 client pid 1126 disconnected
Westeros Debug: wstDestroySurfaceCallback resource 0x73a78f00 free surfaceInfo
Westeros Debug: wstSurfaceDestroy: surface 0x73a61fd8 refCount 1
Westeros Info: display wayland-0 client pid 1126 disconnected
[pid=1091][Client /usr/src/debug/wpeframework/3.0+gitAUTOINC+e770ecf4ba-r0/git/Source/compositorclient/Wayland/Westeros.cpp:261] : wl_simple_shell_listener.surface_destroyed shell=0x15a59f8 name=westeros-sink-surface-4 surfaceId=4
[pid=1091][Client /usr/src/debug/wpeframework/3.0+gitAUTOINC+e770ecf4ba-r0/git/Source/compositorclient/Wayland/Westeros.cpp:268] : wl_simple_shell_listener.surface_destroyed surfaceId=4
[ 270284082 us] CRASH: WebProcess crashed, exiting...
Westeros Info: display wayland-0 client pid 1126 disconnected
[pid=1091][Client /usr/src/debug/wpeframework/3.0+gitAUTOINC+e770ecf4ba-r0/git/Source/compositorclient/Wayland/Westeros.cpp:261] : wl_simple_shell_listener.surface_destroyed shell=0x15a59f8 name=westeros-sink-surface-2 surfaceId=2
Westeros Debug: wstDestroySurfaceCallback resource 0x73a6bb20 free surfaceInfo
Westeros Debug: wstSurfaceDestroy: surface 0x73a6bb68 refCount 1
[pid=1091][Client /usr/src/debug/wpeframework/3.0+gitAUTOINC+e770ecf4ba-r0/git/Source/compositorclient/Wayland/Westeros.cpp:268] : wl_simple_shell_listener.surface_destroyed surfaceId=2
[pid=1091][Client /usr/src/debug/wpeframework/3.0+gitAUTOINC+e770ecf4ba-r0/git/Source/compositorclient/Wayland/Westeros.cpp:261] : wl_simple_shell_listener.surface_destroyed shell=0x15a59f8 name=WebKitBrowser surfaceId=1
[pid=1091][Client /usr/src/debug/wpeframework/3.0+gitAUTOINC+e770ecf4ba-r0/git/Source/compositorclient/Wayland/Westeros.cpp:268] : wl_simple_shell_listener.surface_destroyed surfaceId=1
[ 272411760 us] FORCED Shutdown: WebKitBrowser by reason: Failure.
[ 272414358 us] Deactivated plugin [WebKitBrowser]:[WebKitBrowser]
[ 273009301 us] Activated plugin [WebKitBrowser]:[WebKitBrowser]
Answers:
username_0: This issue has been recreated in https://github.com/WebPlatformForEmbedded/WPEWebKit/issues/641, so I am closing this one, as it should be raised against wpewebkit.
Status: Issue closed
|
mozilla/addons-server | 212113486 | Title: multi fox
Question:
username_0: ### Describe the problem and steps to reproduce it:
(Please include as many details as possible.)
### What happened?
### What did you expect to happen?
### Anything else we should know?
(Please include a link to the page, screenshots and any relevant files.)
Status: Issue closed
Answers:
username_1: If you have details to help us understand the problem please add them and re-open this issue. |
XeroAPI/xero-php-oauth2 | 735396180 | Title: Issue with creating new contact.
Question:
username_0: I tried to create a new contact via AccountingApi with the function createContacts(). When I sent a contact with the name "Test test (test > test)" I got a result with contact_id and contact_name, but the contact was not created. I think there is an issue in your API (not the lib). As I understand it, the name "Test test (test > test)" is not valid in your API, but the API didn't return a validation error. Can you check this?
Answers:
username_1: @username_0 - When using the createContacts method, the 3rd argument is called SummarizeErrors and is a boolean. The default is "true", so if you don't pass a 3rd argument you will get the default behavior of summarizing errors.
What that means is you'll get a 200 OK returned, and an array of contacts that includes both the successful and invalid contacts. If you inspect the properties of the response you'll see "HasValidationErrors": true
If you pass the 3rd argument as false, then you'll get a 400 response and only an array of the invalid objects will be returned.
Hope that clarifies things.
Status: Issue closed
|
Smardens/Smarduino | 357068848 | Title: Implementing Sensor Check
Question:
username_0: Functionality allows user to use web application to test whether certain sensors are working properly.
Status: Issue closed
Answers:
username_1: Sensor check is implemented and allows the user to check the status of the sensors.
osx-cross/homebrew-avr | 284531719 | Title: cannot install on high sierra
Question:
username_0: command used:
brew install --HEAD osx-cross/avr/avr-gcc osx-cross/avr/avarice osx-cross/avr/simavr
log:
==> Installing avr-gcc from osx-cross/avr
==> Installing dependencies for osx-cross/avr/avr-gcc: gmp, mpfr, libmpc, avr-binutils
==> Installing osx-cross/avr/avr-gcc dependency: gmp
==> Downloading https://homebrew.bintray.com/bottles/gmp-6.1.2_1.high_sierra.bottle.tar.gz
==> Downloading from https://akamai.bintray.com/ea/eadb377c507f5d04e8d47861fa76471be6c09dc54991540e125ee1cbc04fecd6?__gda__=exp=15
######################################################################## 100.0%
==> Pouring gmp-6.1.2_1.high_sierra.bottle.tar.gz
🍺 /usr/local/Cellar/gmp/6.1.2_1: 18 files, 3.1MB
==> Installing osx-cross/avr/avr-gcc dependency: mpfr
==> Downloading https://homebrew.bintray.com/bottles/mpfr-3.1.6.high_sierra.bottle.tar.gz
==> Downloading from https://akamai.bintray.com/ff/ff2f02099a071f15f73ac776026c30e33bb43f9b389b19b87f575cd9bd4ac0bb?__gda__=exp=15
######################################################################## 100.0%
==> Pouring mpfr-3.1.6.high_sierra.bottle.tar.gz
🍺 /usr/local/Cellar/mpfr/3.1.6: 26 files, 3.6MB
==> Installing osx-cross/avr/avr-gcc dependency: libmpc
==> Downloading https://homebrew.bintray.com/bottles/libmpc-1.0.3_1.high_sierra.bottle.tar.gz
######################################################################## 100.0%
==> Pouring libmpc-1.0.3_1.high_sierra.bottle.tar.gz
🍺 /usr/local/Cellar/libmpc/1.0.3_1: 12 files, 345.1KB
==> Installing osx-cross/avr/avr-gcc dependency: avr-binutils
==> Downloading http://ftp.gnu.org/gnu/binutils/binutils-2.29.tar.bz2
######################################################################## 100.0%
==> Downloading https://raw.githubusercontent.com/osx-cross/homebrew-avr/master/avr-binutils-size.patch
######################################################################## 100.0%
==> Patching
==> Applying avr-binutils-size.patch
patching file binutils/size.c
==> ../configure --prefix=/usr/local/Cellar/avr-binutils/2.29 --infodir=/usr/local/Cellar/avr-binutils/2.29/share/info --mandir=/u
==> make
==> make install
🍺 /usr/local/Cellar/avr-binutils/2.29: 140 files, 12.2MB, built in 2 minutes 30 seconds
==> Installing osx-cross/avr/avr-gcc --HEAD
==> Cloning svn://gcc.gnu.org/svn/gcc/trunk
==> ../configure --target=avr --prefix=/usr/local/Cellar/avr-gcc/HEAD-255997 --libdir=/usr/local/Cellar/avr-gcc/HEAD-255997/lib/av
==> make
==> make install
==> Downloading https://download.savannah.gnu.org/releases/avr-libc/avr-libc-2.0.0.tar.bz2
==> Downloading from https://mirror.netcologne.de/savannah/avr-libc/avr-libc-2.0.0.tar.bz2
######################################################################## 100.0%
==> ./configure --build=i686-apple-darwin17.3.0 --prefix=/usr/local/Cellar/avr-gcc/HEAD-255997 --host=avr
==> make install
Last 15 lines from /Users/k/Library/Logs/Homebrew/avr-gcc/05.make:
avr-gcc -DHAVE_CONFIG_H -I. -I../../.. -I../../../common -I../../../include -I../../../include -Wall -W -Wstrict-prototypes -mmcu=avr2 -D__COMPILING_AVR_LIBC__ -mcall-prologues -Os -MT asctime_r.o -MD -MP -MF .deps/asctime_r.Tpo -c -o asctime_r.o ../../../libc/time/asctime_r.c
during RTL pass: expand
../../../libc/time/asctime_r.c: In function 'asctime_r':
../../../libc/time/asctime_r.c:57:16: internal compiler error: in convert_memory_address_addr_space_1, at explow.c:300
buffer[i] = ascdays[d++];
~~~~~~~~~~^~~~~~~~~~~~~~
libbacktrace could not find executable to open
Please submit a full bug report,
with preprocessed source if appropriate.
See <https://gcc.gnu.org/bugs/> for instructions.
make[4]: *** [asctime_r.o] Error 1
make[3]: *** [install-recursive] Error 1
make[2]: *** [install-recursive] Error 1
make[1]: *** [install-recursive] Error 1
make: *** [install-recursive] Error 1
If reporting this issue please do so at (not Homebrew/brew or Homebrew/core):
https://github.com/osx-cross/homebrew-avr/issues
Answers:
username_1: have you tried removing the `--HEAD`?
username_2: Works for me on macOS High Sierra 10.13.2 (17C88)
```
brew tap osx-cross/avr
brew install avr-gcc
```
```
Updating Homebrew...
==> Auto-updated Homebrew!
Updated 2 taps (homebrew/core, caskroom/cask).
==> Updated Formulae
app-engine-go-64 gobuster unixodbc wwwoffle
astyle irssi verilator xvid
autogen kubeless whohas yaze-ag
backupninja mikutter wiggle zabbix
==> Tapping osx-cross/avr
Cloning into '/usr/local/Homebrew/Library/Taps/osx-cross/homebrew-avr'...
remote: Counting objects: 14, done.
remote: Compressing objects: 100% (12/12), done.
remote: Total 14 (delta 4), reused 4 (delta 0), pack-reused 0
Unpacking objects: 100% (14/14), done.
Tapped 8 formulae (47 files, 66KB)
Sujiths-MacBook-Pro:~ sujith$ brew install avr-gcc
==> Installing avr-gcc from osx-cross/avr
==> Installing dependencies for osx-cross/avr/avr-gcc: avr-binutils
==> Installing osx-cross/avr/avr-gcc dependency: avr-binutils
==> Downloading http://ftp.gnu.org/gnu/binutils/binutils-2.29.tar.bz2
######################################################################## 100.0%
==> Downloading https://raw.githubusercontent.com/osx-cross/homebrew-avr/master/
######################################################################## 100.0%
==> Patching
==> Applying avr-binutils-size.patch
patching file binutils/size.c
==> ../configure --prefix=/usr/local/Cellar/avr-binutils/2.29 --infodir=/usr/loc
==> make
==> make install
🍺 /usr/local/Cellar/avr-binutils/2.29: 140 files, 12.2MB, built in 2 minutes 36 seconds
==> Installing osx-cross/avr/avr-gcc
==> Downloading https://gcc.gnu.org/pub/gcc/releases/gcc-7.2.0/gcc-7.2.0.tar.xz
######################################################################## 100.0%
==> ../configure --target=avr --prefix=/usr/local/Cellar/avr-gcc/7.2.0 --libdir=
==> make
==> make install
==> Downloading https://download.savannah.gnu.org/releases/avr-libc/avr-libc-2.0
==> Downloading from https://mirrors.up.pt/pub/nongnu/avr-libc/avr-libc-2.0.0.ta
######################################################################## 100.0%
==> ./configure --build=i686-apple-darwin17.3.0 --prefix=/usr/local/Cellar/avr-g
==> make install
🍺 /usr/local/Cellar/avr-gcc/7.2.0: 1,682 files, 175MB, built in 19 minutes 42 seconds
```
Status: Issue closed
username_1: I've tried reinstalling, it works as well on my computer.
@username_0 without more information, it's hard to know what's going wrong. I'm closing the issue but feel free to reopen if the problem persists. |
ellisp/ellisp.github.io | 676484707 | Title: Possible data supporting wearing of masks
Question:
username_0: Is it worth doing an in-depth look at Victoria's new cases of Covid-19 data between the dates of 9 July (initial Melbourne lockdown), 22 July (mandatory mask wearing in Melbourne), and 3 August (introduction of Stage 4 restrictions)? All recommendations to introduce mandatory mask wearing have been based on advice, not proof, that masks help control the spread of the virus. It appears there is a very short period of data that shows mask wearing is driving the current decrease in new cases.
Status: Issue closed
Answers:
username_1: I think this is dealt with (to a degree) by a change I made in my modelling of time-varying R so that there can be a level shift down or up at named times, and I have fed it the times there were significant changes in lockdown etc. I think the data are too thin and too many confounding factors to draw strong conclusions on just masks from just Victoria data however. Difficult (but good) question.
There is some Imperial College modelling that's looked at this across countries using similar methods to what I have, but much more thoroughly and with more data. |
janishar/PlaceHolderView | 288067780 | Title: How to get the last click position?
Question:
username_0: @Click(R.id.monthsday_txt)
private void onClick() {
monthsdays.setBackgroundResource(R.drawable.circle);
}
I have a calendar view .if click on day change the day background and reset the previous click day background.
Status: Issue closed
Answers:
username_1: Pass callback from the clicked View to the Activity or Fragment and hold that View's reference. When Next View is clicked then the activity will set the earlier clicked view with new background property and call notifyItemChanged of PlaceHolderView. Position of the View can be obtained using `@Position int position`
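A minimal sketch of that flow (class and method names here are hypothetical, and the Android View/resource plumbing is omitted):

```java
import java.util.ArrayList;
import java.util.List;

// Holder (Activity/Fragment) keeps the last selected position and decides
// which items need notifyItemChanged() on the PlaceHolderView.
class CalendarHolder {
    private int selected = -1;
    final List<Integer> changedPositions = new ArrayList<>();

    void onDayClicked(int position) {
        if (selected >= 0) {
            changedPositions.add(selected); // reset previous day's background
        }
        selected = position;
        changedPositions.add(position);    // apply the new background
    }
}

// Item view class; position would be injected via @Position int position.
class DayItem {
    private final CalendarHolder holder;
    private final int position;

    DayItem(CalendarHolder holder, int position) {
        this.holder = holder;
        this.position = position;
    }

    // Body of the @Click(R.id.monthsday_txt) handler: delegate to the holder.
    void onClick() {
        holder.onDayClicked(position);
    }
}
```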
|
ant-design/ant-design | 564738116 | Title: Table component: multi-level JSON in the dataIndex property has no effect
Question:
username_0: - [ ] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate.
### Reproduction link
[](https://codesandbox.io/s/exciting-glade-2pzk0?fontsize=14&hidenavigation=1&theme=dark)
### Steps to reproduce
The gender column is expected to display correctly
### What is expected?
The gender column should display according to the data
### What is actually happening?
It is not displayed
| Environment | Info |
|---|---|
| antd | 4.0.0-rc.4 |
| React | 16.12 |
| System | win10 |
| Browser | chrome 80 |
---
It works in antd 3.x
<!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
Answers:
username_1: ref: https://next.ant.design/components/table-cn/#%E4%BB%8E-v3-%E5%8D%87%E7%BA%A7%E5%88%B0-v4
Status: Issue closed
username_2: Since 4.0 it is used like this:
dataIndex: ['test', 'sex']
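To see why the old string form stops working, here is a tiny sketch of how an array dataIndex resolves nested data (a hypothetical helper, not antd's internal code):

```javascript
const record = { test: { sex: "male" }, name: "Jack" };

// Walk an array dataIndex key by key; treat a plain string as one key,
// which is how antd 4 interprets string dataIndex values.
function getPathValue(rec, dataIndex) {
  const path = Array.isArray(dataIndex) ? dataIndex : [dataIndex];
  return path.reduce((obj, key) => (obj == null ? undefined : obj[key]), rec);
}

console.log(getPathValue(record, ["test", "sex"])); // v4 array path -> "male"
console.log(getPathValue(record, "test.sex"));      // v3-style string -> undefined
```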
username_3: @username_1 This is not mentioned in https://next.ant.design/docs/react/migration-v4-cn.
Status: Issue closed
username_4: Additionally, a fairly major change is that dataIndex went from supporting nested paths like user.age to array paths like ['user', 'age'].
But then getColumnSearchProps stops working when searching the table:
with ['user', 'age'] the error is on toString, meaning the corresponding dataIndex was not found;
with 'user', 'age' there is no error, but the result is wrong.
username_1: The `getColumnSearchProps` in the demo is just an example; adjust it in your own code as needed.
username_5: A very poor change. You shouldn't break compatibility with everything that came before just because a few property names contain a dot. Previous code all relied on the nested-path behavior; a change this big affects a huge number of users, and it's a real pain. Those with dots in their names could have been left to write their own render conversion. I don't know how the impact of this change was evaluated.
BtbN/FFmpeg-Builds | 957410700 | Title: Difference between ffmpeg-N-* and ffmpeg-n4.4-*?
Question:
username_0: Looking at the assets page, the file names, the file contents, the README.md, and skimming the rest of the repo source, I can't seem to find any information about the difference between these builds, apart from different hash in filenames, and the ffmpeg-N-* builds seem to use newer shared library versions. Could you drop a note in the README to explain the difference?
Answers:
username_1: That is quite simple,
`ffmpeg-n4.4-*` is built from the 4.4 release branch instead of master, while the other one you mentioned (`ffmpeg-N-*`) is built from the master branch.
username_0: @username_1 I understand the difference between a tagged release and master (HEAD), and suspected this. Thank you for the explanation. The explanation is simple, but the naming convention is not at all clear or simple. Why not `ffmpeg-release-4.4-*` and `ffmpeg-master-*`? Then the file name would be self-documenting. That would be truly simple. ;-) Or at least, a quick note in the README.
username_1: That's a great idea, but I assume that `N` or `n` means `Nightly Builds` here
username_2: N is literally what upstream ffmpeg calls the builds, I'm not making those version names. No idea what it actually stands for.
username_0: Thank you both for the clarifications!
Status: Issue closed
username_1: Hmm
username_1: You are welcome
username_3: Because it is not a release. Commits from 4.5 are being backported into 4.4 right now. See: https://github.com/FFmpeg/FFmpeg/commits/release/4.4
Release is the 4.4 tag...
username_0: The URL literally has "release" followed by "4.4", hence my question remains valid as far as self documenting file names.
I understand that the various "release" n<major>.<minor> tags are moving targets, each with back ports from HEAD, which is tagged 4.5-dev. This seems a departure from previous major.minor.patch tagged releases. It's part of a trend over the past 15+ years in software development to move away from versions to various degrees.
The ffmpeg project chose a very peculiar versioning system. I've never seen any version numbers or tags start with 'n', in projects over nearly 30 years. The additional 'N' was also a cryptic choice, possibly for "nightly", but who knows. I could probably find a discussion in an old email list or forum thread announcing the choice and explaining the rationale.
Again, none of these choices come from username_2, but rather are merely inherited from FFmpeg. The point is moot. Dubious or not, the choices are legacy now, and the FFmpeg project is not likely to ever change in that regard, so I just accept it and move on.
Thanks for all the explanations and sorry for the line noise.
username_2: I'm not building any tags.
It's building the release branch. The number after the tag is the number of commits sitting on top of it. When 4.4.1 will be tagged it will start counting up from that again.
I just don't see any point in building specific releases. All that means it intentionally missing out on backported bugfixes until someone tags a new release from them. |
lalibi/aepp-presentation | 580584624 | Title: Black box method
Question:
username_0: Good evening, AMAZING WORK. The problem is that even with the current update I cannot locate the black box method in the slides. Is there some issue, or did I miss it while searching? Thank you very much!
Answers:
username_1: Thank you very much! So that the presentation can also work offline, I do somewhat aggressive caching (with a service worker). As a result, changes don't appear immediately. Usually, if you close the window/tab and reopen it, it's OK. Try it and let me know.
username_0: All good! On slide 346, in the variable declarations, change the variable "οφειλόμενο" to "οφειλόμενο_ποσό", because my students correctly pointed out that it also has a syntax error.
Status: Issue closed
|
XinFinOrg/XinFin-Node | 441195445 | Title: What KYC Info is Required for an Individual for MN Candidate
Question:
username_0: What KYC information needs to be put into the PDF and submitted for the MN candidate?
Note: If this document will be displayed publicly, I am concerned about how much detail will be shown to the public that could be used in identity theft. I can provide anything needed to XinFin but prefer a limited amount be shown publicly.
Preston
Status: Issue closed
Answers:
username_0: This info is now provided on the screen where you submit to become a candidate and upload KYC documents. |
simpligility/android-maven-plugin | 1171563664 | Title: [4.6.0] NPE when trying to call getAndroidTargetManager
Question:
username_0: try to build with JAVA 11:
`JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64 mvn -U -B clean -P windows-linux,android-release-build install -Dandroid.sdk.path=/home/hanmac/android-sdks/ -T 1C -DskipTests=true -e`
Plugin Config in Maven pom.xml
```xml
<plugin>
<groupId>com.simpligility.maven.plugins</groupId>
<artifactId>android-maven-plugin</artifactId>
<dependencies>
<dependency>
<groupId>javax.xml.bind</groupId>
<artifactId>jaxb-api</artifactId>
<version>2.3.1</version>
</dependency>
</dependencies>
<version>4.6.0</version>
<extensions>true</extensions>
<configuration>
<sign>
<debug>false</debug>
</sign>
<sdk>
<platform>26</platform>
</sdk>
<zipalign>
<verbose>false</verbose>
</zipalign>
<dexForceJumbo>true</dexForceJumbo>
<androidManifestFile>${project.basedir}/AndroidManifest.xml</androidManifestFile>
<assetsDirectory>${project.basedir}/assets</assetsDirectory>
<resourceDirectory>${project.basedir}/res</resourceDirectory>
<nativeLibrariesDirectory>${project.basedir}/libs</nativeLibrariesDirectory>
<extractDuplicates>true</extractDuplicates>
<proguard>
<skip>false</skip>
<config>${project.basedir}/proguard.cfg</config>
</proguard>
<release>true</release>
<dex>
<jvmArguments>
<argument>${build.min.memory}</argument>
<argument>${build.max.memory}</argument>
</jvmArguments>
</dex>
</configuration>
</plugin>
```
getting the following NPE:
```
[ERROR] Failed to execute goal com.simpligility.maven.plugins:android-maven-plugin:4.6.0:generate-sources (default-generate-sources) on project forge-gui-android: Execution default-generate-sources of goal com.simpligility.maven.plugins:android-maven-plugin:4.6.0:generate-sources failed. NullPointerException -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal com.simpligility.maven.plugins:android-maven-plugin:4.6.0:generate-sources (default-generate-sources) on project forge-gui-android: Execution default-generate-sources of goal com.simpligility.maven.plugins:android-maven-plugin:4.6.0:generate-sources failed.
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:215)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.builder.multithreaded.MultiThreadedBuilder$1.call (MultiThreadedBuilder.java:190)
at org.apache.maven.lifecycle.internal.builder.multithreaded.MultiThreadedBuilder$1.call (MultiThreadedBuilder.java:186)
[Truncated]
at com.simpligility.maven.plugins.android.phase01generatesources.GenerateSourcesMojo.generateR (GenerateSourcesMojo.java:789)
at com.simpligility.maven.plugins.android.phase01generatesources.GenerateSourcesMojo.execute (GenerateSourcesMojo.java:240)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.builder.multithreaded.MultiThreadedBuilder$1.call (MultiThreadedBuilder.java:190)
at org.apache.maven.lifecycle.internal.builder.multithreaded.MultiThreadedBuilder$1.call (MultiThreadedBuilder.java:186)
at java.util.concurrent.FutureTask.run (FutureTask.java:264)
at java.util.concurrent.Executors$RunnableAdapter.call (Executors.java:515)
at java.util.concurrent.FutureTask.run (FutureTask.java:264)
at java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:628)
at java.lang.Thread.run (Thread.java:829)
```
4.6.0 and 4.5.0 cause this NPE
4.4.3 does not |
linemanjs/lineman | 33108953 | Title: Clarifications (or, n00b question) -> Documentation
Question:
username_0: Hey guys-
Trying to get up and running with Lineman, quite the adventure — coming from a Rails-backed JS site, I want to get up to speed on the modern (dare I say, right) way to do this. Through this issue, I'm hoping to start a discussion where n00bs like me have the resources to 1) know where to turn for learning prerequisites and 2) REALLY understand all common workflow patterns you'd encounter in setting up a real-world app.
I'll keep editing this as more clarifications come to mind, and the end result will be a documentation pull request. :+1:
##Initial Setup##
The biggest issue I've come across is my understanding of Grunt was not high enough, which makes lineman's thin configuration frosting seem very magical. This is exacerbated by not having a clear understanding of:
1) Is a devDependency in package.json always required?
2) Assuming yes to the above, do plugins always need to be stored in node_modules, or does Node find them in the system's package repository as well?
3) When a lineman-aware package is added here, will it magically apply its tasks, or is more configuration required?
4) When a NON-lineman-aware package is added here, will settings for that package always flow through (to grunt.initConfig I presume) from application.js? Like setting up lineman-coffeelint, attempting to override options has no effect:
coffeelint: {
options: {
"max_line_length": {
"level": "ignore"
} } }
5) When is adding a task to loadNpmTasks[] actually necessary? Only for non-lineman-aware packages? It's not clear (trying to get grunt-ect to compile, with very little luck so far).
6) Any other configuration tips for integrating non-lineman tasks into a lineman app?
7) Removing tasks also seems to have no effect (bug?):
removeTasks: {
common: ["jshint"]
},
Still runs the linter on my coffeescript, obviously resulting in 2K errors.
##Lineman-Rails##
Why are these gems (+ rails-lineman) necessary when we have all asset compilation + fingerprinting as npm modules? I thought the goal here was to REDUCE build&deploy dependencies? :)
More to come!<issue_closed>
Status: Issue closed |
hernad/atom-language-harbour | 221216456 | Title: loss of syntax highlight after a strange line
Question:
username_0: Hello,
working on my code I found this line, and after it the syntax highlighting does not work:
`METHOD aBitmap( n ) INLINE ( If( n > 0 .and. n <= 10, 5 , nil ) )`
The strange thing is it returns after a `>`...
I tried to understand why, but I am not able to do this.
Antonino
Answers:
username_1: there is a type parameter in C, something like
`<var>`, which doesn't exist in Harbour as we know :)
It seems ok with last version 3.3.4
Status: Issue closed
|
bids-standard/bids-specification | 520834607 | Title: Is the CODE_OF_CONDUCT file a duplicate?
Question:
username_0: As far as I see it, we are using the [GitHub functionality](https://help.github.com/en/github/building-a-strong-community/creating-a-default-community-health-file-for-your-organization) to host default files for the entire organization (bids-standard) in a `.github` repository, which we have [here](https://github.com/bids-standard/.github/).
Using this functionality, can we get rid of the duplicate [CODE_OF_CONDUCT](https://github.com/bids-standard/bids-specification/blob/master/CODE_OF_CONDUCT.md) file in this repo?
Perhaps @KirstieJane has an informed opinion on this.
Answers:
username_0: it *is* a duplicate, but it's okay to have it here, because "bids-specification" is arguably our main repo and an actual file is more visible than the small links that GitHub provides (see my screenshot in the original post).
The only important thing is that we keep our Codes of Conduct in SYNC
- https://github.com/bids-standard/.github/blob/master/CODE_OF_CONDUCT.md
- (see also https://github.com/bids-standard/.github/pull/1)
- https://github.com/bids-standard/bids-specification/blob/master/CODE_OF_CONDUCT.md
- (see also https://github.com/bids-standard/bids-specification/pull/281)
- https://github.com/bids-standard/bids-validator/blob/master/CODE_OF_CONDUCT.md
- (see also https://github.com/bids-standard/bids-validator/pull/1052)
there is a slightly different CoC for the bids-starter-kit:
- https://github.com/bids-standard/bids-starter-kit/blob/master/CODE_OF_CONDUCT.md
which is fine I think.
closing now, feel free to bring up other points if you have any.
Status: Issue closed
|
code-corps/code-corps-api | 198204483 | Title: Default task lists are not ordered
Question:
username_0: # Problem
In the `Project.create_changeset` we're making use of `put_assoc(:task_lists, TaskList.default_task_lists())` in order to create the default task lists for a project (Inbox, etc).
Unfortunately, since this function does not use the `TaskList.changeset`, then `set_order()` is never called (from `EctoOrdered`). This means the `order` is never set via the `position` attribute used in `default_task_lists()`.
## Subtasks
- [ ] Write a failing regression test to catch the issue
- [ ] Make the test pass by setting the order of the new default task lists when creating a new project<issue_closed>
Status: Issue closed |
silentsokolov/django-admin-rangefilter | 706645298 | Title: Italian translation
Question:
username_0: Hello,
do you prefer a PR with the `.po` file or transifex (you should invite me)?
Thank you for this useful piece of code
Status: Issue closed
Answers:
username_1: We use [Transifex](https://www.transifex.com/django-admin-rangefilter/). Just join to the project |
buildo/react-components | 238077423 | Title: Add typescript definition file
Question:
username_0: ## requirements
Declare `.d.ts` file with all components' types
## specs
{optional: describe technical specs to implement this feature, if not obvious}
## misc
{optional: other useful info}
Answers:
username_0: I've made some research around to discover how other similar libraries structure the `.d.ts` files.
I'm reporting the more significative here:
- [React Toolbox](https://github.com/react-toolbox/react-toolbox/tree/dev/components/app_bar)
- [Semantic-UI-React](https://github.com/Semantic-Org/Semantic-UI-React/tree/master/src/elements/Button)
I noticed the *namespace* strategy is not widely used for similar projects, but I came up with an interesting (IMO) solution:
```ts
import { PureComponent } from 'react'
declare class Divider extends PureComponent<Divider.Props, void> {}
declare namespace Divider {
type Orientation = 'horizontal' | 'vertical';
type Size = 'small' | 'medium' | 'large' | 'no-margin';
export type Props = {
orientation: Orientation,
size: Size,
style: object
}
}
export = Divider
```
with the above example is possible to
```js
```
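To see the merged declaration in action, here is a self-contained runtime sketch of the same pattern (the class body and the prop values are made up for illustration; the real `.d.ts` uses `declare class` with no implementation):

```typescript
// Runtime stand-in for the declaration-merging pattern above.
class Divider {
  constructor(public props: Divider.Props) {}
}

// The namespace merges with the class, so nested types can be
// referenced as Divider.Props, Divider.Orientation, etc.
namespace Divider {
  export type Orientation = 'horizontal' | 'vertical';
  export type Size = 'small' | 'medium' | 'large' | 'no-margin';
  export type Props = {
    orientation: Orientation,
    size: Size
  };
}

// Consumers get both the component and its nested types:
const props: Divider.Props = { orientation: 'horizontal', size: 'small' };
const divider = new Divider(props);
```

The appeal of the pattern is that a single import gives access to the component and to all of its prop types, without extra named exports.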
Status: Issue closed
|
electron/electron | 182253207 | Title: document size not following fullscreen animation when using remote.getCurrentWindow().setFullscreen()
Question:
username_0: The following way to enter/exit fullscreen behave correctly (html document/body size follows fullscreen animation):
- Entering/exiting fullscreen using the app menu
- Entering/exiting fullscreen using the 3rd of the *window 3 dots* (maximize/fullscreen control)
- Entering/exiting fullscreen using `myBrowserWindow.setFullscreen(true)` in the main process
Here is how it looks like:
https://cl.ly/0Y0f0O0v0L0E (red is a fixed 100% 100% div, grey is the browser-window backgroundColor)
Now, when using `remote.getCurrentWindow().setFullscreen(true)` in a renderer window, here is how it looks like:
https://cl.ly/011l2g3j420q
**The div that should always be filling the window does not *follow* the fullscreen animation when entering (or exiting)**
---
* Electron version: 1.4.3
* Operating system: macOS 10.12 (Sierra)
Answers:
username_1: This is probably expected since using `remote` does a synchronous method call to the main process and the renderer process is blocked until that method returns. Since it is blocked, it probably can't handle events occurring while it is waiting for the main process to return.
Have you tried using the `ipcRenderer`/`ipcMain` modules instead? Send a message from the renderer asking the main process to go full screen, and call `setFullScreen` there when that message arrives, so that the renderer process isn't blocked.
Status: Issue closed
username_0: Using `ipcRenderer / ipcMain` doesn't have this problem indeed, I will switch to this. I always tend to forget that `remote` is sync.
Thanks! |
pivpn/pivpn | 195019370 | Title: The Future of PiVPN Debug
Question:
username_0: So in the test branch I've begun expanding on pivpn debug. It currently tries to detect whether iptables rules are applied correctly and prompts the user if we can fix them. What I would prefer to do in the future is have pivpn debug be the output command. It will simply output the things we want or deem necessary to help debug. Then we add some sub-commands to it, like
pivpn debug detect
which maybe will try to detect common issues.
pivpn debug firewall
will try to ensure ufw/iptables is set up properly, and can even ask the user questions to effect changes that maybe were incorrect from the initial setup.
Anyway, those are some high-level thoughts; I will flesh out more as free time allows.
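For illustration, the firewall check could reduce to grepping a dumped ruleset for the rules PiVPN expects; a minimal sketch (rule text and function names are hypothetical, not actual PiVPN code):

```shell
#!/bin/sh
# Hypothetical core of a "pivpn debug firewall" subcommand: search a dumped
# ruleset (e.g. the output of `iptables-save`) for an expected rule.
rule_present() {
    ruleset="$1"
    rule="$2"
    printf '%s\n' "$ruleset" | grep -qF -- "$rule"
}

check_firewall() {
    ruleset="$1"
    if rule_present "$ruleset" "-A POSTROUTING -s 10.8.0.0/24 -j MASQUERADE"; then
        echo "NAT rule: ok"
    else
        echo "NAT rule: MISSING"
    fi
}
```

A real implementation would also have to handle ufw vs. plain iptables and offer to re-apply missing rules, as described above.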
Status: Issue closed
Answers:
username_1: Closing this one as pivpndebug seems to be quite complete these days. |
tidyverse/dplyr | 672305807 | Title: dplyr release 1.0.1 breaks simple ggvis demo: layer_lines and layer_points produce no output.
Question:
username_0: I realise that ggvis is in limbo, but I, and presumably others, have some shiny code in end devices that's not easy to update and it's taken a while to confirm that there is an issue and near where to place it.
So I thought that I'd at least document the issue, and potential fix, here. I first found a mention here: `https://bit.ly/3k5KO2w`.
From an engineering point of view, there may be some sort of regression as the failure is very quiet.
---
ggvis layers stop working. I'm afraid that I couldn't get reprex to work with rstudio-server, so I've put some code below.
```r
install.packages ("ggvis")
library (magrittr)
library (ggvis)
mtcars %>% ggvis (x=~wt, y=~mpg) %>% layer_points()
```
I'd expect this to produce the "usual" plot. However, there are no points.
<img width="709" alt="Untitled 3" src="https://user-images.githubusercontent.com/91360/89221262-e3ca0880-d5ca-11ea-875a-0d2561847427.png">
However, this does work:
```r
require (devtools)
install_version("dplyr", version = "0.8.5")
# <cr> over which versions to update
# I'm using RStudio Server. So at this point I restart R
library (magrittr)
library (ggvis)
mtcars %>% ggvis (x=~wt, y=~mpg) %>% layer_points()
```
And the plot with points appears.
<img width="714" alt="Untitled 2" src="https://user-images.githubusercontent.com/91360/89221288-f3495180-d5ca-11ea-9712-33efb73b49fa.png">
I did try to `git bisect` between 0.8.5 and 1.0.0, but the commit for 0.8.5 didn't seem to be in the clone from github, so I couldn't.
Answers:
username_0: I did manage to `git bisect` this from an earlier commit that's in the repo. `git bisect log`:
```
# bad: [90b3e4ea164514b57c72954c89dc63ab958eceb0] Bullet errors for summarise() (#4729)
git bisect bad 90b3e4ea164514b57c72954c89dc63ab958eceb0
# bad: [90b3e4ea164514b57c72954c89dc63ab958eceb0] Bullet errors for summarise() (#4729)
git bisect bad 90b3e4ea164514b57c72954c89dc63ab958eceb0
# good: [689518b4eb6adcfdeb5917dd97d41419b3e3d69b] Improve arrange docs
git bisect good 689518b4eb6adcfdeb5917dd97d41419b3e3d69b
# good: [3d45972610542138a790d8c1cd00a5990c0a1304] Upgrade tidyselect (#4720)
git bisect good 3d45972610542138a790d8c1cd00a5990c0a1304
# good: [cab26df184fa7032a16faa602fc529542b508be1] Remove unnused match_vars() function
git bisect good cab26df184fa7032a16faa602fc529542b508be1
# good: [dc8ebd4cb07c61a5e8fb9648a87fbf2f4628897a] Fix broken test (#4743)
git bisect good dc8ebd4cb07c61a5e8fb9648a87fbf2f4628897a
# good: [f7f66821327ce41f9a9a49bfc0f61434039f864f] Tweak error message
git bisect good f7f66821327ce41f9a9a49bfc0f61434039f864f
# bad: [887d40bf95cfe8c526245bd61fcde2c18af65e9d] Set tibble version (#4745)
git bisect bad 887d40bf95cfe8c526245bd61fcde2c18af65e9d
# bad: [125d75d6f76c78c27da45adc258f0b8b6a4620df] Simplify group metadata (#4728)
git bisect bad 125d75d6f76c78c27da45adc258f0b8b6a4620df
# first bad commit: [125d75d6f76c78c27da45adc258f0b8b6a4620df] Simplify group metadata (#4728)
```
username_1: Fixed in https://github.com/rstudio/ggvis/pull/485
Install the branch with:
```r
remotes::install_github("username_1/ggvis@fix-dplyr-1-0-0")
```
But I think we should also fix this in dplyr. `groups()` should return `NULL` when groups are empty, as before.
username_0: Thanks. Unfortunately, that doesn't fix it, and I don't know the innards of R well enough to get any diagnostics out. I expected a print statement to spit something out, but I'd guess that stdout has been captured.
It's not really my business, but, personally, I'd make the new test more explicitly based on a boolean value, rather than a function returning a value of 0. If I could test it, I'd propose a change, but I can't - and you should ignore my comment if I'm writing in the face of the project style guide.
username_1: It is of course possible. There is a whole dynamically typed language founded on this kind of option type: lisp and nil punning.
I agree it is better to return `list()` in this case, but thought it wasn't worth the breaking change. What I missed is that dplyr would previously return `NULL` or `list()` interchangeably, making the return type complex. So the new behaviour was worth the change.
Status: Issue closed
|
jwilger/kookaburra | 156568590 | Title: Need better message when calling unimplemented method on UIComponents
Question:
username_0: Because UIComponent currently forwards methods on to the Session, the exception that gets raised when calling an unimplemented method does not identify the UIComponent class. It would be nice if it did. |
cannorin/FSharp.CommandLine | 1091877133 | Title: Required options
Question:
username_0: Hey,
It's not really an issue, but a question.
Is there a way to set an option required?
Something like:
```fsharp
opt files in fileOption |> CommandOption.requiredAndExactlyOne
```
Answers:
username_1: It's possible but not very straightforward:
```fsharp
let inline exactlyOne (co: #ICommandOption<_>) =
co |> zeroOrMore
|> map (function
| x :: [] -> x
| _ ->
let msg = sprintf "the option '%s' is required and should be provided only once"
(co.Summary.NameRepresentations |> List.head)
OptionParseFailed (co.Summary, msg) |> raise
)
```
Maybe worth adding to the library. |
rollup/rollup | 198617897 | Title: How to deal with anonymous imports?
Question:
username_0: https://babeljs.io/docs/usage/polyfill/#usage-in-browser recommends the use of an anonymous import statement:
`import "babel-polyfill"`
But if I do this, rollup warns:
```
Treating 'babel-polyfill' as external dependency
No name was provided for external module 'babel-polyfill' in options.globals – guessing 'babelPolyfill'
```
Further, when I try using the resulting bundle in a browser, I get this error:
```
Uncaught ReferenceError: babelPolyfill is not defined
at Requirements.js:2568
```
1. How are we supposed to handle shims that augment existing globals without necessarily exporting their own?
2. Is there a way to suppress rollup's warnings?
3. I'm not sure that the output bundle should be waiting on `babelPolyfill` since anonymous imports imply that a global might not be getting exported.
Answers:
username_0: As an aside, in the specific case of `babel-polyfill` I discovered that it exports an undocumented global `_babelPolyfill` so I used:
```
external: ["babel-polyfill"],
globals: {
"babel-polyfill": "_babelPolyfill"
}
```
to suppress rollup's warnings. But I'm still curious what the best-practice is for these kinds of situation (in case I run into the problem again with a different library).
username_1: `import 'babel-polyfill'` means 'this bundle depends on babel-polyfill'. If Rollup knows how to find `babel-polyfill`, it will include it in your bundle. But by default, it *doesn't* know, so you get a message about it being treated as an external dependency. To tell it how to find it in `node_modules` you need to use [rollup-plugin-node-resolve](https://github.com/rollup/rollup-plugin-node-resolve).
Rollup should probably ignore missing globals for empty imports, come to think of it. Will mark this as a bug for that reason.
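For concreteness, a config of that era might look like the sketch below (`entry`/`dest`/`moduleName` were the 0.40.x option names; later versions renamed them, and the plugin now lives at `@rollup/plugin-node-resolve`):

```javascript
// rollup.config.js — illustrative sketch, not the reporter's actual config
import resolve from 'rollup-plugin-node-resolve';
import commonjs from 'rollup-plugin-commonjs';

export default {
  entry: 'src/main.js',
  dest: 'bundle.js',
  format: 'iife',
  moduleName: 'app',
  plugins: [
    resolve(),  // lets `import 'babel-polyfill'` resolve into node_modules
    commonjs()  // converts its CommonJS internals to ES modules for Rollup
  ]
};
```

With the polyfill resolved and inlined this way, no `external`/`globals` entries are needed for it at all.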
Status: Issue closed
username_0: Excellent. Thanks Rich!
username_0: Odd. I just tried version 0.40.2 and I still get the same error.
Are you able to reproduce this on your end, or do you need me to provide a testcase for it?
username_0: Correction, the error changed. Now I get:
```
'babel-polyfill' is imported by src\Requirements.js, but could not be resolved – treating it as an external dependency. For help see https://github.com/rollup/rollup/wiki/Troubleshooting#treating-module-as-external-dependency
No name was provided for external module 'babel-polyfill' in options.globals – guessing 'babelPolyfill'
```
username_1: That's strange. Can you provide a repro? (you'll still need to use the node-resolve plugin as per my earlier comment, but it shouldn't print the `options.globals` warning)
username_0: @username_1 Sorry for the false alarm. I had totally forgotten about the node-resolve plugin. I confirm that this issue is fixed in 0.40.2. Thank you for the quick turnaround! |
ericam/christmasorigami | 142897990 | Title: Origami Simple Cat
Question:
username_0: Origami Simple Cat
http://www.origami-kids.com/blog/cat/origami-simple-cat.htm
This is a very simple paper cat; it is easy to fold, so it is a great introduction to origami for kids. This is a traditional origami cat model. Folder and Photo: @OrigamiKids. Complexity: Easy. You need 2 sheets of classic uncut … Continue reading →
The post "Origami Simple Cat" appeared first on the Origami Blog.
via MixOrigami http://www.rssmix.com/
March 22, 2016 at 11:37PM |
abulmo/MPerft | 598574124 | Title: Unsigned long long
Question:
username_0: Unsigned long long should be replaced with uint64_t for consistency (avoiding assignment from uint64_t to unsigned long long).
Answers:
username_1: Printing uint64_t through string concatenation with PRIu64 is ugly, to say the least, and not well supported by some (old) compilers. Assigning a uint64_t to an unsigned long long should not be a problem, as the minimum size of unsigned long long is 64 bits.
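As a standalone illustration of the two routes (not MPerft code):

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Route 1: format a 64-bit node count with the PRIu64 macro. */
static void format_nodes(char *buf, size_t len, uint64_t nodes) {
    snprintf(buf, len, "%" PRIu64, nodes);
}

/* Route 2: assign to unsigned long long (guaranteed to be at least
   64 bits wide, so no truncation) and print with a plain "%llu". */
static unsigned long long as_ull(uint64_t nodes) {
    return nodes;
}
```

With route 2, `printf("%llu\n", as_ull(nodes));` stays readable at every call site, which matches the preference expressed above.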
Status: Issue closed
|
ant-design/ant-design | 221484448 | Title: Popover can not closed when it is in another popover content
Question:
username_0: ### Version
2.9.1
### Environment
Mac:0.10.5 Chrome:57.0.2987.98
### Reproduction link
[http://codepen.io/SmilePark/pen/ybLRKX?editors=0010](http://codepen.io/SmilePark/pen/ybLRKX?editors=0010)
### Steps to reproduce
1.click the button
2.click the text 'click me'
3.click the body
### What is expected?
all two popover should be closed
### What is actually happening?
the child popover can not be closed
<!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
Answers:
username_1: `getPopupContainer`
http://codepen.io/anon/pen/RVwqbP?editors=0010
Status: Issue closed
username_0: @username_1 Thanks a lot. But the second time I click in, the second popover flashes briefly and then disappears again.
username_1: Seems to be a problem..
username_1: Keep the two states separate: http://codepen.io/anon/pen/YVxOay?editors=0011
Status: Issue closed
|
FH-Potsdam/connecting-bits | 129444305 | Title: (Software) Lint javascript with ESLint
Question:
username_0: http://eslint.org/
- http://jshint.com/
- http://jscs.info/
Answers:
username_0: Maybe we should choose a set of meaningful rules among this eslint plugin list:
https://www.npmjs.com/browse/keyword/eslintplugin
PS: I added an eslint config and a gulp task. @Ourelius and @username_2, it would be great if you would follow these steps recommended by @username_1. I did this and it works like a charm!
https://medium.com/@dan_abramov/lint-like-it-s-2015-6987d44c5b48#.e083y8j8o
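For anyone landing here later, a minimal config of that era might look something like this (a hypothetical sketch, not the exact config committed to the repo):

```javascript
// .eslintrc.js — illustrative minimal setup for an ES2015 + Babel project
module.exports = {
  parser: 'babel-eslint',           // understands ES2015+ syntax
  extends: 'eslint:recommended',
  env: { browser: true, node: true, es6: true },
  parserOptions: { ecmaVersion: 6, sourceType: 'module' },
  rules: {
    'no-unused-vars': 'warn',
    'semi': ['error', 'always']
  }
};
```

Project-specific rules (or any plugins picked from the list above) would then be layered on top of `eslint:recommended`.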
username_0: When you're done and the linting works please let me know and I'll close the issue. <3
username_1: added .babelrc as well. https://babeljs.io/docs/usage/babelrc/
username_1: into branch dev
username_2: Tried it yesterday but haven't got it to work yet. Will get back to you when my PC finally obeys my orders. :weary:
username_1: This is my current sublime setup https://gist.github.com/username_1/223d8bdc87bd561b4bb6
username_2: linting-problem solved from my side
username_0: :+1:
Status: Issue closed
|
jiacai2050/gooreplacer | 281946559 | Title: Cannot save data
Question:
username_0: The operating system is Windows 10
The Firefox version is 57.0
Problem: after setting up the Online Rule List, cancel URL, and so on, everything is gone again after reopening Firefox.
Answers:
username_1: Is gooreplacer freshly installed? Take a look inside localStorage.
For reference: https://github.com/username_1/gooreplacer/issues/44#issuecomment-341867667
username_0: @username_1 Firefox doesn't seem to have this localStorage
username_1: 
It's the same in Firefox
username_0: @username_1 Found the cause: it was because "History" was turned off. Thanks.

Status: Issue closed
username_1: OK, glad you found the problem.
Fig77/grid-framework | 529382370 | Title: Peer to Peer review [27th Nov 2019]
Question:
username_0: 1. Good project, work well done(over all)
2. Have a proper tree for the page elements having nested folders (e.g. assets/css)
3. Verify the viewport tag, one of the two properties (width=device-width, initial-scale=1.0) causes sometimes causes some trouble when you change from portrait
4. Lowercase on files, for example line 8 -> Normalize.css
5. The 'index.html' file is less than 4KiB, but the 'js/all.js' is over 1MiB. More than 99% of the loading time for this website will be because of JavaScript. Think about this.
6. Suggestion: maybe add more breakpoint
7. Look at file format to improve loading speed
Answers:
username_1: Extra point - Including woff & woff2 webfont formats **only** should significantly reduce that file loading time from font-awesome. |
tomdionysus/foaas | 152989256 | Title: Add /deraadt/:name/:from
Question:
username_0: ```
{name} you are being the usual slimy hypocritical asshole... You may have had value ten years ago, but people will see that you don't anymore.
-- {from}
```
From the amazing bastard https://en.wikiquote.org/wiki/Theo_de_Raadt (look at what <NAME> thinks of him)
Answers:
username_1: +1
username_2: Done in v1.1.0
Status: Issue closed
|
ljharb/qs | 981175365 | Title: Does Not Parse Boolean/Number in array
Question:
username_0: Consider this code
```
Qs.stringify({
field: {
$in: [true, false,0,1]
}
})
```
`true, false, 0, 1` all become strings
Answers:
username_1: There are no tests here https://github.com/username_2/qs/blob/bd9e3754d2871592baf42ca9fa988c2148a469a5/test/parse.js for false/true values.
username_1: BTW https://github.com/username_2/qs/issues/91
username_2: @username_1 that's because `parse` only accepts a string. Passing a boolean in is a type error, something that I'll probably make throw an exception in the next semver-major, whenever that is.
Regarding #91, `a=true` in a query string is the string "true", not a boolean, so `qs` won't ever do the wrong thing there by default. You'll have to use a custom decoder if you want nonstandard types. |
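For the round trip back from a query string, the custom-decoder route might look like the sketch below (the revival rules here are illustrative; check the qs README for the exact `decoder` hook signature):

```javascript
// Sketch of a decoder that revives booleans and numbers while parsing.
// Intended usage: qs.parse('field[$in][]=true&field[$in][]=0', { decoder: typedDecoder })
// Caveats: qs also invokes the decoder for *keys* (a production version
// should check the `type` argument), and this sketch skips percent-decoding.
function typedDecoder(str) {
  if (str === 'true') return true;
  if (str === 'false') return false;
  if (str === 'null') return null;
  if (/^-?\d+(\.\d+)?$/.test(str)) return Number(str);
  return str;
}
```

Everything that doesn't match one of the recognized shapes falls through unchanged, so ordinary string values are unaffected.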
pgogy/openattribute-wordpress_post | 190077823 | Title: Breaks https
Question:
username_0: The way you build the URLs for the stylesheet and logo using WP_PLUGIN_URL breaks https; you should use plugins_url() instead, per https://wordpress.org/ideas/topic/wp_plugin_url-doesnt-take-ssl-into-account [I have a fix]
dglo/StringHub | 634734900 | Title: [dglo on 2013-06-05 11:17:01] : pDAQ: StringHub - Dropped DOMs alert condition changed
Question:
username_0: JohnK noticed that the dropped DOMs were not reported during the Furthermore_rc1 run. Further investigation by JohnJ showed that the pDAQ dropped DOM alert condition string had changed from "Dropped DOMs" to "Dropped DOMs during configure"
Answers:
username_0: [username_0 on 2013-06-05 11:23:39]
svn commit -m"Issue 6538: Fix broken \"Dropped DOM\" alert condition" src/main/java/icecube/daq/stringhub/StringHubComponent.java
Sending src/main/java/icecube/daq/stringhub/StringHubComponent.java
Transmitting file data .
Committed revision 14546.
username_0: [username_0 on 2014-01-20 15:11:43]
Furthermore has been running for several months
Status: Issue closed
|
broadinstitute/picard | 826152352 | Title: Markduplicates with UMI in GATK pipeline, which is the correct approach?
Question:
username_0: Feature request - Markduplicate
Hi, everybody. In the past, we developed a pipeline GATK to identify somatic variants from Illumina amplicon-based gene panel. Now we are changing our pipeline to a new one in order to analyze data from an Agilent capture-based gene panel with MolecularBarcode (UMI).
To run our pipeline we used a GATK 4.1.4.1 WDL workflow file that call every tool.
To use UMI information we made a script that add UMI sequence and quality as RX and QX tags in the header of each read in the fastq files.
We changed WDL file to use Picard Markduplicate and to identify pcr duplicates thanks to UMI sequences.
This is the command we use
MarkDuplicates \
--INPUT ${sep=' --INPUT ' input_bams} \
--OUTPUT ${output_bam_basename}.bam \
--METRICS_FILE ${metrics_filename} \
--VALIDATION_STRINGENCY SILENT \
--OPTICAL_DUPLICATE_PIXEL_DISTANCE 2500 \
--ASSUME_SORT_ORDER "queryname" \
--MOLECULAR_IDENTIFIER_TAG RX \
--CREATE_MD5_FILE true
Unfortunately, using the same data with or without RX and QX tags the Markduplicate metrics are identical and also the variants called had the same number of supporting reads.
We also created two fake Fastq files repeating 6 times a read with RX and QX tags identical. In the second file we changed 2 RX sequences but Markduplicate identified 5 duplicates in both analysis.
Is it correct our approach?
Which is the exact Markduplicate command line to insert in WDL file?
Thanks,
Mat
Answers:
username_1: Have you tried the tool, UmiAwareMarkDuplicatesWithMateCigar?
Also, what is the purpose of the UMIs here? Could you describe your UMIs in more detail, are you using UMIs to increase depth, or to do consensus calling, or something else? |
jmcnamara/XlsxWriter | 110980604 | Title: Allow vba_extract.py to support an output file name
Question:
username_0: I have created a small patch that allows the user to specify an output filename when running vba_extract.py. The primary reason for this is that if I extract the VBA from two (or more) Excel sheets, in the same directory the new extracted file overwrites the old one.
I proposed
The new syntax is would be:
python vba_extract.py inputfile.xlsm outputVBA.bin
The other option would be to base the output filename on the inputfile name. For example
python vba_extract.py inputfile.xlsm
would generate vbaProject-inputfile.bin
The code change is pretty trivial either way but seems like it would be helpful. Let me know your thoughts.
Answers:
username_1: Hi Chris,
That sounds reasonable.
I would go with the first option. Something like:
```bash
vba_extract.py macro_file.xlsm -o file.bin
```
Where `-o` is optional and the default without it is to output `vbaProject.bin`.
Use standard libs to support the commandline.
Or if you prefer you can drop the `-o` (since it unlikely that there is going to be other options) and just have an implicit optional second argument (that also defaults to `vbaProject.bin`).
If you want to submit a PR that would be good. Add a couple of lines to the "The vba_extract utility" section of the [Working with VBA Macros documentation](http://xlsxwriter.readthedocs.org/working_with_macros.html) to go with it.
John
username_1: Looking at the code again the option without `-o` is probably better.
username_0: Thanks for the feedback. Do you think I should incorporate argparse into the patch or just continue using argv?
username_1: For this case I would avoid `argparse` and just work with `argv`.
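A sketch of that `argv` handling (names are illustrative, not the final patch):

```python
# Sketch of the proposed argument handling: the second positional argument
# is optional and falls back to the historical default output name.
def parse_args(argv):
    if len(argv) < 2:
        raise SystemExit("Usage: vba_extract.py xlsm_file [output_file]")
    input_file = argv[1]
    output_file = argv[2] if len(argv) > 2 else "vbaProject.bin"
    return input_file, output_file
```

Called as `parse_args(sys.argv)`, `vba_extract.py macro_file.xlsm` keeps writing `vbaProject.bin`, while `vba_extract.py macro_file.xlsm file.bin` overrides the output name.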
username_1: Hi Chris,
I guess you lost interest in this or found an easier way to workaround it.
Either way there doesn't seem to be a pressing need for this feature. Is it okay to close this?
John
username_0: John - Sure. you can close it out. If I get to the point where I am ready to send a pull request, I can open this up again. Thanks.
Status: Issue closed
|
Apicurio/apicurio-studio | 358608338 | Title: In Definitions "formatted as" cannot be set to string
Question:
username_0: When I go into definitions, create a property and set type to string and formatted as to string as well, after reloading, formatted as is empty ("Choose format") again.
The cause is probably that the value "string" is optional, as it is the default, and thus the tool does not store it. However, from a usability point of view it is not intuitive. Maybe the UI should show the default value even when it is not present?
Answers:
username_1: Yeah that's weird - and your suggestion is a good one. Thanks for the report and the suggested fix! Should be an easy change to make... |
JustusW/harparser | 114749746 | Title: How to decode har file?
Question:
username_0: more demo?
Answers:
username_1: Please consult the https://github.com/username_1/harparser/blob/master/README since the usage section answers your question quite specifically.
Status: Issue closed
username_1: I have no idea how to make the basic usage example any simpler than it is. Take any string (in the following example, read from a file) and put it into the function:
from harparser import HAR
f = file("some.json")
my_har = HAR.log().json(f.read()) |
RTICWDT/open-data-maker | 595271393 | Title: Using Range and not together
Question:
username_0: Is it possible to use __not__range together? Looking to find a way to get schools with null for ACT and SAT data.
Answers:
username_1: We do not currently support combining value operations. I attempted to address your use case with other API methods but was unable to produce a workaround using the API exclusively. I suggest gathering the requested information from the [raw data files](https://ed-public-download.app.cloud.gov/downloads/Most-Recent-Cohorts-All-Data-Elements.csv).
I am going to tag this issue as a potential feature to address better handling of `null` values during query time. |
swaywm/sway | 425624666 | Title: Void Linux Musl
Question:
username_0: OS: Void Linux x86-64 Musl (No Systemd)
Sway Version: 1.0
Config: Stock
Step 1
```sh
sudo xbps-install sway
sudo chmod a+s /usr/bin/sway
```
This gave me an error saying
`XDG_RUNTIME_DIR is not set in the environment. Aborting.`
So I set the environment variable, and then I got this:
```sh
[backend/session/logind.c:511] User has no sessions
[backend/session/logind.c:559] Couldn't find an active session or a greeter session
```
At this point the system is unresponsive.
I've looked through your issues to try to find similar situations; they have helped me get to where I am now, but I'm not sure what to do next.
[issue 3733](https://github.com/swaywm/sway/issues/3733)
[issue 3024](https://github.com/swaywm/sway/issues/3024)
I have tried to run `sway -d 2> ~/sway.log`. The system becomes unresponsive, so I'm not sure that this is a complete log.
```sh
2019-03-26 19:42:44 - [sway/main.c:153] Linux Moonshot 4.19.31_1 #1 SMP PREEMPT Sun Mar 24 19:15:14 UTC 2019 x86_64 GNU/Linux
2019-03-26 19:42:44 - [sway/main.c:169] Contents of /etc/os-release:
2019-03-26 19:42:44 - [sway/main.c:153] NAME="void"
2019-03-26 19:42:44 - [sway/main.c:153] ID="void"
2019-03-26 19:42:44 - [sway/main.c:153] DISTRIB_ID="void"
2019-03-26 19:42:44 - [sway/main.c:153] PRETTY_NAME="void"
2019-03-26 19:42:44 - [sway/main.c:141] LD_LIBRARY_PATH=(null)
2019-03-26 19:42:44 - [sway/main.c:141] LD_PRELOAD=(null)
2019-03-26 19:42:44 - [sway/main.c:141] Path=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
2019-03-26 19:42:44 - [sway/main.c:141] SWAYSOCK=(null)
2019-03-26 19:42:44 - [sway/server.c:39] Preparing Wayland server initialization
2019-03-26 19:42:44 - [backend/session/logind.c:511] User has no sessions
2019-03-26 19:42:44 - [backend/session/logind.c:559] Couldn't find an active session or a greeter session
2019-03-26 19:42:44 - [backend/session/direct.c:271] Successfully loaded direct session
```
Answers:
username_1: Since you are using the direct backend, is your user in the `input` and `video` groups?
username_0: Yes
username_0: Reddit seemed pretty confident that sway on Void Linux needed elogind. I've done a clean install and repeated the process.
After running `sway -d 2> ~/sway.log` I get the same results
username_0: Also I am using the nouveau drivers
username_2: Can you SSH into your machine to make sure logs aren't truncated?
username_0: I am currently unable to SSH into it, as it's running on my only computer.
username_3: if you have a smartphone you can install an SSH client on it :-)
username_0: #### I have given up on Musl
I was having problems getting i3 to work on it as well, so I figured I should stick with GlibC on this laptop if I really want this to work
### Void Linux GlibC
I have taken the same course of action as I had previously but now I am running with GlibC version.
When I run `sway -d 2> ~/sway.log` I get this
```sh
2019-04-02 07:37:30 - [sway/main.c:153] Linux Luna 4.19.32_1 #1 SMP PREEMPT Wed Mar 27 20:41:38 UTC 2019 x86_64 GNU/Linux
2019-04-02 07:37:30 - [sway/main.c:169] Contents of /etc/os-release:
2019-04-02 07:37:30 - [sway/main.c:153] NAME="void"
2019-04-02 07:37:30 - [sway/main.c:153] ID="void"
2019-04-02 07:37:30 - [sway/main.c:153] DISTRIB_ID="void"
2019-04-02 07:37:30 - [sway/main.c:153] PRETTY_NAME="void"
2019-04-02 07:37:30 - [sway/main.c:141] LD_LIBRARY_PATH=(null)
2019-04-02 07:37:30 - [sway/main.c:141] LD_PRELOAD=(null)
2019-04-02 07:37:30 - [sway/main.c:141] PATH=/home/doc/.cargo/bin:/home/doc/.cargo/bin:/home/doc/.cargo/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/doc/.shell/scripts/tools:/home/doc/.shell/scripts/
2019-04-02 07:37:30 - [sway/main.c:141] SWAYSOCK=(null)
2019-04-02 07:37:30 - [sway/server.c:39] Preparing Wayland server initialization
2019-04-02 07:37:31 - [backend/session/logind.c:663] Successfully loaded logind session
2019-04-02 07:37:56 - [backend/session/logind.c:74] Failed to take device '/dev/dri/card1': Connection timed out
```
username_0: This issue has been solved by
`export WLR_DRM_DEVICES=/dev/dri/card0`
Status: Issue closed
username_4: Huh, that is really strange, although it probably points at a systemd/logind issue rather than one in sway.
I don't know how well systemd plays with musl. It's certainly not what you'd call portable code.
username_5: The first time I try to run Sway it works, but after a reboot I encounter this issue. I'm on a fresh Arch Linux install using the integrated graphics of a Ryzen 3400g.
When I try to run sway I get
```
2019-10-25 10:18:02 - [backend/session/logind.c:73] Failed to take device '/dev/dri/card0': Operation not permitted
2019-10-25 10:18:02 - [backend/backend.c:339] Failed to open any DRM device
2019-10-25 10:18:02 - [sway/server.c:46] Unable to create backend
```
and if I run `export WLR_DRM_DEVICES=/dev/dri/card0` and then try to run sway I get the following:
```
2019-10-25 10:18:54 - [backend/session/logind.c:73] Failed to take device '/dev/dri/card0': Operation not permitted
2019-10-25 10:18:54 - [backend/session/session.c:271] Unable to open /dev/dri/card0 as DRM device
2019-10-25 10:18:54 - [backend/backend.c:339] Failed to open any DRM device
2019-10-25 10:18:54 - [sway/server.c:46] Unable to create backend
```
username_6: Same problem on Void musl, but I also get
`[sway/server.c:47] Unable to create backendDRM deviceerererers)`
after
`Couldn't find an active session or a greeter session`
The system doesn't freeze; the command just aborts. Nothing I tried got sway to work.
username_7: While debugging I noticed that `lspci` would freeze (and then shut down my system). A Ubuntu 19.10 live CD would also shut down my laptop after a few minutes of idling. It turns out this is an issue with the nouveau driver. I believe it's being tracked here: https://bugzilla.kernel.org/show_bug.cgi?id=156341
Using the Intel card (`export WLR_DRM_DEVICES=/dev/dri/card0`) solves the issue for me. You can also add `modprobe.blacklist=nouveau` to your kernel params.
username_8: Bump, as I am having the exact same issue on Void Linux musl.
username_8: Does a solution exist?
username_8: have you ever got it to work |
BrightstarDB/BrightstarDB | 56507508 | Title: InverseProperty behaves differently depending on which side of the relationship it is applied
Question:
username_0: I was under the impression one could apply `InversePropertyAttribute` to either side of the relationship (but not both) and it would work the same. But I found this not to be true whilst putting together a repro for a different problem :)
Consider this:
```C#
[Entity]
public interface IQuote
{
string Id
{
get;
}
//[InverseProperty("Quote")]
ICollection<IQuoteItem> Items
{
get;
set;
}
}
[Entity]
public interface IQuoteItem
{
string Id
{
get;
}
string Name
{
get;
set;
}
[InverseProperty("Items")]
IQuote Quote
{
get;
set;
}
}
using (var context = new MyEntityContext(connectionString))
{
var quote = new Quote
{
Items = new[]
{
new QuoteItem
{
Name = "First"
},
new QuoteItem
{
Name = "Second"
},
new QuoteItem
{
Name = "Third"
},
[Truncated]
context.Quotes.Add(quote);
context.SaveChanges();
}
```
As it stands, the call to add the quote fails with:
```
System.ArgumentNullException was unhandled by user code
HResult=-2147467261
Message=Value cannot be null.
Parameter name: value
Source=BrightstarDB
ParamName=value
StackTrace:
at BrightstarDB.Client.DataObject.AddProperty(IDataObject type, Object value, String lang)
at BrightstarDB.Client.DataObject.AddProperty(String type, Object value, String lang)
at BrightstarDB.EntityFramework.BrightstarEntityObject.SetRelatedObjects[T](String propertyName, ICollection`1 relatedObjects)
at BrightstarDB_REPRO.Quote.set_Items(ICollection`1 value)
```
On a gut feeling I moved the `InverseProperty` to the `Items` property instead and then it worked.
Answers:
username_1: It should work either way. They will result in different structures in the datastore, so you can't chop and change once your data is set up, but it should be possible to run the code as you had it here. This sounds like a bug I'll need to look into.
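The "different structures in the datastore" point can be illustrated with a small sketch: which side carries the attribute decides the direction of the stored relationship. The triple shapes and predicate names below are illustrative only, not BrightstarDB's actual RDF output:

```python
# Sketch of the two relationship directions a quote/item pair can produce.
# Predicate names ("eg:items", "eg:quote") are made up for illustration.

def triples_items_forward(quote_id, item_ids):
    """InverseProperty on IQuoteItem.Quote: the quote points at each item."""
    return [(quote_id, "eg:items", item) for item in item_ids]

def triples_quote_forward(quote_id, item_ids):
    """InverseProperty on IQuote.Items: each item points back at its quote."""
    return [(item, "eg:quote", quote_id) for item in item_ids]

print(triples_items_forward("q1", ["i1", "i2"]))
# -> [('q1', 'eg:items', 'i1'), ('q1', 'eg:items', 'i2')]
print(triples_quote_forward("q1", ["i1", "i2"]))
# -> [('i1', 'eg:quote', 'q1'), ('i2', 'eg:quote', 'q1')]
```

Because the stored direction differs, moving the attribute to the other side after data exists would leave the existing relationships unreadable, which is why you can't chop and change once your data is set up.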
Status: Issue closed
|