repo_name (stringlengths 4-136) | issue_id (stringlengths 5-10) | text (stringlengths 37-4.84M)
---|---|---|
cinder92/react-native-get-music-files | 386973124 | Title: Can't make it work
Question:
username_0: Hi, I simply can't get anything out of `MusicFiles.getAll`...
I tried putting some logs in the module:
```js
if(Platform.OS === "android"){
console.log('android');
RNReactNativeGetMusicFiles.getAll(options,(tracks) => {
console.log('tracks', tracks);
resolve(tracks);
},(error) => {
console.log('error', error);
resolve(error);
});
}
```
this will log 'android' but not 'tracks' or 'error'. I call it this way:
```js
componentDidMount() {
const test = MusicFiles.getAll({
id: true,
blured: false,
artist: true,
duration: true, //default : true
cover: true, //default : true,
title: true,
// date: false,
// lyrics: false,
batchNumber: 5, //get 1 songs per batch
minimumSongDuration: 10000, //in miliseconds,
fields: ['title', 'artist', 'duration']
});
console.log('test', test);
}
```
and it logs this:

Any help is welcome!
Answers:
username_0: @username_2 any ideas ?
username_1: getAll returns a promise. You have to append `.then` after the closing parenthesis to access the data, so it should look like this:
```js
MusicFiles.getAll({
id: true,
blured: false,
artist: true,
duration: true, //default : true
cover: true, //default : true,
title: true,
// date: false,
// lyrics: false,
batchNumber: 5, //get 1 songs per batch
minimumSongDuration: 10000, //in miliseconds,
fields: ['title', 'artist', 'duration']
}).then(tracks => {
// do your stuff...
// for example: console.log(tracks)
});
```
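For reference, the same call can also be written with async/await once the promise is returned (a sketch, assuming the same options object as above):
```js
// Minimal async/await version of the call shown above
async componentDidMount() {
  const tracks = await MusicFiles.getAll({ /* same options as above */ });
  console.log('tracks', tracks);
}
```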
username_2: I will close this issue because it has been answered.
Status: Issue closed
|
AY2021S1-CS2103T-W15-4/tp | 714051638 | Title: Morph address field into species field
Question:
username_0: I think the current address field is the best field to be morphed into species since there doesn't seem to be any restriction as to what may be inputted as an address.
By morphing address, we can then get our desired command `add n/NAME s/SPECIES i/ID [t/TAG]…`
Answers:
username_1: Unit tests should be double-checked to conform to species field specifications.
Status: Issue closed
|
google/googletest | 642659559 | Title: provide binary packages for googletest in Ubuntu/Debian
Question:
username_0: The packages provided in Ubuntu come without any binaries. [There used to be a recommendation here that googletest should be provided directly in the project repository](https://web.archive.org/web/20150801115151/https://code.google.com/p/googletest/wiki/FAQ#Why_is_it_not_recommended_to_install_a_pre-compiled_copy_of_Goog). Since it's gone and the package is not included or picked up in [other projects,](https://github.com/google/shaderc) it would be good if the library shipped in Debian/Ubuntu were not header-only and provided the required binaries, so gmock and other useful tools could be used without building from source.
Answers:
username_1: We have nothing to do with distro provided packages. Please request this from the distro.
Status: Issue closed
username_0: It would be good if there was something in the README so maintainers know it's alright to ship binaries now, since they were asked not to in the past.
username_0: See: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=802587
username_0: Saying "We have nothing to do with distro provided packages" won't fix things when there's a clear communication issue somewhere. |
Atlantiss/NetherwingBugtracker | 802938727 | Title: ([Spell][Shaman] Windfury Totem)
Question:
username_0: **Description**: My guild has noticed the windfury proc rate seems off. At first we blamed the shamans for poor uptime like any sensible individual would. With near perfect uptime however, the problem persisted.
I did some testing and Windfury Totem seems to somehow interact with Sinister Strike (SS), causing fewer procs to occur than intended. I do not know if this has anything to do with Sinister Strike itself, or if it is an issue with mainhand instant attacks.
**Current behaviour**: When using Sinister Strike with Windfury, the totem buff proc rate drops way below 20%. I have done some testing, links provided below. WF looks weird in the logs because Recount sometimes attributes the procs to incorrect damage instances.
Here I have tested spamming SS with Slice and Dice (SnD) up, using only my mainhand weapon. I did two sessions, so the results of both images need to be added together. The reason for doing 2 sessions was that the white hit analysis was not statistically significant with the amount of data from the first one:
https://media.discordapp.net/attachments/804372167465762857/805840900486987826/unknown.png?width=956&height=465
https://media.discordapp.net/attachments/804372167465762857/805848132884627504/unknown.png?width=956&height=437
The results are 770 white hits, 131 windfury procs, 271 SS. SS + white hits = total amount of attacks eligible to proc WF = 1041
Proc chance for all attacks: 131/1041 = 12.58%
Proc chance if not counting SS 131/770 = 17.01%
I wanted to see if the problem was that SS for some reason did not proc WF, but as we can see from the results the problem is not that simple. For some reason using SS causes a reduction in WF procs attributable to white hits as well.
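(For scale: if the true proc rate were 20%, 1041 eligible attacks would be expected to yield about 208 procs, with a standard deviation of roughly sqrt(1041 x 0.2 x 0.8) ≈ 13; the observed 131 procs is therefore about six standard deviations below expectation, far outside normal variance.)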
To examine if there was a problem with the speed of the weapon, I examined the behavior of Windfury Totem using first a slow dagger without SnD or other haste buffs, and then a fast dagger. The slow dagger became redundant after testing the fast dagger. Linking to it anyway for extra data:
https://media.discordapp.net/attachments/804372167465762857/806535833972375562/slow_dagger.png?width=956&height=443
https://media.discordapp.net/attachments/804372167465762857/806538057033449523/fast_dagger_white_hits_no_SnD.png?width=956&height=456
Fast dagger white hits only with WF: 732 white hits, 151 WF procs
Proc chance: 151/732 = 20.63%
Looks like normal behavior.
To exclude my normal mainhand, its haste buff, or Mongoose as the cause of the problems, I tested the fast dagger with SnD + SS spam, like in my first test, to see if the problem persists:
https://media.discordapp.net/attachments/804372167465762857/806549036098256897/fast_dagger_max_Sinister__SnD.png?width=944&height=465
Results are: 961 white hits, 166 WF procs, 202 SS. Total hits eligible to proc WF 1163
Proc chance for all attacks: 166/1163 = 14.27%
Proc chance if not counting SS: 17.27%
As we can see the problem persists.
To examine if the problem is related to SnD, I tested equipping 2 slow daggers (same speed for convenient calculations). I did as few Shiv attacks as possible to generate combo points for keeping SnD up.
https://media.discordapp.net/attachments/804372167465762857/806559995474083880/double_slow_dagger_SnD_with_shiv.png?width=942&height=465
Results are 1403 white hits of which half 701 (rounded down) should be by the mainhand, 136 WF procs, 41 shivs (which are made by off hand, and thus NOT eligible for WF). Furthermore, 30 attacks missed. Half of those misses can be expected to be mainhand, making the total amount of hits eligible for WF 701-15 = 686
Proc chance for WF with SnD up: 136/686 = 19.86%
The results would indicate the problem is not related to SnD, or the fact that an off hand is equipped.
Then I tested whether haste effects from the mainhand or Mongoose could, together with SnD, have an impact on WF behavior, using my normal setup with MH + OH. As in the dagger test, Shiv was used to generate combo points for SnD upkeep.
https://media.discordapp.net/attachments/804372167465762857/806571067551907890/unknown.png?width=956&height=457
Results are 1127 white hits, 28 sword specialization hits, 81 WF procs, 17 Shiv, 23 miss. Excluding the misses 1104 attacks landed. The number of hits performed by the MH can be calculated by finding the ratio of MH hits to OH hits, which is OH speed / (OH speed + MH speed) = 1.5/4.2. This ratio is constant with haste effects and the base speed of the weapons can thus be used for calculations. 1104x(1.5/4.2) = 394. Sword spec attacks are always made by the MH, total number of attacks eligible for WF is thus 394+28 = 422
Proc chance for WF: 81/422 = 19.19%
Results display normal behavior.
**Expected behaviour**:
WF totem proc chance should be 20% for all attacks made by the mainhand other than WF attacks themselves. Before someone comes in here and says the WF totem effect has a 3 second internal cooldown, I will just state that this is wrong. It never was, and shouldn't be.
http://web.archive.org/web/20080811071236/http://elitistjerks.com/f47/t20765-shaman_enhancement/#How_to_choose_totems This doesn't explicitly state that there is no CD; it is, however, implied.
https://www.mmo-champion.com/threads/603849-sword-spec-and-windfury Citation from the link: “windfury totem tho has a secret cooldown on its proc (?) which is said to be 3 secs.”
“incorrect, this is only with the shamans windfury weapon buff. the cooldown is also shared with both mh and oh.and, of course, as the poster above stated... **the 3 sec hidden CD just affects, windfury WEAPON. windfury TOTEM are just clean 20% per white**”
https://github.com/Atlantiss/NetherwingBugtracker/issues/6178 This issue has been dealt with before on Netherwing and was corrected to no CD; further proof is in the link posted by someone else.
To conclude: Sinister Strike, or possibly instant attack spells by the mainhand, somehow prevents WF procs, reducing the rate by a significant amount. I couldn't find the root cause of the problem during my tests, but I feel there is sufficient evidence to show that there is indeed a problem with the totem effect.
Thanks
**Server Revision**: 3596
Answers:
username_1: 

Neither only SS nor SS+SnD with a fast dagger gave me a lower number of Windfury procs.
Status: Issue closed
|
WildernessLabs/Meadow.Foundation | 718827689 | Title: Exception when try to use Wifi
Question:
username_0: I don't know if I did something wrong, but I tried to make a simple app based on the starter MeadowApp. I tried to connect to WiFi and I got this error.
'App.exe' (CLR v4.0.30319: DefaultDomain): Loaded 'C:\WINDOWS\Microsoft.Net\assembly\GAC_64\mscorlib\v4.0_4.0.0.0__b77a5c561934e089\mscorlib.dll'. Skipped loading symbols. Module is optimized and the debugger option 'Just My Code' is enabled.
'App.exe' (CLR v4.0.30319: DefaultDomain): Loaded 'C:\Users\walte\source\repos\MeadowApplication1\MeadowApplication1\bin\Debug\net472\App.exe'. Symbols loaded.
'App.exe' (CLR v4.0.30319: App.exe): Loaded 'C:\Users\walte\source\repos\MeadowApplication1\MeadowApplication1\bin\Debug\net472\Meadow.dll'. Skipped loading symbols. Module is optimized and the debugger option 'Just My Code' is enabled.
The program '[18380] App.exe: Program Trace' has exited with code 0 (0x0).
The program '[18380] App.exe' has exited with code 0 (0x0).
Initializing Esp32 coproc.
Esp32 coproc initialization complete.
Unhandled Exception: System.Exception: Cannot connect to network, applicaiton halted.
at MeadowApplication1.MeadowApp..ctor () <0xc131dc20 + 0x0006e> in :0
at MeadowApplication1.Program.Main (System.String[] args) <0xc1013de8 + 0x00034> in :0
Answers:
username_0: My mistake, I forgot to update the coprocessor
Status: Issue closed
|
spinnaker/spinnaker | 360962105 | Title: feat(provider/kubernetes): v2 Add support for PodPreset kind
Question:
username_0: ### Issue Summary:
Clouddriver does not support the Kubernetes [PodPreset](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#podpreset-v1alpha1-settings-k8s-io) resource. Similar to #2824
### Cloud Provider(s):
Kubernetes V2
Clouddriver version 3.5.0
### Feature Area:
Clouddriver/Kubernetes Deployments
### Description:
When deploying a kubernetes manifest with a `PodPreset` resource the pipeline fails with the error "Unsupported Kubernetes object kind 'PodPreset', unable to continue" 
### Steps to Reproduce:
Deploy a Kubernetes manifest that includes a `PodPreset` resource.
Status: Issue closed
Answers:
username_1: Closing in favor of https://github.com/spinnaker/spinnaker/issues/3007 |
rails/rails | 1012803146 | Title: Rails 7.0.0.alpha2 limits css options if setting --css=tailwind
Question:
username_0: ### Steps to reproduce
I put a post on [discuss.rubyonrails.org](https://discuss.rubyonrails.org/t/is-choosing-css-tailwind-in-rails-7-alpha2-restricting-your-css-to-only-tailwind/79052) hoping to get an answer to some questions without posting an issue. It has not been out there for very long, but appears I'm not going to get an answer to my questions.
I was just trying to see how rails 7 was going to fit into my current apps and created a couple new apps using both the default importmap and the -j esbuild option.
It appears that if using the default importmap, tailwind.css is limited to the basic tailwind.css without any options to add stuff that would be in `tailwind.config.js`. You just have raw tailwind.css since there is no JS to modify. But you can add other css files with `@import somecssfile.css`.
If you choose -j esbuild you get a `tailwind.config.js` file to which you can at least add colors etc. I don't know if you can add components, but it appears you can't add any other css (including a components directory as in webpacker).
### Expected behavior
<!-- Tell us what should happen -->
I assume that taking the default importmap options limits you to the base tailwind.css without the ability to modify.
I'd expected that I could add other css files with the -j esbuild options, but don't know how. As I explained in the discuss.ruby.org post, `stimulus.flatpickr` depends on `flatpickr` and requires a css file `flatpicker.css`. I need to add that file but can't find a way.
Forgoing how to add it as I did with webpacker (`@import "flatpickr/dist/flatpickr.css";`, which got it from the node module), I just copied the flatpickr.css file, added it to the stylesheets directory, and tried changing the `application.tailwind.css` file to:
```css
@import "application.flatpickr.css";
@tailwind base;
@tailwind components;
@tailwind utilities;
```
But get a build error: `20:34:14 web.1 | ActionController::RoutingError (No route matches [GET] "/assets/flatpickr.css"):
`
Putting the @import command at the end of the file did nothing, but gave no error.
Appending the css from the `flatpickr.css` file to the `application.tailwind.css` file worked. Not sure if adding custom css to the file is the right approach!
### Actual behavior
<!-- Tell us what happens instead -->
I'm not sure this is an issue or a feature! If it's a feature I'm not sure this is progress. I'm not a developer, just a hobbyist that writes stuff for my own needs. I have been using rails since before version 1.0 and something is not connected.
### System configuration
**Rails version**: 7.0.0.alpha2
**Ruby version**:3.0.1
Answers:
username_1: @username_0 I have experienced the same. The reason for this is that `tailwindcss` overwrites the esbuild-generated `application.css`. To change it you need to modify the `scripts` section a bit in your `package.json` so it would look like this:
```json
"scripts": {
"build": "esbuild app/javascript/*.* --bundle --outdir=app/assets/builds",
"build:css": "tailwindcss -i ./app/assets/stylesheets/application.tailwind.css -o ./app/assets/builds/tailwind.css"
},
```
As a result, you will get two CSS files in `app/assets/builds`. One will be `application.css` and the other will be `tailwind.css`. Then in `app/views/layouts/application.html.erb` you need to add the generated tailwind CSS file.
```rb
<%= stylesheet_link_tag 'tailwind', "data-turbo-track": 'reload' %>
<%= stylesheet_link_tag 'application', "data-turbo-track": 'reload' %>
<%= javascript_include_tag 'application', "data-turbo-track": 'reload', defer: true %>
```
Esbuild generates CSS by default from imported CSS files in JS files. Assuming you're using Stimulus, just add the CSS files from the `flatpickr` package (in my case installed via yarn) to your sample controller file (`app/javascript/controllers/hello_controller.js`).
```js
import { Controller } from "@hotwired/stimulus";
import flatpickr from "flatpickr";
import "flatpickr/dist/flatpickr.css";
import "flatpickr/dist/themes/light.css";
```

username_0: Thank you for your help.
You solved one problem - with the same way it worked in webpacker!
```js
import "flatpickr/dist/flatpickr.css";
import "flatpickr/dist/themes/light.css";
```
Thought I tried that but guess it was getting overwritten. But what if I have other css files that aren't in JS packages? Like all the rails 6.x scss files in the stylesheets directory? Guess you just have to convert them and put them together until you clean your stuff up. This is not going to be an easy upgrade!
Again, I'm just playing with 7.0.0. I have one app that I'm slowly moving from W3.css to Tailwind and tried to build that css. But build:css only allows one build:css entry (you get a duplicate key). I thought the -i argument could handle wildcards like `application.*.css` but it doesn't. You don't get an error, it just grabs the first file in the directory.
I just put w3.css in a header link with an href. Again I'll move away slowly.
Well I got stimulus-flatpickr working. The only other package I use is stimulus-autocomplete. Don't think that one is going to work without an upgrade - relies on webpacker helpers, at least there is a reference to it.
Think I'll leave this out here for a little while since someone else may run into the road block.
Again, thank you for your help.
Status: Issue closed
|
postcss/autoprefixer | 171857037 | Title: License problem
Question:
username_0: Hi,
I've installed this package, and noticed that it uses module `caniuse-db`, which is published under **CC-BY-3.0** license. While I see that you gave attribution to the `caniuse-db`, do I have to do the same if this `autoprefixer` package is published under **MIT** license?
Status: Issue closed
Answers:
username_1: Hi. Good questions — after years in Wikipedia I love all these copyright law things :D.
Short answer: these licences are compatible.
Long answer: in copyright you can do only what the author allowed. MIT allows you to use, modify and distribute. CC-BY allows you to use, modify and distribute too.
Also, MIT allows you to use different licences in subprograms.
username_0: Well, I understand that you've made no mistake by licensing it with MIT now :) but do I have to attribute the author of `caniuse-db`, if I only use this `autoprefixer` module?
username_1: If you develop an open source npm module, `package.json` already shows that you use `autoprefixer` and Can I Use.
If you develop a stand-alone program or website, you could just mention "Based on Autoprefixer and Can I Use data" with links.
irbv-collections/MT-controlled-vocabulary | 94801815 | Title: New Taxon
Question:
username_0: Carex du Mexique, des Antilles et d'AmC
Carex scabrella Wahlenb.
Carex ballsii Nelmes
Carex turbinata Liebm.
Carex planostachys Kunze
Carex bermudiana Hemsl.
Carex consors <NAME>
Carex marianensis Stacey
Carex autumnalis Mack.
Sarah
Status: Issue closed
Answers:
username_1: Carex scabrella acc
Carex ballsii acc
Carex turbinata acc
Carex planostachys acc
Carex bermudiana acc
Carex consors = Carex psilocarpa
Carex marianensis acc
Carex autumnalis Mack. = nom. illeg. = Carex marianensis |
rasvaan/accurator | 65286088 | Title: Pages which contain exclusively text
Question:
username_0: Chris, you have probably already noticed this, but there are several pages which contain exclusively text and no illustrations in Accurator. These are from Taferelen der voornaamste geschiedenissen [...]. Is it possible to remove these pages?
Answers:
username_1: There are also empty pages :( At the moment I automatically selected every page within the UBVU bible collection, but we could also make a hand-curated list. I propose to leave it like this; these pages will only pop up in the random cluster, so just advise annotators to skip over them.
The moment you feel like you have nothing to do anymore, give me a heads up; I can send you a file from which you can remove the URIs which have only text/are blank.
Status: Issue closed
|
axios/axios | 975308441 | Title: axios response body contains random properties.
Question:
username_0:
#### Describe the issue
My local REST API server responds with an array of empty objects, but the response object on the client contains random properties in each should-have-been-empty object.
#### Example Code
From my client side
```js
getReadyWorks = async () => {
const response = await axios.get("/workorders/ready?limits=1");
console.log(response);
return response.data;
};
```
From my console
```js
{
"data": [
{
"receiveTime": "Invalid Date",
"startTime": "Invalid Date"
},
{
"receiveTime": "Invalid Date",
"startTime": "Invalid Date"
},
{
"receiveTime": "Invalid Date",
"startTime": "Invalid Date"
}
],
"status": 200,
"statusText": "OK",
"headers": {
"content-length": "10",
"content-type": "application/json; charset=utf-8"
},
"config": {
"url": "/workorders/ready?limits=1",
"method": "get",
"headers": {
"Accept": "application/json, text/plain, */*"
},
"baseURL": "http://localhost:5000/",
"transformRequest": [
null
],
"transformResponse": [
[Truncated]
"maxContentLength": -1,
"maxBodyLength": -1
},
"request":
}
```
#### Expected behavior, if applicable
The data object from the axios response should be an array of 3 empty objects.
#### Environment
- Axios Version [e.g. 0.21.1]
- Adapter [e.g. XHR/HTTP]
- Browser [ Chrome]
- Additional Library Versions [React 17.2]
#### Additional context/Screenshots
See how the response from the GET request has empty objects, but the data object from the axios response contains `receiveTime` and `startTime` properties, which I haven't used at all.


Answers:
username_1: do you have any transformResponse functions?
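(A purely hypothetical illustration of why this question matters: a custom `transformResponse` chained after the default one could introduce exactly these kinds of fields. None of this is taken from the reporter's code.)
```js
// Hypothetical: a transformResponse chained after axios' default JSON parser.
// new Date(undefined).toString() is "Invalid Date", which matches the
// values shown in the console output above.
import axios from "axios";

const client = axios.create({
  baseURL: "http://localhost:5000/",
  transformResponse: [
    ...axios.defaults.transformResponse, // keep default JSON parsing
    (data) =>
      Array.isArray(data)
        ? data.map((item) => ({
            ...item,
            receiveTime: new Date(item.receiveTime).toString(),
            startTime: new Date(item.startTime).toString(),
          }))
        : data,
  ],
});
```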
username_2: I am not a pro, but I think this is not an axios issue. You should check your workorders/ready API endpoint. How is the request handled there and how is the response built?
username_3: If a `302 (Redirect)` was received prior to the provided response, axios will return the entity-body in the response. |
utkuozbulak/pytorch-cnn-visualizations | 605259839 | Title: Vanilla Gradient as heat map
Question:
username_0: Great work, I highly appreciate it and it helps me a lot in my research. I have been exploring your technique and code and want to ask: is there any way to render vanilla gradients as a heatmap?
Answers:
username_1: Hey, I'm glad it was useful for you.
You can simply pass the grayscale values to a colormap and get similar heatmaps, but if you do it that way, I'm pretty sure it will look jittery (not as smooth as Grad-CAM: quick drops from high to low values and vice versa). Maybe try applying smoothing to the saliency map and then applying the colormap. I think that would do it.
username_2: Great, thanks. Well, can I get heat maps which show Gaussian-like behavior, just like Grad-CAM? Even after smoothing it's quite jittery.
username_1: You can smooth harder with bigger kernels. I personally didn't try smoothing so not sure what kind of stuff you will get.
Status: Issue closed
|
toggl-open-source/toggldesktop | 941354639 | Title: Autotracker should start AND stop automatically.
Question:
username_0: <!-- Before submitting a new issue, please make sure that the same issue has not been created already -->
### 💻 Environment
<!-- Let us know the platform you would like the improvement to be in -->
Platform: Windows/all
### 📒 Description
<!-- Short and concise description of the imporovement/feature -->
Autotracker starts a new entry as expected, but doesn't stop automatically. Is that by design? I would expect:
I set up an autotracker entry for "study" when a particular window is focused. When that window is open/focused, it should record study time, when I close that window (when the focused window does not match the tracker), it should not record study time.
### ⭐️ Why do you want this?
<!-- Let us know what is the use case that this improvement solves -->
I'm looking for a RescueTime alternative, I don't want to ever manually enter any task, I want something that will just record what I've got open and log that so I can look back at it later.
Maybe I'm just doing something wrong, I can't work out why someone would want auto-start but not auto-stop.
Answers:
username_1: I second this. |
GRIDAPPSD/GOSS-GridAPPS-D | 1116841773 | Title: TransformerTank measurement objects not handled correctly
Question:
username_0: Measurements are currently defined for PowerTransformer but not for TransformerTank.
This issue affects all split-phase customer transformers and regulator banks.
Current models in platform do not have any TransformerTank measurements defined.
When loading of a TransformerTank measurement is attempted, the platform throws the error below:


Answers:
username_1: This was an issue with IEEE123 transactive model. It can be closed right now because @temcdrm commented out Transformer measurements. Same as #1102 .
Status: Issue closed
|
jpd002/Play-Compatibility | 1182615617 | Title: [SLES-53572] Rig Racer 2
Question:
username_0: **Last Tested On**
[27/03/2022] - https://github.com/jpd002/Play-/commit/630c31b2e754a67fb7f69fb6998a7fbdfa31ea90
**Known Issues & Notes**
In-game, slow and with high EE usage
**Related**
**Screenshots** |
Billmate/prestashop | 55028575 | Title: Change formating of text on part payment options
Question:
username_0: The text should be "3 månaders delbetalning - 188,18 SEK / månad"

Answers:
username_1: Should be fixed in English. But it should work in Swedish also.
username_0: Closing, not relevant anymore since it's not on a separate page anymore.
Status: Issue closed
|
FStarLang/FStar | 57907672 | Title: Possible improvement : assuming "asserts" later in the code
Question:
username_0: If I am correct, currently the `assert (...);` statements are proven and then discarded by F*.
I think it would be very useful to assume them later in the code and hand them to the SMT solver like what happens when a lemma is manually instantiated.
It would give a flexible and efficient way to find what the prover needs to make the verification go through.
Answers:
username_1: Agreed, `assert` should first prove the assert and then assume it in the rest of the code. That's the semantics of `assert` in any other verification system I know.
username_2: I thought the F* type checker would feed to Z3 a single formula consisting of the conjunction of the assert and the VC for the rest of the code? If so, Z3 should have the assert in its context when proving the VC for the rest of the code (if Z3 proves the conjunction a /\ b in the order a then b)?
username_3: Prims.cut does what you want
username_1: I think it would be good to make assert do the same as cut.
username_4: module COUNT100
type nat = i:int{0 <= i}
val countTo100: n:int{0 <= n /\ n <= 100} -> x:int{x=100}
let rec countTo100 n =
if n < 100 then countTo100 (n + 1)
else n
let test1 = assert((countTo100 20) = 100)
username_4: @username_1 How can I use assert in F*?
username_5: @username_4 Your code is fine except that you need to prove that it terminates
```
module COUNT100
val countTo100: n:int{0 <= n /\ n <= 100} -> Tot (x:int{x=100}) (decreases %[100-n])
let rec countTo100 n =
if n < 100 then countTo100 (n + 1)
else n
let test1 = assert((countTo100 20) = 100)
```
username_1: Seems we're left with both `assert` and `cut`, and `cut` now actually works (#180). Closing.
Status: Issue closed
|
imrenagi/rojak-web-frontend | 261955572 | Title: Setting up Configuration for React JS Unit Test
Question:
username_0: To resolve this issue, we might need your help to set up React JS unit testing. Please also provide some documentation about how to add a new unit test and how to run the existing unit tests.
Answers:
username_1: Hi, I am happy to try to help with this project. Can you provide more information? What exactly is the issue?
username_0: Hi, thanks for your interest! Basically we are planning to start doing BDD or TDD in React, but for now we haven't set up any configuration that enables us to do unit testing in this project. So we need your help to enable unit testing by installing the several libraries required for unit testing in React JS (add some new libraries to package.json, give a working example of a successful unit test in React, etc.). We also expect that you will give short documentation about what the other contributors need to do if they want to add unit tests to this project. Does that make more sense? I'm sorry for the incomplete information before. 😄
username_1: Hi, this is a large undertaking and honestly I'm not sure I have the time to dedicate to it. You will need to use Jest to test the React components. To help break this into a smaller project for me to work on, would it be possible to tell me of a static component in this repo and then I'll do my best to get a passing test for it? That will help others be able to write tests since there will be a template and the configuration will be done.
username_0: Hi, yeah I completely understand your concern. Right now, we just started developing a map component located in `/src/components/maps/IndonesiaMap.js`. There is a problem with that map. We are still using a hardcoded value for the map size. Our expectation is that the size of the map will change as the size of the screen changes. Is it possible to add a unit test for that case? That's one of the components that I can think of to be tested. 😄
username_2: @username_1 if you would like some help on this I would be happy to, <3 TDD in React :)
username_1: @username_2 You can totally work on this! I am good at setting up Jest for snapshot tests.
username_2: @username_1 how do you want to split this up?
username_1: @username_2 You can do as much as you want! I am pretty busy and didn't realize how much time this would take.
username_2: I am on it @username_1 I will keep you up-to-date in case you want to jump back in 😄
username_1: Thank you!
username_2: @username_0 I see you are using feature branches, would you like me to open a feature branch using gitflow or fork your repo? I have forked it for now, if you would like me to use git flow just let me know 🛩
username_0: please just ignore the feature branches. You can create PR from your forked repo 😄
username_2: @username_0 Using unit testing, I am not sure you can test the issue you mentioned in `/src/components/maps/IndonesiaMap.js`. I can provide some examples of how to TDD and snapshot your React components.
If you want to test window resize you will need some more complicated integration testing using phantom.js or something like that.
username_0: Yeah. You don't really have to test the map. Basically I just need to establish the unit test setup for this project. So, I need the unit test setup and an example of how to test a sample component (you can create your own sample component). That should be enough. Please include a bit of documentation if necessary to help the other contributors who want to try TDD in this project in the future. Does it make sense?
username_2: Yes that is perfect! @username_0 I will test a few of your components and write some documentation in the readme about testing 😃
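For illustration, a minimal Jest snapshot test might look like the sketch below (assuming `jest`, `react-test-renderer`, and a Babel/JSX setup are installed; the `Greeting` component and its path are hypothetical placeholders):
```js
// Hypothetical snapshot test; component name and path are placeholders.
import React from "react";
import renderer from "react-test-renderer";
import Greeting from "../src/components/Greeting";

test("Greeting renders consistently", () => {
  const tree = renderer.create(<Greeting name="Rojak" />).toJSON();
  expect(tree).toMatchSnapshot();
});
```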
username_0: Cool. I'm looking forward to your PR, @username_2 ! 👍
Status: Issue closed
|
helloworld1/FreeOTPPlus | 817716709 | Title: Slight lag when loading custom icons
Question:
username_0: When a custom icon (PNG in my testing) is specified, there is a slight delay in loading the icon when opening up the main activity for the first time. The placeholder icon is displayed briefly, before showing the custom icon. This is particularly noticeable when there are several tokens configured using the built-in icons, but one or two with custom icons - the built-in icons load immediately, while the custom icons load after a short (~100-200 ms) delay.
Answers:
username_1: This is caused by the async loading of the Picasso library. I am not sure about the cost of preloading all token images and the possibility of blocking the UI thread. If one has a lot of tokens and custom images, it can also run out of RAM.
username_0: I updated the PR to move the Picasso fetch() call to the IO thread (in a suspend function in ImageUtil). That shouldn't matter though, since the fetch() function is asynchronous. I did some testing by passing in some image URLs from https://deelay.me/ and there's no effect on the UI thread.
Memory usage won't change, token images are already being fetched into memory by setTokenImage. This just changes the order so it's fetched a bit earlier.
username_1: Currently I think the RecyclerView will only load images that are on screen. I am worried that if we prefetch, we will fetch all of them.
username_0: Yeah, you're right.
For the fetch() call to help, it has to be made before some unrelated operation causing a delay in the UI, so that the image can be fetched in parallel during the delay. It looks like the biggest delay in rendering the MainActivity is from calculating the diff of the ListAdapter, to determine which tokens are rendered. So, if it waits until after the list is calculated so that only the tokens shown on screen have their images fetched (e.g. by putting the fetch call in onCurrentListChanged), then the list is rendered immediately, and the fetch doesn't help. If it fetches beforehand, it wouldn't know which tokens are being rendered, so it would have to fetch all images, like you describe.
I still think it's not a big deal to fetch all images, since in my use case (and I suspect the median use case) there are less than 10 tokens, so the memory overhead of loading all those thumbnails isn't noticeable. So, I'll keep the change on my fork.
username_0: Updated the PR, it now only loads a subset (the first 12) token images.
username_1: I found there are still performance problem even pre-fetched when I scroll around. The Picasso library has clearly cached the image already but it takes time to load. I will investigate further into the image performance.
Status: Issue closed
username_1: With commits <PASSWORD> and c97ff2f380446e80ee18f29a2f208ce125bebde6, the image cache loading performance is much better. I don't think we need prefetch anymore. Would you please rebase and test it out?
allcrypto/auto_switch | 314365532 | Title: Not an issue - but can this be modified to support remote configs?
Question:
username_0: Your code is superb - I am wondering if there's a way to get it to support remote.conf. I pull down configs from my dropbox, and make them active with a batch file or a manual modification at the moment. In your code, it requires config files inside the configs directory for those coins to be activated.
Could we simply place the URL to the remote config inside of each coin config inside /configs? I can experiment for you.
I think this would be a huge benefit, especially when managing multiple rigs with remote configs, this way you can modify the remote configs and have them on a remote repository, which makes editing / modifying them much easier for multiple rigs. Each rig would just pull down the configs when needed. |
ant-design/ant-design-pro | 532596276 | Title: Default fixSiderbar has no effect; the sidebar is always fixed
Question:
username_0: Ant Design Pro 4.0.0. After I upgraded pro-layout to 4.9.2, fixSiderbar has no effect; the sidebar is always in the fixed state.


### © Version information
- Ant Design Pro version: [e.g. 4.0.0]
- umi version
- Browser environment
- Development environment [e.g. mac OS]
Status: Issue closed
Answers:
username_1: Try refreshing again; it looks OK on my side.
username_2: I ran into the same problem. Has it been solved? |
jcabi/jcabi-matchers | 395649438 | Title: Make matchers that comply with EO
Question:
username_0: We must replace all static matchers in `jcabi-matchers` by matchers that comply with EO philosophy. See https://github.com/yegor256/cactoos/issues/588 for the discussion
Answers:
username_1: @yegor256[/z](https://www.username_1.com/u/yegor256) please, pay attention to this issue
username_1: @username_0[/z](https://www.username_1.com/u/username_0) this project will fix the problem faster if you donate a few dollars to it; just [click here](https://www.username_1.com/contrib/C3RUBL5H9) and pay via Stripe, it's very fast, convenient and appreciated; thanks a lot! |
plk/biblatex | 245074311 | Title: Strange sorting
Question:
username_0: I am currently extending my `biblatex-fiwi` style for archival sources. This is in the early stages yet, so don't pay too much attention to the new document type and fields. My problem is with sorting. I want the entries to sort by archive/library and the respective archive number of the document in question, since archival documents can have neither author nor title nor even a publication date. For this, I created the following sorting scheme:
```
\DeclareSortingScheme{archiv}{
\sort{
\field{presort}
}
\sort[final]{
\field{sortkey}
}
\sort{
\field{library}
}
\sort{
\field{librarylocation}
}
\sort{
\field{sortyear}
\field{year}
}
}
```
Now look at the following three entries:
```
@archival{Wedegartner.T:1976a,
Author = {Wedegärtner, Thomas},
Date = {1976-07-19},
Library = {BArch},
Librarylocation = {DR 1/25990},
Title = {Charakteristik des Regisseurs Dr. <NAME> (futurum)},
Year = {1976}}
@archival{Hellwig.J:1979a,
Author = {Hellwig, Joachim and Wedegärtner, Thomas and Raue, Dieter and Brückner, Thekla and Steinheisser, Jürgen and Giesen, Frank},
Library = {BArch},
Librarylocation = {DR 117/17769},
Title = {Untersuchung der Konzeption, Ergebnisse sowie der Wirksamkeit der Discofilme und des Internationalen Jugendmagazins \film{IN}},
Year = {1979}}
@archival{unbekannt:2017a,
Library = {BArch},
Librarylocation = {DR 118/3555},
Standort = {PDF},
Title = {Auszug aus dem \enquote{Kulturmagazin} vom 10.12.77}}
```
What I want is the following sort order: Wedegartner.T:1976a – Hellwig.J:1979a – unbekannt:2017a. So "DR 1/25990" should be before "DR 117/17769". But no matter what I try, `Wedegartner.T:1976a` always ends up as the last entry. Somehow the slash seems to mess the sorting and "DR 1/" is sorted last, but I don't really understand the logic behind this.
Answers:
username_1: Hmm, can you run biber with `--trace` which will give the sorting objects.
username_0: Something more fundamental seems to be broken. When I run it with `--trace`, I get a lot of
````Use of uninitialized value in substitution iterator at /opt/local/lib/perl5/site_perl/5.26/Biber/LaTeX/Recode.pm line 307.````
This is with the latest build on Perl5.26. Not sure whether `biber` or Perl is the problem here.
username_1: Hmm, I don't see that problem but I don't think that it is relevant anyway. Perhaps you can package up a minimal broken example and I can try it?
username_2: Can you please show us a full minimal example. I just tried to reproduce your issue, but because I don't have your datamodel, things obviously didn't work out. When I worked around that and used field names that were available for me, things sorted as expected by you.
Only if we know that we run the exact same code can we start debugging this.
username_2: @PLK Now this is curious. With the following MWE I get
```
[221] Internals.pm:1071> DEBUG - Sorting object for key 'unbekannt:2017a' -> ["mm", "", "BArch", "DR 1183555", ""]
[221] Internals.pm:1071> DEBUG - Sorting object for key 'Wedegartner.T:1976a' -> ["mm", "", "BArch", "DR 125990", 1976]
[222] Internals.pm:1071> DEBUG - Sorting object for key 'Hellwig.J:1979a' -> ["mm", "", "BArch", "DR 11717769", 1979]
```
in the `--debug` output, so the slash is missing.
```
\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage[british]{babel}
\usepackage{csquotes}
\usepackage{filecontents}
\begin{filecontents*}{\jobname-archival.dbx}
\DeclareDatamodelEntrytypes{archival}
\DeclareDatamodelFields[type=field,datatype=literal]{
librarylocation,
}
\DeclareDatamodelEntryfields[archival]{
library,
librarylocation,
title,
author}
\end{filecontents*}
\usepackage[style=authoryear-icomp,datamodel=\jobname-archival]{biblatex}
\begin{filecontents*}{\jobname.bib}
@archival{unbekannt:2017a,
Library = {BArch},
librarylocation = {DR 118/3555},
Standort = {PDF},
Title = {Auszug aus dem \enquote{Kulturmagazin} vom 10.12.77}}
@archival{Wedegartner.T:1976a,
Author = {<NAME>},
Date = {1976-07-19},
Library = {BArch},
librarylocation = {DR 1/25990},
Title = {Charakteristik des Regisseurs Dr. <NAME> (futurum)},
Year = {1976}}
@archival{Hellwig.J:1979a,
Author = {Hellwig, <NAME> Wedegärtner, Thomas and Raue, Dieter and Brückner, Thekla and Steinheisser, Jürgen and Giesen, Frank},
Library = {BArch},
librarylocation = {DR 117/17769},
Title = {Untersuchung der Konzeption, Ergebnisse sowie der Wirksamkeit der Discofilme und des Internationalen Jugendmagazins},
Year = {1979}}
\end{filecontents*}
\addbibresource{\jobname.bib}
\DeclareSortingScheme{archiv}{
\sort{
\field{presort}
}
\sort[final]{
[Truncated]
}
\sort{
\field{library}
}
\sort{
\field{librarylocation}
}
\sort{
\field{sortyear}
\field{year}
}
}
\ExecuteBibliographyOptions{sorting=archiv}
\begin{document}
\nocite{*}
\printbibliography
\end{document}
```
username_0: I attached a folder with all relevant files, but I think @username_2 is already onto something.
[fiwi.zip](https://github.com/username_1/biblatex/files/1170037/fiwi.zip)
username_2: Consider the following MWE
```
\documentclass{article}
\usepackage[style=authortitle]{biblatex}
\usepackage{filecontents}
\begin{filecontents*}{\jobname.bib}
@online{elk,
author = {<NAME>},
title = {Towards a Unified Theory on /B/r/o/n/t/o/sauruses},
url = {http://www.example.edu/~elk/bronto.pdf},
}
\end{filecontents*}
\addbibresource{\jobname.bib}
\begin{document}
\nocite{elk}
\printbibliography
\end{document}
```
Which yields https://gist.github.com/username_2/93a7fc607a589f9cbef6a459aa2534d3
in particular
```
[332] bibtex.pm:1436> TRACE - Buffer before decoding -> '@online{elk,
author = {<NAME>},
title = {Towards a Unified Theory on /B/r/o/n/t/o/sauruses},
url = {http://www.example.edu/~elk/bronto.pdf},
}
'
[335] Recode.pm:228> TRACE - String before latex_decode() -> '@online{elk,
author = {<NAME>},
title = {Towards a Unified Theory on /B/r/o/n/t/o/sauruses},
url = {4a747dc9b7c730285018df89b5f4bc65},
}
'
[336] Recode.pm:322> TRACE - String in latex_decode() now -> '@online{elk,
author = {<NAME>},
title = {Towards a Unified Theory on /B/r/o/n/t/o/sauruses},
url = {http://www.example.edu/~elk/bronto.pdf},
}
'
[336] bibtex.pm:1442> TRACE - Buffer after decoding -> '@online{elk,
author = {<NAME>},
title = {Towards a Unified Theory on /B/r/o/n/t/o/sauruses},
url = {http://www.example.edu/~elk/bronto.pdf},
}
'
```
But then
```
[355] Internals.pm:1071> DEBUG - Sorting object for key 'elk' -> [
"mm",
"",
"Elk!Anne",
"Towards a Unified Theory on Brontosauruses",
"",
0,
]
[356] Biber.pm:3454> DEBUG - Sorting is by default case-SENSITIVE
[356] Biber.pm:3460> DEBUG - Keys before sort:
[356] Biber.pm:3462> DEBUG - elk => mm,,Elk!Anne,Towards a Unified Theory on Brontosauruses,,0
```
```
sortdata => {
elk => [
"mm,,Elk!Anne,Towards a Unified Theory on Brontosauruses,,0",
[
"mm",
"",
"Elk!Anne",
"Towards a Unified Theory on Brontosauruses",
"",
0,
],
],
},
```
username_1: I think I know why this is - sorting strips punctuation first because that messes some things up but is being a bit aggressive for such fields. Looking at it.
username_0: This seems to have fixed it. Thanks a lot.
Status: Issue closed
|
architecture-building-systems/CityEnergyAnalyst | 477651711 | Title: Problem with occupancy schedules.
Question:
username_0: done
Running demand calculation for all buildings in the zone
No schedules detected for building B1000. Creating schedules from archetypes database
Saving schedules for building B1000 to inputs/building-properties directory.
Building No. 1 completed out of 18: B1000
No schedules detected for building B1009. Creating schedules from archetypes database
This behavior just makes the tool use the occupancy schedule that was created at the time.
What if I change the type of occupancy in buildings?
Will it keep reading the old type of schedule? This is not how it should work.
Status: Issue closed |
UBC-MDS/pyxplr | 572695953 | Title: Put together a team contract
Question:
username_0: Outline general rules, guidelines and responsibilities to follow by all team members
Answers:
username_0: Account for Furqan's request
username_0: Team contract shared on Slack, please check (not posting publicly here for privacy reasons)
Status: Issue closed
|
ballerina-platform/ballerina-lang | 491614780 | Title: Ballerina CLI `build -a` fails with `unable to copy native jar`
Question:
username_0: **Description:**
Project structure looks like below;
```
.
├── Ballerina.conf
├── Ballerina.lock
├── Ballerina.toml
├── conf
├── lib
│ └── mysql.jar
├── openapi
│ ├── choreo-api.yaml
│ └── choreo-internal-api.yaml
├── src
│ ├── db
│ │ ├── app_mgt_dao.bal <uses ballerinax/java.jdbc>
│ │ └── client.bal <uses ballerinax/java.jdbc>
│ └── foo
│ └── foo.bal <depends on db>
└── tests
└── resources
```
Compilation with `ballerina build -a` gives below error;
```
Rasikas-MacBook-Pro:api4 username_0$ ballerina build -a
Compiling source
choreo_api/foo:0.1.0
choreo_api/db:0.1.0
Creating balos
target/balo/foo-2019r3-any-0.1.0.balo
target/balo/db-2019r3-java8-0.1.0.balo
error: unable to copy native jar: /Users/username_0/Downloads/api4/target/tmp/mysql.jar
```
**Steps to reproduce:**
Use the following project
[api4.zip](https://github.com/ballerina-platform/ballerina-lang/files/3595400/api4.zip)
**Affected Versions:**
1.0.0
Answers:
username_1: Fixed in milestone 1.0.1
Status: Issue closed
|
backdrop/backdrop-issues | 423619434 | Title: Git pull overwriting sites.php
Question:
username_0: How does/should git and sites.php work?
I have a BD sandbox site that I use for making PR's and stuff for Backdrop core and contributed modules, so BD is a git repo. When I pull from git to update BD, `sites.php` gets overwritten and I have to edit it again to make my sandbox site work.
Is this by design, or should we/I somehow make git ignore `sites.php`?
Answers:
username_1: @username_0 have you tried adding `sites/sites.php` in the `.gitignore` file in the root folder? Perhaps try that and report back if things work with that change.
username_0: I've tried adding it to `.got/info/exclude`... But that doesn't seem to affect anything.
username_1: Hmm, did you try in the Backdrop https://github.com/backdrop/backdrop/blob/1.x/.gitignore in the docroot folder?
username_0: This is where I don't understand this git ignore/exclude stuff and get confused...
If I add `sites.php` to `.gitignore`, `git status` then says:
```
modified: .gitignore
modified: sites/sites.php
```
So that seems to make things worse! And how does it know I've modified `.gitignore` if `.gitignore` is ignoring itself?
```
# Ignore this file so it can be changed
.gitignore
```
username_2: @username_0 you should be using .gitignore as the main way of ignoring files. I've never tried the other way.
If your trying to ignore a file that is currently being tracked by the repo, you may need to do something special, like in this thread https://stackoverflow.com/questions/6618612/ignoring-a-directory-from-a-git-repo-after-its-been-added. If you use `git rm --cached <filename>` make a copy of it first, then update .gitignore to exclude it and add the file back to the directory. I think you have to commit at the end. I haven't tested this so you may want a backup.
username_0: I haven't come across this issue anymore, and some good suggestions were provided, so closing this issue now.
Status: Issue closed
|
department-of-veterans-affairs/va.gov-team | 714992244 | Title: [Epic] LC Home Page - MVP 1.0
Question:
username_0: ## High Level User Story/ies
As a Veteran, I need to view LC articles so I can find more information about a benefit or service I'm interested in.
## Product Outline
This is WIP and may evolve based on user research and technical findings.
Final name: Resources and support.
- [ ] H1: Resources and support for your VA benefits and services
- [ ] Lc homepage will include search option prominently. It will include both LC and 'all va' options.
- [ ] Must enable search error messages and standard system error messages.
- [ ] LC homepage must allow full width banner alert component
- [ ] LC homepage must allow in-body alert component (reusable or non-reusable) - TBD ux recommendation: below the search bar
- [ ] LC homepage CMS governance - owned and managed by PW. - Up to 5 articles can be displayed per category and then a link to see all of the articles in that category - 'Go to all articles' (in this category).
- [ ] On 1.0 and 1.1 launch, the PW content team should have the ability to curate which 5 articles to surface on the homepage under each category. (In the future, we'd like this curation to be automated based on analytics, such as what article users view most often each month or X timeframe TBD.)
- [ ] The categories in the 'Find articles by topics' should appear alphabetically by the category name, with the exception of the 'Other' category which should always be the last category.
- [ ] The audience labels under the beneficiary vs. non-beneficiary boxes should appear in alpha list. Up to 5, and then expand link to show more.
- [ ] Must enable default analytics tracking
## Hypothesis or Bet
**If** _update LC Homepage_ **then** _LC articles can be accessible to a large veteran audience_.
## OKR
_TBD_
Answers:
username_1: 1.0 completed, and closing
Status: Issue closed
|
ClinGen/clincoded | 272773611 | Title: Temporarily remove PAGE data table from the Population tab
Question:
username_0: We need to suppress the display of the PAGE data table until the authentication issue regarding the service API access is resolved.
Answers:
username_1: @username_2 can you provide a few variants for testing this?
username_2: @username_1 we are not supposed to publicly expose PAGE data... I will email the test examples we were provided.
username_2: @username_0 It's not there... which is good!
Status: Issue closed
username_0: Included in R14 release. |
anthonyjgrove/react-google-login | 721853482 | Title: isSignedIn does not work when page opened directly in the browser
Question:
username_0: I am using the React component.
isSignedIn = true.
Refresh the page in Chrome and you will get the event:
onAutoLoadFinished - > false
If you are navigating to the login page with the router you will get:
onAutoLoadFinished - > true
and then signed event
Refresh
"react-google-login": "^5.1.21",
Status: Issue closed
Answers:
username_0: Fixed by one of the pull requests. |
pandas-dev/pandas | 524642594 | Title: BUG: merge raises for how='outer'/'right' when duplicate suffixes are specified
Question:
username_0: #### Code Sample, a copy-pastable example if possible
On `master` the following raises for `how='outer'` and `how='right'` with duplicate `suffixes`:
```python
In [1]: import pandas as pd; pd.__version__
Out[1]: '0.26.0.dev0+958.g545d17529'
In [2]: df1 = pd.DataFrame({'A': list('ab'), 'B': [0, 1]})
In [3]: df2 = pd.DataFrame({'A':list('ac'), 'B': [100, 200]})
In [4]: pd.merge(df1, df2, on="A", how="outer", suffixes=("_x", "_x"))
---------------------------------------------------------------------------
ValueError: Buffer has wrong number of dimensions (expected 1, got 0)
In [5]: pd.merge(df1, df2, on="A", how="right", suffixes=("_x", "_x"))
---------------------------------------------------------------------------
ValueError: Buffer has wrong number of dimensions (expected 1, got 0)
```
Note that above works with `how='inner'` and `how='left'`:
```python
In [6]: pd.merge(df1, df2, on="A", how="inner", suffixes=("_x", "_x"))
Out[6]:
A B_x B_x
0 a 0 100
In [7]: pd.merge(df1, df2, on="A", how="left", suffixes=("_x", "_x"))
Out[7]:
A B_x B_x
0 a 0 100.0
1 b 1 NaN
```
Likewise, if unique `suffixes` are specified then `how='outer'` and `how='right'` work fine:
```python
In [8]: pd.merge(df1, df2, on="A", how="outer", suffixes=("_x", "_y"))
Out[8]:
A B_x B_y
0 a 0.0 100.0
1 b 1.0 NaN
2 c NaN 200.0
In [9]: pd.merge(df1, df2, on="A", how="right", suffixes=("_x", "_y"))
Out[9]:
A B_x B_y
0 a 0.0 100
1 c NaN 200
```
#### Problem description
`pandas.merge` raises for `how='outer'` and `how='right'` with duplicate `suffixes`.
#### Expected Output
I'd expect `In [4]` and `In [5]` not to raise and produce output similar to `Out[8]` and `Out[9]` but with the duplicate suffix names.
#### Output of ``pd.show_versions()``
<details>
[Truncated]
bottleneck : 1.2.1
fastparquet : 0.3.0
gcsfs : None
lxml.etree : 4.3.3
matplotlib : 3.1.0
numexpr : 2.6.9
odfpy : None
openpyxl : 2.6.2
pandas_gbq : None
pyarrow : 0.11.1
pytables : None
s3fs : 0.2.1
scipy : 1.2.1
sqlalchemy : 1.3.4
tables : 3.5.2
xarray : 0.12.1
xlrd : 1.2.0
xlwt : 1.3.0
xlsxwriter : 1.1.8
</details>
Answers:
username_1: @username_0 would like to work on this
username_2: @username_0 If this was never completed, is it good for first time contributors? If so, would like to work on it
username_3: Seems to be fixed now? Cannot reproduce, all works fine!
username_0: Yeah, looks to be working on `master`:
```python
In [1]: import pandas as pd; pd.__version__
Out[1]: '1.1.0.dev0+1044.gb8385083b'
In [2]: df1 = pd.DataFrame({'A': list('ab'), 'B': [0, 1]})
In [3]: df2 = pd.DataFrame({'A':list('ac'), 'B': [100, 200]})
In [4]: pd.merge(df1, df2, on="A", how="outer", suffixes=("_x", "_x"))
Out[4]:
A B_x B_x
0 a 0.0 100.0
1 b 1.0 NaN
2 c NaN 200.0
In [5]: pd.merge(df1, df2, on="A", how="right", suffixes=("_x", "_x"))
Out[5]:
A B_x B_x
0 a 0.0 100
1 c NaN 200
```
I'm not sure which commit fixed this and I don't immediately see any relevant tests, so would welcome tests for this.
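For anyone picking this up, a minimal regression test along these lines might work (purely illustrative; the exact test module, fixtures and naming would be up to the reviewer):
```python
import pandas as pd


def test_merge_duplicate_suffixes():
    # Regression test for outer/right merges with duplicate suffixes (see above).
    df1 = pd.DataFrame({"A": list("ab"), "B": [0, 1]})
    df2 = pd.DataFrame({"A": list("ac"), "B": [100, 200]})
    expected_rows = {"outer": 3, "right": 2}
    for how in ["outer", "right"]:
        result = pd.merge(df1, df2, on="A", how=how, suffixes=("_x", "_x"))
        assert list(result.columns) == ["A", "B_x", "B_x"]
        assert len(result) == expected_rows[how]
```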
username_4: take
Status: Issue closed
|
Canop/broot | 610203851 | Title: [Question, not a request] Single instance
Question:
username_0: Context (not important): https://old.reddit.com/r/rust/comments/gasc1f/i_wrote_a_file_manager_that_syncs_its_current/fp2e0bq/?context=3
Hi, is there a way to achieve a "single instance" of broot open? Say i say `broot --go <dir>` then it just opens that dir in the currently opened instance?
Status: Issue closed
Answers:
username_0: Hello, I took a bit of a stab at trying, as a proof of concept, to do what I was proposing. I've managed to get the following code working as a PoC standalone program. If you agree with the following way of implementing what I was proposing, you think you could accept a "drive-by" PR?, where you could help me by roughly pointing out the places where I would place this code and I would add it, basically trying to save me time understanding the codebase, which unfortunately I cannot afford to. (I know I'm asking you to read my code but this is *really* small).
By the way, while I've only tested on Linux, this code seems cross-platform to me.
I would defer to you on the exact command names but I had 3 in mind (probably with a `--` in front)
1) `navigate` would take the "server" instance to that directory.
2) `get-cwd` - the client program gets from server and prints the current dir on stdout
3) `get-selected` - the client program gets from server and prints on stdout the value of current selection.
```
use std::net::{
TcpStream,
TcpListener,
};
use std::fs::File;
use std::io::prelude::*;
enum Connection
{
StandAlone,
Listener( std::net::TcpListener ),
Sender( std::net::TcpStream ),
}
fn get_file_name_from_server<S: AsRef<str>>( server: S ) -> String {
String::from( "/tmp/broot-server-" ) + server.as_ref()
}
fn no_server_file_or_port_error<S: AsRef<str>>( e: S, server: S ) -> String
{
String::from(
"Could not connect to existing server. "
)
+ e.as_ref()
+ " Try deleting "
+ & get_file_name_from_server( server.as_ref() )
}
fn bind_to_unused_port() -> ( TcpListener, u16 )
{
for attemped_port in 10001 .. std::u16::MAX
{
if let Ok( listener ) = TcpListener::bind( ( "0.0.0.0", attemped_port ) )
{
return ( listener, attemped_port )
}
}
panic!( "Could not start server on an unused port" );
}
fn main()
{
let args: std::vec::Vec<String> = std::env::args().collect();
let server_nest = args
.split( |a| a == "-S" || a == "--server" )
.collect::< std::vec::Vec< _ > >()
[Truncated]
// Depending on the command line param,
// send PWD and maybe commands
// Two major functions-
// 1) is to ask server to navigate in the interface
// 2) Get server to return the currrent directory
server.write( b"$PWD + all the args and maybe command" )
.expect( "Could not send message to server." )
;
// Read from server
server.read( & mut server_response )
.expect( "Could not get server response" )
;
},
_ => {}
}
}
```
username_1: I can't currently accept any PR as I'm rewriting broot (see the "panels" branch) for a big new set of features.
The idea's still interesting, I'll have a look at it later.
username_0: Alright, thank you. Whenever you do, please do let me know and I'll do my part. I'm keen to help out with regards to this feature.
I will change the title and keep the issue open if that's alright.
username_1: I don't think it would be wise to have an automatic port grab: it would prevent having hardcoded commands in your editors or scripts. It could also be messy when you execute broot many times with the wrong arguments.
Here's a draft of how I think the API could be:
1) A `--serve 1234` launch argument telling broot that it should register as server on port `1234` (or fail if there's already one). If it goes well, broot would work as usual.
2) A `--send 1234` launch argument which would mean broot needs to connect to an existing server (or fail if there's none) and send the arguments given to `--cmd` to that server which would then execute them while the client process immediately quits. With this syntax, the optional final launch argument would be converted to a `:focus` command (i.e. `br --as-client-to 1234 -c "c/test" ~/dev/truc` would be equivalent to `br --as-client-to 1234 -c ":focus ~/dev/truc;c/test"`).
username_2: I would recommend against ports and favor Unix sockets, for the simple reason that you can reach out to local ports from a browser. An example of what you could inspire yourself from would be screen/tmux, which have the concepts of "sessions".
username_3: Hi, sorry my old account is having trouble so I missed your email-
1) **RE: Auto port grab**, I think you have an objection to `fn bind_to_unused_port()`? If so, the port is attached to a servername, so everybody would only specify (and hardcode) a servername. This allows additional flexibility: a script could hardcode part of the servername and get the other part from its environment. A servername uniquely identifies a port. That said, I'm happy with whatever approach you prefer. I haven't explored the way @username_2 suggested (it does sound better technically) and might not like to invest time there, but if you're keen I'll go in that direction too.
2) The API would be uniform for both sender and receiver as far as connecting to a server is concerned. If no server exists, the caller becomes a server. (Maybe this behavior could be amended and the API could be distinctly `--listen` and `--send`.)
I have to confess I'm not a very advanced broot user, I only use it for quick grokking (which is, to my beginner self, its distinctive feature). I mention this so that you know that I would need plenty of help from you.
Do you have a room on `https://miaou.dystroy.org/` ? If you give it to me I will contact you there ...
username_1: You can either come to the [broot room](https://miaou.dystroy.org/3490) or to the more lively [Code&Croissants](https://miaou.dystroy.org/3490) room
username_3: I messaged you there (but I don't see my own messages so not sure if you received them). I'm using the same github account.
username_1: @username_3 No, I didn't see any of your messages on miaou
username_1: Note to readers of this issue: the discussion regarding this feature specification and implementation is done in the chat. Feel free to come and ask if you're interested.
username_1: Related: https://github.com/username_1/broot/blob/master/client-server.md
username_1: I added the --get-root launch argument.
This feature seems complete... but might stay behind a compilation flag until I get more demands for it. |
zty199/HP_Pavilion_15-cb073tx_Hackintosh | 921100123 | Title: Can WiFi and NVIDIA GeForce GTX 1050 be connected?
Question:
username_0: Your build fit great for my HP Pavilion Power 15 cb007ur model. Installation and launches are going well, only a number of difficulties arose:
- WiFi - it is turned on, but does not see the networks themselves, maybe some settings need to be made or kext added?
- NVIDIA GeForce GTX 1050 - can I run it, just can't install After Effects, and it slows down during boot, can it be connected?
Answers:
username_1: 1. About WiFi - If you didn't replace your original WiFi card, it should be Intel 7265AC? You can just use itlwm and IntelBluetoothFirmware to get it work.
[OpenIntelWireless/itlwm](https://github.com/OpenIntelWireless/itlwm)
[https://github.com/OpenIntelWireless/IntelBluetoothFirmware](https://github.com/OpenIntelWireless/IntelBluetoothFirmware)
2. There is no way to make GTX 1050 work in ANY version of macOS, even in 10.12 and 10.13...... So don't count on it, I just disabled it in SSDT and WhateverGreen device-properties.
Status: Issue closed
|
dbeaver/dbeaver | 546869618 | Title: DB2 Timestamp Causes Query to Fail
Question:
username_0: <!--
Thank you for reporting an issue.
*IMPORTANT* - *before* creating a new issue please look around:
- DBeaver documentation: https://github.com/dbeaver/dbeaver/wiki
and
- open issues in Github tracker: https://github.com/dbeaver/dbeaver/issues
If you cannot find a similar problem, then create a new issue. Short tips about new issues can be found here: https://github.com/dbeaver/dbeaver/wiki/Posting-issues
Please, do not create issue duplicates. If you find the same or similar issue, just add a comment or vote for this feature. It helps us to track the most popular requests and fix them faster.
Please fill in as much of the template as possible.
-->
#### System information:
Windows 10 Enterprise Version 1809
DBeaver 6.3.2
#### Connection specification:
DB2 for z/OS
#### Describe the problem you're observing:
When trying to query a timestamp column in the format yyyy-MM-dd HH:mm:ss.ffffff (the standard format for our databases) from a DB2 table (including in the table data preview), I get an error "SQL Error: Internal jdbc driver error". Excluding these columns from the query returns the expected results.
#### Steps to reproduce, if exist:
Execute query against DB2 table and include timestamp formatted column.
#### Include any warning/errors/backtraces from the logs
Error: SQL Error: Internal jdbc driver error
Session Data:
eclipse.buildId=unknown
java.version=11.0.5
java.vendor=AdoptOpenJDK
BootLoader constants: OS=win32, ARCH=x86_64, WS=win32, NL=en_US
Command-line arguments: -os win32 -ws win32 -arch x86_64
[Error Log.txt](https://github.com/dbeaver/dbeaver/files/4035517/Error.Log.txt)
<!-- Please, find the short guide how to find logs here: https://github.com/dbeaver/dbeaver/wiki/Log-files -->
Answers:
username_1: As far as I see the error happens during column value decoding:
`java.nio.charset.UnsupportedCharsetException: Cp1027`
This is known issue with DB2 z/OS driver. It has nothing to do with dbeaver (I think). Perhaps some additional driver config is needed.
https://www.ibm.com/support/pages/javaniocharsetunsupportedcharsetexception-thrown-if-rtjar-not-included-classpath
https://stackoverflow.com/questions/36354899/unsupportedcharsetexception-cp1027-with-db2-jdbc-driver
username_0: Yes, it appears that you are correct about it not being a DBeaver issue. After some more digging it appears that there's a specific version of JRE that must be used to be fully compatible with my company's DB2 servers.
Status: Issue closed
|
sgrid/sgrid | 131126030 | Title: grid_topology vs mesh_topology
Question:
username_0: @username_1 The SGRID spec lists `mesh_topology` as a required attribute on the grid variable, but all of the examples use `grid_topology`.
Answers:
username_1: Hm, good point. I copied a significant part of the text from the UGRID specifications. This introduced the description of the mesh_topology attributes. Then when I started to work on the examples, I realized that mesh may be less appropriate for structured grids and hence changed the cf_role to grid_topology and the mesh attribute became grid attribute without adjusting all the text.
I'm not sure what the rest of the community prefers. There are 3 options that I see:
1a) Use cf_role = "mesh_topology" for both UGRID and SGRID, and use attribute "mesh" for both as well.
1b) Use attribute "mesh" for both UGRID and SGRID, but use cf_role = "mesh_topology" for UGRID and cf_role = "structured_mesh_topology" for SGRID.
2) Use cf_role = "mesh_topology" and attribute "mesh" for UGRID and cf_role = "grid_topology" and attribute "grid" for SGRID.
3) Use cf_role = "grid_topology" and attribute "grid" for both UGRID and SGRID.
Given the existing uptake of UGRID, I believe that option 3 is really unattractive. Option 1 (in particular 1a) would simplify the mixing of structured and unstructured meshes (e.g. for a layered unstructured mesh, we could add a vertical_dimension attribute to a UGRID horizontal mesh variable). Option 2 is what I actually implemented in Delft3D.
username_2: I like option 2 because I think it will be confusing to see "mesh" associated with SGRID datasets.
username_0: 1a or 2 work for me
username_2: @username_3, care to vote here? We need to decide this as this is currently a bug in our 0.1 standard that we need to fix!
username_3: :+1: to option 2
username_2: I believe we have a quorum. I'm changing the instances of `mesh_topology` to `grid_topology` in the SGRID doc.
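For readers following along, a minimal sketch of what option 2 looks like when writing a file with Python/netCDF4 (illustrative only; the remaining required SGRID attributes, such as the dimension and coordinate mappings, are omitted here):
```python
from netCDF4 import Dataset

nc = Dataset("sgrid_example.nc", "w")
nc.createDimension("xi", 10)
nc.createDimension("eta", 5)

# Dummy container variable carrying the topology metadata; option 2 uses
# cf_role = "grid_topology" (the other required SGRID attributes are left out
# of this sketch).
grid = nc.createVariable("grid", "i4")
grid.cf_role = "grid_topology"
grid.topology_dimension = 2

# A data variable points back at the topology variable via the "grid" attribute.
temp = nc.createVariable("temp", "f4", ("eta", "xi"))
temp.grid = "grid"

nc.close()
```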
username_4: Looks like this one can be closed :wink:
Status: Issue closed
|
esy/pesy | 593156983 | Title: bisect_ppx support
Question:
username_0: It would be cool if pesy supported [bisect_ppx](https://github.com/aantron/bisect_ppx) to get nice coverage reports.
It could be as simple as this (from the recommended dune setup):
```ocaml
let preprocess =
match Sys.getenv "BISECT_ENABLE" with
| "yes" -> "(preprocess (pps bisect_ppx))"
| _ -> ""
| exception Not_found -> ""
``` |
jenkins-x/jx | 324336371 | Title: create an addon for the GitWebhookProxy for folks running behind firewalls
Question:
username_0: see https://github.com/stakater/GitWebhookProxy
Answers:
username_1: on my recent travels in `Prow` land I’ve stumbled across a github proxy which might be useful
https://github.com/kubernetes/test-infra/blob/e3650ae/prow/cluster/ghproxy_deployment.yaml
https://github.com/kubernetes/test-infra/blob/e3650ae/prow/cluster/ghproxy_service.yaml
username_2: I tried using GitWebhookProxy since our kubernetes setup isn't accessible from the public internet otherwise. The helm chart didn't work for me, but after some changes the configuration worked. Except that GitWebhookProxy doesn't support *not* having a secret configured for the webhook, and jenkins-x doesn't set a secret... I added an option to disable validation in GitWebhookProxy. I've made a pull request. My changes can be found here:
https://github.com/deniedboarding/GitWebhookProxy
username_2: For reference it might be good to explain why the supplied helm chart for GitWebhookProxy didn't work for me: My setup is a k8s cluster in aws that originally only had a loadbalancer in a private subnet.
In the GitWebhookProxy helm chart the host you configure is used to create an Ingress. But this collides with the ingress for jenkins. Even if it had worked it would have been connected to the private load balancer. I resolved this by instead removing the ingress, making the Service type LoadBalancer, and adding the annotation "service.beta.kubernetes.io/aws-load-balancer-type: nlb". So then an ELB is created in a public subnet.
PIQuIL/QuCumber | 344572359 | Title: RBM Layers
Question:
username_0: Work has been done on this in the `layers` branch.
For now it doesn't seem like this way of structuring the RBMs will work out, for a few reasons:
- `torch.matmul` and `torch.einsum` currently aren't memory efficient in the sense that they don't allow overwriting an existing tensor
- `torch.tensordot` would likely be a faster alternative to `torch.einsum`, but it doesn't exist yet (hopefully it'll support an `out` parameter)
- sampling a full RBM is currently a little bit tricky since we need an initial state which must have the valid format (in the sense that the tensor has the right shape); this shouldn't be too hard to fix, since we just need to implement a `generate_state` in each `Layer` class, and then call the method on the visible layer when we sample the RBM
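A rough, purely illustrative sketch of the `generate_state` idea from the last bullet (assuming PyTorch; the class and names here are hypothetical):
```python
import torch

class BinaryLayer:
    """Hypothetical layer holding just enough info to build a valid initial state."""

    def __init__(self, num_units):
        self.num_units = num_units

    def generate_state(self, num_samples):
        # Draw a random binary tensor with the right shape for this layer,
        # to be used as the starting point for Gibbs sampling.
        return torch.bernoulli(torch.full((num_samples, self.num_units), 0.5))
```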
The first two bullets will hopefully be resolved once PyTorch 0.4.1 is released in a few weeks.
In addition, this new method seems to be *slightly* slower:
For a Binary(10) to Binary(10) RBM, generating 1000 samples through 10000 gibbs steps was about 0.15s slower using the layer-based structure. In relative terms, the deviation is more than a few standard deviations, but in absolute terms, it is fairly trivial, at least for small system sizes.
This slowdown is likely due to the increased level of indirection required by this layout; there are likely ways to optimize this though (`torch.jit`? metaprogramming?)
Answers:
username_0: actually, I think this could work out, since `torch.sum` and `torch.add` both take `out` parameters, so the traces that require `matmul` won't be as heavy (memory-wise) as I initially thought
username_0: todo:
- [ ] Finish Gaussian Layer
- [ ] Test Categorical RBM
- [ ] Try to use metaclasses to build modules
- [ ] Merge 0.2 into this branch
- [ ] Fix compatibility issues
- [ ] Tweak Complex Wavefunction to throw an error if it gets given a GaussianRBM
Status: Issue closed
|
pascalabcnet/pascalabcnet | 278963047 | Title: extensionmethod duplicates its description when a parenthesis is typed
Question:
username_0: ```
procedure p1(Self:byte; a1,a2,a3,a4,a5:byte);extensionmethod;
begin
end;
begin
var b:byte;
b.p1(0,1,2,3,4);
end.
```
Every time the parenthesis after `b.p1` is removed and typed again, one more description of `p1` is added, and they can be multiplied this way as many times as you like.

<issue_closed>
Status: Issue closed |
formio/formio.js | 277531985 | Title: Datagrid not triggering ON change event (on row delete)
Question:
username_0: I'm having a problem with the Datagrid component. When I delete a row, the On change event is not triggered. Any clue about this?
Regards!
Answers:
username_0: I think https://github.com/formio/formio.js/pull/271 could fix it
username_1: Yes, it works now. But the row isn't deleted from the form values. I'll open an issue. |
yakovmanshin/YMKit | 493612905 | Title: Update README with extensive documentation
Question:
username_0: @username_1 I think you’re right. A separate file (or a set of files) is a better place for the comprehensive documentation.
In README, I just want to give a taste of what kind of tasks this framework can help with.
Thanks for your suggestion!
Status: Issue closed
Answers:
username_1: Couldn't this be done with an automated documentation tool? That would reduce the time spent writing documentation again. And my idea is to move the documentation to a separate file instead of the README
username_0: @username_1 I think you’re right. A separate file (or a set of files) is a better place for the comprehensive documentation.
In README, I just want to give a taste of what kind of tasks this framework can help with.
Thanks for your suggestion!
Status: Issue closed
|
andi-nl/ANDI-frontend | 178587244 | Title: make it more visible what test variables the points correspond to
Question:
username_0: In line plot it's not obvious which test variables correspond to which point. There are at least two solutions to that:
- when hovering over test variable tick label make corresponding points larger
- draw vertical line for each test variable tick<issue_closed>
Status: Issue closed |
CodeForPhilly/chime | 600488326 | Title: Current Hospitalized/Date First Hospitalized combination bug
Question:
username_0: There seems to be an issue with a parameter combination. The app works correctly if I put anything less than 4 current hospitalized patients, but as soon as I try to put in a "Date First Hospitalized" value, the app crashes and shows this below error.

Answers:
username_1: So, the issue is that the parameters as entered imply an effective pre-mitigation doubling time of greater than 15 days. The code should trap this and say that it seems unreasonable.
If you're actually trying to get a forecast, try changing the inputs in a different order, since it's updating after the change to each one.
Even if an error does appear, changing subsequent parameters will keep re-running the forecast with the new inputs, so this should not be a bar to usage.
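For illustration only (this is not the actual CHIME code, and starting from a single patient on the date first hospitalized is just an assumption), the kind of guard described above could back out the implied doubling time and warn instead of crashing:
```python
import math

def implied_doubling_time(current, initial, days_elapsed):
    # Doubling time implied by exponential growth from `initial` to `current`
    # patients over `days_elapsed` days.
    return days_elapsed * math.log(2) / math.log(current / initial)

# e.g. 3 currently hospitalized, 1 patient on the date first hospitalized 30 days ago
dt = implied_doubling_time(current=3, initial=1, days_elapsed=30)
if dt > 15:
    print(f"Implied pre-mitigation doubling time is {dt:.1f} days, "
          "which seems unreasonable; please check the inputs.")
```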
Status: Issue closed
|
foltysM/foodCheck-Android-Java | 731635262 | Title: Fix java.lang.NullPointerException: println needs a message in net.foltys.foodcheck.NewProductActivity.onActivityResult (NewProductActivity.java:320)
Question:
username_0: ### Version 0.7.0(2) ###
### Stacktrace ###
net.foltys.foodcheck.NewProductActivity.onActivityResult (NewProductActivity.java:320);
### Reason ###
java.lang.NullPointerException: println needs a message
### Link to App Center ###
* [https://appcenter.ms/users/MFoltys/apps/FoodCheck/crashes/errors/3516377595u](https://appcenter.ms/users/MFoltys/apps/FoodCheck/crashes/errors/3516377595u)<issue_closed>
Status: Issue closed |
loomnetwork/loom-js | 377427288 | Title: Move to ethers.js in Plasma Cash bindings
Question:
username_0: Currently, the Plasma Cash bindings are all using web3.js for interacting with mainnet. We should move to Ethers.js which has strictly better APIs for monitoring events, as well as creating transactions offline. This also allows us to have both offline tx signing and metamask in a very intuitive way.
How to connect to existing contracts with offline private key & watching events:
https://docs.ethers.io/ethers.js/html/api-contract.html#connecting-to-existing-contracts
How to connect to contracts with metamask: https://docs.ethers.io/ethers.js/html/cookbook-providers.html?highlight=metamask#metamask
```
if (typeof web3 !== 'undefined') {
var web3Provider = new ethers.providers.Web3Provider(web3.currentProvider, ethers.providers.networks.ropsten);
web3Provider.getBalance("..some address.."). then(function(balance) {
var etherString = ethers.utils.formatEther(balance);
console.log("Balance: " + etherString);
});
}
```<issue_closed>
Status: Issue closed |
agda/agda | 212716457 | Title: Record parameter not printed in field-derived goal type
Question:
username_0: The goal type in the following snippet is pretty-printed as `explicit _ r` rather than `explicit X r`.
```Agda
module Underscore where
record R (X : Set) : Set₁ where
field
f : Set
explicit : ∀ X → R X → Set
explicit X = R.f
test : ∀ X (r : R X) → explicit X r
test X r = ?
```
Answers:
username_1: Since `explicit` is a projection-like function, its argument is treated as a parameter and dropped. The printer is not type-directed, thus, cannot reconstruct the parameter, and simply prints an underscore.
Status: Issue closed
|
CDLUC3/dmptool | 177532565 | Title: Institutional admin unable to assign roles
Question:
username_0: We received an email from the admin at UNLV saying he couldn't search in the "Assign Roles" tab to update role for a UNLV colleague. I was able to confirm that this doesn't work. Here's the list of UNLV users with the one in question circled:

I created an institutional admin for UNLV, and searched for this user by either name or email, and got an error:

Status: Issue closed
Answers:
username_0: Oh, wait, this seems to be caused because the user has multiple accounts (under different email addresses). I don't think this is an issue. |
NOAA-ORR-ERD/PyGnome | 733365061 | Title: PyGNOME validation and calibration
Question:
username_0: Hello,
I am trying to validate and calibrate the PyGNOME outputs for a case in the Colombian Pacific, the validation I hope to do using satellite images of a spill that occurred in the past in the study area. I want to know if there is any configuration of the model where the particles do not move in the simulation at the same time and that when they move the trace of the oil "stain" on the water is observed. Do you have any other ideas to validate the outputs of the model?
thanks for the kindness in your answers.
Answers:
username_1: I'm not sure what you mean by " the particles do not move in the simulation at the same time" ? could you clarify?
As to: "when they move the trace of the oil "stain" on the water is observed.".
Not directly, but you can post process the output to see where all the particles are at all times, and produce any visualization you want.
Depending on what tools you are familiar with, you can use shapefile, KMZ, or netCDF output -- and then post process as you need.
username_0: Thanks @username_1
Could you tell me how to get the outputs as shapefile. I can get the outputs for GIS from GNOME Desktop, but I don't know how to do it in PyGNOME.
username_1: you can add a number of "outputters" to the model in py_gnome:
https://gnome.orr.noaa.gov/doc/pygnome/scripting/outputters.html
for example:
```
import gnome.scripting as gs
model.outputters += gs.ShapeOutput( filename, zip_output=True, surface_conc="kde")
```
username_0: Hello @username_1
Thanks for the information, I had the manual 0.6.
I get this error when I try to have the outputs as shapefiles:

username_1: that's a bug / issue with the shapefile package.
what did you use as a filename? I think it cannot have a "dot" in it.
Try using a_name rather than a_name.shp
username_1: BTW: I just updated the master branch -- there may be a fix (or at least a better error message) for this issue in there. If it's what I think it is.
username_0: Hi Chris!
I already updated the git but I still get the same error.
git pull https://github.com/NOAA-ORR-ERD/PyGnome.git

I am not typing the name with a dot, I am not getting a different message than the image I attached above. I wrote like this in my code:
import gnome.scripting as gs
`model.outputters += gs.ShapeOutput('tumaco_output', zip_output=True, surface_conc="kde")`
username_1: it's working on my end (from gitHub master)
Try the scripts/script_surface_concentration/ script.
It could be your version of pyshp. Do a `conda list` and see what you get, I have:
pyshp 1.2.12 py_2 conda-forge
and I know 1.3 had some significant changes (which we will be using in future versions, but for now, you need 1.2.12)
You can also try:
`python -c "import shapefile; print(shapefile.__version__)"`
to get the version you are running. |
xenia-project/issue-graveyard | 1111049640 | Title: Top Spin 4 crash
Question:
username_0: Hi guys, I downloaded the Top Spin 4 Xbox 360 version: iXtreme (6th wave), and downloaded the latest Xenia emulator update, but the game crashed after the loading screen and the fonts changed like this:

I would be glad if you help, thanks. |
RDFLib/rdflib | 835045986 | Title: Make querying thread-safe
Question:
username_0: The query implementation currently uses pyparsing, which is known to not be thread-safe.
However, in order to use rdflib in a multithreaded environment, thread-safety is required.
A work-around is to monkey-patch the rdflib.Graph.query method with a method that wraps the call to the original rdflib.Graph.query call with a lock; however, it obviously would be preferred if the locking was handled natively, given this inherent weakness in pyparsing.
Quickly perusing the code, it appears rdflib/plugins/sparql/parser.py and rdflib/plugins/sparql/results/tsvresults.py are the two hot spots. Not sure if adding a lock in rdflib.Graph.query will catch all cases but it was sufficient for our purposes.
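For reference, a minimal sketch of the monkey-patch work-around mentioned above (illustrative only; the lock name and placement are just one possibility):
```python
import threading

import rdflib

_query_lock = threading.Lock()
_original_query = rdflib.Graph.query

def _locked_query(self, *args, **kwargs):
    # Serialize all query() calls, since the pyparsing-based SPARQL parser
    # is not thread-safe.
    with _query_lock:
        return _original_query(self, *args, **kwargs)

rdflib.Graph.query = _locked_query
```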
Answers:
username_1: That sounds like a good improvement. Do you have the capacity to provide a solution/pull request?
username_2: Would be great to see a benchmark before this gets merged to ensure it doesn't hurt not-multithreaded environments
username_1: @username_0 are you able to provide a pull-request with the changes and perform some benchmark with your use case? |
haiwen/seafile-client | 70707207 | Title: make cloud file browser resizable
Question:
username_0: At least on windows resizing is currently not possible.
Furthermore, at least on my device there is always a scrollbar at the bottom. Maybe you have to adjust the default width.

Answers:
username_1: What Windows version do you use?
username_0: 4.1.6
username_2: it should be fixed in #531
Status: Issue closed
|
cryptogarageinc/cfd-js | 640851075 | Title: Want to tidy up the Node API examples
Question:
username_0: # Overview
<!-- TODO: Describe the overview of the story -->
- The current examples do not follow practical use cases, so they are hard to understand.
- Create examples that follow practical API usage.
- Examples of TX creation (multiple for the Elements case), signing examples, and so on.
# Completion condition
<!-- TODO: Describe how to judge whether the story is complete -->
- [ ] Example files are created per operation
- The existing examples have been split apart
Answers:
username_0: Review notes:
Create a folder and go with a format of one example per file. (That is easier to understand as an implementation example.)
Then place a script for batch execution in wrap_js, and require and run the examples from there.
Status: Issue closed
|
nodejs/help | 927546367 | Title: How can I debug a RST_STREAM with code PROTOCOL_ERROR response on an http2 stream?
Question:
username_0: * **Node.js Version**: 16.3.0
* **OS**: Linux
* **Scope (install, code, runtime, meta, other?)**: runtime
* **Module (and version) (if relevant)**: http2
I am trying to debug a scenario in which a Node server is responding to an http2 request with an RST_STREAM with the code PROTOCOL_ERROR. This is happening without calling any application code on the server side, so the response seems to be generated by the http2 module itself. Setting `NODE_DEBUG=http2` environment variable on the server just shows the following, which does not seem to show anything relevant:
```
NET 778668: onconnection
STREAM 778668: read 0
STREAM 778668: need readable false
STREAM 778668: length less than watermark true
STREAM 778668: do read
NET 778668: _read
NET 778668: Socket._handle.readStart
HTTP2 778668: Http2Session server: received a connection
HTTP2 778668: Http2Session server: setting up session handle
HTTP2 778668: Http2Session server: sending settings
HTTP2 778668: Http2Session server: submitting settings
HTTP2 778668: Http2Session server: created
(node:778668) Warning: Setting the NODE_DEBUG environment variable to 'http2' can expose sensitive data (such as passwords, tokens and authentication headers) in the resulting log.
HTTP2 778668: Http2Session server: settings received
```
What else can I do to determine what protocol error the server is seeing to cause it to respond that way?
Answers:
username_1: cc @jasnell
username_0: I want to add that I suspect that the root cause of the error in question is a bug in Node: I think Node may be rejecting some valid HTTP/2 traffic. But we cannot determine if that is the case or provide any details about that bug until we can determine exactly what it is that Node is rejecting.
username_2: Hey @username_0, I'm looking into it, but I can't reproduce it locally. Could you provide reproducible code? Also, what does `NODE_DEBUG_NATIVE=http2` tell you?
username_0: The question here is: if a Node server is closing an http2 session with a protocol error, how can I determine why the server sent it, or in other words, what protocol constraint was violated, so that whatever behavior is incorrect can be corrected? Or, alternatively, to determine that it is returning a protocol error incorrectly? Is `NODE_DEBUG_NATIVE=http2` supposed to provide that information?
username_2: If a RST_STREAM frame identifying an idle stream is received, the recipient MUST treat this as a connection error ([Section 5.4.1](https://datatracker.ietf.org/doc/html/rfc7540#section-5.4.1)) of type PROTOCOL_ERROR.
username_2: Well, I have tested the `PROTOCOL_ERROR` in a very basic example from Node core: https://github.com/nodejs/node/blob/master/test/parallel/test-http2-multi-content-length.js#L62
and using the `NODE_DEBUG_NATIVE=http2` I got a "reason":
```
Http2Session server (5) receiving 70 bytes [wants data? 1]
Http2Session server (5) complete frame received: type: 4
Http2Session server (5) beginning headers for stream 1
Http2Session server (5) invalid frame received (0/1000), code: -532
Http2Session server (5) complete frame received: type: 4
```
Indeed, without a reproducible code, I can't help with this issue. At least, it sounds like a feature request to add a "cause" inside the error event.
username_2: As I said previously, without a minimal reproducible code, I'm not able to help.
I know that some issues are very hard to reproduce easily, but, regardless of the http2 client, you should be able to reproduce it in any client.
username_2: Not really, because I can't figure out what was the cause of the above error.
---
The above code throws in v12, but, looks fixed in v16:
```
info: starting csi server - name: org.democratic-csi.nfs, version: 1.5.5, driver: local-hostpath, mode: node, csi version: 1.5.0, address: , socket: unix:///tmp/csi.sock
Http2Session server (38) session created
Http2Session server (38) i/o stream consumed
Http2Session server (38) scheduling write
Http2Session server (38) sending pending data
Http2Session server (38) nghttp2 has 9 bytes to send
Http2Session server (38) wants read? 1
Http2Session server (38) receiving 394 bytes, offset 0
Http2Session server (38) receiving 394 bytes [wants data? 1]
Http2Session server (38) complete frame received: type: 4
Http2Session server (38) complete frame received: type: 8
Http2Session server (38) complete frame received: type: 6
Http2Session server (38) beginning headers for stream 1
Http2Session server (38) Error 'Invalid HTTP header field was received: frame type: 1, stream: 1, name: [:authority], value: [/tmp/csi.sock]'
Http2Session server (38) invalid frame received (0/1000), code: -531
Http2Session server (38) complete frame received: type: 8
Http2Session server (38) complete frame received: type: 8
Http2Session server (38) sending pending data
Http2Session server (38) nghttp2 has 9 bytes to send
Http2Session server (38) nghttp2 has 17 bytes to send
Http2Session server (38) stream 1 closed with code: 1
HttpStream 1 (41) [Http2Session server (38)] closed with code 1
HttpStream 1 (41) [Http2Session server (38)] destroying stream
Http2Session server (38) nghttp2 has 13 bytes to send
Http2Session server (38) wants read? 1
Http2Session server (38) wants read? 1
HttpStream 1 (41) [Http2Session server (38)] tearing down stream
Http2Session server (38) receiving -4095 bytes, offset 0
Http2Session server (38) submitting goaway
Http2Session server (38) scheduling write
Http2Session server (38) destroying session
Http2Session server (38) closing session
Http2Session server (38) make done session callback
Http2Session server (38) sending pending data
Http2Session server (38) freeing nghttp2 session
```
```
Http2Session server (38) Error 'Invalid HTTP header field was received: frame type: 1, stream: 1, name: [:authority], value: [/tmp/csi.sock]'
```
Status: Issue closed
username_2: Yes, It looks like a feature request, right? I'm assuming that you would like to get this information in the HTTP2 server somehow. |
MarcGiffing/wicket-spring-boot | 300440037 | Title: How to run SpringBoot Aplication manualy, not from main method.
Question:
username_0: I want to launch the wicket spring boot app + spring security, etc. from this method.
And I have a problem.
1. Spring-boot-maven-plugin moves all my classes and resources to BOOT-INF
2. Maven-shade-plugin, maven-dependencies-plugin, or any other packaging plugin creates a fat jar with all dependencies and a normal structure, but when my BukkitMain is starting I'm receiving this error:
https://imgur.com/a/DkoIl
(Starting Application with main(String args[]) is ok, but i need to run this app in Bukkit/Spigot ecosystem)
Answers:
username_0: Here is a source code:
https://github.com/username_0/McServerSubscribeWeb
For now I'm using the maven shade plugin. Everything builds, SpringApplication starts without errors, but there is no web service, no wicket webapp, nothing. Just the Spring Application.
Why isn't it running this class: https://github.com/username_0/McServerSubscribeWeb/blob/master/src/main/java/com/bodyash/spring/boot/wicket/minecraft/WicketApplicationMain.java ?
Status: Issue closed
|
Azure/azure-sdk-for-c | 489408159 | Title: Create pipeline abstractions
Question:
username_0: This consists of several parts.
- Policy interface, which defines how the pipeline interacts with policies
- The pipeline itself, which is essentially a linked list of policies coupled with pipeline options
- Message, which is sent to each policy and used to build the resultant HTTP request
- HTTP request and response<issue_closed>
Status: Issue closed |
palantir/blueprint | 463711699 | Title: [Table] all EditableCell re-render every time on focus change in Table with enableFocusedCell={true} and
Question:
username_0: <!-- IF YOU ARE A PALANTIR EMPLOYEE, DO NOT POST INTERNAL LINKS OR REFERENCES HERE -->
#### Environment
- __Package version(s)__: @blueprintjs/core: 3.17.1, @blueprintjs/table: 3.6.0
- __Browser and OS versions__: <!-- fill this out -->
Link to a minimal repro: https://codesandbox.io/s/blueprintjs-rerender-editablecell-enablefocusedcell-t7uf8
#### Steps to reproduce
1. Create Table with enableFocusedCell = true and EditableCell
1. Click on some Cell
#### Actual behavior
http://g.recordit.co/PQ3R4HDBgf.gif

#### Expected behavior
Update Cell only if it is changed
like enableFocusedCell = false:
http://g.recordit.co/SwS3Rd7mwH.gif

Answers:
username_1: Hi @username_0,
The re-rendering was due to the passing of an anonymous function.

Therefore, after every shallow comparison of the props, `shouldComponentUpdate` would return true.
Check the `shouldComponentUpdate` function:
https://github.com/palantir/blueprint/blob/develop/packages/table/src/cell/editableCell.tsx#L116
It's your codesandbox example with some tweaks: https://codesandbox.io/s/blueprintjs-rerender-editablecell-enablefocusedcell-w2spw
Status: Issue closed
|
Onnion/angular-crud | 476290452 | Title: Fix "method_complexity" issue in src/app/modules/common/validators/cpf/cpf.validator.ts
Question:
username_0: Function `checkCpf` has a Cognitive Complexity of 8 (exceeds 5 allowed). Consider refactoring.
https://codeclimate.com/github/username_0/angular-crud/src/app/modules/common/validators/cpf/cpf.validator.ts#issue_5d44719ec923180001000048<issue_closed>
Status: Issue closed |
gatsbyjs/gatsby | 354057276 | Title: reach router navigate causes 404
Question:
username_0: <!--
To make it easier for us to help you — please follow the suggested format below.
Useful Links:
- Documentation: https://www.gatsbyjs.org/docs/
- How to File an Issue: https://www.gatsbyjs.org/docs/how-to-file-an-issue/
Before opening a new issue, please search existing issues https://github.com/gatsbyjs/gatsby/issues
-->
## Description
Using a MobileNav component like so
```jsx
import React, { Component } from 'react';
import { navigate } from '@reach/router';
class MobileNav extends Component {
handleChange = event => {
navigate(event.target.value);
};
render() {
const { items } = this.props;
return (
<div className="mobile docs-nav">
<select className="btn-primary" onChange={this.handleChange}>
<option>Select A Topic</option>
{items.map(item => (
<optgroup label={item.title} key={item.title}>
{item.group.edges.map(({ node }) => (
<option value={node.fields.slug} key={node.fields.slug} className="nav-link">
{node.frontmatter.title}
</option>
))}
</optgroup>
))}
</select>
</div>
);
}
}
export default MobileNav;
```
When user changes the `select`, the page becomes a 404, though if I refresh, the page works fine.
### Steps to reproduce
1. clone https://github.com/username_0/netlify-cms/tree/gatsby-v2
2. `cd website && yarn && yarn start`
3. go to /docs
4. resize viewport smaller so you get MobileNav to show
5. Change the MobileNav select input to another page
### Expected result
Page should transition to the selected item from mobile nav
[Truncated]
Yarn: 1.7.0 - /usr/local/bin/yarn
npm: 6.2.0 - /usr/local/bin/npm
Browsers:
Chrome: 68.0.3440.106
Firefox: 61.0.1
Safari: 11.1.2
npmPackages:
gatsby: next => 2.0.0-beta.105
gatsby-plugin-catch-links: next => 2.0.2-beta.9
gatsby-plugin-manifest: next => 2.0.2-beta.6
gatsby-plugin-postcss: ^1.0.0 => 1.0.0
gatsby-plugin-react-helmet: next => 3.0.0-beta.4
gatsby-remark-autolink-headers: next => 2.0.0-beta.5
gatsby-remark-prismjs: next => 3.0.0-beta.7
gatsby-source-filesystem: next => 2.0.1-beta.10
gatsby-transformer-json: next => 2.1.1-beta.5
gatsby-transformer-remark: next => 2.1.1-beta.6
gatsby-transformer-yaml: next => 2.1.1-beta.3
npmGlobalPackages:
gatsby-cli: 1.1.25
Answers:
username_0: @Chuloo do the reproduction steps not work, or does this status mean someone needs to test it?
username_1: Makes sense. To summarize for future readers and check my own understanding: `@reach/router`'s `navigate` can be useful for something like modifying query params, provided that the resources are loaded, but not for linking to other pages. Instead we can use Gatsby's own `navigate`, which as far as I've tested appears to work just fine in both cases. https://www.gatsbyjs.com/docs/gatsby-link/#how-to-use-the-navigate-helper-function
Someone looking for a quick solution without changing their approach can try `import { navigate } from "gatsby"` instead of from `@reach/router`. |
clarity-h2020/csis-technical-validation | 707866881 | Title: csis-cypress #898 failed
Question:
username_0: Build 'csis-cypress' is failing!
Last 50 lines of build output:
```
[...truncated 455 lines...]
│ Duration: 1 minute, 28 seconds │
│ Spec Ran: csisVisitStudy.spec.js │
└────────────────────────────────────────────────────────────────────────────────────────────────┘
(Video)
- Started processing: Compressing to 32 CRF
- Finished processing: /var/jenkins_home/workspace/csis-cypress/cypress/videos/csi (8 seconds)
sVisitStudy.spec.js.mp4
tput: No value for $TERM and no -T specified
================================================================================
(Run Finished)
Spec Tests Passing Failing Pending Skipped
┌────────────────────────────────────────────────────────────────────────────────────────────────┐
│ ✔ csisLocalAuthentication.spec.js 00:07 3 3 - - - │
├────────────────────────────────────────────────────────────────────────────────────────────────┤
│ ✔ csisTestIncludeInReportButton.spec. 00:13 1 1 - - - │
│ js │
├────────────────────────────────────────────────────────────────────────────────────────────────┤
│ ✖ csisTestMapComponent.spec.js 02:01 3 - 1 - 2 │
├────────────────────────────────────────────────────────────────────────────────────────────────┤
│ ✔ csisTestScenarioAnalysis.spec.js 00:06 1 - - 1 - │
├────────────────────────────────────────────────────────────────────────────────────────────────┤
│ ✔ csisTestTableComponent.spec.js 00:09 1 1 - - - │
├────────────────────────────────────────────────────────────────────────────────────────────────┤
│ ✔ csisViewMyStudies.spec.js 00:08 1 1 - - - │
├────────────────────────────────────────────────────────────────────────────────────────────────┤
│ ✔ csisVisitStudy.spec.js 01:28 8 8 - - - │
└────────────────────────────────────────────────────────────────────────────────────────────────┘
✖ 1 of 7 failed (14%) 04:14 18 14 1 1 2
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] script
[Pipeline] {
[Pipeline] properties
[Pipeline] }
[Pipeline] // script
[Pipeline] step
```
Changes since last successful build:
No changes
[View full output](https://ci.cismet.de/job/csis-cypress/898/)
Answers:
username_0: Build was fixed!
Status: Issue closed
|
ag-grid/ag-grid | 618823459 | Title: grid.showOverlay issue
Question:
username_0: Hello guys, here to submit a small issue;
**I'm submitting a ...** (check one with "x")
[x] bug report => see 'Providing a Reproducible Scenario'
[] feature request => do not use Github for feature requests, see 'Customers of ag-Grid'
[] support request => see 'Requesting Community Support'
**Requesting Community Support**
**Providing a Reproducible Scenario**
a console written exception coming from attributes.optional "bean instance"
"ag-Grid: unable to find bean reference frameworkOverrides while initialising BeanStub"
this happens after we call GridApi.showLoadingOverlay after creating a grid, populating its rowData and then disposing of the grid
my html is as follows:
/////////////////
<ag-grid-angular id="grid"
#grid style="width: 100%; height: 200px;" class="ag-theme-alpine"
[rowData]="data" [columnDefs]="columnDefs"
[rowSelection]="single" (rowClicked)="onSelectionChanged($event)" (gridReady)="onGridReady($event)">
</ag-grid-angular>
///////////////////
Full Error Stack Trace:
//////////////////
ag-Grid: unable to find bean reference frameworkOverrides while initialising BeanStub
push../node_modules/ag-grid-community/dist/ag-grid-community.cjs.js.Context.lookupBeanInstance @ ag-grid-community.cjs.js:3192
(anonymous) @ ag-grid-community.cjs.js:3126
(anonymous) @ ag-grid-community.cjs.js:3125
push../node_modules/ag-grid-community/dist/ag-grid-community.cjs.js.Context.forEachMetaDataInHierarchy @ ag-grid-community.cjs.js:3154
(anonymous) @ ag-grid-community.cjs.js:3120
push../node_modules/ag-grid-community/dist/ag-grid-community.cjs.js.Context.autoWireBeans @ ag-grid-community.cjs.js:3119
push../node_modules/ag-grid-community/dist/ag-grid-community.cjs.js.Context.wireBeans @ ag-grid-community.cjs.js:3068
push../node_modules/ag-grid-community/dist/ag-grid-community.cjs.js.Context.wireBean @ ag-grid-community.cjs.js:3065
push../node_modules/ag-grid-community/dist/ag-grid-community.cjs.js.UserComponentFactory.initComponent @ ag-grid-community.cjs.js:13655
push../node_modules/ag-grid-community/dist/ag-grid-community.cjs.js.UserComponentFactory.createAndInitUserComponent @ ag-grid-community.cjs.js:13406
push../node_modules/ag-grid-community/dist/ag-grid-community.cjs.js.UserComponentFactory.newLoadingOverlayComponent @ ag-grid-community.cjs.js:13348
push../node_modules/ag-grid-community/dist/ag-grid-community.cjs.js.OverlayWrapperComponent.showLoadingOverlay @ ag-grid-community.cjs.js:33011
push../node_modules/ag-grid-community/dist/ag-grid-community.cjs.js.GridPanel.showLoadingOverlay @ ag-grid-community.cjs.js:26037
push../node_modules/ag-grid-community/dist/ag-grid-community.cjs.js.GridApi.showLoadingOverlay @ ag-grid-community.cjs.js:27077
.........
///////////////////
Current behaviour: prints an uncatchable console error when grid.showLoadingOverlay is called.
**Current behavior**
<!-- Describe how the bug manifests. -->
when setting the Grid.ShowOverlay on a grid that is loaded then destroyed
you encounter a console error
**Expected behavior**
<!-- Describe what the behavior would be without the bug. If possible back this up with our docs/examples if possible-->
**Please tell us about your environment:**
<!-- Operating system, IDE, package manager, HTTP server, ... -->
* **ag-Grid version:** X.X.X
<!-- Check whether this is still an issue in the most recent ag-Grid version -->
yes
* **Browser:**
<!-- Run `navigator.userAgent` in console of all of the browsers where this could be reproduced -->
* **Language:** [all | TypeScript X.X | ES6/7 | ES5]
typescript (.ts)
Hopefully that helps. Feel free to contact me if you would require more information about this.
Email: <EMAIL>
Sincerely,
Answers:
username_1: Per a similar issue here:
https://github.com/ag-grid/ag-grid/issues/1778
A possible work around is as follows:
```
//@ts-ignore destroyed is private...
if (false === gridApi.gridCore.destroyed) {
gridApi.showLoadingOverlay();
}
``` |
marques-work/trello-to-github-projects-migration | 455345260 | Title: Revisit copy in terminal to make sure expectations are clear
Question:
username_0: https://docs.google.com/document/d/1XOn2nTE2em6OM5K0sk8Vc3X5GSeItuB4nj1OWRbXZcQ/edit?usp=sharing
Status: Issue closed
Answers:
username_0: @loudaTW can you clarify the work for this card?
username_0: @ibnc Sure, we know the copy was a bit rough just to get the testing version out, and that what was provided in the terminal could be polished. I don't have much more clarification for you on that, but we should ensure that what is there makes sense to the user, that any URLs are correct and that all formatting is polished. I can help with any editing, but I think since a lot of this is meant to be read by developers it may make sense to run this by the group too. Does that help? If not, happy to jump on a call.
username_0: @loudaTW gotcha. I'll use my best judgement :)
username_0: @loudaTW I was thinking that the messaging around "you may use GoCD server at: ..." could be changed. It sort of implies that the server has completely started when it's not actually fully started. I was thinking something along the lines of "While you wait for the server to fully start, you can go here: link" or something along those lines. Thoughts?
username_0: @ibnc We definitely saw that as a confusing message the first time around. I think your suggestion makes a lot of sense. I'd prefer for them to know that they have to wait for that button to load instead of being misled by the terminal message. I don't want them to stray too far off the journey, so if we tell them to go somewhere it should probably be something like "... you can read about GoCD concepts while your server loads ..." or similar. What do you think?
onestlatech/widget-engreve | 554039005 | Title: Statistics
Question:
username_0: When forking, I simply removed Google Analytics, for ethical reasons and for simplicity.
To make it possible to fill in [the framaform](https://framaforms.org/onestlatech-appel-au-blocage-numerique-contre-le-projet-de-reforme-des-retraites-le-vendredi-24) linked to the [call to block](https://onestla.tech/publications/appel-action-24-janvier/), it could be interesting to integrate an alternative, such as [framaclic](https://framaclic.org), which would let us identify the domain names that embed the widget.
## TODO
- [ ] agree on the solution to use (cf. [degooglisons-internet.fr](https://degooglisons-internet.org/fr/alternatives#ganalytics))
- [ ] identify the isolation and data-protection issues (iframe? GDPR?)
- [ ] alpha release and testing
- [ ] release
Answers:
username_0: framaclic / dolomon does not work for our case, since it cannot give the visited domain from a single tracking code.
username_1: Indeed, it would be a good idea for automatically knowing which sites use the widget.
That said, it also raises questions; we would have to properly inform the user and offer a way to disable it.
username_0: Disabling GA was planned from the start:
https://github.com/username_0/widget-engreve/commit/58ca88541552418b6aca3bae5936a8482e6c3374#diff-8dbf9328dd4ff71d8fc4212ce7ce59cfR33
We would of course put it back in place.
The problem today is mostly finding an affordable alternative. Matomo looks very good, but is [too expensive](https://matomo.org/pricing/) as long as we don't have infrastructure on which to deploy our own instance.
Should we make a temporary concession and use GA in the meantime?
username_2: With @cchaudier we're offering to host it on our Matomo. @username_0 I'll create an account for you, and you can tell me if that works for you?
username_0: thx @username_2
I'll try to put together a POC soon, but I'm short on time for the project this month. If I'm not moving fast enough for your taste, don't hesitate to start a PR.
username_0: Even though having statistics could be interesting to gauge the scale of the mobilization, my experience with Matomo in the absence of any tracking now leads me to conclude that the data it would let us collect would be of too little use. So I'm closing this issue, but don't hesitate to reopen it if someone wants to pick the topic back up.
Status: Issue closed
|
ossrs/srs | 344069019 | Title: Chinese interface
Question:
username_0: Hi.
I've installed fresh SRS and whole web interface is in Chinese. Is there any way to switch to English?
Previous issue #1180 has response but it's not understandable. Moreover if I try to open console on my host (http://ubuntu:1985/console) I get "Not found" page
Answers:
username_1: got exactly the same problem ...
Status: Issue closed
username_2: The console is a js project at https://github.com/ossrs/srs-ngb
However, you can use http://ossrs.net:1985/console to connect to your SRS server.
username_2: You can use HTTP API directly: http://www.ossrs.net:1985/api/v1/summaries
username_2: SRS3 may translate srs console project.
username_2: Fixed.
Status: Issue closed
|
bigcommerce/cornerstone | 538506880 | Title: Ratings don't display on category pages displayed as list
Question:
username_0: ### Expected behavior
If the theme settings are set to show ratings, then ratings should display regardless of whether category pages are displayed as a list or grid.
Grid example working on demo: https://cornerstone-light-demo.mybigcommerce.com/shop-all/
### Actual behavior
Ratings do not display when categories are displayed as a list.
### Steps to reproduce behavior
Change categories to show products in a list, not a grid.
Located code in templates/components/products/list-item.html that should be calling this:
`{{#if show_rating}}
<p class="listItem-rating">{{> components/products/ratings rating=rating}}</p>
{{/if}}`
Removing "if" statement only loads empty stars on all products in list on category page, even on products known to have ratings.
Answers:
username_1: Could you share your theme's ZIP file for comparing it with the base Cornerstone template? I haven't been able to reproduce this on a sandbox. I tried removing the `{{#if show_rating}}` section mentioned and ratings no longer appear on category pages. There's likely another detail we haven't discovered yet causing this. |
spatie/laravel-medialibrary | 291883377 | Title: Mass insert/upload without a form
Question:
username_0: I use Intervention Image on my site and want to switch to medialibrary. I've got 1000+ images so it's not exactly feasible to manually upload each image through the web.
I've got a table with all of the file names, so those could be simply be inserted into the media table. But what about the file structure and the model_type, collection_name, mime_type and size? Is there a way for medialibrary to take care of that automatically?
Status: Issue closed
Answers:
username_1: You don't have to manually upload the images. Just write a script or command to import the images with code.
```php
foreach($images as $image) {
$yourModel->addMedia($image)->toMediaCollection();
}
```
Check out the docs to learn all the methods the medialibrary provides:
https://docs.spatie.be/laravel-medialibrary/v7/introduction
username_0: I did this. But something went wrong. So I deleted everything from the database and the copied files. It seems like it won't re-add the same image twice. How do I fix that? |
hankmorgan/UnderworldExporter | 273741114 | Title: Improve textures
Question:
username_0: Sorry because this is not an issue.
I would like to know if it's possible (with graphic design knowledge but not programming skills) to replace textures in the game with improved versions I could create.
If possible I would like to start replacing some basic walls, ceiling, floor textures, doors and so on.
Thanks again for this great job.
Xermán.
Status: Issue closed
Answers:
username_0: I just found the indications on the documentation, will close this, and sorry for wasting your time.
I will give it a try ASAP.
Thanks again. |
interrogator/corpkit | 112582953 | Title: Multilingual support
Question:
username_0: *corpkit* is currently oriented toward English, but nothing stops at least some features from being extended to other languages. I should be able to get around to the basics (encodings, as well as multilingual tokenisation) soon.
Answers:
username_1: I tried to collect some tools in very raw mode into Wiki: https://github.com/username_0/corpkit/wiki/Multilanguage-tokenization-tools
If you think it's useful, I can extend it to verify certain selection of languages, or to create a single huge matrix with progress for each language by technology (tokenization - stemming - tagging - syntactic parsing).
username_0: Ah, great start. Thanks so much, @username_1. All my work is currently in English, so I find it hard to stay up to date on resources for other languages.
I suppose building a list of resources would be useful, as we could just have a column for 'implemented in corpkit?', so that it could also be used as documentation for end-users.
As for how to implement these things in the actual code, I'm a little less certain. In the current version of the GUI, there is a 'Preferences' popup. I was thinking I could start by just adding 'Language' to that, and allow the user to select any language there are dedicated resources for. Then, I suppose we could just have a dict of `{language: {'parser': function1, 'lemmatiser': function2}}` that specified which tokeniser/lemmatiser/tagger/parser should be called.
Currently, the data types that can be searched are plain text, tokens, trees and dependencies. A possible issue is that lemmatisation in most languages is not possible without knowing parts of speech, but in corpkit, POS tagging is only done by CoreNLP, which has limited multilingual support.
My idea to solve this is to add dedicated POS tagging (which for English could be done also via CoreNLP). Right now, there is a 'parse' and 'tokenise' button. Perhaps another needs to be added, 'POS tag', which creates a list of tuples `(word, pos-tag)`. I imagine a lot of language now have POS taggers, but not full parsers.
Thoughts?
username_1: I tried to make the list of utilities that can be used.
Fortunately, the number of utilities is greater than the number of interfaces used.
I agree about settings (each language can have its own toolkit).
The toolkit should include tokenizer, stemmer, ideally - lemmatizer and morphological and syntactic parsers.
Language should be a feature of a corpus, not program environment (to say nothing about bilingual corpora, which are complicated to handle; I think if I ever need them, I'll make a preprocessor to split into two plus some viewing tools to find parallel sentences). Anyway it isn't the urgent task, I think.
Sure, good lemmatization is impossible out of context.
While morphological features differ largely between languages (and parsers), they still have a single key feature: each word has a set of properties.
In English where words aren't declined, it's quite simple (a 2-3 letter POS tag, adopted from PennTreeBank, e.g. cartoons: NNS).
In German, for example, the tag will be bigger to include part of speech, immutable properties (gender) and variable ones (case, number).
Syntactic parsing (dependency tree) varies largely from parser to parser; you can either treat it as block of text assigned to each sentence, or try to process the trees, brackets and/or any other output. I think, from your simplistic and robust approach, it's the researcher who must know parser output, and Corpkit should only provide text search over it.
Do you want me to make common interface for language-specific utilities?
However, it might be an epic fail and require significant redesign, say, for Oriental languages.
username_0: @username_1 You're right. Language can be a feature of a corpus. Project settings are stored in `settings.ini` for each project ... this could easily contain a list of tuples for `(corpus, language`), just as it already stores `corpus, speakers`. Upon opening, any language dependent features are switched to that language via the dict object mentioned in my post above. I think the easiest way would be a popup when the user hits 'Add corpus' --- I can do that. We'd just have to write a bunch of wrappers that made sure any additional parsers, stemmers etc were called exactly like the existing ones, and gave the exact equivalent output.
Currently, when CoreNLP isn't detected, it's downloaded. We could reuse that code to download the Russian/Estonian stuff. Shouldn't be too tough, now that CoreNLP is downloading alright.
I can see what you mean by the fact that the annotation will simply be a list of words and their properties. A serious issue though is that we don't have a query language for searching this morphologically annotated data. What resources already exist for interrogating text that has been marked up in this way? Is there something we could use out of the box that saves us writing ten search functions for 'match token by pos', 'match pos by token', 'match token by lemma', etc etc? Have you ever used [CQL](https://www.sketchengine.co.uk/xdocumentation/wiki/SkE/CorpusQuerying)? We could perhaps use that for this morpho data, but retain the existing search types for the English dependencies.
Actually, I'm thinking, how about we make a 'multilingual' branch of corpkit to hack away on, as it won't be ready for some time. Also, how are you finding the code? Just let me know what needs more comments and I'll go back and do it!
username_1: Sure I heard about CQL, although it was about 7 years ago. There are many changes in how it's processed.
Meanwhile I see that _manatee_ development has diverged. I used the release from 2008 by Uni of Brno team, while it seems to be a proprietary development by a British company.
I am just a bit afraid of following syntax of a language that is updated incrementally, without certain versions and releases (i.e. unlike TMX, where there is version 1.4 that is standard and you are sure that any translation memory of that version can be imported, CQL might become incompatible over releases).
username_0: Newer CoreNLP releases have better support for German and French, nothing integrated into corpkit yet though.
username_0: @username_1 A short update on this.
Recently I decided to deprecate any searching of data that is not in a [CONLL-U-like](http://universaldependencies.org/format.html) format. Now, parsing or tokenising puts the results in CONLL-U. The format is lightweight, human-editable, extensible and in use elsewhere. The main advantage is that now, the same code (`corpkit/conll.py`) does all the searching, no matter if your text is parsed or not. The only difference is that you can't search parse trees, governors, dependents etc., because they don't exist.
This has multilingual implications. Right now, a person can do:
```python
corpus.tokenise(lang='en')
```
which will select the English NLTK tokeniser, the English WordNet Lemmatiser and the English NLTK POS tagger. It should be very clear from looking at `corpkit/tokenise.py` how multilingual annotators could simply be added to the `dicts` from which annotators are selected:
```python
def plaintext_to_conll(inpath, postag=False, lemmatise=False,
lang='en', metadata=False, outpath=False,
nltk_data_path=False, speaker_segmentation=False):
"""
Take a plaintext corpus and sent/word tokenise.
:param inpath: The corpus to read in
:param postag: do POS tagging?
:param lemmatise: do lemmatisation?
:param lang: choose language for pos/lemmatiser (not implemented yet)
:param metadata: add metadata to conll (not implemented yet)
:param outpath: custom name for the resulting corpus
:param speaker_segmentation: did the corpus has speaker names?
"""
import nltk
import shutil
import pandas as pd
from corpkit.process import saferead
fps = get_filepaths(inpath, 'txt')
# IN THE SECTIONS BELOW, WE COULD ADD MULTILINGUAL
# ANNOTATORS, PROVIDED THEY BEHAVE AS THE NLTK ONES DO
tokenisers = {'en': nltk.word_tokenize}
tokeniser = tokenisers.get(lang, nltk.word_tokenize)
if lemmatise:
from nltk.stem.wordnet import WordNetLemmatizer
lmtzr = WordNetLemmatizer()
lemmatisers = {'en': lmtzr}
lemmatiser = lemmatisers.get(lang, lmtzr)
if postag:
# nltk.download('averaged_perceptron_tagger')
postaggers = {'en': nltk.pos_tag}
tagger = postaggers.get(lang, nltk.pos_tag)
```
@username_1, Do you know which functions/methods I'd need to add here to support some of the other languages you've mentioned?
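For illustration of the kind of entry I mean, here is a hedged sketch; nothing below exists in corpkit yet, and the only real API used is NLTK's `language` argument to its Punkt-based tokeniser:

```python
from functools import partial
import nltk

lang = 'de'  # would come from the project's settings.ini in practice

# Hypothetical multilingual entries (not in corpkit): Punkt ships models for
# several European languages, so tokenisation can reuse the same NLTK call.
tokenisers = {'en': nltk.word_tokenize,
              'de': partial(nltk.word_tokenize, language='german'),
              'et': partial(nltk.word_tokenize, language='estonian')}
tokeniser = tokenisers.get(lang, nltk.word_tokenize)
```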
changkun/occamy | 955892322 | Title: Change to a GIO client
Question:
username_0: **Is your feature request related to a problem? Please describe.**
Maintaining an HTML5 canvas can be done easily with Gio because of how Gio works.
The advantage is that clients then work in browsers, on desktop and on mobile from the exact same code base.
**Describe the solution you'd like**
Write a GIO based client for Occamy.
**Describe alternatives you've considered**
https://github.com/deluan/bring also uses a canvas, but is not as performant or portable as using gio.
**Additional context**
Making a client with Gio that can quickly swap in images and also sense mouse, touch and key presses is easy, because that is inherently synergistic with how Gio is designed.
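To give a feel for the shape of such a client, here is a rough skeleton of a Gio event loop. Gio's API shifts between releases, so treat it as a sketch rather than working Occamy code; drawing the remote framebuffer and collecting pointer/key input would happen inside the frame event.

```go
package main

import (
    "log"
    "os"

    "gioui.org/app"
    "gioui.org/io/system"
    "gioui.org/layout"
    "gioui.org/op"
)

func main() {
    go func() {
        w := app.NewWindow()
        var ops op.Ops
        for e := range w.Events() {
            switch e := e.(type) {
            case system.DestroyEvent:
                if e.Err != nil {
                    log.Fatal(e.Err)
                }
                os.Exit(0)
            case system.FrameEvent:
                // One layout context per frame: lay out widgets, draw the
                // latest remote-desktop image, and gather input here.
                gtx := layout.NewContext(&ops, e)
                e.Frame(gtx.Ops)
            }
        }
    }()
    app.Main()
}
```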
Here are the links. I would be happy to collaborate on this. Been using gio for a while now and its really getting better and better.
https://gioui.org/
Code:
https://github.com/gioui/gio
OR
https://git.sr.ht/~eliasnaur/gio
Gio works using only a GPU-accelerated canvas and is 100% Go.
You can compile the same code for web (WASM), desktop or mobile.
Examples:
https://github.com/gioui/gio-example/
Extension components:
https://github.com/gioui/gio-x
---
Ironically, Gio also makes it easy to share an app over VNC. For example, see this headless rendering example:
https://github.com/gioui/gio-example/tree/main/opengl
Answers:
username_0: also the youtube community meetup videos are a good resource
https://www.youtube.com/channel/UCzuKUnKK5gAFJKNyA1imIHw
username_1: That's also a good suggestion. Using Gio would be great too.
username_0: Hey @username_1
I am really amazed how good Gio has become now.
It's a joy to develop and debug on. On desktop you just `go run .` and you're debugging the Go client and your Go server.
Good example btw:
https://github.com/gioverse/chat
username_0: Regarding [this issue](https://github.com/username_1/occamy/pull/51) and the need for hotkeys:
your hotkey approach is one way.
https://github.com/jkvatne/gio-v also has focus moving between fields working with Esc and Tab. I don't know if it can do custom key combinations. Hotkeys seem to be key combinations?
bioconda/bioconda-recipes | 189865475 | Title: simulate-travis.py failure reporting
Question:
username_0: I had forgotten to add my current user to the docker group, and simulate-travis.py was failing with exit code 127 (command not found). This took me quite a while to figure out.
I guess it would be helpful to report the stderr of whatever command returns a non-zero exit code.
Answers:
username_0: Or another instance, now it's probably lack of online connectivity, because the PROXY env vars are not passed on to docker build:
```
(.venv) marius@bardin-gx:~/src/bioconda-recipes$ BIOCONDA_UTILS_TAG="exit-code-check" ./simulate-travis.py --packages cityhash
+ bioconda-utils build recipes config.yml --docker --loglevel=info --mulled-test --packages cityhash
INFO bioconda_utils.docker_utils:_pull_image(302): BIOCONDA DOCKER: Pulling docker image condaforge/linux-anvil
INFO bioconda_utils.docker_utils:_pull_image(308): BIOCONDA DOCKER: Done pulling image
Traceback (most recent call last):
File "/home/marius/conda3/bin/bioconda-utils", line 11, in <module>
load_entry_point('bioconda-utils==0.9.0', 'console_scripts', 'bioconda-utils')()
File "/home/marius/conda3/lib/python3.5/site-packages/bioconda_utils/cli.py", line 172, in main
argh.dispatch_commands([build, dag, dependent])
File "/home/marius/conda3/lib/python3.5/site-packages/argh-0.26.2-py3.5.egg/argh/dispatching.py", line 328, in dispatch_commands
File "/home/marius/conda3/lib/python3.5/site-packages/argh-0.26.2-py3.5.egg/argh/dispatching.py", line 174, in dispatch
File "/home/marius/conda3/lib/python3.5/site-packages/argh-0.26.2-py3.5.egg/argh/dispatching.py", line 277, in _execute_command
File "/home/marius/conda3/lib/python3.5/site-packages/argh-0.26.2-py3.5.egg/argh/dispatching.py", line 260, in _call
File "/home/marius/conda3/lib/python3.5/site-packages/bioconda_utils/cli.py", line 92, in build
conda_build_version=conda_build_version,
File "/home/marius/conda3/lib/python3.5/site-packages/bioconda_utils/docker_utils.py", line 296, in __init__
self._build_image()
File "/home/marius/conda3/lib/python3.5/site-packages/bioconda_utils/docker_utils.py", line 339, in _build_image
p = utils.run(cmd)
File "/home/marius/conda3/lib/python3.5/site-packages/bioconda_utils/utils.py", line 68, in run
raise e
File "/home/marius/conda3/lib/python3.5/site-packages/bioconda_utils/utils.py", line 64, in run
p = sp.run(cmds, stdout=sp.PIPE, stderr=sp.STDOUT, check=True, env=env)
File "/home/marius/conda3/lib/python3.5/subprocess.py", line 708, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['docker', 'build', '-t', 'tmp-bioconda-builder', '/tmp/tmp7h4t2_gw']' returned non-zero exit status 1
Traceback (most recent call last):
File "./simulate-travis.py", line 150, in <module>
sp.run(['scripts/travis-run.sh'], env=env, universal_newlines=True, check=True)
File "/home/marius/conda3/lib/python3.5/subprocess.py", line 708, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['scripts/travis-run.sh']' returned non-zero exit status 1
```
I guess the call to subprocess.run should be wrapped in a try except statement, so that docker's stdout and stderr can be reported?
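Something along these lines (just a sketch, not the actual `bioconda_utils.utils.run`) would at least surface what docker printed:

```python
import subprocess as sp

def run(cmds, env=None):
    try:
        return sp.run(cmds, stdout=sp.PIPE, stderr=sp.STDOUT, check=True,
                      env=env, universal_newlines=True)
    except sp.CalledProcessError as e:
        # show docker's combined stdout/stderr before re-raising, instead of
        # only the bare "returned non-zero exit status 1"
        print(e.stdout)
        raise
```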
username_1: `simulate-travis.py` is a Python script using subprocess to call the bash script `travis-run.sh` used on travis-ci, which in turn is calling the Python CLI tool `bioconda-utils`, which in turn is calling docker in a subprocess call (!). So the trick is to get the docker call to be more verbose rather than a try/except in simulate-travis. I think [this commit](https://github.com/bioconda/bioconda-utils/pull/35/commits/d05f55b7ca0de26fa660bf053d0fa7eefe58d88c) should do it.
In bioconda-utils I was originally sending all of `/usr/bin/env` over to docker, but that caused lots of subtle issues during testing. So I restricted the env vars. Clearly not a good idea for proxies. I'll expose a way of configuring simulate-travis to specify which additional env vars should be exported to docker.
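For example, an opt-in whitelist on the simulate-travis side could look roughly like this (the variable names are only an illustration):

```python
import os

# Only explicitly requested variables (e.g. proxy settings) reach the docker build.
PASSTHROUGH = ('http_proxy', 'https_proxy', 'HTTP_PROXY', 'HTTPS_PROXY', 'no_proxy')
env = {k: v for k, v in os.environ.items() if k in PASSTHROUGH}
```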
Status: Issue closed
username_0: yeah, I realize this is quite the corner-case, and a bit tricky to get right. I just switched to cloud machines, they're faster anyway. Thanks for working on this!
username_1: I think I'm going to hold off for now on the env-var-to-docker stuff, but please feel free to open a PR on bioconda-utils. |
react-component/select | 1032096890 | Title: Almost ineffective useMemo?
Question:
username_0: Hi everyone, I saw a useMemo with the children prop as its dependency, but React renews the children reference on every render, so is this useMemo only there to avoid recomputation on re-renders caused by internal state changes? I mean, whenever the parent component re-renders, Select recomputes as well, which is a little strange.
https://github.com/react-component/select/blob/95aa5b937b871c8eac0283d62d14d9b6c6acadac/src/generate.tsx#L454 |
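To make the question concrete, here is a reduced, hypothetical version of the pattern; the helper name is made up and this is not the actual rc-select source:

```tsx
import * as React from 'react';

// Stand-in for the real conversion work (assumption, not rc-select's source).
const convertChildrenToOptions = (c: React.ReactNode) => React.Children.toArray(c);

function useMergedOptions(children: React.ReactNode) {
  // `children` is a fresh element tree on every parent render, so this memo
  // recomputes whenever the parent re-renders; it only saves work when a
  // re-render comes from the component's own internal state and `children`
  // keeps the same reference.
  return React.useMemo(() => convertChildrenToOptions(children), [children]);
}
```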
soenkehahn/dead-code-detection | 143054219 | Title: Fails on Hoogle
Question:
username_0: Given a checkout of Hoogle from https://github.com/username_0/hoogle I get:
```
C:\Neil\hoogle>dead-code-detection --root Main -isrc
dead-code-detection: Could not find module `Input.Type'
Use -v to see a list of the files searched for.
C:\Neil\hoogle>dead-code-detection --root Main -isrc -v
unrecognized option `-v'
```
The reason is that `src/Hoogle.hs` imports Input.Type. But `src/Hoogle.hs` isn't anywhere on the dependency graph from Root, so is unnecessary - it's a totally dead file which doesn't compile.
Saying use `-v` and then `-v` not working is a bit sad, but I suspect that's the GHC API doing it for you.
Answers:
username_1: Here's a PR that gives a more sensible error message: #5. The output now clearly states that the error message is from ghc. I hope this will result in less confusion. (If you don't agree, I'd like to hear about better ideas.)
Regarding the behavior apart from the error messages: `dead-code-detection` intentionally doesn't use `ghc` to resolve the dependencies, but looks at the file system itself. This way it can find dead code in modules that aren't imported anywhere. (And as you point out, `src/Hoogle.hs` is a totally dead file, so the tool should find and report it.) I do consider this an important feature and I tend to think this is the right default behavior.
Would you want to have a way to tweak this default behavior? How exactly would you imagine that to work?
Apart from that: If you just delete `src/Hoogle.hs` `dead-code-detection` seems to be working fine.
username_0: Scanning the file system seems entirely reasonable. I'd be tempted to say that if a module can't be parsed you: 1) print out a message saying it can't be parsed, and 2) pretend it is a blank module with no uses and no exports. In this case seeing an error for this module, and then seeing useful results for the rest of the code, would be useful. In many of my projects I have `travis.hs` modules or similar that are only run on Travis, and where I might not have the necessary dependencies installed locally.
username_0: I should say the behaviour of continuing would be nice, but not essential - it sounds like its now clear how to avoid the issue, so not a big problem if you decide to do nothing.
username_1: In this case, the module could be parsed (or at least the import statements), but the imported modules couldn't be found.
In any case, I'll merge these changes.
Regarding the idea to ignore modules with errors while warning about them: I wonder if this would be bad behavior in the case where your project actually doesn't compile, because you have a type error or the like. You would see a warning that one module doesn't compile, but then all other modules that (transitively) import that module would also not compile. And also issue a warning, that they don't compile? So you end up with a bunch of warnings for every module?
I always liked the idea that a prerequisite of `dead-code-detection` is, that your project (and every file in it) compiles.
But I can perfectly see, why you'd want to have files like `travis.hs` (or other scripts) that you want to exclude from being analyzed. What do you think about an additional command line flag to ignore certain files. For example:
``` bash
dead-code-detection --root Main -isrc --ignore src/Hoogle.hs
```
username_0: An ignore flag works too, and is probably the simplest and most direct solution.
username_1: I've opened #8 and #7 instead of this ticket, therefore closing this. If you have further problems with `hoogle`, please, re-open.
Status: Issue closed
|
wmo-im/BUFR4 | 848344174 | Title: New BUFR descriptor and code table for Retrieval identifier
Question:
username_0: # Branch
Add when created.
# Summary and purpose
ECMWF is proposing new BUFR descriptor for Retrieval identifier and related code table.
# Authors
<NAME> (ECMWF)
# Action proposed
The team is kindly asked to review and approve the contents for inclusion within the next update to the WMO Manual on Codes.
# Discussions
ECMWF is preparing for assimilation of lidar observations from satellite. In order to identify different retrievals we need a new descriptor and code table to define it.
# Detailed proposal
**1. Add a new entry in the identification class of table B (0-01-154) and a new code table to describe the retrial**
F X Y | ELEMENT NAME | UNIT | SCALE | REFERENCE VALUE | DATA WITH (bits)
-- | -- | -- | -- | -- | --
0-01-154 | Retrieval identifier | Code table | 0 | 0 | 4
CODE TABLE 0-01-154
0-01-154 Retrieval identifier
Code Figure | Retrieval identifier
-- | --
0 | SCA
1 | SCA mid-bin
2 | MCA
3 | Group
4| ICA
5-15| Reserved
16| Missing
Answers:
username_1: per meeting discussion: Abbreviation will be spelled out, data width will be changed, in validation, branch will be created
username_2: @username_0 @username_1 I confirm that this proposal is finalized and the branch is updated, and move this issue to "Validated" status.
username_1: @username_2 @username_0 is "Standard Correct Algorithm" a proper noun?
Status: Issue closed
|
SeldonIO/seldon-core | 623310785 | Title: pass ServiceAccountName in predictor to prepackaged servers initContainer
Question:
username_0: To use the new fine-grained AWS IAM roles for service accounts feature with the default pre-packaged servers SeldonDeployment example, the `ServiceAccountName` (defining a service account that has the new `eks.amazonaws.com/role-arn` annotation, as described here https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/) needs to be passed to the generated initContainer (or any of the containers in the pod).
The EKS annotation doesn't work if the SeldonDeployment is defined as in the documentation here:
```
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
name: tfserving
namespace: testseldon
spec:
name: mnist
predictors:
- componentSpecs:
graph:
children: []
implementation: TENSORFLOW_SERVER
modelUri: s3://9y167l-eks-ml-data/mnist/tf_saved_model
serviceAccountName: iam-s3-sa
name: mnist-model
parameters:
- name: signature_name
type: STRING
value: predict_images
- name: model_name
type: STRING
value: mnist-model
name: default
replicas: 1lt
replicas: 1
```
This is because the `serviceAccountName` is only used in the `model_initializer_injector.go` controller to get the secrets defined in the Service Account, and isn't attached to the containers themselves so that EKS can do its magic...
The workaround for now is to explicitly define all the containers and use v0.3.0 or above of `gcr.io/kfserving/storage-initializer` (example below). It would be nice to keep using the pre-packaged servers with this new feature.
```
{
"apiVersion": "machinelearning.seldon.io/v1alpha2",
"kind": "SeldonDeployment",
"metadata": {
"labels": {
"app": "seldon"
},
"name": "tfserving-mnist",
"namespace": "testseldon"
},
"spec": {
"name": "tf-mnist",
"predictors": [
{
"componentSpecs": [{
"spec": {
"serviceAccountName": "iam-s3-sa",
"volumes": [
{
"name": "podinfo",
"downwardAPI": {
"items": [
{
"path": "annotations",
[Truncated]
},
{
"name":"model_input",
"type":"STRING",
"value":"images"
},
{
"name":"signature_name",
"type":"STRING",
"value":"predict_images"
}
]
},
"name": "mnist-tfserving",
"replicas": 1
}
]
}
}
```<issue_closed>
Status: Issue closed |
YoeDistro/yoedistro.org | 568980596 | Title: switch to another newsletter provider
Question:
username_0: tinyletter (mailchimp) is not responding to my support requests, so I think it is time to switch to someone else like convertkit.
Answers:
username_0: Although ConvertKit is a much more capable platform than TinyLetter, I think TinyLetter will suffice for our purposes (simply keeping people up to date with what is going on). We're not trying to sell stuff, etc. Additionally, TinyLetter gives us 5000 subscribers, while with ConvertKit we'd need to start a paid subscription at 1000 subscribers or fewer. Since we don't have any sponsors for yoedistro (other than our own efforts), I think going with TinyLetter makes sense from a cost perspective.
@username_2, @username_1 -- let me know if you think any differently.
username_0: Since tinyletter is gracious enough to provide this service for free, perhaps we should consider keeping a link, but perhaps make it a lot smaller, and somehow indicate tinyletter is only powering the newsletter, not the entire site/project.
username_1: @username_0 I agree; sounds like TinyLetter works. And I agree, the TL link could be misleading.
Possibilities:
- Smaller font for the TL link, as @username_0 suggests
- Lighter color for the TL link
- Increase space between the first paragraph and the "Looking for tips" pitch, which clarifies that the form is separate
- Move the TL link closer to the "Sign me up!" button.
username_2: I agree, let's keep using TinyLetter if this is the case, and I agree about mentioning them.
Status: Issue closed
|
UBC-MDS/DSCI_522_GROUP_312 | 560219821 | Title: [URGENT] Milestone 3 Script/Pipeline error
Question:
username_0: Makefile fails on script 3 (EDA) with error:
```
python scripts/eda_v2.py --train_path='data/train.csv' --out_folder_path='results/eda_charts/'
Traceback (most recent call last):
File "scripts/eda_v2.py", line 139, in <module>
main(train_path = opt["--train_path"], out_folder_path = opt["--out_folder_path"])
File "scripts/eda_v2.py", line 35, in main
train = pd.read_csv(train_path)
File "C:\Users\7ks42\Anaconda3\lib\site-packages\pandas\io\parsers.py", line 685, in parser_f
return _read(filepath_or_buffer, kwds)
File "C:\Users\7ks42\Anaconda3\lib\site-packages\pandas\io\parsers.py", line 457, in _read
parser = TextFileReader(fp_or_buf, **kwds)
File "C:\Users\7ks42\Anaconda3\lib\site-packages\pandas\io\parsers.py", line 895, in __init__
self._make_engine(self.engine)
File "C:\Users\7ks42\Anaconda3\lib\site-packages\pandas\io\parsers.py", line 1135, in _make_engine
self._engine = CParserWrapper(self.f, **self.options)
File "C:\Users\7ks42\Anaconda3\lib\site-packages\pandas\io\parsers.py", line 1917, in __init__
self._reader = parsers.TextReader(src, **kwds)
File "pandas\_libs\parsers.pyx", line 382, in pandas._libs.parsers.TextReader.__cinit__
File "pandas\_libs\parsers.pyx", line 689, in pandas._libs.parsers.TextReader._setup_parser_source
FileNotFoundError: [Errno 2] File b"'data/train.csv'" does not exist: b"'data/train.csv'"
```
Also, having the Makefile call python3 instead of just python is causing problems on my machine. I don't think it *should* be causing problems so I'm checking to see if this is a weird configuration issue, and what best practice is.
Answers:
username_1: As discussed, I don't get this error. George and I are both running OS Catalina and Eithar is running a previous OS. Firas will try running on his machine (he also is running OS Catalina) and we will update from there.
username_0: Update from Firas: the problem is that the eda command calls python3 while the others call python. Change this and it will run.
username_1: Addressed in #62
Status: Issue closed
|
cibernox/ember-power-select | 907379213 | Title: Can’t differentiate between focus coming outside and focus coming after the click by option
Question:
username_0: One of the most straightforward use cases is to automatically open the dropdown component on focus if the focus operation has been initiated from outside.
[Power Select Focus handling](https://ember-power-select.com/docs/action-handling/) documentation says that we can check the 'event.[relatedTarget](https://developer.mozilla.org/en-US/docs/Web/API/FocusEvent/relatedTarget)' to know the element that had the focus before. Code example suggests the following:
```
handleFocus(select, e) {
console.debug('EPS focused!');
if (this.focusComesFromOutside(e)) {
select.actions.open();
}
}
// Methods
focusComesFromOutside(e) {
let blurredEl = e.relatedTarget;
if (isBlank(blurredEl)) {
return false;
}
return !blurredEl.classList.contains('ember-power-select-search-input');
}
```
If the `relatedTarget` is not null then we can assume that focus comes from outside and trigger dropdown open event.
But the weak part comes into play when the user focuses the dropdown after a non-focusable element (using the Tab key),
or when the dropdown is the first element in the form. In this case the `relatedTarget` will be `null`, and the dropdown element **won't be opened.**
Also, when the user picks a dropdown option from the list, the input is refocused. The `relatedTarget` will be `null` as well.
So **we can't distinguish between the case when focus comes from outside to the dropdown trigger while no element was previously focused, and the case when it comes back after clicking a dropdown option.**
Please look at the [twiddle example](https://ember-twiddle.com/1efa3ace670b74631a99ebe53b7ee827?openFiles=components.city-picker%5C.js%2Ctemplates.components.city-picker%5C.hbs):
1. Press 'tab' key focus on city-picker
**Expected result:** dropdown is opened. **Actual result**: dropdown is not opened.
2. Press 'tab' key to focus on the next input
3. Press 'shift+tab' key to return back to the city-picker
Dropdown is opened now.
**Behaviour is not consistent.**
I suggest the following solution:
Make the dropdown option a focusable element by setting `tabindex="-1"`. On blur/focus events, check `event.relatedTarget` to identify that we are dealing with an inner interaction (not an outside element):
```
function relatedTargetIsTimeOption(relatedTarget: EventTarget | null): boolean {
if (!relatedTarget) {
return false;
}
return (relatedTarget as HTMLElement).classList.contains('ember-power-select-option');
}
```
And use it in blur/focus events:
```
@action
onFocus(dropdown: Select, event: FocusEvent): void {
if (!relatedTargetIsTimeOption(event.relatedTarget)) {
dropdown.actions.open();
}
}
@action
onBlur(_dropdown: Select, event: FocusEvent): void {
if (relatedTargetIsTimeOption(event.relatedTarget)) {
return;
}
// do required actions on blur
}
```
What do you think? |
TCraig7/job-tracker | 353629080 | Title: A User is Taken to The Job Show Page After Submitting a Comment
Question:
username_0: As a single user,
When I am on a job show page and I have submitted a comment, I am taken to the job show page where I can see my comment listed above previous comments (so in reverse chronological order.)<issue_closed>
Status: Issue closed |
dadhi/FastExpressionCompiler | 369911900 | Title: Problems with ref parameters
Question:
username_0: I want to use FEC in [my serializer](https://github.com/username_0/Ceras).
However there are multiple cases where FEC seems to have problems with ref parameters.
How to reproduce:
- Clone Ceras
- Add conditional symbol `FAST_EXP` in the project options (will enable FEC in `DynamicObjectFormatter` and `ReferenceFormatter`)
- Start the "LiveTesting" project, if everything works it will run through with no exceptions or asserts getting triggered.
Answers:
username_1: Hi, what version of DryIoc did you use?
username_0: I copied the FastExpressionCompiler.cs from the master branch today.
username_1: Thanks. That means more work is needed.
@username_2, whould you have time to look?
username_2: i will look
username_1: I did not run your tests yet.
But, if you will fidentify and distill some of the failed expressions it will speed up things.
username_0: Alright, this is the smallest example I could come up with.
It does not assign to ref parameters:
https://hastebin.com/esesaxetot.cs
I do not know if this is the only issue, but it definitely is a blocking bug for me (obviously need to get the updated values that the function wrote to the ref parameters)
username_1: Thank you.
username_2: I have reduce the problem with class with one field from constant as on parameter on my branch. Same struct works fine. So one step from fix. Will fix to today or during next several days.
Status: Issue closed
|
sass/sass | 335028113 | Title: Deprecate support for @-moz-document
Question:
username_0: Firefox is [dropping support for `@-moz-document`](https://www.fxsitecompat.com/en-CA/docs/2018/moz-document-support-has-been-dropped-except-for-empty-url-prefix/), and Sass implementations should do so as well. This doesn't have to be a quick change, both because Firefox 61 isn't even out yet and most users are still on Firefox 59, and because continuing to support it doesn't cause much pain other than implementation complexity. But we should probably get deprecation warnings out at some point so that the next breaking release of each implementation can drop support entirely.
Note that an empty `url-prefix` function will still be allowed for a while, so we shouldn't deprecate that yet. |
thegenemyers/DALIGNER | 50777882 | Title: memory error
Question:
username_0: Every once in a blue moon, I am getting the following error. The machine has plenty of RAM (512GB) and so the error is possibly related to dalign itself.
Do you need any more information?
---------------------------------------------------------------------------
*** glibc detected *** ../../DALIGN-code/Dalign/daligner: malloc(): memory corruption: 0x000000000074e4e0 ***
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x76a16)[0x7fac475f3a16]
/lib/x86_64-linux-gnu/libc.so.6(+0x79a83)[0x7fac475f6a83]
/lib/x86_64-linux-gnu/libc.so.6(__libc_malloc+0x70)[0x7fac475f88a0]
../../DALIGN-code/Dalign/daligner[0x411606]
../../DALIGN-code/Dalign/daligner[0x405181]
../../DALIGN-code/Dalign/daligner[0x401958]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd)[0x7fac4759bead]
../../DALIGN-code/Dalign/daligner[0x401b6d]
======= Memory map: ========
00400000-00417000 r-xp 00000000 08:04 127706273 /home/genemyers/Genome-assemblies/Elec-eel/DALIGN-code/Dalign/daligner
00616000-00617000 rw-p 00016000 08:04 127706273 /home/genemyers/Genome-assemblies/Elec-eel/DALIGN-code/Dalign/daligner
0074e000-007a1000 rw-p 00000000 00:00 0 [heap]
7fac40000000-7fac40021000 rw-p 00000000 00:00 0
7fac40021000-7fac44000000 ---p 00000000 00:00 0
7fac45363000-7fac45378000 r-xp 00000000 08:01 1967226 /lib/x86_64-linux-gnu/libgcc_s.so.1
7fac45378000-7fac45578000 ---p 00015000 08:01 1967226 /lib/x86_64-linux-gnu/libgcc_s.so.1
7fac45578000-7fac45579000 rw-p 00015000 08:01 1967226 /lib/x86_64-linux-gnu/libgcc_s.so.1
7fac45579000-7fac4557a000 ---p 00000000 00:00 0
7fac4557a000-7fac45d7a000 rw-p 00000000 00:00 0
7fac45d7a000-7fac45d7b000 ---p 00000000 00:00 0
7fac45d7b000-7fac4657b000 rw-p 00000000 00:00 0
7fac4657b000-7fac4657c000 ---p 00000000 00:00 0
7fac4657c000-7fac46d7c000 rw-p 00000000 00:00 0
7fac46d7c000-7fac46d7d000 ---p 00000000 00:00 0
7fac46d7d000-7fac4757d000 rw-p 00000000 00:00 0
7fac4757d000-7fac476ff000 r-xp 00000000 08:01 1977025 /lib/x86_64-linux-gnu/libc-2.13.so
7fac476ff000-7fac478ff000 ---p 00182000 08:01 1977025 /lib/x86_64-linux-gnu/libc-2.13.so
7fac478ff000-7fac47903000 r--p 00182000 08:01 1977025 /lib/x86_64-linux-gnu/libc-2.13.so
7fac47903000-7fac47904000 rw-p 00186000 08:01 1977025 /lib/x86_64-linux-gnu/libc-2.13.so
7fac47904000-7fac47909000 rw-p 00000000 00:00 0
7fac47909000-7fac4798a000 r-xp 00000000 08:01 1977031 /lib/x86_64-linux-gnu/libm-2.13.so
7fac4798a000-7fac47b89000 ---p 00081000 08:01 1977031 /lib/x86_64-linux-gnu/libm-2.13.so
7fac47b89000-7fac47b8a000 r--p 00080000 08:01 1977031 /lib/x86_64-linux-gnu/libm-2.13.so
7fac47b8a000-7fac47b8b000 rw-p 00081000 08:01 1977031 /lib/x86_64-linux-gnu/libm-2.13.so
7fac47b8b000-7fac47ba2000 r-xp 00000000 08:01 1968929 /lib/x86_64-linux-gnu/libpthread-2.13.so
7fac47ba2000-7fac47da1000 ---p 00017000 08:01 1968929 /lib/x86_64-linux-gnu/libpthread-2.13.so
7fac47da1000-7fac47da2000 r--p 00016000 08:01 1968929 /lib/x86_64-linux-gnu/libpthread-2.13.so
7fac47da2000-7fac47da3000 rw-p 00017000 08:01 1968929 /lib/x86_64-linux-gnu/libpthread-2.13.so
7fac47da3000-7fac47da7000 rw-p 00000000 00:00 0
7fac47da7000-7fac47dc7000 r-xp 00000000 08:01 1968930 /lib/x86_64-linux-gnu/ld-2.13.so
7fac47f8b000-7fac47faf000 rw-p 00000000 00:00 0
7fac47fc4000-7fac47fc6000 rw-p 00000000 00:00 0
7fac47fc6000-7fac47fc7000 r--p 0001f000 08:01 1968930 /lib/x86_64-linux-gnu/ld-2.13.so
7fac47fc7000-7fac47fc8000 rw-p 00020000 08:01 1968930 /lib/x86_64-linux-gnu/ld-2.13.so
7fac47fc8000-7fac47fc9000 rw-p 00000000 00:00 0
7fff353c8000-7fff353e9000 rw-p 00000000 00:00 0 [stack]
7fff353ff000-7fff35400000 r-xp 00000000 00:00 0 [vdso]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]<issue_closed>
Status: Issue closed |
hashgraph/hedera-mirror-node | 1086082231 | Title: HIP-171 Add payer account to topic message REST APIs
Question:
username_0: ### Problem
https://hips.hedera.com/hip/hip-171 requests that a payer account ID be added to the responses of the topic message REST APIs.
### Solution
- Update importer to populate `topic_message.payer_account_id` for both chunked and non-chunked messages
- Migrate missing `topic_message.payer_account_id` with data from `transaction`
- Change `topic_message.payer_account_id` to be not null
- Add `payer_account_id` field to topic message REST API response
- Add chunk information to REST API as noted in https://github.com/hashgraph/hedera-mirror-node/issues/2178
Verify by reading https://hips.hedera.com/hip/hip-171 that nothing else was missed
### Alternatives
_No response_<issue_closed>
Status: Issue closed |
behave/behave-django | 408230626 | Title: Set server host and port for one scenario
Question:
username_0: I have one scenario inside of a feature file where the LiveServer needs to be running on port 8000 due to an external service I have no control over expecting it to be running there.
I've tried setting the port value in BehaviorDrivenTestCase inside of before_scenario but that causes the scenario AFTER the current one to run on port 8000.
I've also tried setting the port in before_all in the same manner and running all the tests on the same port but that causes an address in use error since the server isn't torn down before the next scenario starts.
Is there a "best practice" to setting the port for a scenario? I can work around this by creating a scenario before the one that needs to be ran on port 8000 who's sole purpose is to set the port number but that doesn't seem like a good solution.
Answers:
username_1: Let's see ...
- [BehaviorDrivenTestCase](https://github.com/behave/behave-django/blob/master/behave_django/testcase.py#L26) is a [LiveServerTestCase](https://docs.djangoproject.com/en/stable/_modules/django/test/testcases/#LiveServerTestCase), which is instantiated in the [setup_testclass() method](https://github.com/behave/behave-django/blob/master/behave_django/environment.py#L65) run in [any before_scenario() call](https://github.com/behave/behave-django/blob/master/behave_django/environment.py#L113) by [monkey_patch_behave](https://github.com/behave/behave-django/blob/master/behave_django/management/commands/behave.py#L148).
- If you set the `port` attribute on the `testcase_class` (the attribute of the test runner) the port value should be used by Django when it instantiates the `LiveServerTestCase`. Hence, `BehaviorDrivenTestRunner.testcase_class.port = 8000`.
Now, we only need to find out _where_.
username_1: Can you share the code that you use for doing that? Because, if you do this on the class _before_ it is instantiated this should work fine. After the scenario you may need to reset the port value to 0, probably.
username_0: Here's my before_scenario:
```python
def before_scenario(context, scenario):
    BehaviorDrivenTestCase.port = 0
    if "My Scenario" == scenario.name:
        BehaviorDrivenTestCase.port = 8000
```
context.test during my scenario reveals that
`context.test.live_server_url = u'http://localhost:38863'`
`context.test.port = 8000`
While the next scenario is running the values are
`context.test.live_server_url = u'http://localhost:8000'`
`context.test.port = 0`
username_1: According to [the code](https://github.com/behave/behave-django/blob/master/behave_django/environment.py#L110-L116) setting the port number on the test case class in `before_scenario` should result in a `LiveServer` reachable on the port you specified in all other hooks related to the scenario.
What value returns [context.get_url()](https://behave-django.readthedocs.io/en/stable/usage.html#web-browser-automation) when you run it in the [django_ready hook](https://behave-django.readthedocs.io/en/stable/usage.html#django-ready-hook)?
And no, there is no "best practice" on overriding the `context.base_url` port. You should probably be mocking the external service to make your test independent.
Status: Issue closed
username_1: Feel free to reopen or comment if you still need help. |
badges/shields | 1071671178 | Title: Last Commit based on File/Folder in repo
Question:
username_0: :clipboard: **Description**
<!--
A clear and concise description of the new badge.
- Which service is this badge for e.g: GitHub, Travis CI
- What sort of information should this badge show?
Provide an example in plain text e.g: "version | v1.01" or as a static badge
(static badge generator can be found at https://shields.io)
-->
A badge depicting when a specific folder or project within a repository was updated or committed to.
:link: **Data**
<!--
Where can we get the data from?
- Is there a public API?
- Does the API requires an API key?
- Link to the API documentation.
-->
https://github.com/username_0/KadeEngineMods
take my repo for instance. I have a mod in there that is in a folder named MFM. In order to get the last commit to that folder, I can go to this API link:
`https://api.github.com/repos/username_0/KadeEngineMods/commits?path=MFM`
After that, you can take the commit in slot 0, go to `committer` (or `author`), and inside it there is a field named `date`. Both in committer and author it is the same date.
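For what it's worth, the existing dynamic JSON badge may already be able to express this; the sketch below is untested and written unencoded for readability (the `url` and `query` parameters would need URL-encoding):
```
https://img.shields.io/badge/dynamic/json
  ?label=last commit
  &url=https://api.github.com/repos/username_0/KadeEngineMods/commits?path=MFM
  &query=$[0].commit.committer.date
```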
:microphone: **Motivation**
<!--
Please explain why this feature should be implemented and how it would be used.
- What is the specific use case?
-->
On the readme of my repo (listed in the above Data section as an example), I host several mods that I created that were ported using [Kade Engine](https://github.com/KadeDev/Kade-Engine)'s (relatively) new ModCore feature. I did this as it didn't really make sense to me to make a separate repo for each one as all of them are ports (so far). I host download links on the readme as well, as I don't really feel comfortable doing a full release of all of them, or even individual releases as whenever I make a large update I'd have to do a new release and that takes time. So I just opted to make it where (since the GitHub already has the mods in downloadable form) there are download links to the GitHub itself (through downgit) to save myself time. The only other thing that I added to the readme as something more for me than anyone else was when it was last updated, but that also takes me going through each time I modify it to replace the date that it was updated on the readme, and doing a readme bump every time was a hassle, and I just did one massive one since I'd neglected to add the mod links anyway and updated all of the dates as well. Having an automated process to show the date would save me a lot of time, and I know a few other repos that could implement this shield if they wanted to as well.
<!-- Love Shields? Please consider donating $10 to sustain our activities:
👉 https://opencollective.com/shields --> |
asciidoctor/asciidoctor-pdf | 455171493 | Title: Table cell padding setting not affect table header cells
Question:
username_0: When working with version:
```
Asciidoctor PDF 1.5.0.alpha.16 using Asciidoctor 2.0.5 [https://asciidoctor.org]
Runtime Environment (ruby 2.2.10p489 (2018-03-28 revision 63023) [x86_64-linux-gnu]) (lc:UTF-8 fs:UTF-8 in:UTF-8 ex:UTF-8)
```
Following setting in custom theme:
```
table:
  cell:
    padding: 3
```
This was also affecting cells in the table header row. That is not the case for later versions: table header cell padding can't be customized any more.
Answers:
username_1: You're absolutely right. Somehow, that got dropped. I'll fix it, add support for table_head_cell_padding as well, then add a test.
username_1: Of course, if you'd like to have a go at it, feel free!
Here's the logic needed in the head section:
```
padding: theme.table_head_cell_padding || theme.table_cell_padding,
```
username_0: Unfortunately, I don't possess any Ruby skills, so it would be hard to provide a solution from my side (let alone a proper one with tests and support for table_head_cell_padding).
username_1: No worries. I just wanted to make sure I didn't step in front of you if it's something you could do.
username_1: This got missed because the reference PDF for the chronicles example was created after the regression occurred. Had it been the other way around, the visual test would have caught it. But the test suite will catch any future regression.
Status: Issue closed
|
emersion/go-imap-compress | 179641117 | Title: go-imap-compress does not work with latest go-imap
Question:
username_0: go-imap-compress deadlocks since https://github.com/username_1/go-imap/commit/ba92a48ae62c3d26cf99af90b38ec80dbab08b2d
tested against fastmail's IMAP server which supports COMPRESS=DEFLATE.
### client.go
package main
import (
"crypto/tls"
"log"
"os"
"os/signal"
"runtime"
"syscall"
compress "github.com/username_1/go-imap-compress"
client "github.com/username_1/go-imap/client"
)
func main() {
go func() {
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, syscall.SIGINT)
buf := make([]byte, 1<<20)
for {
<-sigs
stacklen := runtime.Stack(buf, true)
log.Printf("=== received SIGINT ===\n*** goroutine dump...\n%s\n*** end\n", buf[:stacklen])
os.Exit(1)
}
}()
c, _ := client.DialTLS(
"mail.messagingengine.com:993",
&tls.Config{ServerName: "mail.messagingengine.com"},
)
c.SetDebug(os.Stderr)
_ = c.Login("<EMAIL>", "<PASSWORD>")
comp := compress.NewClient(c)
if comp.SupportsCompression(compress.Deflate) {
if err := comp.Compress(compress.Deflate); err != nil {
log.Fatal(err)
}
}
mbox, _ := c.Select("INBOX", false)
log.Println("Flags for INBOX:", mbox.Flags)
}
This waits forever; the output with the stack trace looks something like:
```
kzZSvw COMPRESS DEFLATE
kzZSvw OK DEFLATE active
wffn1Q SELECT INBOX
^C2016/09/27 16:09:54 === received SIGINT ===
*** goroutine dump...
goroutine 6 [running]:
[Truncated]
* 0 RECENT
* FLAGS (\Answered \Flagged \Draft \Deleted \Seen $X-ME-Annot-2 $IsMailingList $IsNotification $HasAttachment $HasTD $IsTrusted)
* OK [PERMANENTFLAGS (\Answered \Flagged \Draft \Deleted \Seen $X-ME-Annot-2 $IsMailingList $IsNotification $HasAttachment $HasTD $IsTrusted \*)] Ok
* OK [UIDVALIDITY 1464806015] Ok
* OK [UIDNEXT 10] Ok
* OK [HIGHESTMODSEQ 65] Ok
* OK [URLMECH INTERNAL] Ok
* OK [ANNOTATIONS 65536] Ok
tEp8pA OK [READ-WRITE] Completed
imap/client: 2016/09/27 16:11:20 response has not been handled: &{* OK HIGHESTMODSEQ [65] Ok}
imap/client: 2016/09/27 16:11:20 response has not been handled: &{* OK URLMECH [INTERNAL] Ok}
imap/client: 2016/09/27 16:11:20 response has not been handled: &{* OK ANNOTATIONS [65536] Ok}
2016/09/27 16:11:20 Flags for INBOX: [\Answered \Flagged \Draft \Deleted \Seen $X-ME-Annot-2 $IsMailingList $IsNotification $HasAttachment $HasTD $IsTrusted]
```
The last revision of go-imap to work is https://github.com/username_1/go-imap/commit/ebd8ee9fbeb2392d3f83e2b6433aa9f2f295bd82 (which builds against https://github.com/username_1/go-imap-compress/commit/e0a20ded6a2df940a3dee1fb8ffe4c92f10930c3). The output of client.go from above (without .SetDebug) is:
```
2016/09/27 16:14:49 Response has not been handled &{* OK HIGHESTMODSEQ [65] Ok}
2016/09/27 16:14:49 Response has not been handled &{* OK URLMECH [INTERNAL] Ok}
2016/09/27 16:14:49 Response has not been handled &{* OK ANNOTATIONS [65536] Ok}
2016/09/27 16:14:49 Flags for INBOX: [\Answered \Flagged \Draft \Deleted \Seen $X-ME-Annot-2 $IsMailingList $IsNotification $HasAttachment $HasTD $IsTrusted]
```
Status: Issue closed
Answers:
username_1: Thanks for reporting this issue! It's now fixed.
Pro-tip©: you can press <kbd>Ctrl</kbd> + <kbd>\\</kbd> to stop the process and get a stack trace ;-) |
mantycore/chant | 464994000 | Title: Previews and cuts
Question:
username_0: By default preview of the body is the first paragraph (part before the first \n\n); or maybe first N characters, whichever is shorter; it can be made shorter by a special tag. Preview is sent with the psalm; the full body is stored as content.
Find a way to do picture previews.
Answers:
username_1: What is the goal of such cuts?
username_0: The goal is twofold; first, separate preview mode (where only snippet of the full post is shown) and reading mode with the full text; second, minimize traffic by only sending the full text to the nodes which user requests it. |
angular/angular | 172408639 | Title: angular-forms: "Cannot read property 'validate' of null" on AoT compiled app
Question:
username_0: **I'm submitting a ...** (check one with "x")
```
[x] bug report
```
**Current behavior**
`normalizeValidator` in [normalize_validator.ts](https://github.com/username_0/angular/blob/eef9512ce6b8b72f8753d2964304745d621cf0d7/modules/@angular/forms/src/directives/normalize_validator.ts#L14-L14) checks for undefined, but most default values on controls are null:

Results in Error: `Cannot read property 'validate' of null`
**Reproduction of the problem**
Error does not happen in JIT, but when app precompiled and router is used.
**Plunkr**: http://plnkr.co/edit/j4j4stKtA9dRbEwBdhXC?p=preview `app/app-precompiled.ts` is the precompiled `app/app.ts` via `ngc`.
Versions used:
```log
"@angular/common": "2.0.0-rc.5",
"@angular/compiler": "2.0.0-rc.5",
"@angular/compiler-cli": "0.5.0",
"@angular/core": "2.0.0-rc.5",
"@angular/forms": "0.3.0",
"@angular/http": "2.0.0-rc.5",
"@angular/platform-browser": "2.0.0-rc.5",
"@angular/platform-browser-dynamic": "2.0.0-rc.5",
"@angular/platform-server": "2.0.0-rc.5",
"@angular/router": "3.0.0-rc.1",
```
**Please tell us about your environment:**
* **Angular version:** 2.0.0-rc.5 - dev-build
* **Language:** [TypeScript 2.0.0]
Answers:
username_1: same issue here
Status: Issue closed
username_3: same issue on 6.0.1 |
SharePoint/sp-dev-docs | 567036742 | Title: Troubleshooting missing "Save as Site Template" option
Question:
username_0: This article should include information to enable the "Save as Site Template" option if it is unavailable. (enabling scripts, disabling publishing, etc.)
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 90e5f465-6d87-5a7d-9b0a-7807da51d30b
* Version Independent ID: bc4f4c42-d5ba-a36a-1578-70103e14cde8
* Content: [Save, download, and upload a SharePoint site as a template](https://docs.microsoft.com/en-us/sharepoint/dev/general-development/save-download-and-upload-a-sharepoint-site-as-a-template#feedback)
* Content Source: [docs/general-development/save-download-and-upload-a-sharepoint-site-as-a-template.md](https://github.com/SharePoint/sp-dev-docs/blob/master/docs/general-development/save-download-and-upload-a-sharepoint-site-as-a-template.md)
* Product: **sharepoint**
* Technology: **sharepoint-framework**
* GitHub Login: @spdevdocs
* Microsoft Alias: **spdevdocs**
Answers:
username_1: Good suggestion... care to make the edit and submit a PR to help with the docs?
username_2: My guess the reason it's not outlined is because Save Site as a Template is not supported in Modern SharePoint. Thus, disabling scripting and enabling publishing isn't a supported scenario.
If adding a PR, can you make sure it's clear (aside from the current notification at the top of the page), that the process is only for classic SharePoint sites?
username_1: Correct... Save Site as a Template should ONLY be used for classic sites... not modern & not publishing sites.
username_0: @username_2 and @username_1- I am a SCA in an enterprise environment that maintains our own SP16. I would be happy dig in with that specific version but I really do not have access to SharePoint online from my work environment (blocked because of DLP concerns) so do not have a larger perspective. I can see where adding this sort of content into support for an evergreen product may not be the best practice. With your last comments in mind, is there a particular way to contribute to a certain version of the documentation?
username_1: @username_0 There's no docs for a specific version ... at least not the dev docs. The IT support (support.microsoft.com) docs might have something though.
The more I think about this, I think this doesn't fit in this article. This isn't really a dev issue so I don't think it really belongs in the dev docs.
For your specific use case, there may be specific reasons why it's not present. Namely, custom scripts are disabled / publishing is enabled. See the support article here, specifically the troubleshooting section's first **Resolving common problems**: https://support.office.com/en-us/article/create-and-use-site-templates-in-sharepoint-server-versions-60371b0f-00e0-4c49-a844-34759ebdd989
username_0: @username_1 Thanks.
Status: Issue closed
|
onsi/ginkgo | 657279799 | Title: Problem in version 1.14.0 Multiple Suites Same Package
Question:
username_0: I noticed that after updating to the new release.
tests started to fail in case there are multiple suites under the same package.
This is the error message:
You may only call BeforeEach from within a Describe, Context or When
I can't say for sure what is the cause.
but it started once download to latest release,
Answers:
username_1: there was a subtle change introduced to the testing lifecycle in 1.14. Can you share some code so i can reproduce what you're seeing?
username_2: Hi, I'm experiencing this issue as well, see https://github.com/weaveworks/eksctl/pull/2561.
At https://github.com/weaveworks/eksctl/blob/0f4f59afd63f6b67ce96eba0cf8683bb30ad8384/pkg/gitops/gitops_test.go#L39, which is in package `gitops`, and https://github.com/weaveworks/eksctl/blob/0f4f59afd63f6b67ce96eba0cf8683bb30ad8384/pkg/gitops/url_test.go in package `gitops_test`, I get the error:
```
profile
/home/mike/projects/weaveworks/eksctl/pkg/gitops/url_test.go:17
RepositoryURL
/home/mike/projects/weaveworks/eksctl/pkg/gitops/url_test.go:18
returns Git URLs as-is [It]
/home/mike/projects/weaveworks/eksctl/pkg/gitops/url_test.go:19
You may only call BeforeEach from within a Describe, Context or When
/home/mike/projects/weaveworks/eksctl/pkg/gitops/gitops_test.go:39
```
If I remove the `url_test.go` file, the error disappears.
username_1: hmm. i'm seeing `url_test.go` call `testutils.RegisterAndRun(t)`. You should only call Ginkgo's `RunSpecs` once in any given package. Calling it twice will lead to duplicate test runs in serial and poor/undefined behavior when running tests in parallel. On Ginkgo's end I should make it clearer that it is an error to call `RunSpecs` more than once (both in the documentation and at run-time). 1.14 made a few changes to the test lifecycle but I don't think it introduced an issue (though I could be wrong!) but rather has made calling `RunSpecs` twice fail more readily.
All this to say that removing [these lines](https://github.com/weaveworks/eksctl/blob/0f4f59afd63f6b67ce96eba0cf8683bb30ad8384/pkg/gitops/url_test.go#L13-L15) should fix the eksctl tests.
I share a bit more detail in this issue: https://github.com/username_1/ginkgo/issues/708
username_2: Might be my golang ignorance showing here but if `url_test.go` is the only file in `package gitops_test` then we have to call `RunSpecs` there, right?
username_1: ah, i see. i'm a _bit_ rusty on the distinction (and am not near a dev environment) but I believe the idea is that the `_test` package is compiled as a separate package under the covers and then linked in. This is to help surface package export/naming issues. Because it's linked in, the resulting test binary basically has everything in the `_test` and the non-`_test` packages.
As far as Ginkgo is concerned any Ginkgo nodes (i.e. `It`, `Describe`, etc.) defined in either the `_test` or non-`_test` package will be combined and run by the single `RunSpecs` invocation. Since you have a `gitops_suite_test.go` file with that invocation that should be all you need _even_ in mixed-package mode where some files are in `_test` and some are not.
If you see any behavior that contradicts that please let me know and I'll put together a reproducer locally once I get to my dev machine.
username_2: Ah got it, so once per package directory thingy. That change seems to work, thanks!
username_3: I have hit this problem too, specifically when using `go test -count=n` to run tests, where `n > 1`.
Using the test below under Ginkgo v1.14+ I see the failure as described by @username_0. It works as expected under Ginkgo v1.13.0.
```go
package ginkgotest_test
import (
"testing"
"github.com/username_1/ginkgo"
)
var _ = ginkgo.Describe("<describe>", func() {
ginkgo.BeforeEach(func() {
})
ginkgo.It("passes", func() {
})
})
func TestSuite(t *testing.T) {
ginkgo.RunSpecs(t, "<suite>")
}
```
Console output: <details>
```
❯ go get github.com/username_1/[email protected]
❯ go test -count=2 .
ok ginkgotest 0.285s
```
```
❯ go get github.com/username_1/[email protected]
❯ go test -count=2 .
Running Suite: <suite>
======================
Random Seed: 1608768041
Will run 1 of 1 specs
•
Ran 1 of 1 Specs in 0.000 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
Running Suite: <suite>
======================
Random Seed: 1608768041
Will run 2 of 2 specs
• Failure in Spec Setup (BeforeEach) [0.000 seconds]
<describe>
/tmp/ginkgotest/pkg_test.go:9
passes [BeforeEach]
/tmp/ginkgotest/pkg_test.go:13
You may only call BeforeEach from within a Describe, Context or When
/tmp/ginkgotest/pkg_test.go:10
------------------------------
•
Summarizing 1 Failure:
[Fail] <describe> [BeforeEach] passes
/tmp/ginkgotest/pkg_test.go:10
Ran 2 of 2 Specs in 0.000 seconds
FAIL! -- 1 Passed | 1 Failed | 0 Pending | 0 Skipped
--- FAIL: TestSuite (0.00s)
FAIL
FAIL ginkgotest 0.134s
FAIL
```
</details>
username_1: Yeah, `-count` isn't supported by Ginkgo. I commented on this here:
https://github.com/username_1/ginkgo/issues/485#issuecomment-692343611
I'm planning on (eventually) updating Ginkgo to abort and explain when users use `-count`.
Out of curiosity I'd love to understand if there are circumstances that require you to use `go test` instead of the `ginkgo` cli.
username_3: There is no technical reason I can point to. That is, it's not because the `ginkgo` CLI is lacking in any way that we're aware of. Rather, our use of `go test` arose naturally as part of my workplace's general effort to standardise tooling across our projects.
Some of our projects use Ginkgo, others don't, some projects use "testable examples" or even just "regular Go tests" alongside Ginkgo tests, etc. Being able to run `go test` without having to understand how the tests are written has been a positive for us. |
mrakitin/bnlcrl | 324649811 | Title: pip.main no longer public api
Question:
username_0: [This code](https://github.com/mrakitin/bnlcrl/blob/04729e86cce81d143d9b3e824cc82db8e3751c80/setup.py#L11)
will not work with pip 10.x.
pykern has been updated, but there is no easy fix for doing this trick any more. Rather, you must install pykern before bnlcrl. The following is the recommended way to install from pip:
```sh
pip install -r requirements.txt -e .
```
Answers:
username_0: https://github.com/mrakitin/bnlcrl/pull/8 will fix this |
MicrosoftDocs/azure-docs | 741038646 | Title: THIS SUCKS
Question:
username_0: You people cannot put together a SINGLE FUCKING WORKING EXAMPLE of an application hitting a damned Azure API endpoint to save your fucking lives. You give shitty snippets of doing perfunctory bullshit and nothing tangible.
You fucking suck
[Enter feedback here]
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 006699d8-d208-ec2e-9b14-ec6c5d4bd71f
* Version Independent ID: b9ad346f-2140-8d7b-c5aa-b5e4d7bae866
* Content: [Quickstart: Configure an app to access a web API - Microsoft identity platform](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-configure-app-access-web-apis)
* Content Source: [articles/active-directory/develop/quickstart-configure-app-access-web-apis.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/active-directory/develop/quickstart-configure-app-access-web-apis.md)
* Service: **active-directory**
* Sub-service: **develop**
* GitHub Login: @mmacy
* Microsoft Alias: **marsma**
Answers:
username_1: @username_0 we have taken your feedback but I am closing this out because the language here is unnecessary.
cc @mmacy
Status: Issue closed
|
mag0716/Google_IO_2018_CatchUp | 365467307 | Title: [WorkManager] 1.0.0-alpha09
Question:
username_0: ## 1.0.0-alpha09
* https://developer.android.com/jetpack/docs/release-notes#september_19_2018
Answers:
username_0: ## 1.0.0-alpha09
### Bug Fixes
* Crash when trying to run 100 or more jobs (https://issuetracker.google.com/issues/115560696)
* Crash caused by a FOREIGN KEY constraint (https://issuetracker.google.com/issues/114705286)
* Fixed a bug where `ConstraintTrackingWorker#onStopped` was not called (https://issuetracker.google.com/issues/114125093)
* Firebase JobDispatcher's minimum delay value (30 sec) is now used (https://issuetracker.google.com/issues/113304626)
* Threading inside the library has been improved
* Issues with `LiveData` have been addressed
### API Changes
* A `WorkerFactory` can now be specified as part of `WorkManager.Configuration` to create `Worker`s at runtime
* `DefaultFactory`
* The default constructors of `Worker` and `NonBlockingWorker` are now deprecated
* `Worker(Context, WorkerParameters)` must be used instead
* `ListenableFuture` is now used internally
* `TestDriver` was added to run jobs at a specified time (https://issuetracker.google.com/issues/113360060)
### Breaking Changes
* The default constructors of `Worker` and `NonBlockingWorker` are deprecated
tangrams/heightmapper | 1116253437 | Title: Map is not loading
Question:
username_0: The map is not loading, leaving a fully blank page.
Answers:
username_1: If this issue occurs right when opening the website, turning auto-exposure off and on should fix it. If it occurs while zoomed in, it is possible there is simply no imagery at that zoom level. |
GameServerManagers/Game-Server-Configs | 545199237 | Title: HowTo Disable Bots in csgoserver
Question:
username_0: Hello Guys,
can you tell me how I can disable bots in csgoserver?
I have already tried bot_quota.
Thank you very much.
Kind regards
Answers:
username_1: Try adding -nobots to your command line until you find out how to do it via config files.
username_2: The `-nobots` launch option has a [known issue](https://forums.alliedmods.net/showthread.php?t=261158): you can see enemies through smoke on the radar, so it's not recommended.
`bot_quota 0` is also buggy and does not work reliably.
Your best option is to use a 3rd party SourceMod plugin:
https://forums.alliedmods.net/showthread.php?t=237004
I'm using this, and it's working fine. |
Bis167/Project | 574404366 | Title: App crashes when data more than 6 is given
Question:
username_0: When we enter data greater than 6 for all the days, the app crashes.
For example, suppose we enter the value 7 as the data on the first day and similarly enter 7 for all the remaining days of the week. Now when we retrieve the data in the application, the application crashes. But when we enter data less than 7, even for all the days in a week, the data is retrieved properly. |
joakikr/fortnite-stats-tracker | 524539180 | Title: Optimize animation of user cards
Question:
username_0: The current animation renders all items in the spring array. This is because the animation looks better this way.
* Optimize the animation by only rendering `prev - current - next`, while retaining the smoothness of the current animation. |
minakovuri/ood | 493853494 | Title: Review notes on the "Observer" pattern
Question:
username_0: - [ ] Nobody uses the IDisplayElement interface, so it is not needed
Answers:
username_0: - [ ] The interfaces are missing virtual destructors
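A minimal sketch of the fix (the template parameter and `Update` signature are assumptions based on the snippets below, not the actual repository code):
```c++
// Giving the interface a virtual destructor makes deleting a concrete
// observer through a base-class pointer well defined.
template <typename T>
class IObserver
{
public:
	virtual void Update(const T& data) = 0;
	virtual ~IObserver() = default;
};
```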
username_0: - [ ] Observers must unsubscribe from the subject when they are destroyed
username_0: ```c++
class CInsideStatsDisplay : public IObserver<SInsideWeatherInfo>
{
public:
	typedef IObservable<SInsideWeatherInfo> ObservableType;
```
- [ ] Better to inherit privately
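A sketch of what that suggestion could look like (the `Update` signature and `RegisterObserver` call are assumptions based on the snippets in this thread):
```c++
// Private inheritance keeps Update() out of the public interface, so outside
// code cannot call it directly; registering still works from inside the class,
// where the conversion to the private base is accessible.
class CInsideStatsDisplay : private IObserver<SInsideWeatherInfo>
{
public:
	explicit CInsideStatsDisplay(IObservable<SInsideWeatherInfo>& weatherData)
	{
		weatherData.RegisterObserver(*this);
	}

private:
	void Update(const SInsideWeatherInfo& data) override
	{
		// react to the new measurements here
	}
};
```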
username_0: - [ ] Don't add getters to a class just for the sake of tests
username_0: ```c++
CInsideStatsDisplay::CInsideStatsDisplay(ObservableType& weatherDataRef)
{
	weatherDataRef.RegisterObserver(*this);
	m_observables.insert(&weatherDataRef);
}

CInsideStatsDisplay::~CInsideStatsDisplay()
{
	for (auto& observable : m_observables)
	{
		observable->RemoveObserver(*this);
	}
}
```
- [ ] Why do you need a set with a single element?
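Since the display observes exactly one subject, a plain pointer is enough; a sketch of the simpler alternative (method names follow the quoted snippet, the `Update` signature is assumed):
```c++
class CInsideStatsDisplay : public IObserver<SInsideWeatherInfo>
{
public:
	explicit CInsideStatsDisplay(ObservableType& weatherData)
		: m_weatherData(&weatherData)
	{
		m_weatherData->RegisterObserver(*this);
	}

	~CInsideStatsDisplay()
	{
		// Unsubscribe from the single subject we registered with.
		m_weatherData->RemoveObserver(*this);
	}

private:
	void Update(const SInsideWeatherInfo& data) override
	{
		// handle the new data
	}

	ObservableType* m_weatherData;
};
```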
username_0: ```c++
class CInsideWeatherData
{
public:
	using DataChangeSignal = signals::signal<void(SInsideWeatherInfo data)>;
```
- [ ] Better to pass structs by const reference
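For instance, the signal signature above could take the struct by const reference instead of by value (a sketch using the same `signals::signal` alias as the quoted code):
```c++
using DataChangeSignal = signals::signal<void(const SInsideWeatherInfo& data)>;
```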
username_0: - [ ] In the signals-and-slots implementation, there should be a separate signal for each parameter
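A rough sketch of the idea (field and signal names are illustrative, and the `signals::signal` API is assumed to match the alias used above):
```c++
class CInsideWeatherData
{
public:
	// One signal per parameter, so clients subscribe only to what they need.
	signals::signal<void(double)> temperatureChanged;
	signals::signal<void(double)> humidityChanged;

	void SetMeasurements(double temperature, double humidity)
	{
		// Fire only the signals whose values actually changed.
		if (temperature != m_temperature)
		{
			m_temperature = temperature;
			temperatureChanged(temperature);
		}
		if (humidity != m_humidity)
		{
			m_humidity = humidity;
			humidityChanged(humidity);
		}
	}

private:
	double m_temperature = 0;
	double m_humidity = 0;
};
```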
username_1: - [ ] Add unsubscription handling when the observable object is destroyed
Status: Issue closed
|
shenxn/homeassistant-broadlink-cover | 511798394 | Title: How to add this to hass.io?
Question:
username_0: Can you please help with how to add this to hass.io? I tried uploading it as a file to config/custom_components/cover/broadlink.py
Then configuration.yaml:
cover:
  #Broadlink
  - platform: broadlink
    host: !secret broadlink_ip
    mac: !secret broadlink_mac
    type: rm2
    timeout: 30
    covers:
      redony1:
        friendly_name: "Redőny 1"
        command_open: 'xxx'
        command_close: 'yyy'
        command_stop: 'zzz'
It's working as a switch, but I cannot get it to display as a cover; I get a configuration error all the time.
Answers:
username_1: @username_0
FYI, I have another Broadlink switch, so I renamed this one to custom_components/broadlink_cover.
And configured it as below:
cover:
  - platform: broadlink_cover
    host: 192.168.1.x
    mac: '00:00:00:00:00:00'
    covers:
      cover_1:
        name: "Cover 1"
        travel_time: 13
        command_open: '000'
        command_stop: '000'
        command_close: '000'
username_2: Is it working for you in hass.io?
username_3: Sorry, I'm no longer using Broadlink devices, so I cannot work on this further. You can try the forks to see if they work. Also, you can add the switches to Home Assistant and use [Template Cover](https://www.home-assistant.io/integrations/cover.template/) to make it work. |
ikedaosushi/tech-news | 491716389 | Title: Edge AI chip specialized for image processing and machine learning targets a retail price of 1,000 yen or less
Question:
username_0: An edge AI chip specialized for image processing and machine learning: "targeting a retail price of 1,000 yen or less"<br>
 At Innovation Japan 2019 (University Trade Fair & Business Matching), held August 29-30, 2019 at Tokyo Big Sight, ArchiTek presented the technology of the AI (artificial intelligence) processor for edge devices that it is developing. …<br>
https://ift.tt/2UNWATa |
awslabs/aws-config-resource-schema | 620235374 | Title: ElasticSearch resource type is not supported
Question:
username_0: Hi,
Pretty self-explanatory from the title. I have followed the contributing guidelines, and as far as I can see this resource type is not supported. Can you please confirm?
I ultimately want to run an AWS Config advanced query against this resource type. Is it necessary for the schema to be added here first? I am just a little confused, as the resource type is listed as supported (https://docs.aws.amazon.com/config/latest/developerguide/resource-config-reference.html).
I would be happy to submit a PR for this issue.
Thanks
Answers:
username_1: We have added support for querying on the Elasticsearch Domain resource type now. Only the resource types with schemas added here are supported for querying using Advanced Query, and those are a subset of the resource types mentioned in https://docs.aws.amazon.com/config/latest/developerguide/resource-config-reference.html
username_2: @username_1 Shouldn't this information be on the supported resources pages of the Config docs, to denote which ones are only partially supported?
username_3: @username_1 I second what Kapil said above. Not only are these schemas not mentioned anywhere I could find in the documentation, it's been very frustrating to figure out why "supported" resource types are not included in these schemas and what that means. The supported resources page definitely needs to be updated.