repo_name | issue_id | text
---|---|---
stringlengths 4–136 | stringlengths 5–10 | stringlengths 37–4.84M
Hieromon/AutoConnect | 419108176 | Title: Configure new AP blank at times
Question:
username_0: Hey, the page "Configure new AP" renders blank at times; after multiple refreshes it loads. It usually happens when 7+ SSIDs are found. Is it a bug, is there a workaround, or is it some issue with my code?
My Code - [Gdrive Link](https://drive.google.com/open?id=1Rv3uhxafs2aK7ndMPLRKxMsHG2qmedu2)
Answers:
username_1: Resetting wifi credentials can be done by reflashing and clearing all flash storage.
As far as I know there is no simple reset command to erase all data.
@hieromon, I need to load the wifi page 2 times to see all networks.
I thought it was an issue on my side.
username_0: @username_1 so I am not the only one with the issue, it seems. Could you confirm with a simple sketch if the issue is still there? Obviously test when there are a lot of SSIDs!
username_0: I did further digging and found out the issue is there only when we load a custom web page; otherwise it's absolutely fine handling lots of SSIDs.
username_1: I'll go try it tonight when back at home.
@username_0 do you have some test files which I can use?
For a non-custom webpage I could use the basic example from the program folder.
But what did you use for the custom webpage?
So if I have the same problem, hieromon could maybe reproduce the problem and solve it.
username_0: @username_1 Look at your code for Wifi-Dimmer, just remove [this](https://github.com/username_1/Wifi-Dimmer/blob/bd38f33fe0e31edefa83553e20d582e0733a3cbe/dimmer_3.0/dimmer_3.0.ino#L57) line and try; you will see it works perfectly! My guess is there is no memory left after loading so much config and then returning a huge string to be served.
In the library you can see that with lots of SSIDs the string "ssidList" can become really large and thus might lead to a blank render of the config page.
https://github.com/username_2/AutoConnect/blob/2fee92ca54c2a2bdd3280001b3f3b4fbaa0a2059/src/AutoConnectPage.cpp#L1106-L1126
@username_2 i hope you look into this.
username_2: @username_0 I have not tested reproducing it with your uploaded code yet, but it is probably insufficient memory, as you guessed. The root cause is not the generation of the SSID list; it is caused by [PageBuilder](https://github.com/username_2/PageBuilder). PageBuilder concatenates the page contents by String.
https://github.com/username_2/PageBuilder/blob/3a6c5b27998ff2ddf601e27d50aa8cc2029fe13d/src/PageBuilder.cpp#L204-L227
So, the String class of the ESP8266 core reserves heap via realloc on concatenation. If the realloc fails, the String is lost. The function involved is **String::changeBuffer**, which has been updated upstream in the ESP8266 core. https://github.com/esp8266/Arduino/pull/5690
But, this PR does not seem to be included in stable release 2.5.0.
However, there is still the possibility that the heap will run out. The HTML that AutoConnect generates is long: with seven SSIDs it is near 10K bytes, which means a heap of about 20K bytes is required for the concatenation done by PageBuilder. That is about the limit of the heap that remains available once ESP8266WebServer + AutoConnect + ArduinoJson are resident, even excluding Blynk & ESP8266HTTPUpdateServer.
You can enable PageBuilder's debug option to actually check for out-of-memory. If the heap runs out, you will see a Serial message with the result of getFreeHeap.
PageBuilder.h L34
```cpp
#define PB_DEBUG
```
https://github.com/username_2/PageBuilder/blob/3a6c5b27998ff2ddf601e27d50aa8cc2029fe13d/src/PageBuilder.h#L34
And I prepared a workaround for PageBuilder. The workaround reserves the string buffer for the length of the content in advance, which suppresses the heap fragmentation that otherwise occurs each time a string is concatenated. I have tested it over and over. However, since its effectiveness is limited, I have not implemented the AutoConnect side yet.
https://github.com/username_2/PageBuilder/blob/3a6c5b27998ff2ddf601e27d50aa8cc2029fe13d/src/PageBuilder.cpp#L207-L214
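To illustrate the idea, a minimal sketch of the reserve-ahead pattern (this is not PageBuilder's actual code; the function name and the length estimate are assumptions):
```cpp
#include <Arduino.h>

// Sketch only: pre-allocating the String once avoids the repeated realloc
// that fragments the ESP8266 heap during token-by-token concatenation.
String buildContent(const String tokens[], size_t count, size_t estimatedLength) {
  String content;
  if (!content.reserve(estimatedLength)) {   // one contiguous allocation up front
    Serial.printf("[PB] reserve failed, free heap: %u\n", ESP.getFreeHeap());
    return String();                         // content lost, as in the log below
  }
  for (size_t i = 0; i < count; i++) {
    content += tokens[i];                    // no realloc while within the reserve
  }
  return content;
}
```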
Can you try with the debug option enabled on PageBuilder?
I will consider implementing the content string buffer reservation on the AutoConnect side and will test it with your code.
username_0: @username_2 you were right. Anyway, I am happy with refreshing multiple times for now until there's a proper workaround.
Serial Output:
```
[AC] 5 network(s) found
[PB] at leaving build: 18632 free
[PB] Content lost, length:9211
[PB] Free heap:20536
[PB] Res:200, Chunked:1
[PB] Free heap:29760, content len.:0
```
username_3: Hey @username_2, I tried the 0.98 branch and the bug is still there (in my case it is showing 30~40 networks). Is the fix already in there?
username_2: It is possible. It would be equipped with a Next Page button, and I will implement it experimentally on the v098 branch to reduce the displayed SSIDs to 5 at a time.
Thank you for your advice.
username_2: Hello @username_0, @username_3. I staged a trial version of paging the SSID list to the development branch. Please try this version in an environment where the esp8266 can find many SSIDs if you can.
The current configuration displays 5 SSIDs per page. You can change this with the **AutoConnectDefs.h** macro.
https://github.com/username_2/AutoConnect/blob/0fd3171fb5ba610c16980476b5376265857d6391/src/AutoConnectDefs.h#L132
username_0: Negative, it didn't solve the issue. I went down as low as 1 SSID displayed per page; it still didn't work.
Thanks for your effort though!
username_2: @username_0, Thank you for testing. Hmmm, yes, the operation is not stable. Sometimes the page appears, sometimes it's gone.
However, I will leave the option for paging display of the SSID list as a build option.
I will think about another way to resolve this; sorry to keep you waiting.
username_3: @username_2 just saw now the last updates. Let me know if you need any test from my side
username_2: Hello @username_0 @username_3 , I have changed the page generation and destruction scheme to avoid heap fragmentation and tried to chunk transfer some pages. The latest commit of the development branch absorbed these fixes.
Please try it from both the [PageBuilder ](https://github.com/username_2/PageBuilder/tree/Enhance/AutoConnect_v098) and [AutoConnect](https://github.com/username_2/AutoConnect/tree/enhance/v098) development branches.
username_0: Amazing!!! :heart_eyes: Works great! At times, however, the input form disappears, which is not a huge issue; as for the blank page, it never appears anymore!
username_0: (screenshot omitted)
The minor bug I mentioned in the previous comment.
username_2: @username_0 Thank you for testing and reporting. It seems that the changed scheme is still insufficient. I will consider further how to stabilize the behavior of the Config New operation.
username_2: @username_0 I improved heap fragmentation in page building and staged the fix to the develop branch [enhance/v098](https://github.com/username_2/AutoConnect/tree/enhance/v098). However, I have not tested enough because I can not prepare an environment with 6 SSIDs on the same channel.
If the problem recurs, turn on debug logging with `#define PB_DEBUG` in PageBuilder.h and check for PageBuilder logs which say Failed building. If you find the failure log, please try increasing the following value.
AutoConnectDefs.h#L127
https://github.com/username_2/AutoConnect/blob/2c0207fb405ad1dd99708d2b4dde307d69ee693d/src/AutoConnectDefs.h#L127
This value reserves the content buffer at the time of the chunked transfer. PageBuilder cannot reserve the buffer in the first place if it is too large, and if it is too small, PageBuilder cannot create the content. Maybe the correct size range is from **12 * 1024** up to **15 * 1024**, I think.
username_0: ` #define AUTOCONNECT_CONTENTBUFFER_SIZE (13 * 1024) `
This ensured maximum stability with as many as 12 SSIDs! The issue is finally solved completely! Thanks! @username_2
Status: Issue closed
username_2: @username_0 Thank you for testing. The default value is `15 * 1024` based on your test results, which will be released in v098. |
jlippold/tweakCompatible | 421801369 | Title: `StatusBarXS` working on iOS 12.1.2
Question:
username_0: ```
{
"packageId": "com.dpkg.statusbarxs",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.dpkg.statusbarxs",
"deviceId": "iPhone9,3",
"url": "http://cydia.saurik.com/package/com.dpkg.statusbarxs/",
"iOSVersion": "12.1.2",
"packageVersionIndexed": false,
"packageName": "StatusBarXS",
"category": "Tweaks",
"repository": "dpkg9510's Repo",
"name": "StatusBarXS",
"installed": "1.2",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "com.dpkg.statusbarxs",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.4",
"shortDescription": "Enable the âmodernâ statusbar on any device.",
"latest": "1.2",
"author": "<NAME>",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": ""
}
```<issue_closed>
Status: Issue closed |
holland-backup/holland | 121272800 | Title: holland mysqldump backup estimation is incompatible with mariadb 10.1
Question:
username_0: MariaDB 10.1 seems to return `coalesce({data,index}_length, 0)` as a decimal type, which causes various holland bits to fail when they expect an integer and try to do integer math. Estimation like this really needs to just go away at some point, but for now the relevant holland bits need to be sure to coerce these values to integers.<issue_closed>
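A minimal sketch of the kind of coercion being asked for (my own illustration; the helper name and row access are hypothetical, not holland's actual code):
```python
from decimal import Decimal

def estimated_table_size(row):
    """Coerce MariaDB 10.1's Decimal results to int before integer math."""
    data_length = int(row.get("data_length") or 0)
    index_length = int(row.get("index_length") or 0)
    return data_length + index_length

# int() handles Decimal values as well as plain ints and None-guarded zeros.
assert estimated_table_size({"data_length": Decimal("10"), "index_length": None}) == 10
```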
Status: Issue closed |
typeorm/typeorm | 256494502 | Title: OneToMany, ManyToMany missing RelationOptions
Question:
username_0: From all relation decorators, `@OneToMany` and `@ManyToMany` seem to be missing a full RelationOptions object. At least it should be possible to make `@OneToMany` `nullable` and `primary`.
This functionality seems to have vanished in this commit related to #151: https://github.com/typeorm/typeorm/commit/11817adc2e48c0bb56d6cc8a7870db4a58b7c93b#diff-21e67927993f10e8f4f981ccb8becdaf
Answers:
username_0: Possible workaround using a type assertion:
```typescript
@OneToMany(type => Module, module => module.rooms, { nullable: false } as RelationOptions)
myColumn: Module
```
username_1: `nullable` there simply does not affect anything. `nullable: false` means it will set `NOT NULL` on the column it creates, and both `OneToMany` and `ManyToMany` do not create any column.
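A hedged sketch of the point being made (entity and field names are illustrative): the options that affect the column belong on the `@ManyToOne` side, which is the side that actually owns the foreign-key column.
```typescript
import { Entity, PrimaryGeneratedColumn, ManyToOne, OneToMany } from "typeorm";

@Entity()
export class Room {
  @PrimaryGeneratedColumn()
  id: number;

  // This side creates the FK column, so nullable: false yields NOT NULL here.
  @ManyToOne(type => Module, module => module.rooms, { nullable: false })
  module: Module;
}

@Entity()
export class Module {
  @PrimaryGeneratedColumn()
  id: number;

  // Inverse side: no column is created, so column options would be no-ops.
  @OneToMany(type => Room, room => room.module)
  rooms: Room[];
}
```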
Status: Issue closed
username_0: lol yeah thanks, makes sense! I mixed up `@ManyToOne` and `@OneToMany`. 😬 |
greenelab/deep-review | 238939111 | Title: Dr.VAE: Drug Response Variational Autoencoder
Question:
username_0: Bonus points for the title.
Answers:
username_1: ## General Summary
The authors train two variational autoencoders (VAEs) to model cancer cell lines' reaction and susceptibility to drug treatments. One of the VAEs includes a semi-supervised component that predicts treatment response. The paper presents the models and data effectively while adding several valuable computational evaluations, but it lacks biological interpretation. I am also not entirely sold that their reported results support one of their conclusions that doing well on reconstruction does not lead to classification improvements (although I tend to agree that reconstruction and biological meaning are not monotonic). There is no available source code as far as I can see.
## Biological Aspects
The models are trained using ~600 cell line gene expression profiles using measurements for 903 landmark genes. The cell lines are measured before and after treatment by 19 drugs. Models were trained independently for each drug.
## Computational Aspects
Two VAE models are trained with a combination of SGD and Inverse Autoregressive Flow. Both learn two distinct but linearly related latent representations for pre- and post-drug perturbation. The first model, `Perturbation VAE`, learns to encode the response to drug treatment, while the related `Drug Response VAE` or "Dr.VAE" is the semi-supervised extension that predicts drug-mediated cell killing.
## Evaluation
1. Compare drug response predictions for Dr.VAE against a generic semi-supervised VAE, logistic regression, random forest, and SVMs. AUROC + AUPR
- Dr.VAE convincingly outperforms all other models
- Use latent embedding features in this classification task - PertVAE vs. 100 component PCA linear regression
2. Compare gene expression reconstruction "quality" - rank correlation between reconstruction mean and original gene expression (same number of dimensions pertVAE vs. PCA reconstruction)
- Conclusion is VAE does better with smaller latent spaces, then "overfits" as compared to PCA in more latent dimensions.
- Mean correlations don't capture reconstruction quality _coverage_, which I think could have been a more compelling comparison.
3. Compare mean latent space rank correlation (observed and predicted) vs. input gene expression rank correlation (observed and predicted)
- Conclusion is that if the correlation is higher for the latent features, then the method learns something more than just reconstruction
- Cool idea as the predictions should have higher correlation if more than just structure is learned. However, mean correlations lose resolution again. |
xoofx/SharpYaml | 153690085 | Title: [Question] Is there a way to output a string property using the multi-line block scalar styles (>, |)
Question:
username_0: Hi,
Subject pretty much covers it. I am trying to serialize a class with string properties and want to output them in the block style without quotes and "\r" or "\n".
I have been trying to do it with the SerializerSettings like:
```C#
var ocrTemplateType = typeof(OcrTemplate);
settings.RegisterTagMapping("!OcrTemplate", ocrTemplateType);
settings.Attributes.Register(ocrTemplateType.GetProperty(nameof(OcrTemplate.Text)), new YamlMemberAttribute(0));
settings.Attributes.Register(ocrTemplateType.GetProperty(nameof(OcrTemplate.Text)), new YamlStyleAttribute(SharpYaml.YamlStyle.Block));
```
Cheers,
username_0 |
babenkoivan/elastic-migrations | 952710391 | Title: Trying to get property 'batch' of non-object in Repositories\MigrationRepository.php:53
Question:
username_0: I have 2 existing indexes in Elasticsearch, and I have written another migration for a 3rd index. It then gives me the below error when I try to run `php artisan elastic:migrate`:
Trying to get property 'batch' of non-object in Repositories\MigrationRepository.php:53
When I tried to debug it, I found:
```php
$record = $this->table()
    ->select('batch')
    ->orderBy('batch', 'desc')
    ->first();

return isset($record) ? (int)$record->batch : null;
```
Here "$record" variable contain the array not the object. I am using mongoDB as primary database
Please fix this issue asap and let me know if you need any further information about this issue
Answers:
username_1: Hi @username_0, what library do you use for MongoDB? According to the Laravel `first` method definition, it shouldn't return an array:
```php
/**
* Execute the query and get the first result.
*
* @param array|string $columns
* @return \Illuminate\Database\Eloquent\Model|object|static|null
*/
public function first($columns = ['*'])
{
return $this->take(1)->get($columns)->first();
}
```
username_0: Hello @username_1
I am using the below library for MongoDB
https://github.com/jenssegers/laravel-mongodb
it returns the object, not an array
username_0: In your code they also declare the return type "@return \Illuminate\Database\Eloquent\Model|object|static|null"
/**
* Execute the query and get the first result.
*
* @param array|string $columns
* @return \Illuminate\Database\Eloquent\Model|object|static|null
*/
public function first($columns = ['*'])
{
return $this->take(1)->get($columns)->first();
}
username_0: Everywhere I used the first method it returns an object... but only in the above case does it return an array; I don't know why...
username_1: Hey @username_0, I'll need your help with this. Can you maybe share what exactly `$record` contains? If it's an array, then please open an issue in https://github.com/jenssegers/laravel-mongodb, because it violates Laravel's `first` method definition.
username_0: Yes, `$record` contains an array. If I apply a Model query with the first method then it always returns an object, but if I use the DB query builder then it returns an array.
Why are you using a DB query? You could replace it with a Model query and then it will work fine.
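For illustration, a defensive sketch of what such a fix could look like (my own sketch, not the package's actual code): coerce whatever row shape the driver returns before touching `->batch`.
```php
<?php
// Hypothetical variant of the repository method above: tolerate drivers
// (e.g. jenssegers/laravel-mongodb) that hand back arrays instead of objects.
$record = $this->table()
    ->select('batch')
    ->orderBy('batch', 'desc')
    ->first();

if (is_array($record)) {
    $record = (object) $record;   // normalize array rows to stdClass
}

return isset($record->batch) ? (int) $record->batch : null;
```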
username_1: Hey @username_0, please open an issue in the https://github.com/jenssegers/laravel-mongodb repository. I'm pretty sure it uses the `BuildsQueries` trait and doesn't respect the definition of the `first` method:
```php
/**
* Execute the query and get the first result.
*
* @param array|string $columns
* @return \Illuminate\Database\Eloquent\Model|object|static|null
*/
public function first($columns = ['*'])
{
return $this->take(1)->get($columns)->first();
}
``` |
getredash/redash | 342162714 | Title: bad mongodb query support
Question:
username_0: version 4.1
It can only support simple queries for Mongo.
It cannot support the MongoDB expression "new Date()". (I know there is a key named "human...", but it cannot replace new Date, for example new Date().getFullYear(), and it did not work in aggregate.$match.)
It cannot support MongoDB queries with my own params (using {{param_name}} didn't work).
And I think if it cannot support these things, it should support user variables or user functions to solve it. For example, I could define a variable named "now" to get the current timestamp and use it in my query like "{{now}}", and its type should be int, not string, for a Mongo query.
Status: Issue closed
Answers:
username_0: Finally I used the URL data source to solve it and gave the Mongo data source up.
username_1: How did you solve it?
username_0: I use the URL data source, just like an API, to query MongoDB and return the result.
projectstorm/react-diagrams | 623927902 | Title: error on calling addPort() in cutsom node
Question:
username_0: My error
<img width="815" alt="Annotation 2020-05-24 211518" src="https://user-images.githubusercontent.com/22187019/82762970-6df43480-9e04-11ea-9f28-533f2465c37a.png">
my nodeModel: (screenshot omitted)
my portModel: (screenshot omitted)
Answers:
username_1: Do you have the code for the Widget as well?
username_0: I did not have the port widget in the model. After adding it, there is this type error: (screenshot omitted)
My modelWidget: (screenshot omitted)
username_1: Hmmm shoudn't the `PortWidget` have been created with `PortModelAlignment.BOTTOM` instead of `LEFT`?
username_0: (screenshot omitted)
username_1: Hmmmmm, and what is `this.props.node.getPort(PortModelAlignment.BOTTOM)` returning? Can you log it?
username_0: Here is the log for `this.props.node.getPort(PortModelAlignment.BOTTOM)`: (screenshot omitted)
username_1: That's odd... I'm not sure what TypeScript is complaining about here :frowning_face:
username_0: Strange, but I made it work by setting `"strict": false` in tsconfig.json. I don't know why that works. Also, I can't see any type error in my IDE :(
username_2: try with `port={this.props.node.getPort(PortModelAlignment.BOTTOM) as YourPortModelClassHere}`
For instance, if you are using DefaultPortModel, then
`port={this.props.node.getPort(PortModelAlignment.BOTTOM) as DefaultPortModel}`
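A hedged note on what the compiler is likely objecting to: with `"strict": true`, `getPort()` is typed as possibly returning nothing, so the required `port` prop rejects it. Narrowing first (sketch below, prop names as used in this thread) avoids both the blanket cast and the `strict: false` workaround:
```tsx
// Sketch only: null-check instead of casting or disabling strict mode.
const bottomPort = this.props.node.getPort(PortModelAlignment.BOTTOM);
if (!bottomPort) {
  return null; // port not registered on the model yet
}
return <PortWidget engine={this.props.engine} port={bottomPort} />;
```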
Status: Issue closed
|
Myrsmeden/tagcloud | 354052233 | Title: Firewall on Digital Ocean Web Interface?
Question:
username_0: I've tried the following:
- `sudo reboot`
- Opened Nginx HTTP in ufw
- Closed docker with `docker-compose down` in `/var/www/tagcloud.online` to see if Docker is blocking nginx in some way.
Apparently there is a firewall that can be activated through the web interface on Digital Ocean. Is that active for this droplet?
Answers:
username_0: Since I ran `docker-compose down`, is there a script that will automatically start docker when the server is rebooted? |
numpy/numpy | 506333963 | Title: Python 3.7 Windows debug build fails
Question:
username_0: Install fresh python 3.7.4 with debug
run:
```
call vcvars64.bat
SET DISTUTILS_USE_SDK=1
python_d -m pip install pip --upgrade
python_d -m pip install wheel
python_d -m pip install cython
```
```
python_d -m pip install numpy
Collecting numpy
C:\pro\py\37d\lib\site-packages\pip\_vendor\msgpack\fallback.py:133: DeprecationWarning: encoding is deprecated, Use raw=False instead.
unpacker = Unpacker(None, max_buffer_size=len(packed), **kwargs)
Using cached https://files.pythonhosted.org/packages/ac/36/325b27ef698684c38b1fe2e546e2e7ef9cecd7037bcdb35c87efec4356af/numpy-1.17.2.zip
Building wheels for collected packages: numpy
Building wheel for numpy (setup.py) ... done
Created wheel for numpy: filename=numpy-1.17.2-cp37-cp37dm-win_amd64.whl size=4825877 sha256=9c94add5bdfd3273595f3cfb00721fc2860aeb0745fc5d6fb4bda77bb9ffbacb
Stored in directory: C:\Users\lol\AppData\Local\pip\Cache\wheels\f6\42\5b\432471ce1e0a21eddd7593c15a7d5c5ef11c792cbeaa4d49a7
Successfully built numpy
Installing collected packages: numpy
Successfully installed numpy-1.17.2
```
```
python_d -c "import numpy as np; print(np.get_include())" | tee eet
Traceback (most recent call last):
File "C:\pro\py\37d\lib\site-packages\numpy\core\__init__.py", line 17, in <module>
from . import multiarray
File "C:\pro\py\37d\lib\site-packages\numpy\core\multiarray.py", line 14, in <module>
from . import overrides
File "C:\pro\py\37d\lib\site-packages\numpy\core\overrides.py", line 7, in <module>
from numpy.core._multiarray_umath import (
ImportError: Module use of python37.dll conflicts with this version of Python.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\pro\py\37d\lib\site-packages\numpy\__init__.py", line 142, in <module>
from . import core
File "C:\pro\py\37d\lib\site-packages\numpy\core\__init__.py", line 47, in <module>
raise ImportError(msg)
ImportError:
IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the numpy c-extensions failed.
- Try uninstalling and reinstalling numpy.
- If you have already done that, then:
1. Check that you expected to use Python3.7 from "C:\pro\py\37d\python_d.exe",
and that you have no directories in your PATH or PYTHONPATH that can
interfere with the Python and numpy version "1.17.2" you're trying to use.
2. If (1) looks fine, you can open a new issue at
https://github.com/numpy/numpy/issues. Please include details on:
- how you installed Python
- how you installed numpy
- your operating system
- whether or not you have multiple versions of Python installed
- if you built from source, your compiler versions and ideally a build log
- If you're working with a numpy git repository, try `git clean -xdf`
(removes all files not under version control) and rebuild numpy.
Note: this error has many possible causes, so please don't comment on
an existing issue about this - open a new one instead.
Original error was: Module use of python37.dll conflicts with this version of Python.
```
This could be related to this https://github.com/cython/cython/issues/3185
Answers:
username_1: It seems the non-cython `_multiarray_umath` dll linked to the non-debug library. Maybe you need to set some environment variables to link to the correct import library?
Status: Issue closed
username_0: Could you elaborate on these "environment variables"?
username_1: We would have to dissect the build.log. I think you need to run something like `pip install --no-clean numpy` then search for the commands used to link numpy. I was referring to the LIB env variable. You should be linking to python37_d.dll, not python37.dll
username_0: I'm pretty sure distutils handles this appropriately
username_1: The error message when importing `_multiarray_umath` seems to indicate something is wrong with linking `_multiarray_umath`:
```
Original error was: Module use of python37.dll conflicts with this version of Python
```
Otherwise why would you be getting that error? Something is wrong; otherwise, as you say, it would have been handled appropriately. But it wasn't. The place to find the link command used is in the build logs.
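One hedged way to check which runtime DLL the built extension actually imports (the path is illustrative; `dumpbin` ships with MSVC, and a debug build should depend on python37_d.dll, not python37.dll):
```
REM illustrative path - point this at the actual built .pyd
dumpbin /dependents C:\pro\py\37d\lib\site-packages\numpy\core\_multiarray_umath.pyd
```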
username_2: Closing due to inactivity. If the issue persists please reopen with the current information about the system/installation process.
Status: Issue closed
|
Azure/azure-sdk-for-python | 533507570 | Title: [textanalytics] Determine if detect_languages should return a single language or list of language
Question:
username_0: Currently the service will return only one language even with multi-language text input. It seems a little unclear that we return a `list[DetectedLanguage]` but that list will only ever have a single item.
We could change this to return a single language, but if the service ever pursues returning multiple languages per text input, it would be a breaking service API change to make it back into a collection.
Answers:
username_0: This will change for preview 2 of the service. Detected languages will return a single language.
Status: Issue closed
username_0: Closed with #9630 |
saltstack/salt | 88405448 | Title: Minion not connecting to saltmaster
Question:
username_0: [root@cubietruck ~]# salt --versions-report
Salt: 2015.5.2
Python: 2.7.10 (default, May 30 2015, 02:17:57)
Jinja2: 2.7.3
M2Crypto: 0.22
msgpack-python: 0.4.6
msgpack-pure: Not Installed
pycrypto: 2.6.1
libnacl: Not Installed
PyYAML: 3.11
ioflo: Not Installed
PyZMQ: 14.6.0
RAET: Not Installed
ZMQ: 4.1.1
Mako: Not Installed
[root@cubietruck ~]# uname -a
Linux cubietruck 4.0.5-1-ARCH #1 SMP Mon Jun 8 19:03:28 MDT 2015 armv7l GNU/Linux
[root@cubietruck ~]#
root@machine:~# salt-call --versions-report
Salt: 2015.5.0
Python: 2.7.9 (default, Apr 2 2015, 15:33:21)
Jinja2: 2.7.3
M2Crypto: 0.21.1
msgpack-python: 0.4.2
msgpack-pure: Not Installed
pycrypto: 2.6.1
libnacl: Not Installed
PyYAML: 3.11
ioflo: Not Installed
PyZMQ: 14.4.1
RAET: Not Installed
ZMQ: 4.0.5
Mako: 1.0.0
Debian source package: 2015.5.0+ds-1utopic1
root@machine:~# uname -a
Linux machine 3.19.0-20-generic #20-Ubuntu SMP Fri May 29 10:10:47 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
root@machine:~#
From the minion:
```console
root@machine:~# salt-minion -l debug
[DEBUG ] Reading configuration from /etc/salt/minion
[DEBUG ] Using cached minion ID from /etc/salt/minion_id: machine
[DEBUG ] Configuration file path: /etc/salt/minion
[INFO ] Setting up the Salt Minion "machine"
[DEBUG ] Created pidfile: /var/run/salt-minion.pid
[DEBUG ] Reading configuration from /etc/salt/minion
[DEBUG ] Attempting to authenticate with the Salt Master at 192.168.1.87
[DEBUG ] Initializing new SAuth for ('/etc/salt/pki/minion', 'machine', 'tcp://192.168.1.87:4506')
[INFO ] SaltReqTimeoutError: after 60 seconds. (Try 1 of 7)
[INFO ] SaltReqTimeoutError: after 60 seconds. (Try 2 of 7)
[INFO ] SaltReqTimeoutError: after 60 seconds. (Try 3 of 7)
^C[WARNING ] Stopping the Salt Minion
[WARNING ] Exiting on Ctrl-c
[INFO ] The salt minion is shut down
root@machine:~#
```
Salt-key is not showing the new minion.
However with tcpdump I do see traffic entering port 4506 on the saltmaster.
Is this a regression that I read about from some time back?
Answers:
username_1: @username_0, is there a firewall on the master that is blocking traffic from the minion?
username_0: @username_1 No there is not:
```console
[root@cubietruck ~]# iptables -L -n
iptables v1.4.21: can't initialize iptables table `filter': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
[root@cubietruck ~]#
```
username_2: @username_0 can you telnet into port 4506 from the minion?
It seems to me there must be something environment-specific going on, because there are many thousand minions running on 2015.5.2 these days without problem.
username_0: @username_2 Yes, I can:
```console
me@minion:~$ telnet 192.168.1.87 4506
Trying 192.168.1.87...
Connected to 192.168.1.87.
Escape character is '^]'.
^]
telnet> q
Connection closed.
me@minion:~$
```
I also reinstalled the master and the minion and removed the directories in /var/ & /etc/ manually. Didn't have an effect.
I could do more tests if you have any; it's not a prod environment.
username_2: That is....very strange. Can you please include your minion and master config?
username_0: Minion
----
```console
me@minion:~$ cat /etc/salt/minion
##### Primary configuration settings #####
##########################################
# This configuration file is used to manage the behavior of the Salt Minion.
# With the exception of the location of the Salt Master Server, values that are
# commented out but have an empty line after the comment are defaults that need
# not be set in the config. If there is no blank line after the comment, the
# value is presented as an example and is not the default.
# Per default the minion will automatically include all config files
# from minion.d/*.conf (minion.d is a directory in the same directory
# as the main minion config file).
#default_include: minion.d/*.conf
# Set the location of the salt master server. If the master server cannot be
# resolved, then the minion will fail to start.
master: 192.168.1.87
# If multiple masters are specified in the 'master' setting, the default behavior
# is to always try to connect to them in the order they are listed. If random_master is
# set to True, the order will be randomized instead. This can be helpful in distributing
# the load of many minions executing salt-call requests, for example, from a cron job.
# If only one master is listed, this setting is ignored and a warning will be logged.
#random_master: False
# Set whether the minion should connect to the master via IPv6:
#ipv6: False
# Set the number of seconds to wait before attempting to resolve
# the master hostname if name resolution fails. Defaults to 30 seconds.
# Set to zero if the minion should shutdown and not retry.
# retry_dns: 30
# Set the port used by the master reply and authentication server.
#master_port: 4506
# The user to run salt.
#user: root
# Specify the location of the daemon process ID file.
#pidfile: /var/run/salt-minion.pid
# The root directory prepended to these options: pki_dir, cachedir, log_file,
# sock_dir, pidfile.
#root_dir: /
# The directory to store the pki information in
#pki_dir: /etc/salt/pki/minion
# Explicitly declare the id for this minion to use, if left commented the id
# will be the hostname as returned by the python call: socket.getfqdn()
# Since salt uses detached ids it is possible to run multiple minions on the
# same machine but with different ids, this can be useful for salt compute
# clusters.
#id:
# Append a domain to a hostname in the event that it does not exist. This is
[Truncated]
#tcp_keepalive_cnt: -1
# How often, in seconds, to send keepalives after the first one. Default -1 to
# use OS defaults, typically 75 seconds on Linux, see
# /proc/sys/net/ipv4/tcp_keepalive_intvl.
#tcp_keepalive_intvl: -1
###### Windows Software settings ######
############################################
# Location of the repository cache file on the master:
#win_repo_cachefile: 'salt://win/repo/winrepo.p'
###### Returner settings ######
############################################
# Which returner(s) will be used for minion's result:
#return: mysql
me@minion:~$
```
username_2: Maybe I should have had you grep for uncommented lines. As far as I can see you're not setting any config values in the master, and you're only setting `master: ` in the minion, correct? Now I'm even more mystified.
username_0: @username_2 Correct.
username_0: @username_2 Well, since you were even baffled, I had a closer look. I found it very strange too.
```console
[root@cubietruck ~]# ps -ef|grep -i salt
root 2974 1 0 Jun15 ? 00:00:06 /usr/bin/python2 /usr/bin/salt-master
root 2981 2974 2 Jun15 ? 02:32:56 /usr/bin/python2 /usr/bin/salt-master
root 2982 2974 0 Jun15 ? 00:00:00 /usr/bin/python2 /usr/bin/salt-master
root 2983 2974 0 Jun15 ? 00:00:00 /usr/bin/python2 /usr/bin/salt-master
root 2984 2974 0 Jun15 ? 00:00:00 /usr/bin/python2 /usr/bin/salt-master
root 2989 2984 0 Jun15 ? 00:00:09 /usr/bin/python2 /usr/bin/salt-master
root 2990 2984 0 Jun15 ? 00:00:09 /usr/bin/python2 /usr/bin/salt-master
root 2991 2984 0 Jun15 ? 00:00:09 /usr/bin/python2 /usr/bin/salt-master
root 2994 2984 0 Jun15 ? 00:00:09 /usr/bin/python2 /usr/bin/salt-master
root 2997 2984 0 Jun15 ? 00:00:09 /usr/bin/python2 /usr/bin/salt-master
root 3000 2984 99 Jun15 ? 3-20:16:53 /usr/bin/python2 /usr/bin/salt-master
root 16114 16090 0 20:47 pts/0 00:00:00 grep -i salt
[root@cubietruck ~]# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:5355 0.0.0.0:* LISTEN 178/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 176/sshd
tcp 0 0 0.0.0.0:4505 0.0.0.0:* LISTEN 2982/python2
tcp 0 0 0.0.0.0:4506 0.0.0.0:* LISTEN 3000/python2
tcp6 0 0 :::5355 :::* LISTEN 178/systemd-resolve
tcp6 0 0 :::22 :::* LISTEN 176/sshd
udp 0 0 0.0.0.0:5355 0.0.0.0:* 178/systemd-resolve
udp6 0 0 :::5355 :::* 178/systemd-resolve
[root@cubietruck ~]#
[root@cubietruck master]# strace -fp 3000
Process 3000 attached with 7 threads
[pid 3010] epoll_wait(18, <unfinished ...>
[pid 3009] epoll_wait(16, <unfinished ...>
[pid 3008] epoll_wait(14, <unfinished ...>
[pid 3007] epoll_wait(12, <unfinished ...>
[pid 3006] epoll_wait(10, <unfinished ...>
[pid 3005] epoll_wait(8, <unfinished ...>
[pid 3000] poll([{fd=19, events=POLLIN}, {fd=20, events=POLLIN}], 2, 0) = 0 (Timeout)
[pid 3000] poll([{fd=19, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid 3000] poll([{fd=20, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid 3000] clock_gettime(CLOCK_MONOTONIC, {336708, 972193035}) = 0
[pid 3000] poll([{fd=19, events=POLLIN}, {fd=20, events=POLLIN}], 2, 0) = 0 (Timeout)
[pid 3000] poll([{fd=19, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid 3000] poll([{fd=20, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid 3000] clock_gettime(CLOCK_MONOTONIC, {336708, 974342915}) = 0
[pid 3000] poll([{fd=19, events=POLLIN}, {fd=20, events=POLLIN}], 2, 0) = 0 (Timeout)
[pid 3000] poll([{fd=19, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid 3000] poll([{fd=20, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid 3000] clock_gettime(CLOCK_MONOTONIC, {336708, 977157754}) = 0
<snip>
.......
</snip>
[pid 3000] poll([{fd=19, events=POLLIN}, {fd=20, events=POLLIN}], 2, 0) = 0 (Timeout)
[pid 3000] poll([{fd=19, events=POLLIN}], 1, 0) = 0 (Timeout)
[pid 3000] poll([{fd=20, events=POLLIN}], 1, 0) = 0 (Timeout)
^C
Process 3005 detached
Process 3006 detached
Process 3007 detached
Process 3008 detached
[Truncated]
[pid 206] <... poll resumed> ) = 0 (Timeout)
[pid 206] poll([{fd=19, events=POLLIN}, {fd=20, events=POLLIN}], 2, -1
Process 206 detached
<detached ...>
Process 211 detached
Process 212 detached
Process 213 detached
Process 214 detached
Process 215 detached
Process 216 detached
[root@cubietruck ~]# salt-key
Accepted Keys:
Denied Keys:
Unaccepted Keys:
minion
Rejected Keys:
[root@cubietruck ~]#
```
So the reboot made everything work again, although it had been rebooted before. I'm not sure what the cause of the weird strace output before the reboot was, but I don't think it has anything to do with saltstack, since it works correctly now.
Status: Issue closed
username_2: Very strange! Glad you got it working, though! Keep us posted if it happens again. |
cypress-io/cypress | 595361040 | Title: Unable to connect to server in different environment
Question:
username_0: I am trying to run a Cypress test pointing to a different environment.
1. When I run my local server and then run Cypress (i.e. Cypress is pointing at `http://localhost:xxxx`), it works fine.
Problem:
I need to run the Cypress command without running my local server, since I have multiple applications that need to run these commands.
local url: `http://localhost:xxxx/app1/` & `http://localhost:xxxx/app2/`
staging url:
application 1: http://test.test.com/app1/
application 2: https://test.test.com/app2
Both applications are different but share the same domain, and both need Cypress tests running.
I have tried switching the baseUrl as shown in https://docs.cypress.io/api/plugins/configuration-api.html#Switch-between-multiple-configuration-files
It did not work. When I do `cypress run --env configFile=staging`, it shows me the below error:
<img width="512" alt="Screen Shot 2020-04-06 at 3 12 35 PM" src="https://user-images.githubusercontent.com/32438432/78596054-1fb9b080-7819-11ea-8ec5-32ffdcaa4981.png">
Please let me know what the potential issue could be.
I assume if I move the BDD run into a CI job it will give me the same error.
Thanks in advance
Answers:
username_1: The server defined in your `baseUrl` is not running - so if you visit the `baseUrl` itself outside of Cypress - it will not open. You need to have your server defined as the `baseUrl` up and running before cypress runs.
I can't be sure which server it's trying to run since it's not specified in the error message. And I can't be sure if you've defined the baseUrl switching correctly since you didn't include that code.
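For reference, the baseUrl-switching pattern from the configuration-api doc linked above looks roughly like this (a sketch; the config folder layout and file names are assumptions, since the asker's actual code wasn't shared):
```js
// cypress/plugins/index.js
const fs = require('fs-extra');
const path = require('path');

function getConfigurationByFile(file) {
  const pathToConfigFile = path.resolve('cypress/config', `${file}.json`);
  return fs.readJson(pathToConfigFile); // e.g. staging.json sets baseUrl
}

module.exports = (on, config) => {
  const file = config.env.configFile || 'local';
  return getConfigurationByFile(file);
};
```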
Make sure your server is running - we recommend using these npm modules to help wait for the server.
- wait-on https://www.npmjs.com/package/wait-on
- start-server-and-test https://github.com/bahmutov/start-server-and-test
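For example, a hedged `package.json` wiring of the second module (the script names, server command, and port are assumptions):
```json
{
  "scripts": {
    "start": "node server.js",
    "cy:run": "cypress run",
    "test:e2e": "start-server-and-test start http://localhost:8080 cy:run"
  }
}
```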
Issues in our GitHub repo are reserved for potential bugs or feature requests. This issue will be closed since it appears to be neither a bug nor a feature request.
We recommend questions relating to how to use Cypress be asked in our [community chat](https://gitter.im/cypress-io/cypress). Also try searching our [existing GitHub issues](https://gitter.im/cypress-io/cypress/issues), reading through our [documentation](https://docs.cypress.io), or searching [Stack Overflow](https://stackoverflow.com/questions/tagged/cypress) for relevant answers.
Status: Issue closed
|
ibis-project/ibis | 296733768 | Title: More informative RelationError in case of overlapping keys
Question:
username_0: 'Joined tables have overlapping names ...' -> 'Joined tables `table1` and `table2` have overlapping names ...'
Answers:
username_1: https://github.com/ibis-project/ibis/blob/master/ibis/expr/operations.py#L1623
@username_0 is `table1` and `table2` a fixed text?
Status: Issue closed
username_2: This will go away when we de-`materialize` ourselves |
dotnet/roslyn | 116178649 | Title: Require semicolon after embedded statements
Question:
username_0: e.g. this is allowed but should not be
```C#
foreach (var proc in Process.GetProcesses())
. Console.WriteLine(proc.ProcessName)
```
Answers:
username_0: FYI @username_2
username_1: @username_2, thoughts?
username_2: Yes, semicolon should be required, since foreach is not an expression.
Status: Issue closed
username_3: Fixed with #17668 |
jlippold/tweakCompatible | 539933036 | Title: `IconState` working on iOS 13.3
Question:
username_0: ```
{
"packageId": "me.nepeta.iconstate",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "me.nepeta.iconstate",
"deviceId": "iPhone10,6",
"url": "http://cydia.saurik.com/package/me.nepeta.iconstate/",
"iOSVersion": "13.3",
"packageVersionIndexed": true,
"packageName": "IconState",
"category": "Tweaks",
"repository": "Nepeta (MIRROR)",
"name": "IconState",
"installed": "0.1.0",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "me.nepeta.iconstate",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.5",
"shortDescription": "Preserve custom icon layouts between reboots (If you can't see the depiction, please use Cydia or Zebra).",
"latest": "0.1.0",
"author": "Nepeta",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": ""
}
```<issue_closed>
Status: Issue closed |
WoWManiaUK/Blackwing-Lair | 604753348 | Title: [Quest] The Forest Heart - Ashenvale
Question:
username_0: Quest: https://www.wowhead.com/quest=13796/the-forest-heart
**What is happening:**
1. Killing the Severed Keeper / Severed Druid does not drop Untainted Spirits.
2. The Forest Heart is also not present in the area.
**What should happen:**
1. Killing the Severed Keeper / Severed Druid will drop Untainted Spirits. 7 of this Untainted Spirit are required in order to create the Power of Nature.
2. The Forest Heart is present in the area in order to be harvested.
Answers:
username_1: bq next update |
Dyalog/link | 618213690 | Title: Normalise results of Add, Break, Create, Export, Fix, Import, List, Notify, Refresh
Question:
username_0: At the moment they output unspecified strings.
Answers:
username_0: Create ought to return the ⎕SE.Link.Links namespace, with its nice display form. Errors on load should go through the warning channel (see issue #125)
username_0: We should also decide which problems ⎕SIGNAL errors and which problems are described in the returned text.
username_0: Trailing slash is now reserved for future extension
Status: Issue closed
username_0: All fixed in link v2.1-beta1 : https://github.com/Dyalog/link/commit/278a25e3341d82d033f2b3b0cfaa4b55965e13fd
Error messages do provide newline-separated lists of incriminated names. |
sp614x/optifine | 657783976 | Title: Presets
Question:
username_0: Have the ability to use/create custom presets, i.e. a performance/fps based preset where everything is optimized for smooth gameplay, or a Max quality preset that instantly sets all settings to be quality based. You would be able to name and create your own presets, and have them saved for later use.
Answers:
username_1: this is not an issue
username_0: Wait you're right. I can delete this, but do you know where people can suggest changes to Optifine? |
waynecz/dadda-translate-crx | 314400423 | Title: CSS issue in the vocabulary book
Question:
username_0: (screenshot omitted)
Answers:
username_1: Also, the Chinese text color in the vocabulary book is quite gray; against the white background it's hard to read clearly.
Status: Issue closed
username_2: [Fixed in v1.0.1](https://github.com/username_2/dadda-translate-crx/commit/deb2f96650f08b849bcb4089a5436e3a906c7713) |
kubernetes/test-infra | 314290477 | Title: 'blunderbuss' plugin doesn't work if reviewers are missing in OWNERS
Question:
username_0: After investigating why 'blunderbuss' plugin wasn't working for the kube-deploy repo, we realized that the issue is that the OWNERS file had "approvers" but no "reviewers" section. After adding "reviewers", the issue got fixed.
This issue is to suggest that, in absence of "reviewers", "approvers" are used instead rather than failing to request reviews.
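For illustration (usernames are placeholders), the fix amounted to adding a `reviewers` block alongside the existing `approvers` in OWNERS:
```yaml
approvers:
  - alice
  - bob
reviewers:   # blunderbuss reads this list; it was missing before
  - alice
  - bob
```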
cc @spiffxp @kubernetes/kube-deploy-maintainers
Answers:
username_1: /kind feature
/area prow
/cc @username_2 @fejta
username_2: @grantr is working on this. It sounds like the plan is to either allow blunderbuss to be configured to use the approvers list as the reviewers list, or automatically detect that a file has no reviewers and assign approvers instead.
adaltas/node-csv | 652170810 | Title: difference in file size - No changes during transformation
Question:
username_0: I have the below code where I am reading from a CSV and writing to another CSV; in most of my cases the CSV can contain thousands of records.
I want to transform the data before writing to another file, but as a test I ran the code and see that there are slight differences between source and destination files even without changing anything in the data.
```js
for(const m of metadata) {
tempm = m;
fname = path;
const pipelineAsync = promisify(pipeline);
if(m.path) {
await pipelineAsync(
fs.createReadStream(m.path),
csv.parse({delimiter: '\t', columns: true}),
csv.transform((input) => {
return Object.assign({}, input);
}),
csv.stringify({header: true, delimiter: '\t'}),
fs.createWriteStream(fname, {encoding: 'utf16le'})
)
let nstats = fs.statSync(fname);
tempm['transformedPath'] = fname;
tempm['transformed'] = true;
tempm['t_size_bytes'] = nstats.size;
}
}
```
I see that for example,
file a: the source file size is `895631` while after copying destination file size is `898545`
file b: the source file size is `51388` while after copying destination file size is `52161`
file c: the source file size is `13666` while after copying destination file size is `13587`
But when I do not use transform, the sizes match; for example, this code produces exactly the same file sizes on both source and dest:
```js
for(const m of metadata) {
tempm = m;
fname = path;
const pipelineAsync = promisify(pipeline);
if(m.path) {
await pipelineAsync(
fs.createReadStream(m.path),
/*csv.parse({delimiter: '\t', columns: true}),
csv.transform((input) => {
return Object.assign({}, input);
}),
csv.stringify({header: true, delimiter: '\t'}),*/
fs.createWriteStream(fname, {encoding: 'utf16le'})
)
let nstats = fs.statSync(fname);
tempm['transformedPath'] = fname;
tempm['transformed'] = true;
tempm['t_size_bytes'] = nstats.size;
}
}
```
Can anyone please help me identify what options I need to pass to the csv transformation so that the copy happens correctly? I am doing this test to ensure I am not losing any data in large files.
Thanks.
Answers:
username_1: It may be a CRLF terminator issue. What is the output of `file *.csv`? Do you see something like the following?
```
150-md5.csv: ASCII text
150.csv: ASCII text
27.csv: ASCII text
275.csv: ASCII text
52.csv: ASCII text
Raw_gene_counts_matrix.csv: UTF-8 Unicode (with BOM) text, with very long lines, with CRLF line terminators
genes.csv: UTF-8 Unicode (with BOM) text, with CRLF line terminators
md52.csv: ASCII text
md5s.csv: ASCII text, with CRLF line terminators
#
```
Try `diff --strip-trailing-cr ` to Strip trailing carriage return on input (to see if files are identical except for CR terminator).
```
username_2: Closing due to lack of activity, feel free to re-open with a reproducible sample
Status: Issue closed
|
tafli/MeritServer | 201560499 | Title: Should return error messages as some kind of json
Question:
username_0: When POSTing a new transaction for an invalid user the request results in a 400 Bad Request as expected.
The error message is sent as plain text.
I think it would be beter to return some kind of json. This can also be reflected in the swagger as an separate "error model". |
strimzi/strimzi-kafka-operator | 420015544 | Title: Failed to set memory requirements in kafka spec
Question:
username_0: I changed the kafka spec to include the following:
```
spec:
kafka:
resources:
requests:
cpu: 500m
memory: 3500Mi
```
And I got the following error:
```2019-03-12 14:20:34 ERROR AbstractAssemblyOperator:176 - Reconciliation #85(timer) Kafka(strimzi/strimzi-cluster): createOrUpdate failed
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://k8ssetupdev-c13169a8.hcp.westeurope.azmk8s.io/apis/apps/v1/namespaces/strimzi/statefulsets. Message: StatefulSet in version "v1" cannot be handled as a StatefulSet: v1.StatefulSet.Spec: v1.StatefulSetSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.VolumeMounts: []v1.VolumeMount: Resources: v1.ResourceRequirements.Requests: unmarshalerDecoder: unable to parse quantity's suffix, error found in #10 byte of ...|"3670016K"}},"volume|..., bigger context ...|s":{},"requests":{"cpu":"500m","memory":"3670016K"}},"volumeMounts":[{"mountPath":"/var/lib/kafka/da|.... Received status: Status(apiVersion=v1, code=400, details=null, kind=Status, message=StatefulSet in version "v1" cannot be handled as a StatefulSet: v1.StatefulSet.Spec: v1.StatefulSetSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.VolumeMounts: []v1.VolumeMount: Resources: v1.ResourceRequirements.Requests: unmarshalerDecoder: unable to parse quantity's suffix, error found in #10 byte of ...|"3670016K"}},"volume|..., bigger context ...|s":{},"requests":{"cpu":"500m","memory":"3670016K"}},"volumeMounts":[{"mountPath":"/var/lib/kafka/da|..., metadata=ListMeta(_continue=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=BadRequest, status=Failure, additionalProperties={}).
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:478) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:417) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:381) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:344) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:227) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:812) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:382) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.strimzi.operator.common.operator.resource.AbstractResourceOperator.internalCreate(AbstractResourceOperator.java:165) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.strimzi.operator.cluster.operator.resource.StatefulSetOperator.internalCreate(StatefulSetOperator.java:271) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.strimzi.operator.cluster.operator.resource.StatefulSetOperator.internalCreate(StatefulSetOperator.java:39) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.strimzi.operator.common.operator.resource.AbstractResourceOperator.lambda$reconcile$0(AbstractResourceOperator.java:91) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.vertx.core.impl.ContextImpl.lambda$executeBlocking$1(ContextImpl.java:273) ~[cluster-operator-0.11.1.jar:0.11.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_191]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_191]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [cluster-operator-0.11.1.jar:0.11.1]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]```
Answers:
username_1: @username_2 Any idea? This looks like some issue when converting the values between units.
@username_0 While this looks like a bug, I think you should be able to try to use `Gi` - e.g. `memory: 3Gi` or `memory: 4Gi` ... I'm not sure it would take decimals to set 3.5Gi, but integers normally work fine.
username_2: Using K as the suffix should be fine (see, e.g. [docs](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-memory)). The cluster operator does reformat the memory given in the resource, but it converts it to a plain number, not a number in kilobytes, so I cannot explain how the number "3670016K" got into the request.
Looking at the exception message in detail it says `v1.Container.VolumeMounts: []v1.VolumeMount: Resources: v1.ResourceRequirements.Requests` which looks like the resources is somehow being interpreted as a child of a volume mount, though it's unclear how that could happen assuming the cluster operator is the only thing touching the statefulset. @username_0 could you post the full JSON of the statefulset just to confirm what the structure that it's trying to parse actually is?
username_0: I have made a git repo containing some minimal setup that gives the error:
https://github.com/username_0/strimzi-issue
Running it with 3.5Gi gives the following error:
```
2019-03-13 16:42:04 ERROR AbstractAssemblyOperator:176 - Reconciliation #2(watch) Kafka(default/strimzi-cluster): createOrUpdate failed
java.lang.IllegalArgumentException: Invalid memory suffix: .5Gi
at io.strimzi.api.kafka.model.Quantities.memoryFactor(Quantities.java:75) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.strimzi.api.kafka.model.Quantities.parseMemory(Quantities.java:28) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.strimzi.api.kafka.model.CpuMemory.memoryAsLong(CpuMemory.java:46) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.strimzi.operator.cluster.model.ModelUtils.resources(ModelUtils.java:187) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.strimzi.operator.cluster.model.KafkaCluster.getContainers(KafkaCluster.java:853) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.strimzi.operator.cluster.model.KafkaCluster.generateStatefulSet(KafkaCluster.java:655) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.strimzi.operator.cluster.operator.assembly.KafkaAssemblyOperator$ReconciliationState.kafkaStatefulSet(KafkaAssemblyOperator.java:1485) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.strimzi.operator.cluster.operator.assembly.KafkaAssemblyOperator.lambda$createOrUpdate$41(KafkaAssemblyOperator.java:206) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.vertx.core.Future.lambda$compose$1(Future.java:265) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.vertx.core.impl.FutureImpl.tryComplete(FutureImpl.java:125) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.vertx.core.impl.FutureImpl.complete(FutureImpl.java:86) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.vertx.core.impl.FutureImpl.handle(FutureImpl.java:151) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.vertx.core.impl.FutureImpl.handle(FutureImpl.java:18) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.vertx.core.impl.FutureImpl.tryComplete(FutureImpl.java:125) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.vertx.core.impl.FutureImpl.complete(FutureImpl.java:86) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.vertx.core.Future.lambda$map$3(Future.java:328) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.vertx.core.impl.FutureImpl.tryComplete(FutureImpl.java:125) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.vertx.core.impl.FutureImpl.complete(FutureImpl.java:86) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.vertx.core.impl.FutureImpl.handle(FutureImpl.java:151) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.vertx.core.impl.FutureImpl.handle(FutureImpl.java:18) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.vertx.core.impl.FutureImpl.setHandler(FutureImpl.java:79) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.vertx.core.impl.ContextImpl.lambda$null$0(ContextImpl.java:289) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.vertx.core.impl.ContextImpl.lambda$wrapTask$2(ContextImpl.java:339) ~[cluster-operator-0.11.1.jar:0.11.1]
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) [cluster-operator-0.11.1.jar:0.11.1]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404) [cluster-operator-0.11.1.jar:0.11.1]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463) [cluster-operator-0.11.1.jar:0.11.1]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886) [cluster-operator-0.11.1.jar:0.11.1]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [cluster-operator-0.11.1.jar:0.11.1]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
```
username_0: @username_2 We don't even have a statefulset at the point it is failing right now. Hopefully my repo with an example can help you guys further
username_1: I can confirm this ... it's interesting ... `500Mi` seems to get converted to `524288K`. `K` should be a valid unit according to the docs, but my OpenShift 3.9 rejects it as an invalid suffix (=unit). `1.1Gi` is also interesting. First of all, it seems we do not support the decimal point here. But also `1.1Gi` is `1181116006.4` bytes ... and obviously there is no such thing as `0.4` bytes :-). My OpenShift 3.9 will translate it as `1181116006400m`. I think this is something we should be able to deal with ideally.
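To make that `m` concrete: Kubernetes canonicalizes quantities that do not reduce to a whole number of bytes into milli-units, where 1m is 1/1000 of a byte, which is where `1181116006400m` comes from. A minimal Kotlin sketch of that conversion (an illustration only, not the operator's or Kubernetes' actual code):

```kotlin
import java.math.BigDecimal

// Canonicalize a decimal binary quantity: whole byte counts stay a plain
// number, fractional byte counts become milli-units (sub-milli is truncated).
fun canonicalize(value: String, binaryPower: Int): String {
    val bytes = BigDecimal(value).multiply(BigDecimal(1024).pow(binaryPower))
    return if (bytes.stripTrailingZeros().scale() <= 0) {
        bytes.toBigInteger().toString()
    } else {
        bytes.multiply(BigDecimal(1000)).toBigInteger().toString() + "m"
    }
}

fun main() {
    println(canonicalize("1.1", 3)) // 1.1Gi -> 1181116006400m
    println(canonicalize("3", 3))   // 3Gi   -> 3221225472
}
```

So any request with a fractional byte count will round-trip through the API server with an `m` suffix, which is why the parser needs to accept it.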
Status: Issue closed
username_3: Hi,
I ran into the same issue using Strimzi 0.17.0. This is the Kafka CR:
```
spec:
kafka:
resources:
limits:
cpu: '2'
memory: 2.85Gi
```
The StatefulSet can be created initially but the reconciliation fails afterwards:
```
...
resources:
limits:
cpu: '2'
memory: 3060164198400m
```
```
java.lang.IllegalArgumentException: Invalid memory suffix: m
at io.strimzi.operator.cluster.operator.resource.Quantities.memoryFactor(Quantities.java:79)
```
I expected this to be working as this issue has been closed with a fix.
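For reference, 2.85Gi is 3060164198.4 bytes, which the API server canonicalizes to `3060164198400m` (1m being 1/1000 of a byte), exactly the value in the StatefulSet above, and that milli suffix is what `Quantities.memoryFactor` then rejects. A parse that tolerated it could look roughly like this minimal Kotlin sketch (my own illustration, not the operator's actual `Quantities` class):

```kotlin
import java.math.BigDecimal

// Parse a Kubernetes-style memory quantity into bytes, including the "m"
// (milli) suffix from the error above. Illustrative only; truncates fractions.
fun parseMemoryBytes(quantity: String): Long {
    val match = Regex("""^([0-9.]+)([A-Za-z]*)$""").find(quantity)
        ?: error("Bad quantity: $quantity")
    val (number, suffix) = match.destructured
    val factor = when (suffix) {
        ""       -> BigDecimal.ONE
        "m"      -> BigDecimal("0.001")      // milli-bytes
        "k", "K" -> BigDecimal(1000)
        "M"      -> BigDecimal(1000).pow(2)
        "G"      -> BigDecimal(1000).pow(3)
        "Ki"     -> BigDecimal(1024)
        "Mi"     -> BigDecimal(1024).pow(2)
        "Gi"     -> BigDecimal(1024).pow(3)
        else     -> error("Invalid memory suffix: $suffix")
    }
    return BigDecimal(number).multiply(factor).toLong()
}

fun main() {
    println(parseMemoryBytes("3060164198400m")) // 3060164198
    println(parseMemoryBytes("2.85Gi"))         // 3060164198
}
```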
username_1: @username_3 I will have a look. TBH, these conversions are a bit of a mess since Kubernetes sometimes decides to convert things differently than you would expect. Maybe as a workaround you can try `3Gi` or maybe `2850Mi` or something like that.
username_1: @username_3 I finally got to this and opened #3729 to fix it.
circe/circe | 225366974 | Title: None
Question:
username_0: This is addressed in the current master of coursier since [this commit](https://github.com/coursier/coursier/commit/13da5e871f28aaa380d63f0268e9b2585b624ba7), which removes the POMs from the `update` report.
There's still possibly an issue on the scalajs side though IMO (it should filter artifacts - sbt itself filters them with the `classpathTypes` setting for example), but you should be fine from the circe repo. |
blakeblackshear/frigate | 1113287146 | Title: [Support]: Integrate freestanding Frigate install with HA (for automation, etc.)
Question:
username_0: ### Describe the problem you are having
I have installed Frigate in a standalone Docker container. Is there a way to integrate it with Home Assistant after the fact, e.g. for automations and entity controls?
Thanks!
### Version
0.9.4
### Frigate config file
```yaml
Not Needed
```
### Relevant log output
```shell
Not Needed
```
### FFprobe output from your camera
```shell
Not Needed
```
### Frigate stats
_No response_
### Operating system
Debian
### Install method
Docker Compose
### Coral version
CPU (no coral)
### Network connection
Wired
### Camera make and model
WyzeCam V3
### Any other information that may be helpful
_No response_
Answers:
username_1: https://docs.frigate.video/integrations/home-assistant |
playgameservices/play-games-plugin-for-unity | 184592976 | Title: Can't Connect to Google Play
Question:
username_0: The green box that says connecting shows up and then the green loading circle spins for a bit and then nothing. My app is not currently on the marketplace, but I've added my email to testing. Why is my email not authenticating?
Answers:
username_1: Same problem is here :(
username_2: which version of plugin you are using?
username_0: The newest version
username_1: Yes......... sorry for late replying :)
username_1: the latest one
username_3: I have the same problem. It used to work on my friend's computer and mine, but after pulling it to my own branch it doesn't work anymore. We have the exact same setup, Android SDK and Java SDK. All settings are the same.
It's driving me crazy... Before pulling it to my branch, I opened it on the development branch, and the source control (SourceTree) immediately gave me two files in the unstaged area. See picture. This might be the cause of the sign-in problems, but I have no idea what these files do.


If someone can explain what's going on... I just can't find an answer to this problem after hours of updating and reinstalling attempts and googling for people with the same problems.
username_4: The green box that says connecting shows up and then the green loading circle spins for a bit and then nothing.
I have the same problem with the latest version of the plugin :(
I read the logcat of my device, and the only thing I can find that is suspicious is this:
```
11-04 10:02:53.652: W/PopupManager(5250): You have not specified a View to use as content view for popups. Falling back to the Activity content view. Note that this may not work as expected in multi-screen environments
11-04 10:02:53.669: W/GamesServiceBroker(1512): Client connected with SDK 8487000, Services 9877230, and Games 37240032
11-04 10:02:53.880: I/art(574): Explicit concurrent mark sweep GC freed 49089(2MB) AllocSpace objects, 16(649KB) LOS objects, 33% free, 14MB/21MB, paused 1.994ms total 178.187ms
11-04 10:02:53.889: I/SignInActivity(5250): Transition from 7 to 8
11-04 10:02:53.909: W/PopupManager(5250): You have not specified a View to use as content view for popups. Falling back to the Activity content view. Note that this may not work as expected in multi-screen environments
11-04 10:02:53.917: W/DataBroker(1512): No player ID found when refreshing
11-04 10:02:53.927: W/GamesServiceBroker(1512): Client connected with SDK 8487000, Services 9877230, and Games 37240032
11-04 10:02:54.225: I/SignInActivity(5250): Transition from 8 to 6
11-04 10:02:54.244: W/PopupManager(5250): You have not specified a View to use as content view for popups. Falling back to the Activity content view. Note that this may not work as expected in multi-screen environments
11-04 10:02:54.264: W/GamesServiceBroker(1512): Client connected with SDK 8487000, Services 9877230, and Games 37240032
11-04 10:02:54.310: I/SignInActivity(5250): Transition from 6 to 7
11-04 10:02:54.330: W/PopupManager(5250): You have not specified a View to use as content view for popups. Falling back to the Activity content view. Note that this may not work as expected in multi-screen environments
11-04 10:02:54.349: W/GamesServiceBroker(1512): Client connected with SDK 8487000, Services 9877230, and Games 37240032
11-04 10:02:54.360: I/SignInActivity(5250): Transition from 7 to 8
11-04 10:02:54.378: W/PopupManager(5250): You have not specified a View to use as content view for popups. Falling back to the Activity content view. Note that this may not work as expected in multi-screen environments
11-04 10:02:54.386: W/DataBroker(1512): No player ID found when refreshing
11-04 10:02:54.398: W/GamesServiceBroker(1512): Client connected with SDK 8487000, Services 9877230, and Games 37240032
```
This log message is repeated many, many times.
Can anyone help us?
username_4: No one can help?
username_3: What helped my team was this: the first person who made the keystore.debug key made a folder and put her key in that folder (in the root of our game folder). Now we all use this same key to connect to the services.
Perhaps that will help you.
username_4: Hmm, I think my problem is different from yours; it's the same as the one in the first post:
```
username_0 commented 13 days ago
The green box that says connecting shows up and then the green loading circle spins for a bit and then nothing.
```
username_5: I experienced the exact same issue. I use plugin version 0.9.35 and Unity 5.3.2f1.
After trying everything the internet could suggest about my issue, I installed Unity 5.4.3f1 and got authentication working.
Hope it helps.
username_4: Hmm, I have updated my Unity to 5.4.3f1 and the issue remains.
The loading screen of Google Play Games is stuck in an infinite loop.
username_6: Got Unity 5.4.3f1 and plugin version 0.9.35. Can't connect to the services (the green box is not showing). Any fix for that?
username_7: Can you share the log of not connecting?
username_5: Can't really right now, I'm done with that project.
--
Best, <NAME>.
Status: Issue closed
username_7: OK - please open another issue with the log if you encounter it again. |
youhackme/Toggle | 223121783 | Title: Symfony\Component\Debug\Exception\FatalThrowableError in GET /scrape/tf/theme
Question:
username_0: ## Error in Toggle
**Symfony\Component\Debug\Exception\FatalThrowableError** in **GET /scrape/tf/theme**
Type error: Too few arguments to function containsInList(), 1 passed in /Users/Shared/devenv/stella-telecom/toggle.me/app/Scrape/Themeforest/Theme.php on line 170 and exactly 2 expected
[View on Bugsnag](https://app.bugsnag.com/toggle/toggle/errors/58f8dea2c7219a0018ff049b?event_id=58f8dea2c7219a0018ff049a&i=gh&m=ci)
## Stacktrace
app/Http/helpers.php:53 - containsInList
app/Scrape/Themeforest/Theme.php:170 - App\Scrape\Themeforest\Theme::App\Scrape\Themeforest\{closure}
app/Scrape/Themeforest/Theme.php:176 - App\Scrape\Themeforest\Theme::App\Scrape\Themeforest\{closure}
app/Repositories/DbScrapeRepositoryAbstract.php:73 - App\Repositories\DbScrapeRepositoryAbstract::chunk
app/Scrape/Themeforest/Theme.php:193 - App\Scrape\Themeforest\Theme::extractThemeAlias
app/Http/Controllers/ThemeController.php:43 - App\Http\Controllers\ThemeController::scrapeThemeAlias
[View full stacktrace](https://app.bugsnag.com/toggle/toggle/errors/58f8dea2c7219a0018ff049b?event_id=58f8dea2c7219a0018ff049a&i=gh&m=ci)
Answers:
username_0: fixed
Status: Issue closed
|
Quitch/Queller-AI | 128678833 | Title: Will sometimes take much longer than necessary routes to attack
Question:
username_0: Queller will sometimes send its troops on looping flanking maneuvers rather than engage along more logical paths.
Status: Issue closed
Answers:
username_0: Attack behaviour of this type is hardcoded and outside the scope of what a mod can change. |
mstssk/sw2dts | 165558023 | Title: YAML Support
Question:
username_0: Any plans for YAML support?
Answers:
username_1: Hi @username_0
YAML support looks easy to me. (ref http://swagger.io/specification/ )
So I will add YAML support in a later version.
refs #7
username_1: I released the YAML support feature in v2.1.1.
thanks!
Status: Issue closed
|
thomaspark/glyphsearch | 125867841 | Title: no `master` branch?
Question:
username_0: This project only has a `gh-pages` branch, which is confusing for potential contributors.
Answers:
username_0: I still think this should be taken into consideration, as the `master` branch is the default in 99.99% of GitHub projects, with or without GitHub Pages. The fix is easy and generic:
```
[remote "origin"]
url = https://github.com/<github_user>/<repo_name>
fetch = +refs/heads/*:refs/remotes/origin/*
push = refs/heads/master:refs/heads/master
push = refs/heads/master:refs/heads/gh-pages
```
Source: [Stackoverflow: "Github: Mirroring gh-pages to master"](http://stackoverflow.com/questions/5807459/github-mirroring-gh-pages-to-master)
username_1: I was using `gh-pages` for personal convenience, but given potential and current contributors, I've added a `master` branch as the default. I'm fine with merging, but mirroring pushes looks handy.
Status: Issue closed
username_0: Great! |
wcdean/go-time-cloud | 117265138 | Title: 18594 - Install and configure CouchDB with Elasticsearch - Ubuntu, Debian
Question:
username_0: Testing and editing for 18594 - Install and configure CouchDB with Elasticsearch - Ubuntu, Debian
Answers:
username_1: Failed.
Step: "Change the ownership permission of CouchDB in different directory with recursive order."
Command:
```
sudo mkdir -p /usr/local/var/log/couchdb
```
Error:
```
chmod: cannot access `/usr/localsudo': No such file or directory
chmod: cannot access `chown': No such file or directory
chmod: cannot access `couchdb:couchdb': No such file or directory
gdcloud@unassigned-hostname:/usr/local/src/apache-couchdb-1.5.0$ sudo mkdir -p /usr/local/var/log/couchdb
sudo: unable to resolve host unassigned-hostname
```
newrelic/newrelic-telemetry-sdk-java | 888622396 | Title: CVE in telemetry-http-okhttp dependency
Question:
username_0: Hello,
There is a CVE (CVE-2020-29582) in the `kotlin-stdlib` version 1.3.72 package that is fixed in version 1.4.21. The latest New Relic `telemetry-http-okhttp` package ([0.12.0](https://github.com/newrelic/newrelic-telemetry-sdk-java/tree/main/telemetry-http-okhttp)) still has a dependency on `kotlin-stdlib` version 1.3.72 based on the `mvn dependency:tree` output below. Could `telemetry-http-okhttp` be updated to use version 1.4.21 of the `kotlin-stdlib` package?
```
[INFO] \- com.newrelic.telemetry:telemetry-http-okhttp:jar:0.12.0:compile
[INFO] \- com.squareup.okhttp3:okhttp:jar:4.8.0:compile
[INFO] +- com.squareup.okio:okio:jar:2.7.0:compile
[INFO] | \- org.jetbrains.kotlin:kotlin-stdlib-common:jar:1.3.70:compile
[INFO] \- org.jetbrains.kotlin:kotlin-stdlib:jar:1.3.72:compile
[INFO] \- org.jetbrains:annotations:jar:13.0:compile
```
Answers:
username_1: The dependency on kotlin-stdlib comes from the `com.squareup.okhttp3:okhttp` dependency.
Currently the `com.squareup.okhttp3:okhttp` dependency is version 4.8.0.
Version 4.8.0 uses Kotlin version 1.3.72
To address the CVE we need to raise the Kotlin version to 1.4.21.
The highest Kotlin version used by a 4.x update of the `com.squareup.okhttp3:okhttp` dependency is 1.4.10 which would not address the CVE.
The latest alpha version (5.x) of the `com.squareup.okhttp3:okhttp` dependency uses Kotlin version 1.4.21, which would address the CVE. When the 5.x version is released, we should update.
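In the meantime, a consumer can usually force the patched stdlib from their own build, since Gradle's default conflict resolution picks the highest requested version. A Gradle (Kotlin DSL) sketch of that workaround; the assumption that kotlin-stdlib 1.4.21 remains binary-compatible with okhttp 4.8.0 is mine and untested:

```kotlin
dependencies {
    implementation("com.newrelic.telemetry:telemetry-http-okhttp:0.12.0")

    // Declaring the patched stdlib directly wins the version conflict
    // against the 1.3.x versions pulled in transitively by okhttp.
    implementation("org.jetbrains.kotlin:kotlin-stdlib:1.4.21")
    implementation("org.jetbrains.kotlin:kotlin-stdlib-common:1.4.21")
}
```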
username_2: We checked in on this again. The okhttp3 project still needs to update to Kotlin 1.4.21; it looks like that is planned for version 5, which is still in alpha.
It's unclear why the issue has been closed:
https://github.com/square/okhttp/issues/6219
https://mvnrepository.com/artifact/com.squareup.okhttp3/okhttp |
GoogleCloudDataproc/initialization-actions | 667235166 | Title: No module named 'ml.dmlc.xgboost4j.scala.spark' error for Rapids init action
Question:
username_0: I created a cluster with the following command (I copied the Rapids init action to my own GCS bucket):
```
export PROJECT=$(gcloud info --format='value(config.project)')
export CLUSTER_NAME=jupyter-gpu-cluster-beta4
export GCS_BUCKET=${PROJECT}-datalake
export REGION=us-central1
export ZONE=us-central1-f
export RAPIDS_SPARK_VERSION=2.x
export RAPIDS_VERSION=1.0.0-Beta4
gcloud beta dataproc clusters create $CLUSTER_NAME \
--region $REGION \
--zone $ZONE \
--image-version 1.4-ubuntu18 \
--master-machine-type n1-standard-4 \
--worker-machine-type n1-highmem-4 \
--worker-accelerator type=nvidia-tesla-t4,count=1 \
--optional-components=ANACONDA,JUPYTER \
--initialization-actions gs://goog-dataproc-initialization-actions-${REGION}/gpu/install_gpu_driver.sh,gs://${GCS_BUCKET}/init-actions/rapids/rapids/rapids.sh \
--metadata gpu-driver-provider=NVIDIA \
--metadata rapids-runtime=SPARK \
--metadata spark-version=2.x \
--metadata spark-rapids-version=Beta4 \
--metadata install-gpu-agent=true \
--bucket $GCS_BUCKET \
--properties "spark:spark.dynamicAllocation.enabled=false,spark:spark.shuffle.service.enabled=false,spark:spark.submit.pyFiles=/usr/lib/spark/jars/xgboost4j-spark_${RAPIDS_SPARK_VERSION}-${RAPIDS_VERSION}.jar" \
--enable-component-gateway \
--subnet=default \
--scopes https://www.googleapis.com/auth/monitoring.write,https://www.googleapis.com/auth/cloud-platform
```
I tried to run this notebook - https://github.com/NVIDIA/spark-xgboost-examples/blob/spark-2/examples/notebooks/python/mortgage-gpu.ipynb
For the cell:
```
from ml.dmlc.xgboost4j.scala.spark import XGBoostClassificationModel, XGBoostClassifier
from ml.dmlc.xgboost4j.scala.spark.rapids import GpuDataReader
```
I get the following error:
```
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-3-845f7cab574b> in <module>
----> 1 from ml.dmlc.xgboost4j.scala.spark import XGBoostClassificationModel, XGBoostClassifier
2 from ml.dmlc.xgboost4j.scala.spark.rapids import GpuDataReader
ModuleNotFoundError: No module named 'ml.dmlc.xgboost4j.scala.spark'
```
This used to work, so I'm not sure what the issue is now.
Answers:
username_1: @username_2 could you take a look?
username_2: Hey Tahir,
These updates to the README to include `RAPIDS_SPARK_VERSION` were added on July 15th. Requiring the spark version metadata when building with Spark 2.x was technically a breaking change, so I reverted this as well as the need for other manual config, and merged this on July 22nd. There should now be no difference in user setup when creating a Dataproc 1.x vs 2.0 cluster.
https://github.com/GoogleCloudDataproc/initialization-actions/issues/788 may be a blocker, but can you try using the updated init action with the following command?
```
export CLUSTER_NAME=<cluster_name>
export GCS_BUCKET=<your bucket for the logs and notebooks>
export REGION=<region>
gcloud dataproc clusters create $CLUSTER_NAME \
--region $REGION \
--image-version 1.4-ubuntu18 \
--master-machine-type n1-standard-4 \
--worker-machine-type n1-highmem-16 \
--worker-accelerator type=nvidia-tesla-t4,count=1 \
--optional-components=ANACONDA,JUPYTER,ZEPPELIN \
--initialization-actions gs://goog-dataproc-initialization-actions-${REGION}/gpu/install_gpu_driver.sh,gs://goog-dataproc-initialization-actions-${REGION}/rapids/rapids.sh \
--metadata gpu-driver-provider=NVIDIA \
--metadata rapids-runtime=SPARK \
--bucket $GCS_BUCKET \
--enable-component-gateway
```
username_2: https://github.com/GoogleCloudDataproc/initialization-actions/pull/790 should fix the problem :) thanks for pointing this out and sorry for the trouble.
You can also safely remove all of the following from your cluster create command as they're now included in the init action:
```
--metadata spark-version=2.x \
--metadata spark-rapids-version=Beta4 \
--properties "spark:spark.dynamicAllocation.enabled=false,spark:spark.shuffle.service.enabled=false,spark:spark.submit.pyFiles=/usr/lib/spark/jars/xgboost4j-spark_${RAPIDS_SPARK_VERSION}-${RAPIDS_VERSION}.jar"
```
username_0: Great, yes, that seems to have fixed the issue.
username_2: Awesome!
Status: Issue closed
|
jlippold/tweakCompatible | 301621861 | Title: `FiveIconDockXI` working on iOS 11.0.3
Question:
username_0: ```
{
"packageId": "com.yourepo.kiiimo.fiveicondockxi",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.yourepo.kiiimo.fiveicondockxi",
"deviceId": "iPhone8,1",
"url": "http://cydia.saurik.com/package/com.yourepo.kiiimo.fiveicondockxi/",
"iOSVersion": "11.0.3",
"packageVersionIndexed": false,
"packageName": "FiveIconDockXI",
"category": "Supports iOS 11",
"repository": "A KiiiMO GC - ARABIC ",
"name": "FiveIconDockXI",
"packageIndexed": false,
"packageStatusExplaination": "This tweak has not been reviewed. Please submit a review if you choose to install.",
"id": "com.yourepo.kiiimo.fiveicondockxi",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.0.4",
"shortDescription": "Ø§Ø¶Ø§ÙØ© 5 Ø£ÙÙÙÙØ§Øª ÙÙ Ø´Ø±ÙØ· Ø§ÙØ¯ÙÙ Ø§ÙØ³ÙÙÙ Five icon dock for iOS 11.",
"latest": "0.0.1-20",
"author": "<NAME>",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": ""
}
```
Status: Issue closed |
dotnet/project-system | 253737654 | Title: When running mstests from VS, and test assembly is referencing core project build is always happening
Question:
username_0: - We have a unit tests project in solution $/DevDiv/Offcycle/WPT/WebToolsExtensions/Main/ProjectSystem.sln (you would need to sync the whole Main folder and run tools\build.cmd from a Dev Cmd line first, then open the solution in VS and build it several times, with and without changes, etc.) referencing the core project ProjectSystem.Web
- every time we click on a test and run it, ProjectSystem.Web is built. Note: if parallel builds are turned ON, the build fails most of the time (#2762) and we cannot run tests at all, so we had to turn parallel builds off.
However, why is anything being built if nothing changed in ProjectSystem.Web? Restore on build is happening, yes, but it should not change anything that would invalidate the project.
Answers:
username_1: Can you do the following:
1. Build the solution once
2. Turn on Tools -> Options -> Projects and Solutions -> .NET Core -> Logging Level -> Verbose?
3. Clear the output window
3. Build ProjectSystem.Web
4. Copy the output window results and paste them here.
username_0: Restoring NuGet packages...
To prevent NuGet from restoring packages during build, open the Visual Studio Options dialog, click on the Package Manager node and uncheck 'Allow NuGet to download missing packages during build.'
------ Build started: Project: Microsoft.VisualStudio.ProjectSystem.Web.15.0, Configuration: Debug Any CPU ------
Microsoft.VisualStudio.ProjectSystem.Web.15.0 -> Z:\dd\WTE\Main\bin\dev15\Debug\Microsoft.VisualStudio.ProjectSystem.Web.15.0.dll
FastUpToDate: Project information is older than current project version, skipping check. (Microsoft.VisualStudio.ProjectSystem.Web)
------ Build started: Project: Microsoft.VisualStudio.ProjectSystem.Web, Configuration: Debug Any CPU ------
C:\Program Files (x86)\Microsoft Visual Studio\Preview\Enterprise\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets(1987,5): warning MSB3277: Found conflicts between different versions of the same dependent assembly that could not be resolved. These reference conflicts are listed in the build log when log verbosity is set to detailed.
Microsoft.VisualStudio.ProjectSystem.Web -> Z:\dd\WTE\Main\bin\dev15\Debug\Microsoft.VisualStudio.ProjectSystem.Web.dll
Done building project "Microsoft.VisualStudio.ProjectSystem.Web.csproj".
========== Build: 2 succeeded, 0 failed, 4 up-to-date, 0 skipped ==========
username_0: This was done with parallel builds off.
username_0: Restoring NuGet packages...
To prevent NuGet from restoring packages during build, open the Visual Studio Options dialog, click on the Package Manager node and uncheck 'Allow NuGet to download missing packages during build.'
1>------ Build started: Project: Microsoft.VisualStudio.ProjectSystem.Web.15.0, Configuration: Debug Any CPU ------
2>------ Build started: Project: Microsoft.VisualStudio.Web.Publish.Contracts, Configuration: Debug Any CPU ------
2>C:\Program Files (x86)\Microsoft Visual Studio\Preview\Enterprise\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets(4222,5): error MSB3021: Unable to copy file "Z:\dd\WTE\Main\\references\Dev15\CPS\Microsoft.VisualStudio.ProjectSystem.VS.dll" to "Z:\dd\WTE\Main\\bin\dev15\Debug\Microsoft.VisualStudio.ProjectSystem.VS.dll". Access to the path 'Z:\dd\WTE\Main\\bin\dev15\Debug\Microsoft.VisualStudio.ProjectSystem.VS.dll' is denied.
1>C:\Program Files (x86)\Microsoft Visual Studio\Preview\Enterprise\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets(4222,5): error MSB3021: Unable to copy file "Z:\dd\WTE\Main\\references\Dev15\MSBuild\Microsoft.Build.dll" to "Z:\dd\WTE\Main\\bin\dev15\Debug\Microsoft.Build.dll". Access to the path 'Z:\dd\WTE\Main\\bin\dev15\Debug\Microsoft.Build.dll' is denied.
1>C:\Program Files (x86)\Microsoft Visual Studio\Preview\Enterprise\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets(4222,5): error MSB3021: Unable to copy file "Z:\dd\WTE\Main\\references\Dev15\Microsoft.VisualStudio.ImageCatalog.dll" to "Z:\dd\WTE\Main\\bin\dev15\Debug\Microsoft.VisualStudio.ImageCatalog.dll". Access to the path 'Z:\dd\WTE\Main\\bin\dev15\Debug\Microsoft.VisualStudio.ImageCatalog.dll' is denied.
1>C:\Program Files (x86)\Microsoft Visual Studio\Preview\Enterprise\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets(4222,5): error MSB3021: Unable to copy file "Z:\dd\WTE\Main\\references\Dev15\Microsoft.VisualStudio.Shell.15.0.dll" to "Z:\dd\WTE\Main\\bin\dev15\Debug\Microsoft.VisualStudio.Shell.15.0.dll". Access to the path 'Z:\dd\WTE\Main\\bin\dev15\Debug\Microsoft.VisualStudio.Shell.15.0.dll' is denied.
1>C:\Program Files (x86)\Microsoft Visual Studio\Preview\Enterprise\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets(4222,5): error MSB3021: Unable to copy file "Z:\dd\WTE\Main\\references\Dev15\Microsoft.VisualStudio.Shell.Design.dll" to "Z:\dd\WTE\Main\\bin\dev15\Debug\Microsoft.VisualStudio.Shell.Design.dll". Access to the path 'Z:\dd\WTE\Main\\bin\dev15\Debug\Microsoft.VisualStudio.Shell.Design.dll' is denied.
1>C:\Program Files (x86)\Microsoft Visual Studio\Preview\Enterprise\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets(4222,5): warning MSB3026: Could not copy "C:\Program Files (x86)\Microsoft Visual Studio\Preview\Enterprise\MSBuild\15.0\Bin\Microsoft.Build.Tasks.Core.dll" to "Z:\dd\WTE\Main\\bin\dev15\Debug\Microsoft.Build.Tasks.Core.dll". Beginning retry 1 in 1000ms. The process cannot access the file 'Z:\dd\WTE\Main\\bin\dev15\Debug\Microsoft.Build.Tasks.Core.dll' because it is being used by another process.
3>FastUpToDate: Project information is older than current project version, skipping check. (Microsoft.VisualStudio.ProjectSystem.Web)
3>------ Build started: Project: Microsoft.VisualStudio.ProjectSystem.Web, Configuration: Debug Any CPU ------
3>C:\Program Files (x86)\Microsoft Visual Studio\Preview\Enterprise\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets(1987,5): warning MSB3277: Found conflicts between different versions of the same dependent assembly that could not be resolved. These reference conflicts are listed in the build log when log verbosity is set to detailed.
3>Microsoft.VisualStudio.ProjectSystem.Web -> Z:\dd\WTE\Main\bin\dev15\Debug\Microsoft.VisualStudio.ProjectSystem.Web.dll
3>Done building project "Microsoft.VisualStudio.ProjectSystem.Web.csproj".
========== Build: 1 succeeded, 2 failed, 3 up-to-date, 0 skipped ==========
username_0: This one is with parallel builds on (with the default value of 4).
username_2: @username_3 do you know what the state of this is?
username_3: @username_0 I haven't been able to reproduce this because I have had various issues with getting this solution building using the current dogfood build. There are a number of fixes that are going in to the 15.6 release that will likely fix this.
Could you do the following slightly modified instructions from above?
1. Build the solution *twice*
2. Turn on Tools -> Options -> Projects and Solutions -> .NET Core -> Logging Level -> Verbose
3. Clear the output window
4. Build ProjectSystem.Web
5. Copy the output window results and paste them here.
username_0: I followed the steps; the output is below. The issue still repros for me every time on recent builds.
------ Build started: Project: Microsoft.VisualStudio.ProjectSystem.Web.15.0, Configuration: Debug Any CPU ------
Microsoft.VisualStudio.ProjectSystem.Web.15.0 -> Z:\dd\WTE\Main\bin\dev15\Debug\Microsoft.VisualStudio.ProjectSystem.Web.15.0.dll
FastUpToDate: Adding project file inputs: (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'Z:\dd\WTE\Main\src\Shared\Common\Microsoft.VisualStudio.Web.Extensions.Common.csproj' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: Adding import inputs: (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'Z:\dd\WTE\Main\src\Shared\Shared.DotNet.Sdk.Props.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'Z:\dd\WTE\Main\Extensions.DotNet.Sdk.Props.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'Z:\dd\WTE\Main\Extensions.Settings.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'Z:\dd\WTE\Main\Extensions.CodeAnalysis.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'Z:\dd\WTE\Main\Extensions.Dependencies.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'Z:\dd\WTE\Main\Extensions.Dependencies.Versions.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files\dotnet\sdk\2.1.4\Sdks\Microsoft.NET.Sdk\Sdk\Sdk.props' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Microsoft.Common.props' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'Z:\dd\WTE\Main\obj\dev15\Microsoft.VisualStudio.Web.Extensions.Common\Shared\Microsoft.VisualStudio.Web.Extensions.Common\Microsoft.VisualStudio.Web.Extensions.Common.csproj.nuget.g.props' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Imports\Microsoft.Common.props\ImportBefore\Microsoft.NuGet.ImportBefore.props' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\Microsoft\NuGet\15.0\Microsoft.NuGet.props' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files\dotnet\sdk\2.1.4\Sdks\Microsoft.NET.Sdk\build\Microsoft.NET.Sdk.props' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files\dotnet\sdk\2.1.4\Sdks\Microsoft.NET.Sdk\build\Microsoft.NET.Sdk.DefaultItems.props' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files\dotnet\sdk\2.1.4\Microsoft.NETCoreSdk.BundledVersions.props' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files\dotnet\sdk\2.1.4\Sdks\Microsoft.NET.Sdk\build\Microsoft.NET.SupportedTargetFrameworks.props' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files\dotnet\sdk\2.1.4\Sdks\Microsoft.NET.Sdk\build\Microsoft.NET.Sdk.CSharp.props' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'Z:\dd\WTE\Main\src\Shared\Shared.DotNet.Sdk.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'Z:\dd\WTE\Main\Extensions.DotNet.Sdk.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files\dotnet\sdk\2.1.4\Sdks\Microsoft.NET.Sdk\Sdk\Sdk.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files\dotnet\sdk\2.1.4\Sdks\Microsoft.NET.Sdk\build\Microsoft.NET.Sdk.BeforeCommon.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files\dotnet\sdk\2.1.4\Sdks\Microsoft.NET.Sdk\build\Microsoft.NET.DefaultAssemblyInfo.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files\dotnet\sdk\2.1.4\Sdks\Microsoft.NET.Sdk\build\Microsoft.NET.DefaultOutputPaths.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files\dotnet\sdk\2.1.4\Sdks\Microsoft.NET.Sdk\build\Microsoft.NET.TargetFrameworkInference.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files\dotnet\sdk\2.1.4\Sdks\Microsoft.NET.Sdk\build\Microsoft.NET.RuntimeIdentifierInference.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files\dotnet\sdk\2.1.4\Sdks\Microsoft.NET.Sdk\build\Microsoft.NET.NuGetOfflineCache.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.CSharp.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.CSharp.CurrentVersion.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Roslyn\Microsoft.CSharp.Core.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\Microsoft\VisualStudio\Managed\Microsoft.CSharp.DesignTime.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\Microsoft\VisualStudio\Managed\Microsoft.Managed.DesignTime.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.Common.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'Z:\dd\WTE\Main\src\Shared\Common\Microsoft.VisualStudio.Web.Extensions.Common.csproj.user' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.NETFramework.props' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.NETFramework.CurrentVersion.props' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\Microsoft\VisualStudio\v15.0\CodeAnalysis\Microsoft.CodeAnalysis.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.NETFramework.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.NETFramework.CurrentVersion.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.WinFX.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\Microsoft.WinFx.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.Data.Entity.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\Microsoft.Data.Entity.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.Xaml.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\Microsoft.Xaml.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.WorkflowBuildExtensions.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\Microsoft.WorkflowBuildExtensions.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\Microsoft\VisualStudio\v15.0\TeamTest\Microsoft.TeamTest.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\Common7\IDE\CommonExtensions\Microsoft\NuGet\NuGet.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Microsoft.Common.targets\ImportAfter\Microsoft.Docker.ImportAfter.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\Sdks\Microsoft.Docker.Sdk\build\Microsoft.Docker.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Microsoft.Common.targets\ImportAfter\Microsoft.NET.Build.Extensions.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\Microsoft\Microsoft.NET.Build.Extensions\Microsoft.NET.Build.Extensions.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\Microsoft\Microsoft.NET.Build.Extensions\Microsoft.NET.Build.Extensions.NETFramework.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
[Truncated]
FastUpToDate: Checking PreserveNewest file 'Z:\dd\WTE\Main\src\ProjectSystem\Components\ComponentsImages.imagemanifest': (Microsoft.VisualStudio.ProjectSystem.Components)
FastUpToDate: Write 10/25/2017 3:55:23 PM: 'Z:\dd\WTE\Main\src\ProjectSystem\Components\ComponentsImages.imagemanifest'. (Microsoft.VisualStudio.ProjectSystem.Components)
FastUpToDate: Output file write 10/25/2017 3:55:23 PM: 'Z:\dd\WTE\Main\bin\dev15\Debug\ComponentsImages.imagemanifest'. (Microsoft.VisualStudio.ProjectSystem.Components)
FastUpToDate: Project is not up to date. (Microsoft.VisualStudio.ProjectSystem.Components)
------ Build started: Project: Microsoft.VisualStudio.ProjectSystem.Components, Configuration: Debug Any CPU ------
C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets(2051,5): warning MSB3277: Found conflicts between different versions of "Microsoft.CodeAnalysis" that could not be resolved. These reference conflicts are listed in the build log when log verbosity is set to detailed.
C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets(2051,5): warning MSB3277: Found conflicts between different versions of "Microsoft.VisualStudio.Threading" that could not be resolved. These reference conflicts are listed in the build log when log verbosity is set to detailed.
C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets(2051,5): warning MSB3277: Found conflicts between different versions of "Microsoft.CodeAnalysis.Workspaces" that could not be resolved. These reference conflicts are listed in the build log when log verbosity is set to detailed.
Microsoft.VisualStudio.ProjectSystem.Components -> Z:\dd\WTE\Main\bin\dev15\Debug\Microsoft.VisualStudio.ProjectSystem.Components.dll
Done building project "Microsoft.VisualStudio.ProjectSystem.Components.csproj".
------ Build started: Project: Microsoft.VisualStudio.Web.Publish.Contracts, Configuration: Debug Any CPU ------
Microsoft.VisualStudio.Web.Publish.Contracts -> Z:\dd\WTE\Main\bin\dev15\Debug\Microsoft.VisualStudio.Web.Publish.Contracts.dll
FastUpToDate: Project information is older than current project version, skipping check. (Microsoft.VisualStudio.ProjectSystem.Web)
------ Build started: Project: Microsoft.VisualStudio.ProjectSystem.Web, Configuration: Debug Any CPU ------
C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets(2051,5): warning MSB3277: Found conflicts between different versions of "Microsoft.CodeAnalysis" that could not be resolved. These reference conflicts are listed in the build log when log verbosity is set to detailed.
C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets(2051,5): warning MSB3277: Found conflicts between different versions of "Microsoft.VisualStudio.Threading" that could not be resolved. These reference conflicts are listed in the build log when log verbosity is set to detailed.
C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets(2051,5): warning MSB3277: Found conflicts between different versions of "Microsoft.CodeAnalysis.Workspaces" that could not be resolved. These reference conflicts are listed in the build log when log verbosity is set to detailed.
Microsoft.VisualStudio.ProjectSystem.Web -> Z:\dd\WTE\Main\bin\dev15\Debug\Microsoft.VisualStudio.ProjectSystem.Web.dll
Done building project "Microsoft.VisualStudio.ProjectSystem.Web.csproj".
========== Build: 6 succeeded, 0 failed, 0 up-to-date, 0 skipped ==========
username_0: Here is the output when I try to run unit tests after building the whole solution twice and then building ProjectSystem.Web:
------ Build started: Project: Microsoft.VisualStudio.ProjectSystem.Web.15.0, Configuration: Debug Any CPU ------
Microsoft.VisualStudio.ProjectSystem.Web.15.0 -> Z:\dd\WTE\Main\bin\dev15\Debug\Microsoft.VisualStudio.ProjectSystem.Web.15.0.dll
FastUpToDate: Adding project file inputs: (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'Z:\dd\WTE\Main\src\Shared\Common\Microsoft.VisualStudio.Web.Extensions.Common.csproj' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: Adding import inputs: (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'Z:\dd\WTE\Main\src\Shared\Shared.DotNet.Sdk.Props.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'Z:\dd\WTE\Main\Extensions.DotNet.Sdk.Props.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'Z:\dd\WTE\Main\Extensions.Settings.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'Z:\dd\WTE\Main\Extensions.CodeAnalysis.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'Z:\dd\WTE\Main\Extensions.Dependencies.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'Z:\dd\WTE\Main\Extensions.Dependencies.Versions.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files\dotnet\sdk\2.1.4\Sdks\Microsoft.NET.Sdk\Sdk\Sdk.props' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Microsoft.Common.props' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'Z:\dd\WTE\Main\obj\dev15\Microsoft.VisualStudio.Web.Extensions.Common\Shared\Microsoft.VisualStudio.Web.Extensions.Common\Microsoft.VisualStudio.Web.Extensions.Common.csproj.nuget.g.props' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Imports\Microsoft.Common.props\ImportBefore\Microsoft.NuGet.ImportBefore.props' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\Microsoft\NuGet\15.0\Microsoft.NuGet.props' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files\dotnet\sdk\2.1.4\Sdks\Microsoft.NET.Sdk\build\Microsoft.NET.Sdk.props' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files\dotnet\sdk\2.1.4\Sdks\Microsoft.NET.Sdk\build\Microsoft.NET.Sdk.DefaultItems.props' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files\dotnet\sdk\2.1.4\Microsoft.NETCoreSdk.BundledVersions.props' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files\dotnet\sdk\2.1.4\Sdks\Microsoft.NET.Sdk\build\Microsoft.NET.SupportedTargetFrameworks.props' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files\dotnet\sdk\2.1.4\Sdks\Microsoft.NET.Sdk\build\Microsoft.NET.Sdk.CSharp.props' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'Z:\dd\WTE\Main\src\Shared\Shared.DotNet.Sdk.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'Z:\dd\WTE\Main\Extensions.DotNet.Sdk.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files\dotnet\sdk\2.1.4\Sdks\Microsoft.NET.Sdk\Sdk\Sdk.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files\dotnet\sdk\2.1.4\Sdks\Microsoft.NET.Sdk\build\Microsoft.NET.Sdk.BeforeCommon.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files\dotnet\sdk\2.1.4\Sdks\Microsoft.NET.Sdk\build\Microsoft.NET.DefaultAssemblyInfo.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files\dotnet\sdk\2.1.4\Sdks\Microsoft.NET.Sdk\build\Microsoft.NET.DefaultOutputPaths.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files\dotnet\sdk\2.1.4\Sdks\Microsoft.NET.Sdk\build\Microsoft.NET.TargetFrameworkInference.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files\dotnet\sdk\2.1.4\Sdks\Microsoft.NET.Sdk\build\Microsoft.NET.RuntimeIdentifierInference.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files\dotnet\sdk\2.1.4\Sdks\Microsoft.NET.Sdk\build\Microsoft.NET.NuGetOfflineCache.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.CSharp.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.CSharp.CurrentVersion.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Roslyn\Microsoft.CSharp.Core.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\Microsoft\VisualStudio\Managed\Microsoft.CSharp.DesignTime.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\Microsoft\VisualStudio\Managed\Microsoft.Managed.DesignTime.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.Common.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'Z:\dd\WTE\Main\src\Shared\Common\Microsoft.VisualStudio.Web.Extensions.Common.csproj.user' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.NETFramework.props' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.NETFramework.CurrentVersion.props' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\Microsoft\VisualStudio\v15.0\CodeAnalysis\Microsoft.CodeAnalysis.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.NETFramework.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.NETFramework.CurrentVersion.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.WinFX.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\Microsoft.WinFx.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.Data.Entity.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\Microsoft.Data.Entity.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.Xaml.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\Microsoft.Xaml.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.WorkflowBuildExtensions.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\Microsoft.WorkflowBuildExtensions.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\Microsoft\VisualStudio\v15.0\TeamTest\Microsoft.TeamTest.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\Common7\IDE\CommonExtensions\Microsoft\NuGet\NuGet.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Microsoft.Common.targets\ImportAfter\Microsoft.Docker.ImportAfter.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\Sdks\Microsoft.Docker.Sdk\build\Microsoft.Docker.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Microsoft.Common.targets\ImportAfter\Microsoft.NET.Build.Extensions.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\Microsoft\Microsoft.NET.Build.Extensions\Microsoft.NET.Build.Extensions.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
FastUpToDate: 'C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\Microsoft\Microsoft.NET.Build.Extensions\Microsoft.NET.Build.Extensions.NETFramework.targets' (Microsoft.VisualStudio.Web.Extensions.Common)
[Truncated]
FastUpToDate: 'C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.6\System.dll' (Microsoft.VisualStudio.Web.Extensions.Common.CPS)
FastUpToDate: Adding Built outputs: (Microsoft.VisualStudio.Web.Extensions.Common.CPS)
FastUpToDate: 'Z:\dd\WTE\Main\obj\dev15\Microsoft.VisualStudio.Web.Extensions.Common.CPS\Shared\Microsoft.VisualStudio.Web.Extensions.Common.CPS\Debug\Microsoft.VisualStudio.Web.Extensions.Common.CPS.dll' (Microsoft.VisualStudio.Web.Extensions.Common.CPS)
FastUpToDate: Latest write timestamp on input is 1/24/2018 11:30:19 AM on 'Z:\dd\WTE\Main\bin\dev15\Debug\Microsoft.VisualStudio.Web.Extensions.Common.dll'. (Microsoft.VisualStudio.Web.Extensions.Common.CPS)
FastUpToDate: Earliest write timestamp on output is 1/24/2018 11:30:20 AM on 'Z:\dd\WTE\Main\obj\dev15\Microsoft.VisualStudio.Web.Extensions.Common.CPS\Shared\Microsoft.VisualStudio.Web.Extensions.Common.CPS\Debug\Microsoft.VisualStudio.Web.Extensions.Common.CPS.dll'. (Microsoft.VisualStudio.Web.Extensions.Common.CPS)
FastUpToDate: Adding input reference copy markers: (Microsoft.VisualStudio.Web.Extensions.Common.CPS)
FastUpToDate: 'Z:\dd\WTE\Main\obj\dev15\Microsoft.VisualStudio.Web.Extensions.Common\Shared\Microsoft.VisualStudio.Web.Extensions.Common\Debug\Microsoft.VisualStudio.Web.Extensions.Common.csproj.CopyComplete' (Microsoft.VisualStudio.Web.Extensions.Common.CPS)
FastUpToDate: 'Z:\dd\WTE\Main\obj\dev15\Microsoft.VisualStudio.ProjectSystem.Web.15.0\Debug\Microsoft.VisualStudio.ProjectSystem.Web.15.0.csproj.CopyComplete' (Microsoft.VisualStudio.Web.Extensions.Common.CPS)
FastUpToDate: Adding output reference copy marker: (Microsoft.VisualStudio.Web.Extensions.Common.CPS)
FastUpToDate: 'Z:\dd\WTE\Main\obj\dev15\Microsoft.VisualStudio.Web.Extensions.Common.CPS\Shared\Microsoft.VisualStudio.Web.Extensions.Common.CPS\Debug\Microsoft.VisualStudio.Web.Extensions.Common.CPS.csproj.CopyComplete' (Microsoft.VisualStudio.Web.Extensions.Common.CPS)
FastUpToDate: Latest write timestamp on input marker is 1/24/2018 11:33:37 AM on 'Z:\dd\WTE\Main\obj\dev15\Microsoft.VisualStudio.Web.Extensions.Common\Shared\Microsoft.VisualStudio.Web.Extensions.Common\Debug\Microsoft.VisualStudio.Web.Extensions.Common.csproj.CopyComplete'. (Microsoft.VisualStudio.Web.Extensions.Common.CPS)
FastUpToDate: Write timestamp on output marker is 1/24/2018 11:31:31 AM on 'Z:\dd\WTE\Main\obj\dev15\Microsoft.VisualStudio.Web.Extensions.Common.CPS\Shared\Microsoft.VisualStudio.Web.Extensions.Common.CPS\Debug\Microsoft.VisualStudio.Web.Extensions.Common.CPS.csproj.CopyComplete'. (Microsoft.VisualStudio.Web.Extensions.Common.CPS)
FastUpToDate: Project is not up to date. (Microsoft.VisualStudio.Web.Extensions.Common.CPS)
------ Build started: Project: Microsoft.VisualStudio.Web.Extensions.Common.CPS, Configuration: Debug Any CPU ------
C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets(2051,5): warning MSB3277: Found conflicts between different versions of "Microsoft.CodeAnalysis" that could not be resolved. These reference conflicts are listed in the build log when log verbosity is set to detailed.
C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets(2051,5): warning MSB3277: Found conflicts between different versions of "Microsoft.VisualStudio.Threading" that could not be resolved. These reference conflicts are listed in the build log when log verbosity is set to detailed.
C:\Program Files (x86)\Microsoft Visual Studio\d15.6stg\27116\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets(2051,5): warning MSB3277: Found conflicts between different versions of "Microsoft.CodeAnalysis.Workspaces" that could not be resolved. These reference conflicts are listed in the build log when log verbosity is set to detailed.
Microsoft.VisualStudio.Web.Extensions.Common.CPS -> Z:\dd\WTE\Main\bin\dev15\Debug\Microsoft.VisualStudio.Web.Extensions.Common.CPS.dll
Done building project "Microsoft.VisualStudio.Web.Extensions.Common.CPS.csproj".
========== Build: 3 succeeded, 0 failed, 3 up-to-date, 0 skipped ==========
username_0: To simplify your investigation, maybe we should help you set up our enlistment so you can build the solution yourself?
username_4: We are also facing the same issue in a WPF project. Using VS 15.5.5, "Don't call MSBuild if a project appears to be up to date" is true by default, but my NetStandard2.0 projects still rebuild when running unit tests, even after rebuilding all projects in the solution. Here is the output:
1>FastUpToDate: Project information is older than current project version, skipping check. (X)
1>------ Build started: Project: X, Configuration: Debug Any CPU ------
2>FastUpToDate: Project information is older than current project version, skipping check. (Y)
2>------ Build started: Project: Y, Configuration: Debug Any CPU ------
3>FastUpToDate: Project information is older than current project version, skipping check. (Z)
3>------ Build started: Project: Z, Configuration: Debug Any CPU ------
========== Build: 3 succeeded, 0 failed, 20 up-to-date, 0 skipped ==========
Is there any way to make sure that running tests after a rebuild will not build the core projects again?
Thanks,
username_2: @username_3 - any ideas from the log above?
username_3: OK, I found one problem. The XAML items in the project don't have a FullPath property, which we expected to be there and so we throw an exception. I'll submit a fix for that, but the project still always rebuilds because a project it depends on, Web.Publish.Contracts is always out of date. That's a csproj project, so I'll have to dig into why that project isn't up to date on the csproj side of things.
username_3: @username_2 I thought this had been moved to 15.7. The fix in the managed project system is pretty simple if we want to take it for 15.6. What do you think?
username_2: @username_3 Can you elaborate on what triggers this issue, what workarounds there are, and whether this is a regression? I'm very concerned about taking any more changes for 15.6 than we have to, so my inclination would be to move to 15.7 at this point.
username_3: This is not a regression and there is no workaround. What triggers this issue is a XAML item showing up in a CPS project. Those items don't seem to have a FullPath property set. I'm not sure if there are other item types that don't have this, we'd have to check. I think at this point it probably makes sense to push to 15.7.
username_2: @username_3 this should be closed now, right?
username_3: The CPS problem is fixed but there is still some kind of CSProj problem. Since that code hasn't been touched in a long time, it's likely a longstanding bug. I'll take a look in 15.8.
username_3: Closing this since the CPS side is fixed. If there are still problems, we should file a CSProj-side bug.
Status: Issue closed
|
mruby/mruby | 269239086 | Title: MRB_DISABLE_STDIO broken
Question:
username_0: 93f5f225772c398be6e409da3d3ef0f07ffbe1cf broke `MRB_DISABLE_STDIO`. I assume the following code wasn't meant to be left in:
https://github.com/mruby/mruby/blob/93f5f225772c398be6e409da3d3ef0f07ffbe1cf/src/vm.c#L154-L156
Answers:
username_1: Oops sorry.
Status: Issue closed
|
BSData/horus-heresy | 729386074 | Title: Mechanicum issues
Question:
username_0: Triaros Conveyor
Mauler bolt cannon is missing "twin-linked" in the weapon profile, but the rule does appear in the "rules selection" bit.
FAQ Changes:
Flare shield mentions the Vultarax. This unit no longer has that option, and I believe no other monstrous creatures do, so that bit could be removed :)
spelling errors:
change "karacnos motar battery" to Karacnos Mortar battery
change "lightning blaster sentinal" to Lightning-blaster sentinel
Answers:
username_1: Thanks, will try to do them tonight.
username_1: Took a little while but managed to sort those bits out.
Only one I didn't do is slim down the description on the Lightning-Blaster Sentinels. We tend to use the rules as written in the book, so it is still going to be them.
Status: Issue closed
|
ZhukV/AppleApnPush | 639021515 | Title: Apple\ApnPush\Protocol\Http\Sender\Exception\HttpSenderException: cURL Error [58]: unable to set private key file
Question:
username_0: Apple\ApnPush\Protocol\Http\Sender\Exception\HttpSenderException: cURL Error [58]: unable to set private key file: 'public/certificate.pem' type PEM in file /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/apple/apn-push/src/Protocol/Http/Sender/CurlHttpSender.php on line 43
Stack trace:
1. Apple\ApnPush\Protocol\Http\Sender\Exception\HttpSenderException->() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/apple/apn-push/src/Protocol/Http/Sender/CurlHttpSender.php:43
2. Apple\ApnPush\Protocol\Http\Sender\CurlHttpSender->send() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/apple/apn-push/src/Protocol/HttpProtocol.php:135
3. Apple\ApnPush\Protocol\HttpProtocol->doSend() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/apple/apn-push/src/Protocol/HttpProtocol.php:91
4. Apple\ApnPush\Protocol\HttpProtocol->send() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/apple/apn-push/src/Sender/Sender.php:45
5. Apple\ApnPush\Sender\Sender->send() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/app/Http/Controllers/PostController.php:369
6. App\Http\Controllers\PostController->sendPushVOIP() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Routing/Controller.php:54
7. call_user_func_array() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Routing/Controller.php:54
8. Illuminate\Routing\Controller->callAction() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Routing/ControllerDispatcher.php:45
9. Illuminate\Routing\ControllerDispatcher->dispatch() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Routing/Route.php:219
10. Illuminate\Routing\Route->runController() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Routing/Route.php:176
11. Illuminate\Routing\Route->run() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Routing/Router.php:680
12. Illuminate\Routing\Router->Illuminate\Routing\{closure}() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php:30
13. Illuminate\Routing\Pipeline->Illuminate\Routing\{closure}() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Routing/Middleware/SubstituteBindings.php:41
14. Illuminate\Routing\Middleware\SubstituteBindings->handle() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php:163
15. Illuminate\Pipeline\Pipeline->Illuminate\Pipeline\{closure}() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php:53
16. Illuminate\Routing\Pipeline->Illuminate\Routing\{closure}() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Routing/Middleware/ThrottleRequests.php:58
17. Illuminate\Routing\Middleware\ThrottleRequests->handle() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php:163
18. Illuminate\Pipeline\Pipeline->Illuminate\Pipeline\{closure}() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php:53
19. Illuminate\Routing\Pipeline->Illuminate\Routing\{closure}() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php:104
20. Illuminate\Pipeline\Pipeline->then() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Routing/Router.php:682
21. Illuminate\Routing\Router->runRouteWithinStack() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Routing/Router.php:657
22. Illuminate\Routing\Router->runRoute() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Routing/Router.php:623
23. Illuminate\Routing\Router->dispatchToRoute() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Routing/Router.php:612
24. Illuminate\Routing\Router->dispatch() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Foundation/Http/Kernel.php:176
25. Illuminate\Foundation\Http\Kernel->Illuminate\Foundation\Http\{closure}() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php:30
26. Illuminate\Routing\Pipeline->Illuminate\Routing\{closure}() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/fideloper/proxy/src/TrustProxies.php:57
27. Fideloper\Proxy\TrustProxies->handle() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php:163
28. Illuminate\Pipeline\Pipeline->Illuminate\Pipeline\{closure}() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php:53
29. Illuminate\Routing\Pipeline->Illuminate\Routing\{closure}() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/TransformsRequest.php:21
30. Illuminate\Foundation\Http\Middleware\TransformsRequest->handle() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php:163
31. Illuminate\Pipeline\Pipeline->Illuminate\Pipeline\{closure}() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php:53
32. Illuminate\Routing\Pipeline->Illuminate\Routing\{closure}() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/TransformsRequest.php:21
33. Illuminate\Foundation\Http\Middleware\TransformsRequest->handle() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php:163
34. Illuminate\Pipeline\Pipeline->Illuminate\Pipeline\{closure}() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php:53
35. Illuminate\Routing\Pipeline->Illuminate\Routing\{closure}() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/ValidatePostSize.php:27
36. Illuminate\Foundation\Http\Middleware\ValidatePostSize->handle() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php:163
37. Illuminate\Pipeline\Pipeline->Illuminate\Pipeline\{closure}() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php:53
38. Illuminate\Routing\Pipeline->Illuminate\Routing\{closure}() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/CheckForMaintenanceMode.php:62
39. Illuminate\Foundation\Http\Middleware\CheckForMaintenanceMode->handle() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php:163
40. Illuminate\Pipeline\Pipeline->Illuminate\Pipeline\{closure}() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php:53
41. Illuminate\Routing\Pipeline->Illuminate\Routing\{closure}() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php:104
42. Illuminate\Pipeline\Pipeline->then() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Foundation/Http/Kernel.php:151
43. Illuminate\Foundation\Http\Kernel->sendRequestThroughRouter() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/vendor/laravel/framework/src/Illuminate/Foundation/Http/Kernel.php:116
44. Illuminate\Foundation\Http\Kernel->handle() /home/wxfmq8yyb5i3/public_html/staging.fluff.notesdb.co/index.php:55
Answers:
username_1: Please provide an absolute path to your certificate, or try using a JWT token.
Status: Issue closed
|
bazelbuild/bazel | 126790732 | Title: Add html titles to documentation pages
Question:
username_0: While this may seem like a minor thing, it gets pretty annoying clicking around madly trying to find the right page when your browser window has half a dozen tabs just labelled "Bazel". Ideally the `<title>` would be something like "Page Title - Bazel".
Answers:
username_1: @davidzchen can you take a look?
Status: Issue closed
username_0: Thanks! |
jgraph/drawio | 565980777 | Title: Embed Other Diagrams
Question:
username_0: As a user
I desire being able to reuse content from one tab
as components in other tabs
such that if I update the original, it would update all places that have referenced it
(differing from symbols).
I'd love it even more if I could reference diagrams in the tabs of other diagrams. Now that would really ensure that corporate system models are up to date!
Thank you! |
C2FO/vfs | 429813517 | Title: [Go Modules] Module definition should be updated
Question:
username_0: From reading the documentation on Go modules, when we released a `v2` we should have updated our import paths to include `/v2`, as well as updating the `go.mod` to have a `/v2` at the end of the module declaration (e.g. `module github.com/c2fo/vfs/v2`).
I talked to some gophers on slack and they proposed the solution of releasing a v3 with the correct import paths and module definition.
Status: Issue closed |
kstep/regindex | 63208172 | Title: improvements
Question:
username_0: So I love the experiment, and I just wanted to suggest a couple things that might increase its chances of succeeding.
Firstly, you might want to consider using just the standard `regex` crate without the macro. This is because `regex!` is a compiler plugin, which means it will not work on the beta/stable channels of Rust 1.0.
Secondly, to increase the ergonomics of indexing, I'd recommend that you change your `ReIdx::new` constructor to take a `&str`, which would then call the `Regex::new` constructor for the user. e.g.,
```rust
impl ReIdx {
fn new(re: &str) -> ReIdx {
ReIdx(Regex::new(re).unwrap())
}
}
```
Or even better:
```rust
fn new<'a, S: IntoCow<'a, str>>(re: S) -> ReIdx {
    ReIdx(Regex::new(&re.into_cow()).unwrap())
}
```
Then you could index like `"hello"[ReIdx::new(r"el")]`. You might go even further and define a short macro, `rei!` (or something): `"hello"[rei!(r"el")]`.
Answers:
username_1: Erm. Compiling the regular expression every time isn't cool, so I hope there will be some ReIdx::from_regex or something.
On a side note, a no-brainer version would've looked `"hello"[regex!(r"el")]`, but that's probably not possible to implement outside of the regex crate. Both `"hello"[ReIdx::new(r"el")]` and `"hello"[rei!(r"el")]` fall short of being as fluent as the Ruby version, IMHO. J-:
username_0: `regex!` will not work on the beta/stable channels of Rust 1.0.
Alternate constructors seem very appropriate.
username_2: @username_0 do you know when the `regex!` macro will be stabilized? It feels like a loss to be unable to compile regexps at compile time.
username_0: @username_2 Of course it's a loss! :-) Stabilization on a schedule isn't free---we have to make sacrifices somewhere. The `regex!` macro is a compiler plugin, and the API for compiler plugins currently depends on compiler internals, and there's just no reasonable way to stabilize that API before `1.0`.
I don't know the time table. Compiler plugins are really nice to have, but it will take some work to turn them into a stable feature.
username_2: Ok, check out version 0.2.0, please. @username_0, it wasn't quite possible to implement the `new<T: IntoCow>(s: T)` method the way you wanted without a possible panic, since `Regex::new()` returns a `Result`. I added a macro `ri` instead (I decided to make the name shorter), so one can index in the following way:
```rust
let a = &"abbcccdddd"[ri!["bc+"]];
```
This is as much eye-candy as possible without bothering the `regex` crate.
username_1: Looks like you don't need `#![plugin(regex_macros)]` on the second line. Nor `regex_macros = "*"` in Cargo.toml.
(On a side note, I was going to check that out but I won't now because I'm going to use `regex!`, `json!`, Maud etc. and stay on Nightly till the macro system becomes a part of the stable builds).
username_2: After a little thought (bf732a39) I decided to add convenience `impl Index<Result<ReIdx, regex::Error>>`, so you don't have to unwrap regex result to index. A downside of this approach is you will just get empty string for `Err(_)` variant.
username_2: @username_1, you are right, now that I don't use `regex!` in tests any more I don't need to `use` it. |
Azure/kubernetes-volume-drivers | 636831150 | Title: container not starting, error in kubelet "Failed to mount device /dev"
Question:
username_0: **What happened**:
Pod is not starting up, I am seeing errors as below:
on the node /var/log/blobfuse-driver.log :
```
Thu Jun 11 08:06:32 UTC 2020 INFO: tmp-path not specified, use default path: /tmp/blobfuse/
Thu Jun 11 08:06:32 UTC 2020 EXEC: mkdir -p /var/lib/kubelet/pods/56e9105e-06fb-4a8a-8b1b-57b3ba7f8687/volumes/azure~blobfuse/pv-blobfuse-flexvol
Thu Jun 11 08:06:32 UTC 2020 INF: AZURE_STORAGE_SAS_TOKEN is set
Thu Jun 11 08:06:32 UTC 2020 INF: export storage account - export AZURE_STORAGE_ACCOUNT=azeuwstotestingkuber01
Thu Jun 11 08:06:32 UTC 2020 EXEC: blobfuse /var/lib/kubelet/pods/56e9105e-06fb-4a8a-8b1b-57b3ba7f8687/volumes/azure~blobfuse/pv-blobfuse-flexvol --container-name=azeuwstotestingkuber01 --tmp-path=/tmp/blobfuse/ -o allow_other
Thu Jun 11 08:06:32 UTC 2020 ERROR: { "status": "Failure", "message": "Failed to mount device /dev/ at /var/lib/kubelet/pods/56e9105e-06fb-4a8a-8b1b-57b3ba7f8687/volumes/azure~blobfuse/pv-blobfuse-flexvol, accountname:azeuwstotestingkuber01, error log:Thu Jun 11 08:06:32 UTC 2020 EXEC: blobfuse /var/lib/kubelet/pods/56e9105e-06fb-4a8a-8b1b-57b3ba7f8687/volumes/azure~blobfuse/pv-blobfuse-flexvol --container-name=azeuwstotestingkuber01 --tmp-path=/tmp/blobfuse/ -o allow_other " }
```
kubectl describe shows this output:
```
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/nginx-flex-blobfuse to aks-prodnode-29454676-vmss000001
Warning FailedMount 3s (x5 over 11s) kubelet, aks-prodnode-29454676-vmss000001 MountVolume.SetUp failed for volume "pv-blobfuse-flexvol" : invalid character 'F' looking for beginning of value
```
In my case, too, /tmp/blobfuse did not exist and I created it manually on all nodes.
**What you expected to happen**:
**How to reproduce it**:
**Anything else we need to know?**:
**Environment**:
- Kubernetes version (use `kubectl version`): 1.16.9
- OS (e.g. from /etc/os-release): Ubuntu 16.04
- Kernel (e.g. `uname -a`): 4.15.0-1083-azure
- Install tools:
- Others:
Answers:
username_1: same issue as https://github.com/Azure/kubernetes-volume-drivers/issues/58
username_0: Except for the fact that the reporter of issue #58 was able to get it working. In my case, creating the host volume did not help.
username_1: I would suggest using https://github.com/kubernetes-sigs/blobfuse-csi-driver; this flexvolume driver is now in maintenance mode since we are switching to the CSI driver.
username_0: I'd love to do that but I've read that the status for the csi driver is beta. Do you have any timeline for this to change? We have policies in place that keep us from using beta flagged software.
username_1: GA time for blobfuse CSI driver could be late this year.
For this flexvolume issue, follow troubleshooting tips here: https://github.com/Azure/kubernetes-volume-drivers/tree/master/flexvolume/blobfuse#tips
username_0: After rebuilding from scratch, the driver is working as expected. I can't reproduce the error any more; it must have been something odd on the first try.
Status: Issue closed
|
modelica/ModelicaStandardLibrary | 374459681 | Title: Very bad behaviour of two Blocks.Logical icons (LessThreshold and GreaterThreshold)
Question:
username_0: The icon of LessThreshold is a sign "<". When the component is used as is, it is meaningful.
If the component is flipped horizontally, the same icon "<" is displayed, which indicates greater instead of less!
Also GreaterThreshold is affected by this issue. The issue can be solved using a drawing for the less or greater symbol instead of a character.
Less, Greater, LessEqual and GreaterEqual are unaffected, since their icons are already drawings.
Answers:
username_1: Left over from #745. Will prepare a pull request.
username_0: Thank you for implementing my suggestion.
After some usage, and getting other people involved, I came to the conclusion that the icon is still confusing. The issue is that when we use the symbol "<" we want to have two terms. In Greater and Less we somehow have them; in GreaterThreshold and LessThreshold, even with the modification I suggested, we still don't.
I made some tests with both Dymola and Openmodelica, and using the text "Th" as a placeholder for threshold seems good enough, and well coordinated with the existing Less and Greater icons.
The annotation code for LessThreshold that I tested (for me satisfactorily) is as follows (excluding Documentation):
annotation (Icon(coordinateSystem(preserveAspectRatio=true,
    extent={{-100,-100},{100,100}}), graphics={
      Line(points={{-36,20},{-82,0},{-36,-20}}, thickness=0.5),
      Text(
        extent={{-26,48},{46,-48}},
        lineColor={0,0,0},
        textString="Th",
        horizontalAlignment=TextAlignment.Left)}),
[...]
@username_1, do you agree with this further enhancement? If you do, obviously also GreaterThreshold should be changed accordingly.
username_1: Forwarding to library officers @AHaumer and @MartinOtter .
Status: Issue closed
|
skycoin/skybian | 695116419 | Title: Consider updating using new image instead of new binaries
Question:
username_0: **Feature description**
On updating, download a new image and flash it instead of downloading binaries.
**Is your feature request related to a problem? Please describe.**
Currently, the updating process is performed by downloading updated versions of the Skywire binaries and substituting the current binaries with the updated ones. However, the Armbian image and system packages are not updated.
**Describe the solution you'd like**
Consider flashing the whole image instead of updating just binaries.
**Possible implementation**
On update, a visor might download a new image, flash it rewriting the current visor binary (the process in RAM remains alive), and then reboot the board. After reboot, new visor binaries would start.
Config files might need to be saved. The visor might save them in memory and write them to disk after flashing an image. It would be better to have separate system and data partitions. In this case, only the system image would be rewritten on an update, and files on the data partition (e.g. configs) would remain untouched. A rough sketch of this flow follows.
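For illustration only, that flow might look like the following minimal sketch (in Python; the function name and device handling are assumptions, and real code would also verify the image and handle the partition and mount details this ignores):
```python
import shutil
import subprocess

def update_via_image(image_path, system_device):
    # Overwrite the system partition with the downloaded image; the
    # running visor process stays alive in RAM while the disk changes.
    with open(image_path, "rb") as src, open(system_device, "wb") as dst:
        shutil.copyfileobj(src, dst)
    # Configs are assumed to live on a separate data partition, per the
    # suggestion above, so they survive the rewrite.
    subprocess.run(["reboot"], check=False)  # boot into the new system
```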
davidchall/topas2numpy | 934221455 | Title: Data axes are incorrectly ordered
Question:
username_0: The NumPy arrays inside a `BinnedResult.data` property have the axes in reverse order. The first axis should be Z, then Y, then X as per convention.
TOPAS generates `.csv` in such a way that 'Z' is incremented first, then Y then X. So when `np.genfromtxt` or `np.loadtxt` are used naively, the axes are in reverse order.
I haven't tested this but I think a quick fix is `arr = arr.reshape(tuple(reversed(arr.shape)), order='F')` after loading the data with `np.loadtxt`. A more efficient implementation would read the `*.csv` in reverse order after consuming the header.
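As a quick, self-contained check of that fix, here is a sketch with made-up 4×3×2 bin counts (any dimensions behave the same way):
```python
import numpy as np

nx, ny, nz = 4, 3, 2                         # hypothetical scorer dimensions
flat = np.arange(nx * ny * nz, dtype=float)  # one value per csv line, Z fastest

# The naive C-order reshape that produces the reported (X, Y, Z) axis order:
wrong = flat.reshape(nx, ny, nz)

# Fortran-order reshape into the reversed shape yields the (Z, Y, X) convention:
fixed = flat.reshape((nz, ny, nx), order='F')

assert np.array_equal(fixed, wrong.transpose(2, 1, 0))
```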
Status: Issue closed
Answers:
username_0: Closed because it looks like this was an intentional decision. |
phetsims/rosetta | 245153498 | Title: What should the license be for this repo?
Question:
username_0: Please review the guidelines in https://github.com/phetsims/tasks/issues/875 and see what license this repo should have. I thought it might need to be classified as MIT, but wasn't sure.
Answers:
username_1: I reviewed the guidelines, and I agree that the MIT license seems more appropriate since I don't think we have any concerns about getting back any modified code.
username_1: Done, assigning back to @username_0 to either close or continue the discussion at his discretion.
username_0: Thanks, closing.
Status: Issue closed
|
ndphillips/FFTrees | 276135627 | Title: > 2 classes algorithm
Question:
username_0: Goal: Create an FFTrees building algorithm that allows for classifying more than 2 classes.
*Method A*
The basic idea is, for each cue/class combination, to calculate a decision threshold that maximises a *joint* function of classification accuracy and the percentage of cases classified. For example, a threshold that classifies 80% of class A with an accuracy of 90% might be preferred to a threshold that classifies 70% of class A with an accuracy of 95%.
For example, the plot below shows the relationship between classification proportion and classification accuracy for many different thresholds. The best value should be one that has a relatively high classification proportion and accuracy; the sketch after the plot illustrates this scoring.

Answers:
username_1: A very interesting and important goal indeed! Just a quick note on our discussion:
As I find it tricky to evaluate the consequences of a 3x3 contingency table, I'd first suggest trying to use the existing algorithm to classify any category A versus all _others_ (e.g., categories B and C) at every level of a tree. This would still result in a FFT structure (with one exit at every node), be a straightforward extension of the current algorithms, and should scale linearly with the number of categories.
It may be necessary to include some requirement to use every category at some point (e.g., on the last node), but it may also be informative if a category that is being considered never gets selected as a best possible exit. Actually, this would allow for an algorithm that starts out with _N_ candidate categories, but yields FFTs with fewer ones — another way of being frugal.
Furthermore, it seems plausible that — as the number of categories increases — the conditional algorithm may be more successful than the unconditional one. But as this would mainly yield benefits in fitting, it remains an empirical question to what extent such FFTs constructed by different algorithms would succeed or suffer in prediction.
Regardless of which algorithm works, extending FFTrees to more than 2 categories will be a real game changer, as it allows for many additional applications.
Status: Issue closed
username_0: Don't think we'll do this as it would be high effort and I think low reward. |
4AllDigital/DruDockCli | 240443733 | Title: App status command fails
Question:
username_0: HEALTHCHECK
-----------
Fatal error: Call to undefined method Docker\Drupal\Application::dockerHealthCheck() in phar:///usr/local/bin/drudock/src/Command/App/StatusCommand.php on line 34
Answers:
username_1: resolved.
Status: Issue closed
username_1: ```
HEALTHCHECK
-----------
------------------------------- ------------------------
Container Name Status
------------------------------- ------------------------
dockerd7gityfc_nginx_1 Up 15 seconds
dockerd7gityfc_mysql_1 Up 16 seconds (health
dockerd7gityfc_redis_1 Up 16 seconds
dockerd7ppdevelopment_php_1 Up 3 minutes
dockerd7ppdevelopment_nginx_1 Up 4 minutes
dockerd7ppdevelopment_mysql_1 Up 4 minutes (healthy)
drudock-proxy Up 2 hours
------------------------------- ------------------------
```
example output ^^^^ |
delphilite/AriaNgWke | 739772816 | Title: Where do wke.dll come from?
Question:
username_0: Hi,
Interesting project. Where do the Wke.dll's come from?
They do not seem to be part of the miniblink or Wke4Delphi repositories.
Thanks
Rael
Answers:
username_1: 1. https://github.com/weolar/miniblink49/releases
2. rename node_v8_4_8.dll or miniblink_x64.dll to Wke.dll
Status: Issue closed
|
EddyVerbruggen/nativescript-plugin-firebase | 278842136 | Title: Firestore nested collections
Question:
username_0: hello eddy,
first of all thx for this amazing plugin!
I am trying to use the new Firestore feature. The Firebase docs say that you can nest a collection under a document ref. Using your plugin, the nesting doesn't work: the collection is always created under the root of the database.
Here is my example code:
`let userDoc = firestore.collection('users').doc(user.uid);
userDoc.set({email: user.email});
userDoc.collection('worktimes').doc(data.key).set({
date: data.value.date,
reverseOrderDate: data.value.reverseOrderDate,
workTimeEnd: data.value.workTimeEnd,
workTimeStart: data.value.workTimeStart,
workingMinutesBrutto: data.value.workingMinutesBrutto,
workingMinutesNetto: data.value.workingMinutesNetto,
workingMinutesOverTime: data.value.workingMinutesOverTime,
workingMinutesPause: data.value.workingMinutesPause
});`
The data comes from an api call. Any ideas?
Th<NAME>
Answers:
username_0: no one?
Status: Issue closed
username_0: I'm expecting the worktimes data under the user document, but in the Firebase console there is a collection "worktimes" at the root level.
username_0: Accidentally closed
username_1: Looks like a bug, thanks for reporting it!
username_0: I think I found the problem: in firebase.android.ts you are creating a collection from the root db instance every time. If I don't chain the collection.doc.collection calls and instead get the collection with a deep link (users/${user.uid}/worktimes), it works.
We need to find a way to determine if the collection is a chained one or the root one. If chained, generate the path relative to the document; otherwise generate the path from the root db. A rough sketch of that logic follows.
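For illustration, that path logic (a chained collection's path is built relative to its parent document, while a root collection's path is just its name) can be sketched like this. The helper name is hypothetical and Python is used only for brevity; the plugin itself is TypeScript:
```python
def collection_path(name, parent_document_path=None):
    # A chained collection lives under its parent document's path;
    # a root collection's path is just its own name.
    if parent_document_path:
        return "%s/%s" % (parent_document_path, name)
    return name

print(collection_path("worktimes", "users/abc123"))  # users/abc123/worktimes
print(collection_path("users"))                      # users
```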
Status: Issue closed
username_1: Found a way to make it behave properly! With 5.0.3 you will be able to do for instance:
```typescript
const streetsCollectionRef: firestore.CollectionReference = firebase.firestore()
.collection("cities")
.doc("SF")
.collection("streets");
```
or even deeper:
```typescript
const docRef: firestore.DocumentReference = firebase.firestore()
.collection("cities")
.doc("SF")
.collection("streets")
.doc("QZNrg22tkN8W71YC3qCb"); // id of some doc in the 'streets' collection
``` |
dotnet/templating | 210388853 | Title: "dotnet new" can generate invalid code if directory name is not a valid C# namespace identifier
Question:
username_0: _From @mikeharder on February 23, 2017 21:52_
## Steps to reproduce
```
docker run -it --rm microsoft/dotnet:1.0.3-sdk-msbuild-rc4
mkdir new
cd new
dotnet new console
dotnet restore
dotnet build
```
## Expected behavior
Builds successfully.
## Actual behavior
```
Program.cs(3,11): error CS1001: Identifier expected [/new/new.csproj]
Program.cs(3,11): error CS1514: { expected [/new/new.csproj]
Program.cs(3,11): error CS0116: A namespace cannot directly contain members such as fields or methods [/new/new.csproj]
Program.cs(4,1): error CS1022: Type or namespace definition, or end-of-file expected [/new/new.csproj]
```
The root cause is `dotnet new` uses the directory name as the namespace in the generated code, **without** first checking if the directory name is a valid namespace identifier.
```
using System;
namespace new
{
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Hello World!");
}
}
}
```
## Environment data
`dotnet --info` output:
```
.NET Command Line Tools (1.0.0-rc4-004883)
Product Information:
Version: 1.0.0-rc4-004883
Commit SHA-1 hash: fa2c9e025c
Runtime Environment:
OS Name: debian
OS Version: 8
OS Platform: Linux
RID: debian.8-x64
Base Path: /usr/share/dotnet/sdk/1.0.0-rc4-004883
```
_Copied from original issue: dotnet/cli#5825_
Answers:
username_1: As a workaround, you can specify the -n switch to give a name that isn't a C# keyword (e.g. `dotnet new console -n MyConsoleApp`). We'll beef up the name-cleansing logic.
username_2: This issue was last touched some years ago. We are working on a new delivery road map. Please reopen if this is something we want & we'll properly assess its priority compared to other work aimed at improving the overall templating UX.
Status: Issue closed
|
moodlehq/moodle-ci-runner | 502989927 | Title: Add missing services to reduce (down to 0) the skipped tests
Question:
username_0: This issue is about reducing the number of skipped tests with phpunit (down to 0). Namely:
- Solr (#12). Was commented out because there were some problems with our php images, now solved https://github.com/moodlehq/moodle-php-apache/pull/58
- Memcached (#13).
- MongoDB (#14).
- mlbackend-python. Now that the python backend can run via server ([MDL-66004](https://tracker.moodle.org/browse/MDL-66004))... we can add it to the runner. Worth discussing adopting and automating the management of that image with @dmonllao .
Basically, that's all we need to get all tests passing. Let's use this issue as an epic to track it.
Ciao :-)
Answers:
username_0: Have just added 4 PRs that should be merged in order:
#11 - tiny change to better detect the script directory. Same approach we use in a lot of local_ci scripts; it enables you to wrap it in other, caller scripts...
#12 - enable solr.
#13 - enable memcached.
#14 - enable mongodb
Ciao :-)
username_1: All looking good and merged \o/
Win! Nice work Eloy.
Status: Issue closed
username_0: Thanks for merging, @username_1 !
If you don't mind, I'm going to keep this open to take a look at the pending stuff:
- mlbackend-python. Now that the python backend can run via server (MDL-66004)... we can add it to the runner. Worth discussing about adopting and automating the managing of that image with @dmonllao .
- uuid
- unoconv
And linking this with https://github.com/moodlehq/moodle-docker/issues/110 that is aiming for exactly the same, but from moodle-docker.
username_0: With the 6 PRs above all our (non-EOL) images should get uuid tests passing.
So, remaining:
- mlbackend-python
- unoconv
Ciao :-) |
Hruodland/buildserver | 79045759 | Title: install.yml: location of java keystore issue on linux
Question:
username_0: This works, but maybe it is possible to detect the location of the keystore first to make it portable, and push it upstream:
command: "keytool -list -alias dev -keystore /etc/pki/java/cacerts -storepass <PASSWORD> -noprompt"
Status: Issue closed
Answers:
username_0: Changes included upstream and tested OK.
thuehlinger/daemons | 104750178 | Title: Failure to find pid files when status or stop is specified without a number argument
Question:
username_0: If I start a daemon with '-n 5', I get 5 processes and 5 pid files.
However, if I then simply type "status", no processes are found. Consequently, "stop" fails to work as well.
If I run "stop -n 5", (or -n 100 for that matter), daemons does correctly find the processes.
Is this the intended behavior? Presumably 'stop' and 'status' should both work without a number being specified.
Answers:
username_1: We ran into that, when "script/delayed_job stop" and "script/delayed_job status" stopped working.
username_2: +1 on delayed_job stop and delayed_job status no longer working with version 1.2.3. Delayed_job works properly after reverting back to daemons v1.1.9.
username_3: Looks like this is the commit that broke delayed_job:
https://github.com/username_4/daemons/commit/55382e5a8a6baa8c01612f3d39578c967685cfcb
username_4: I feel that this should be fixed in delayed_jobs, not in daemons. Before 1.2.3, the .pid file matching in daemons was too loose (glob for `process_name*.pid`). This has been corrected in 1.2.3. If delay_jobs is using process names with ascending numbers for `start`, it should also use them for `status` and `stop`.
username_5: Yes, I also just ran into this via delayed_job. Looking at the respective changes, it appears as if the issue is just a pid-file naming convention?
DelayedJob still uses: `delayed_job.1.pid`, `delayed_job.2.pid` etc.
This is now a mismatch with the glob in daemons 1.2.3+: https://github.com/username_4/daemons/blob/master/lib/daemons/pidfile.rb#L34
as daemons now expects (the very ugly IMHO!) `delayed_job_num1.pid`, `delayed_job_num2.pid` etc.
So if I'm reading this right, if DelayedJob switched its forced filename convention to match the form now expected by daemons ... it should all just work. @username_4 does that make sense? I'll try to test that theory now..
username_5: @username_4 ran a little test - see https://github.com/evendis/delayed_job/commit/87a396a2682063e12749847e576d163fce5e8f64
which seems to confirm this is just a pid-file naming convention disagreement.
The commit switches delayed job to use the new convention for daemons 1.2.3+ (`<base>_num<i>`), and solves the start/stop issue.
However, changing the process/pid file naming convention for DelayedJob is probably not a very good idea. I know it is very common for scripts to be looking for delayed_job processes, and since it is a very old gem in wide use, there is an unknown amount of churn that would be created by changing the process name format.
Perhaps a better approach would be to expose the globbing format as a configurable option in daemons. It could default to the new `<base>_num<i>`, but downstream projects like DelayedJob could poke in their own format (without resorting to monkey patches) and thus preserve their own legacy compatibility. What do you think? I can offer a PR if you think this approach has merit.
Status: Issue closed
username_4: @username_5 Thanks for the testing and foremost for the pull request. I think your analysis and fix are reasonable. I have just merged it and I am about to prepare a release 1.3.0 which also includes your other merge request.
username_6: I'm using daemons 1.3.1 and delayed_job 4.1.5 and still having this issue. Is there something i'm missing? |
file-icons/atom | 267135992 | Title: Uncaught Error: spawn UNKNOWN
Question:
username_0: [Enter steps to reproduce:]
1. I've created a new java file
2. Saved it
**Atom**: 1.21.1 ia32
**Electron**: 1.6.15
**OS**: Windows 10 Insider Build 16299.19
**Thrown From**: [file-icons](https://github.com/file-icons/atom) package 2.1.13
### Stack Trace
Uncaught Error: spawn UNKNOWN
```
At internal/child_process.js:313
Error: spawn UNKNOWN
at exports._errnoException (util.js:1022:11)
at ChildProcess.spawn (internal/child_process.js:313:11)
at exports.spawn (child_process.js:399:9)
at Object.exports.fork (child_process.js:75:10)
at new Task (~/AppData/Local/atom/app-1.21.1/resources/app/src/task.js:37:52)
at Function.module.exports.Task.once (~/AppData/Local/atom/app-1.21.1/resources/app/src/task.js:19:20)
at System.sendRequests (/packages/file-icons/node_modules/atom-fs/lib/system.js:88:21)
at timeoutID._ (/packages/file-icons/node_modules/atom-fs/lib/system.js:51:42)
```
### Commands
```
5x -8:36.3.0 core:backspace (input.hidden-input)
-8:34.2.0 core:move-up (input.hidden-input)
-8:34 docblockr:parse-enter (input.hidden-input)
-8:34 editor:newline (input.hidden-input)
-8:28.4.0 core:backspace (input.hidden-input)
-8:25.2.0 core:move-left (input.hidden-input)
-8:18.4.0 core:backspace (input.hidden-input)
-8:16.9.0 intentions:highlight (input.hidden-input)
-8:16.7.0 core:save (input.hidden-input)
-4:26.2.0 tree-view:add-file (span.name.icon.icon-repo)
2x -4:17 core:backspace (input.hidden-input)
-4:14.3.0 core:confirm (input.hidden-input)
-4:13.2.0 intentions:highlight (input.hidden-input)
-4:13.1.0 core:paste (input.hidden-input)
-0:17.7.0 intentions:highlight (input.hidden-input)
-0:17.5.0 core:save (input.hidden-input)
```
### Non-Core Packages
```
busy-signal 1.4.3
docblockr 0.11.0
epitech-norm 0.3.4
epitech-norm-checker 0.3.1
fancy-bracket-matcher 2.1.0
file-icons 2.1.13
header-42 0.3.3
intentions 1.1.5
linter 2.2.0
linter-gcc 0.7.1
linter-htmlhint 1.3.4
linter-javac 1.9.4
linter-markdown 5.2.0
linter-ui-default 1.6.10
markdown-themeable-pdf 1.2.0
minimap 4.29.7
monokai 0.24.0
nord-atom-syntax 0.9.1
nord-atom-ui 0.11.0
norminette-linter 0.3.1
pdf-view 0.59.0
```
Status: Issue closed
Answers:
username_1: Not related to `file-icons`. Going by the first few lines of that stack trace, I'd say it's something to do with your system or installation:
~~~
Error: spawn UNKNOWN
at exports._errnoException (util.js:1022:11)
at ChildProcess.spawn (internal/child_process.js:313:11)
at exports.spawn (child_process.js:399:9)
at Object.exports.fork (child_process.js:75:10
~~~ |
AutoMapper/AutoMapper | 107778453 | Title: ViewModel to LLBL Entity Mapping Errors
Question:
username_0: I am mapping from a ViewModel to an LLBL Entity, and it is causing errors. It worked in v3.3.1; however, it fails after the upgrade to 4.0.3. I am not mapping the ID (identity int PK) or the CreatedDate (auto-generated current date, non-null) field. I got the following error when I did not have the ID:
Missing type map configuration or unsupported mapping.
Mapping types:
ContactViewModel -> Int32
Then when I added the ContactId but not the date field, I got the following:
Missing type map configuration or unsupported mapping.
Mapping types:
ContactViewModel -> IValidator
Arborwearv2.Web.Models.View.ContactViewModel -> SD.LLBLGen.Pro.ORMSupportClasses.IValidator
Destination path:
ContactEntity
Source value:
Arborwearv2.Web.Models.View.ContactViewModel
Answers:
username_1: A failing test would help.
username_2: Same issue with 4.0.4 after upgrade from 3.x
username_2: I made a VS'13 solution which demonstrates the problem; please let me know when I can remove the file:
http://particlefusion.org/autoMapper_mapping_issue.zip
Thank you!
username_1: The link is dead (for me at least). But anyway, we don't need the whole solution. A gist with a failing test and no external dependencies (except AutoMapper) would do.
username_2: Strange, works for me. Re-uploaded to expire box and tested: http://expirebox.com/download/dac066f5ab1dbcc3e0f2c6fc92f69059.html
LLBLGen entities require external dlls and, IMHO, the solution is the easiest way to start with.
Thank you!
username_1: It worked for me on develop. You can try with the MyGet build.
username_2: It does work with 4.1.0-ci1007. So I guess I will just wait for the 4.1 release then. Thank you!
username_1: @username_0 Can you close your issue?
Status: Issue closed
username_0: OK - but when is the 4.1 release scheduled?
username_1: I guess @username_3 makes a new release when he's good and ready :) But issues we already have a fix for are not useful.
username_3: As soon as I fix the build server. It's having a problem with Xamarin licensing ATM.
username_2: Thank you :)
username_0: @username_1 Understood, wasn't holding out on closing it, just hadn't been able to verify, and was traveling. |
MicrosoftDocs/mixed-reality | 308307012 | Title: Samsung HMD Odyssey
Question:
username_0: My controllers are constantly either turning off or losing connection to my headset. This happens when both controllers are in front of my headset. Fresh batteries don't do anything. What the hell is wrong?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 7e0a0135-d443-a905-bd6a-5f6700afb4e4
* Version Independent ID: 27962279-8d07-cc59-a870-a58f37df7142
* Content: [Give us feedback - Mixed Reality](https://docs.microsoft.com/en-us/windows/mixed-reality/give-us-feedback#feedback)
* Content Source: [mixed-reality-docs/give-us-feedback.md](https://github.com/MicrosoftDocs/mixed-reality/blob/master/mixed-reality-docs/give-us-feedback.md)
* Service: **unspecified**
* GitHub Login: @username_1
* Microsoft Alias: **mazeller**
Answers:
username_1: Hey @username_0, the developer documentation is specifically to instruct developers, so I won't be much help. For help with troubleshooting your controllers there are a couple good options:
- there's a thorough Troubleshooting article that is kept up-to-date by our support team and engineers at https://docs.microsoft.com/en-us/windows/mixed-reality/enthusiast-guide/troubleshooting-windows-mixed-reality. Start with that guide. If that doesn't help, you can comment on that article.
- another great option is posting in our support forums, where our support team and engineers are constantly checking and answering questions: https://forums.hololens.com/categories/troubleshooting.
- the other option is getting live support through support.microsoft.com.
Matt
Status: Issue closed
|
webpack/enhanced-resolve | 328134871 | Title: Infinite loop and crash using symbolic links under Windows (PR included)
Question:
username_0: # Bug report
**What is the current behavior?**
Infinite loop and runtime crash.
**If the current behavior is a bug, please provide the steps to reproduce.**
Under Windows, create a new module and use "npm link" to link it in your project, "cd" to the symlink, then create your bundle with Webpack.
<!-- A great way to do this is to provide your configuration via a GitHub repo. -->
<!-- Best provide a minimal reproduceable repo with instructions -->
<!-- Repos with too many files or long configs are not suitable -->
<!-- Please only add small snippets of code directly into the issue -->
<!-- https://gist.github.com is a good place for longer code snippets -->
<!-- If your issue is caused by a plugin or loader file the issue on the plugin/loader repo instead. -->
**What is the expected behavior?**
Creation of my bundle without any error.
<!-- "It should work" is not a good explaination -->
<!-- Explain how exactly you expecting it to behave -->
**Other relevant information:**
webpack version: 4.8.3
Node.js version: 10.1.0
Operating System: Windows 10, 64 bits
Additional tools: [email protected]
**Do you have a PR for us ?**
[#150](https://github.com/webpack/enhanced-resolve/pull/150)<issue_closed>
Status: Issue closed |
terascope/teraslice | 283028968 | Title: Assets doesn't handle version correctly
Question:
username_0: It doesn't consider 0.0.11 or 0.0.12 to be greater than 0.0.10
Answers:
username_1: If you're not using something like [semver](https://www.npmjs.com/package/semver) you should probably consider it.
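To illustrate the suggestion (in Python, though teraslice itself is JavaScript): a semver-style comparison parses numeric components instead of comparing raw strings, whose lexicographic ordering breaks down once a component reaches two digits:
```python
def parse(version):
    # Split "0.0.11" into (0, 0, 11) so components compare numerically.
    return tuple(int(part) for part in version.split("."))

print("0.0.2" > "0.0.10")                 # True (lexicographic, wrong)
print(parse("0.0.2") > parse("0.0.10"))   # False (numeric, correct)
print(parse("0.0.11") > parse("0.0.10"))  # True (numeric, correct)
```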
username_0: It's possible this issue may not be the version but I'm definitely seeing cases where a new version of an asset is published but a stored job continues to use an older version. It appears that maybe the validation on creation of the new execution isn't working correctly.
Just had this situation with an asset versioned 0.0.2 and then 0.0.3 was published. The job was stopped and started but continued with the older asset. The stored job has the old asset_id as an explicit ID.
Something that occurs to me is that this job was updated via PUT. Doing that again I see the stored job gets the correct asset_id for 0.0.3 but it's storing the asset_id in the job instead of leaving it as the name. With that the execution will always use the old version because the asset_id will never change. I think this ties in with @username_1's comments in #634. The job should be stored as submitted, not as validated.
username_0: This should be resolved with the fixes for #634 in v0.28.0.
Status: Issue closed
username_0: This issue isn't fixed. Updating existing jobs still converts asset names into IDs.
Status: Issue closed
username_0: Closing to defer to #634 which is more specific. |
angular/angular | 268752015 | Title: keypress on firefox, backspace is handled
Question:
username_0: ## I'm submitting a...
[ ] Regression (a behavior that used to work and stopped working in a new release)
[x] Bug report
[ ] Feature request
[ ] Documentation issue or request
[ ] Support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
## Current behavior
Backspace is handled on a keypress in firefox
## Expected behavior
Firefox should not handle the keypress event
## Minimal reproduction of the problem with instructions
http://plnkr.co/edit/JDBpuNOQ7kZV80OjvNRv?p=preview
## What is the motivation / use case for changing the behavior?
## Environment
Angular version: 4.4.6
Browser:
- [ ] Chrome (desktop) version XX
- [ ] Chrome (Android) version XX
- [ ] Chrome (iOS) version XX
- [ x] Firefox version 56.0.1 (64-bit)
- [ ] Safari (desktop) version XX
- [ ] Safari (iOS) version XX
- [ ] IE version XX
- [ ] Edge version XX
Answers:
username_1: The html spec says that keypress is valid for keys that produce a character. A browser's interpretation of the spec cannot be worked around by Angular, but you can use keydown or keyup.
username_1: #19919 from two days ago is the same thing (except about keypress on backtick) and I linked to Google saying some time ago that this is a correct interpretation of the spec.
username_0: @username_1 Thanks, I was using keypress because I only wanted to handle the event when a key produced a character. I can easily work around it.
Should Angular not capture the keypress event and only pass the event onto the component when the chars are valid for the HTML spec?
username_2: Angular only captures what's emitted; please try googling it first:
+ https://stackoverflow.com/questions/31580493/backspace-key-issue-in-firefox
+ https://stackoverflow.com/questions/14050086/firefox-keydown-backspace-issue
+ https://stackoverflow.com/questions/4843472/javascript-listener-keypress-doesnt-detect-backspace
+ https://stackoverflow.com/questions/32972258/firefox-keypress-bug-cant-use-backspace-with-only-letter-input-script
Status: Issue closed
|
sstsimulator/sst-core | 1066486662 | Title: Core should stop reporting that an element library wasn't found when dlopen fails
Question:
username_0: ## Issue
`libmyelement.so` is sitting in the correct folder for core to find it. I run `sst my_input.py` and the error message I get is
```
FATAL: SST Core: can't find requested component 'myelement.mycomponent'
Error: unable to find "myelement" element library
```
How to reproduce
1. Compile element with a missing symbol
2. Try to run sst and use that element
IFF you set `SST_CORE_DL_VERBOSE=1` you will get some helpful info, but we really need to stop reporting "unable to find" when dlopen fails. It implies that the .so file wasn't found instead of indicating that something was wrong with the .so file.
Answers:
username_1: This has misled me in the past as well. Core should report something less specific than "unable to find", since the library was indeed found.
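The distinction being requested here, "not found" versus "found but failed to load", can be modeled as in the sketch below (Python/ctypes for illustration only; SST Core itself is C++ and calls dlopen directly):
```python
import ctypes
import os

def load_element_library(path):
    # A genuinely missing library is a different failure from one that
    # exists but cannot be loaded (e.g. because of an undefined symbol).
    if not os.path.exists(path):
        raise FileNotFoundError("element library not found: %s" % path)
    try:
        return ctypes.CDLL(path)
    except OSError as err:
        # dlopen failed on an existing file: surface dlerror()'s text
        raise RuntimeError(
            "element library %s exists but failed to load: %s" % (path, err))
```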
OHDSI/OMOP-Standardized-Vocabularies | 289732833 | Title: plasme typo in lab concept name - instead of plasma
Question:
username_0: Thanks to the Themis study, I saw a typo in a concept name
here:
http://www.ohdsi.org/web/atlas/#/concept/3001784

it is the LOINC concept of
https://r.details.loinc.org/LOINC/12841-3.html?sections=Comprehensive
the LOINC name does not have this typo.

Is there a bug in importing LOINC? How come the typo is in Athena?
Answers:
username_1: Thanks for reporting. We'll take a look at the way LOINC names are extracted and fix this.
username_2: closed as a duplicate
Status: Issue closed
|
ytti/oxidized-web | 494247040 | Title: [Feature Request]: Possible to add some Label to top of page
Question:
username_0: 
Hello,
We're running multiple Oxidized instances and are wondering if it's possible to add a label to the top of the web page to distinguish between them.
It could be next to the "Oxidized" word/logo ¯\\\_(ツ)\_/¯
Thanks,
Dave |
uccser/codewof | 497406982 | Title: Answered questions picked for questions of the day
Question:
username_0: Sometimes questions that a user has already answered (on another day) show up under their questions for the day on their dashboard. If we are going to show them questions they have already answered after some minimum time limit (e.g. one or two months) then they should not be formatted as a question that has been answered, i.e. there should be no tick in the checkbox and the background should not be highlighted.
It would also be nice if the questions shown for the questions of the day could change over at midnight. |
graphql-python/graphene | 836947833 | Title: Graphene Enum input returns EnumMeta instead of enum option value
Question:
username_0: When using an Enum on a mutation's input, the value given to the mutation is not the value of the enum.
I expect only the value of the enum choice to be returned.
Using this, for example, on DjangoModelFormMutation breaks simplicity and workflow.
- Version: v3.0.0b7
- Platform: django, linux
See the image below: in kwargs, an EnumMeta is returned instead of the corresponding enum value.

Answers:
username_1: See https://github.com/graphql-python/graphene/issues/1277 and consider closing this.
Status: Issue closed
username_0: I see, I don't understand the need for it to return the enum but if it is designed that way I will live with it. I used a mixin to get the value if any input is enum. |
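For reference, the kind of mixin-style workaround described above could look roughly like this (a sketch only; `unwrap_enums` is a hypothetical helper, not part of graphene):
```python
import enum

def unwrap_enums(kwargs):
    # Replace any Enum members passed into a mutation's kwargs with
    # their underlying values before handing them to form/model code.
    return {
        key: value.value if isinstance(value, enum.Enum) else value
        for key, value in kwargs.items()
    }

# e.g. inside a mutation's mutate(): kwargs = unwrap_enums(kwargs)
```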
Esri/hub.js | 349216896 | Title: implementation of promise all/hash settled
Question:
username_0: I've already [wanted this for fetching users](https://github.com/Esri/hub.js/issues/40#issue-340679258), and the upcoming implementation of creating initiatives that we're going to port from the Hub relies heavily on [allSettled](http://www.emberjs.com.cn/api/classes/RSVP.html#method_allSettled) and [hashSettled](http://www.emberjs.com.cn/api/classes/RSVP.html#method_hashSettled).
Do we just make [RSVP](https://github.com/tildeio/rsvp.js/) a dependency of this library? Will that end up being de-duped in Ember apps?
Or do we bring in a [standalone implementation](https://www.npmjs.com/search?q=promise%20settled).
Answers:
username_0: cc @dbouwman
username_0: I guess we _didn't_ end up needing those for creating initiatives.
username_0: We need this for `searchEvents()` now too, and not having it was the cause of a subtle bug that had @jPurush scratching her head.
username_0: Looks pretty easy to implement, I'd say start w/ this:
https://stackoverflow.com/a/39031032/656010
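The settled contract from that answer (never reject; report one `{state, value|reason}` record per input, in order) is sketched below with Python's asyncio, purely to illustrate the semantics; the hub.js implementation would of course be JavaScript:
```python
import asyncio

async def all_settled(awaitables):
    # Gather with return_exceptions=True so failures don't short-circuit,
    # then normalize each outcome into a settled record.
    results = await asyncio.gather(*awaitables, return_exceptions=True)
    return [
        {"state": "rejected", "reason": r} if isinstance(r, BaseException)
        else {"state": "fulfilled", "value": r}
        for r in results
    ]
```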
username_0: this will be less important after #135 |
airqo-platform/AirQo-frontend | 608288691 | Title: Blank Pages on loading Demo App Links
Question:
username_0: - What were you trying to achieve?
To test out the demo app links:
1. Locate: https://locate-dot-airqo-frontend.appspot.com/dashboard
2. Analytics: https://analytics-dot-airqo-250220.uc.r.appspot.com/
- What are the expected results?
Register and Login Buttons
**Analytics:**

**Locate:**

- What are the received results?

- What are the steps to reproduce the issue?
Just try accessing the links for the 2 sites.
1. Locate: https://locate-dot-airqo-frontend.appspot.com/dashboard
2. Analytics: https://analytics-dot-airqo-250220.uc.r.appspot.com/
- In what environment did you encounter the issue?
Browser environment
Answers:
username_0: To resolve this issue, we just need to remove a single line from our codebase before deploying. Steps to resolve the issue:
**The Solution:**
1. `cd AirQo-frontend/analytics/src/`
2. Open `store.js`
3. Remove the following line:
`window.__REDUX_DEVTOOLS_EXTENSION__ && window.__REDUX_DEVTOOLS_EXTENSION__()`

4. After editing, your `store.js` file should now look like below:

5. And then revisit the links accordingly.
- Locate: https://locate-dot-airqo-frontend.appspot.com/dashboard
- Analytics: https://analytics-dot-airqo-250220.uc.r.appspot.com/
Status: Issue closed
|
SMRFoundation/NodeXLBasic | 275027696 | Title: Twitter Search Import Tops Out under 100 Tweets?
Question:
username_0: I noticed the last few times (since Wednesday 11/4) I've tried to import through twitter search, the results are extremely incomplete. As an example, when attempting to import #dayofrage by Basic Network limited to 17,900 Tweets only 21 tweets come back. However, when just searching Twitter itself hundreds of tweets come back.
Do you know what would cause this disparity in numbers?
~E
#### This work item was migrated from CodePlex
CodePlex work item ID: '64993'
Vote count: '1'
Answers:
username_0: [Nasrim@11/24/2015]
I had the same problem. I believe this is due to connection slowness or geographic location.
I was trying to access Twitter search via NodeXL from Lebanon with a 2 Mb/s connection. This returned fewer than 100 tweets.
When I used a US server remotely with a 20 Mb/s+ connection, I received thousands of tweets.
hope this helps,
N. |
home-assistant/iOS | 850405823 | Title: Home Assistant app blurry on macOS
Question:
username_0: **Device model, version and app version**
Model Name: Mac Mini (M1, 2020)
macOS Version: macOS Big Sur 11.2.1
App Version: 2021.4 (2021.113)
Safari Version: 14.0.3
**Home Assistant Core Version**
core-2021.3.4
--
**Describe the bug**
On macOS, the Home Assistant app is blurry, as it seems to render everything at a pixel density different from other apps. When I compare the app side-by-side with Safari, text, icons, and images are more blurry. The issue is present on @1x displays and @2x displays (e.g. MacBook Air).
**To Reproduce**
Open the Home Assistant app in macOS and place it side-by-side with Home Assistant in Safari. Note the comparative sharpness of text.
**Expected behavior**
Text should render similarly in the Home Assistant app when compared to text throughout the system, as well as icons and images. The application should not appear more blurry than the rest of the system.
**Screenshots**

Safari on the left and Home Assistant on the right.
**Additional context**
n/a
Answers:
username_1: Fixed in #1600.
Status: Issue closed
username_2: 
Left is the macOS app and right is a Chrome web app (basically just Chrome without the address bar).
It seems that there is still something off with the text in the macOS app.
@username_1
username_0: @username_2 the issue I submitted was an app-wide resolution problem (note the blurry icons) and this, as far as I can see, was resolved. Your example appears to be a minor difference in rendering and anti-aliasing. Can you compare this app with Safari? Chrome is a different browser engine, likely with different text rendering.
username_2: The text rendering is the same across different browsers. Only the macOS app is doing something weird.
nteract/papermill | 360123806 | Title: Adding Zeppelin support
Question:
username_0: Hi, we are considering adopting Papermill for parameterizing and running our notebooks, but the main thing stopping us is the lack of support for Zeppelin. Our notebooks are a mix of Jupyter and Zeppelin, and having the ability to run both with the same library would be invaluable.
I was wondering if that is something that has been discussed before, and if this is something that would be a good fit for Papermill?
If this is something that would be of interest, I would be happy to try contributing something there.
Answers:
username_1: So from what I understand, this is slightly tricky because of the way that Zeppelin thinks about a notebook. Specifically because it is a jar file, it can be somewhat tricky to introspect and therefore tricky to parameterise.
It would likely require creating a library like `nbformat` for Zeppelin to plug into what we're currently doing with `nbformat` to parameterise our notebooks.
Under the hood there is a `note.json` (at least as of 3 years ago: http://fedulov.website/2015/10/16/export-apache-zeppelin-notebooks/) which seems to have a somewhat regular structure. I was not able to track down a spec for that file, so we may have no guarantees about what we can expect to find there.
Nonetheless that might be a place to start.
username_2: Hi and welcome to the :tada: @username_0 !
I think there's definitely room in papermill for processing Zeppelin notebooks. As M mentioned, it definitely operates in a different format than Jupyter, so it'd require a few components to get some abstraction upgrades.
The first abstraction that needs adjusting is the notebook format handling. We'd need something to load the `note.json` into `nbformat` or an nbformat-like object for processing. Parameterization would then need to be able to apply to both notebook formats in a similar manner -- or we'd need parameterization to be more abstract if an nbformat-like in-memory store is out. Either way, this might require upgrading parameterization to a more plug-and-play pattern like we do with other components of papermill.
Then we'd want to extend https://github.com/nteract/papermill/pull/204 with an `--engine=zeppelin` to wrap a Zeppelin executor. This will add a Java dependency for this particular engine, but that's ok and we can just raise an exception if the JRE isn't available inside the engine.
And finally we'd need to figure out how to handle the iorw patterns for a non-Jupyter document. This one would require a little more thought, but I don't see any reason we couldn't solve it there too.
amyy27/reclassification_hw | 152535031 | Title: Review
Question:
username_0: Nice work - I was trying to get you to compare the output of using greengenes in `classify.seqs`, but it looks like you tested the effects of using different alignment references. Regardless, good job. |
hankinsoft/SQLPro | 962314887 | Title: Crash when comparing databases
Question:
username_0: Attempting to compare databases results in a crash immediately after the comparison window displays. Crash report attached:
[SQLPro crash when comparing databases.txt](https://github.com/username_1/SQLPro/files/6942420/SQLPro.crash.when.comparing.databases.txt)
Answers:
username_1: Thank you - I've reproduced and am submitting a fix to the App Store now.
username_1: Hi - Just wanted to check in and see if the updated build sorted the issue you were seeing?
username_1: Ohhhh, sorry about that - I'll push a new website version update hopefully tomorrow (also FYI, the license key you have will work on the App Store version too).
username_0: I didn't know that, I'll just grab it from the App Store and get right back to you.
username_1: Ahhhh, my bad. I deal with my other products that use free download + IAP and forgot SQLite was an upfront purchase.
I've pushed an updated website build which can be downloaded at:
https://sqlprostudio.s3.us-east-1.amazonaws.com/sqlite/SQLProSQLite.2021.72.app.zip
That should sort the crashing issue. Apologies for the delay.
username_0: No need to apologize; the update has fixed the crash and database comparison works perfectly, thanks!
username_1: Awesome. I'm going to close the issue. If you run into anything else, please let me know.
Status: Issue closed
|
Alfresco/alfresco-ng2-components | 677541976 | Title: Request for additional TreeView features
Question:
username_0: **Current behaviour:**
N/A
**Expected behavior:**
N/A
**Steps to reproduce the issue:**
N/A
**Component name and version:**
N/A
**Browser and version:**
N/A
**Node version (for build issues):**
N/A
**New feature request:**
Would like the Tree View to support the following:
- Optional rendering of document nodes within tree
- Ability to override/supplement content for a tree branch (e.g. perhaps additional text such as doc description, etc)
- Ability to set the current tree node (as opposed to just the root node)
- Ability to override node icons (e.g. indicate that folders/doc is sensitive)
- Ability to specify node actions via context menu |
H571531/git.Samarbeid | 312874956 | Title: Oblig 5, Exercise 2 b)
Question:
username_0: Implementation in the client: Exercise 2
- [ ] the number of nodes n
- [ ] the minimal theoretical height (here it would be natural to create a method that computes this value for a given value of n).
- [ ] the maximal theoretical height
- [ ] the smallest height during the runs
- [ ] the largest height during the runs
- [ ] the average height across all runs
Status: Issue closed |
cerner/splunk-pickaxe | 299386124 | Title: Process the Splunk objects with ERB to allow for dynamic configuration
Question:
username_0: I have a situation where the Splunk index name used between two different environments is different (for example "my_index_dev" and "my_index_staging"). I would like to update this project to process the Splunk object files using ERB so that I can adjust the Splunk index referenced in the searches using an environment variable.
I have implemented this approach in my fork:
https://github.com/username_0/splunk-pickaxe/commit/2fc0e481ef9294d0163394848203d026a3bf30d5
I'm also willing to consider alternative approaches.
Answers:
username_1: This looks good. Would you want some way of defining environment variables in the environment config? https://github.com/cerner/splunk-pickaxe/blob/master/example-repo/.pickaxe.yml#L5
username_0: Good idea. I'll update the implementation to make all of the variables in the current environment available to the ERB processing.
Status: Issue closed
username_1: Done by #13 |
EuropeanRespiratorySociety/api-ers | 276681263 | Title: [Cloud CMS] add interests
Question:
username_0: Interests are added as a feature in Cloud CMS.
We can get those features by API request -> #3
Answers:
username_0: Available properties
```
"diseases": [
"Airway diseases",
"Interstitial lung diseases",
"Paediatric respiratory diseases",
"Pulmonary vascular diseases"
],
"methods": [
"Endoscopy and interventional pulmonology",
"Pulmonary function testing"
],
```
Status: Issue closed
|
nodecg/nodecg | 338694778 | Title: Rewrite browser tests using Puppeteer
Question:
username_0: The main benefit of Selenium/Sauce Labs is that it lets you test multiple browsers easily. However, we only test Chrome. Therefore, we get all of the drawbacks of Selenium and few of the benefits. Puppeteer would be a better fit for us. |
jquense/yup | 597105975 | Title: setValue does not trigger validate.
Question:
username_0: **Describe the bug**
1. Submit and get your errors.
2. An action runs setValue.
3. The value is valid, but the error message is not hidden.
4. Submit again and everything works.
I think setValue should also trigger validation and set/clear errors along with it, or an API for doing so should be made available.
Thank you
Answers:
username_1: Please follow the issue template. If this is Formik-related, please open an issue on that project, thanks.
Status: Issue closed
|
jhipster/generator-jhipster | 355917254 | Title: Impossible to upgrade: can't answer a usage statistics question (console blocked)
Question:
username_0: <!--
- Please follow the issue template below for bug reports.
- If you have a support request rather than a bug, please use [Stack Overflow](http://stackoverflow.com/questions/tagged/jhipster) with the JHipster tag.
- For bug reports it is mandatory to run the command `jhipster info` in your project's root folder, and paste the result here.
- Tickets opened without any of these pieces of information will be **closed** without any explanation.
-->
##### **Overview of the issue**
So I have a project generated by JHipster 5.1.0 and I tried to upgrade it to the latest JHipster version. Everything works well until one of the last steps specified in the documentation:
`Re-generate the application using the jhipster --force --with-entities command.`
At this step it asks a question and the process seems to be blocked (I waited for several minutes), and of course when I answer the question nothing happens:
```
? May JHipster anonymously report usage statistics to improve the tool over time n
? (Y/n)
```
Here is the complete verbose output:
```
jhipster upgrade --force --verbose
Using JHipster version installed globally
Executing jhipster:upgrade
Options: force: true, verbose: true
Welcome to the JHipster Upgrade Sub-Generator
This will upgrade your current application codebase to the latest JHipster version
Looking for latest generator-jhipster version...
yarn info v1.7.0
5.2.1
Done in 0.54s.
√ New generator-jhipster version found: 5.2.1
info git rev-parse -q --is-inside-work-tree
true
√ Git repository detected
info git status --porcelain
info git rev-parse -q --abbrev-ref HEAD
jhipster_upgrade
info git rev-parse -q --verify jhipster_upgrade
b83f47708ec92129746071d3997e51c4c79bb39d
info git checkout -q jhipster_upgrade
√ Checked out branch "jhipster_upgrade"
Updating generator-jhipster to 5.2.1 . This might take some time...
info yarn add [email protected] --dev --no-lockfile --ignore-scripts
yarn add v1.7.0
[1/5] Validating package.json...
[2/5] Resolving packages...
[3/5] Fetching packages...
info [email protected]: The platform "win32" is incompatible with this module.
info "[email protected]" is an optional dependency and failed compatibility check. Excluding it from installation.
[4/5] Linking dependencies...
warning " > @fortawesome/[email protected]" has incorrect peer dependency "@angular/common@^5.0.0".
warning " > @fortawesome/[email protected]" has incorrect peer dependency "@angular/compiler@^5.0.0".
warning " > @fortawesome/[email protected]" has incorrect peer dependency "@angular/core@^5.0.0".
warning " > @fortawesome/[email protected]" has incorrect peer dependency "@angular/platform-browser@^5.0.0".
warning " > @fortawesome/[email protected]" has incorrect peer dependency "@angular/platform-browser-dynamic@^5.0.0".
warning " > @fortawesome/[email protected]" has incorrect peer dependency "rxjs@^5.5.2".
warning " > [email protected]" has unmet peer dependency "popper.js@^1.12.9".
warning " > [email protected]" has incorrect peer dependency "@angular/core@^5.0.0".
warning " > [email protected]" has incorrect peer dependency "@angular/router@^5.0.0".
[Truncated]
```
copy/paste the result here.
The `.yo-rc.json` file generated in the root folder is mandatory for bug reports. This will help us to replicate the scenario.
You should remove any sensitive information like the rememberMe key or the jwtSecretKey key.
-->
##### **Entity configuration(s) `entityName.json` files generated in the `.jhipster` directory**
<!--
If the error is during an entity creation or associated with a specific entity.
If you are using JDL, please share that configuration as well.
-->
##### **Browsers and Operating System**
<!-- What OS are you on? is this a problem with all browsers or only IE8? -->
I'm on Windows 7
- [x] Checking this box is mandatory (this is just to show you read everything)
<!-- Love JHipster? Please consider supporting our collective:
👉 https://opencollective.com/generator-jhipster/donate -->
Status: Issue closed
Answers:
username_1: I'm closing this as it's a duplicate from #8156 (there's an answer there!)
But if you can wait until this afternoon, see https://groups.google.com/forum/?hl=en#!topic/jhipster-dev/uSI0ryoK4P0 -> there should be a new release with a fix for this (as well as tons of other fixes)
username_0: Thanks ! I'll check that after |
adobe/aem-core-wcm-components | 956447811 | Title: Provide bundled scripts
Question:
username_0: ## Feature Request
AEMaaCS ships with bundled scripts (including the ones from the Core WCM Components), while AEM on-prem requires manual installation of the Core WCM Components. That release only ships with regular resource scripts instead of bundled scripts, though.
Both for performance reasons and also to make it easier to write bundled extended components (https://issues.apache.org/jira/browse/SLING-10689), the Core Components release should (at least optionally) provide the precompiled HTL scripts in a bundle.
Bundled scripts are described at https://github.com/apache/sling-org-apache-sling-servlets-resolver#how. The necessary metadata can be generated with the help of https://sling.apache.org/components/scriptingbundle-maven-plugin/index.html.
Answers:
username_0: @username_1 Can you come up with a PR to generate the bundle containing precompiled scripts (at least during release)?
username_1: I'm not sure if this is something we need to implement, as the Core Components are already precompiled in AEMaaCS, like all the scripts coming from `/libs`. When using the Core Components on-prem or on AMS, there's no need to precompile them, as those deployments do not officially support precompiled bundled scripts.
username_0: Why are precompiled bundles not supported on AEM 6.5.x? There is nothing mentioned in https://experienceleague.adobe.com/docs/experience-manager-core-components/using/developing/archetype/precompiled-bundled-scripts.html related to that and IMHO they should be theoretically supported since servlets resolver 2.7.0 (https://sling.apache.org/documentation/bundles/scripting.html#bundled-scripts) while AEM 6.5.10 ships with 2.7.10.
Nevertheless there seem to be some bugs in precompiled bundles on AEM 6.5.10 currently:
1. Order seems to be inverted (resource scripts ranked higher than bundled scripts)
2. Output flushing does not work (a simple "Hello World" does not end up in the response body)
@username_1 Can you clarify the support for precompiled bundles in 6.5.x?
Status: Issue closed
|
openvax/neoantigen-vaccine-pipeline | 293724245 | Title: Add MultiQC
Question:
username_0: [MultiQC](http://multiqc.info/) aggregates analysis logs from other tools (like FastQC, [FastQ Screen](http://www.bioinformatics.babraham.ac.uk/projects/fastq_screen/) and many others) into a pretty multi-sample report.
Quick Start:
```
conda install -c bioconda multiqc
multiqc .
```
Answers:
username_1: Will start with fastqc |
stfc/PSyclone | 270961717 | Title: valid fortran names should be created from named invoke labels
Question:
username_0: Fortran call and subroutine names must obey certain rules. For example, they must start with a letter and should not contain unsupported characters. At the moment we do not check for this, so invalid names can be specified. For example ...
```
name="_stuff"
name="1_stuff"
name=" stuff"
name="st(u)ff"
name="st*(^%$#@"
```
We need to specify a regex which corresponds to a valid Fortran name and raise an exception if the label does not conform to this rule. Something like ...
`[a-zA-Z][a-zA-Z_]*`
but we can look at the standard and/or the parser implementations to get the appropriate regex.
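To make the idea concrete, the check could be as small as this (sketched in TypeScript for brevity, although PSyclone itself is Python; also note the Fortran standard additionally permits digits after the first character, so the exact pattern still needs to be confirmed against the standard and/or the parsers):
```typescript
// Hypothetical validator for named invoke labels: accept only strings
// that can stand in as Fortran names (a letter first, then letters,
// digits, or underscores).
const FORTRAN_NAME = /^[A-Za-z][A-Za-z0-9_]*$/;

function checkInvokeLabel(label: string): void {
  if (!FORTRAN_NAME.test(label)) {
    throw new Error(`Invalid invoke name: "${label}" is not a valid Fortran name`);
  }
}

// checkInvokeLabel("stuff");   // passes
// checkInvokeLabel("_stuff");  // throws, as in the examples above
```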
Answers:
username_0: Fixed by Andy in PR #97. Closing this issue
Status: Issue closed
|
rogerfar/rdt-client | 897979937 | Title: Sometimes Radarr or Sonarr can't import due to looking for folder with exact same name as file
Question:
username_0: I have this issue, but only on occasion and not consistently in both Sonarr and Radarr:
When a file is downloaded, it is located in a folder that was created in my Sonarr or Radarr subfolders. The file is usually named very similarly to the folder, but not exactly. This is often not a problem for imports, but sometimes this happens:
Import failed, path does not exist or is not accessible by Radarr: /xxxxxxxxx/radarr/NAMEOFFILE.mkv/. Ensure the path exists and the user running Radarr has the correct permissions to access this file/folder
My problem is that Radarr is looking for a FOLDER with the name of the file. If I change the folder to have the EXACT SAME name as the file (including the file extension) then it finds the file. In the example above, I have a file named NAMEOFFILE.mkv. If I change the folder the file is in to be named NAMEOFFILE.mkv, then the import will proceed. This doesn't happen all the time, but it happens often enough that I have a Python script I can run that will rename the folder to the file name of the file inside it to fix the issue.
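For reference, that script boils down to something like this (a Node/TypeScript sketch of the same idea; my actual script is Python, and the root path here is illustrative):
```typescript
import { promises as fs } from 'fs';
import * as path from 'path';

// For every single-file download folder, rename the folder so it exactly
// matches the file inside it (extension included), which is the folder
// name Radarr/Sonarr ends up looking for.
async function fixDownloadDirs(root: string): Promise<void> {
  for (const entry of await fs.readdir(root, { withFileTypes: true })) {
    if (!entry.isDirectory()) continue;
    const dir = path.join(root, entry.name);
    const files = await fs.readdir(dir);
    if (files.length !== 1) continue; // only handle single-file folders
    const wanted = path.join(root, files[0]);
    if (wanted !== dir) await fs.rename(dir, wanted);
  }
}

fixDownloadDirs('/xxxxxxxxx/radarr').catch(console.error);
```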
Any ideas? Thanks!
Answers:
username_1: Are you running the latest version? There was an issue that newer torrents would get renamed in RealDebrid, thus the path was wrong when downloaded. But this should have been fixed a while ago already.
username_0: Just double checked, yup running 1.7.4
username_2: Same happening here, im on 1.7.4 also
Error|DownloadedEpisodesImportService|Import failed, path does not exist or is not accessible by Sonarr: /downloads/sonarr/A.Million.Little.Things.S03E15.720p.WEB.H264-STRONTiUM.mkv/. Ensure the path exists and the user running Sonarr has the correct permissions to access this file/folder
If i rename the actual directy to above, sonarr imports successfully. Happens on a variety of different shows but not all.
username_1: Does it consistently happen with the same torrent? If so can you send it to me through a pastebin?
username_2: I want to say it works intermittently with the same torrent...I downloaded it a few more times and it was always creating the XXXX.mkv/ directory. However, while I was doing this sonarr was being wonky, I think because the torrent got placed in the blacklist.
I tried a different torrent (to see if Sonarr was still being wonky) for the same show, and it created a directory named 'XXXXX[rarbg]' and it downloaded and imported correctly.
Next time it happens I'll capture the logs and attempt a retry with the same torrent again.
username_3: Fwiw, I see this pretty regularly as well.
username_3: It looks like the folder name is perhaps created incorrectly? Looking at the qbt API endpoint created by rdt-client, the 'content_path' matches the torrent 'name', but the folder created strips the file extension present in the name. (At least the name shown)
Either that, or the file extension inside the torrent is getting added to the torrent name somewhere after folder creation?
I expect this happens pretty much every time with single file torrents for me to be honest, so should be pretty reproducible?
username_1: I ran into this again now as well, but for the first time in a few weeks...
What I think happens is that the torrent somehow gets renamed along the way, that's why it's almost impossible to reproduce because as soon as RDT has the torrent, it won't rename it again.
I think I'll do 2 things: as soon as it has a torrent name, never rename it again, and force the extra path to use the torrent name, not the name of the downloaded RAR file.
Would that work?
username_1: Actually I think I see it now, there's another property in the API for OriginalFileName which was used wrongly before, on the next release this should be fixed.
username_1: This is fixed in 1.7.6
Status: Issue closed
|
aws/aws-cli | 87722577 | Title: Pip install fails AWS CLI 1.7.34 wants simpleJSON in python 2.6
Question:
username_0: ```
Traceback (most recent call last):
File "/usr/local/bin/aws", line 19, in <module>
import awscli.clidriver
File "/usr/local/lib/python2.6/site-packages/awscli/clidriver.py", line 16, in <module>
import botocore.session
File "/usr/local/lib/python2.6/site-packages/botocore/session.py", line 27, in <module>
import botocore.config
File "/usr/local/lib/python2.6/site-packages/botocore/config.py", line 18, in <module>
from botocore.compat import six
File "/usr/local/lib/python2.6/site-packages/botocore/compat.py", line 119, in <module>
import simplejson as json
ImportError: No module named simplejson
```
Answers:
username_1: What version of pip are using and what OS are you using?
username_0: I can repro the problem on my local machine using the system installed python2.6 in /usr/bin
I set up a virtual env...
$ pip --version
pip 1.5.6 from /Users/daviusername_0/.virtualenvs/awscli_test/lib/python2.6/site-packages (python 2.6)
Mac OSX 10.9.5
$ uname -a
Darwin ip-192-168-60-85.ec2.internal 13.4.0 Darwin Kernel Version 13.4.0: Wed Mar 18 16:20:14 PDT 2015; root:xnu-2422.115.14~1/RELEASE_X86_64 x86_64
username_2: Confirmed, the issue is that the whl files were not specifying the conditional dependencies properly. We'll just need to re-upload new whl files to fix this. I'll update once this is done.
The workaround for anyone running into this on python2.6 is to manually install the dependencies for now `pip install simplejson ordereddict argparse`.
Status: Issue closed
username_2: Should be fixed now. The latest release also added support for installing from .whl files which has a few issues we need to fix. For the time being, I've pulled the .whl files out so we'll just use sdists (.tar.gz), which is what we were using previously.
Let me know if you're still running into any issues. |
coronasafe/care | 669417847 | Title: maps- visibility of roads (Hotspots.coronasafe)
Question:
username_0: At present the internal roads, which appear to be the base layer of the map, are not visible once the local body is selected. They are needed to understand the actual containment zones. If the underlying road structure could be made visible, it would be of help.
Also, is it possible to add a layer where we can draw over the present layer of local bodies ourselves?
Answers:
username_1: @bodhish shift issue to hotspot maps.
username_2: @username_0
First of all, future issues related to hotspot map can be created here https://github.com/coronasafe/kerala-map
1. Visibility of internal roads has been improved

2. As for the drawing feature, I'm currently working on it. Any further discussion can be made here https://github.com/coronasafe/kerala-map/issues/13
Status: Issue closed
|
michaelschwarz/Ajax.NET-Professional | 1104377750 | Title: Google Chrome Issue with doStateChange in core.ashx
Question:
username_0: Google Chrome cannot be depended upon to send an `xhr.statusText` of `"OK"` when `xhr.status == 200`.
This results in intermittent false failures in the Chrome browser.
I suggest removing the check for `"OK"` in the line `if(this.xmlHttp.status == 200 && this.xmlHttp.statusText == "OK") {` and instead just using `xhr.status`.
Info:
- https://github.com/axios/axios/issues/1501
- https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/statusText
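A minimal sketch of the proposed check (the xhr shape is assumed here):
```typescript
// Proposed success test: rely on the numeric status alone. Chrome (and any
// HTTP/2 response, where the protocol carries no reason phrase at all) may
// report a statusText other than "OK", so comparing against it produces
// intermittent false failures.
function isHttpOk(xhr: { status: number; statusText: string }): boolean {
  return xhr.status === 200;
}
```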
Answers:
username_1: Hi @username_0,
thanks a lot for your feedback, and I will add this shortly after having some tests done. But I agree that status should be enough here.
Status: Issue closed
|
runelite/runelite | 324637456 | Title: RuneLite on Ubuntu (Gallium OS)
Question:
username_0: I am having an issue and I don't know if anyone else has faced it. Using RuneLite on Gallium OS on my Chromebook, I am having issues after loading up the client.
The screen is black, but the sound plays. I can click in the relative position of the sound button and mute it. So it seems the game is loading up, but nothing is actually displaying inside the client.
I do not think it is a resource issue, as I can load up OSB on here with no issues, but I would much rather use RuneLite because of the resource issues on Chromebooks.
Any suggestions?
Answers:
username_1: Hi,
Could you try disabling hardware acceleration and see if that resolves the issue?
https://github.com/runelite/runelite/wiki/Disable-Hardware-Acceleration
username_0: That did it! Thanks! |
nicolasrll/framework-from-scratch | 547382022 | Title: Step 4: move $content out of the if and else
Question:
username_0: 4.1 Go through a function that corresponds to the name of the page provided in the URL
4.2 Thinking:
4.2.1: Group functions that are in common (e.g. front / admin / article display or delete)
4.2.2: How to know which function to call if only the controller is transmitted in the URL
Status: Issue closed |
dotnet/efcore | 811419549 | Title: What rules does Microsoft.EntityFrameworkCore.Analyzers bring?
Question:
username_0: ## Ask a question
I'm trying to decide if I should include Microsoft.EntityFrameworkCore.Analyzers in my project but don't know what rules it includes. Is this documented somewhere or should I just look in the source code? Would be nice if they were documented (if they aren't).
EF Core version: 5.0.3
Target framework: .NET 5.0
Operating system: Windows 10 Pro
IDE: Visual Studio 2019 16.8
Answers:
username_1: The only analyzer currently provided is one which checks that you're not using any EF Core internal APIs; in EF Core almost all APIs are public, but some APIs are placed under Internal namespaces or are annotated with [EntityFrameworkInternal] to mark them as not for typical external use. The analyzer will flag usage of these as a warning.
We can document this, but I don't think there's much value unless we bring in more useful diagnostics...
username_0: Thank you for your response @username_1. I personally think it would still be useful as I like to know what is being pulled into my project but totally understand if you don't do it. Thanks again for your time. |
expressjs/session | 186138543 | Title: session.reload causes session resave at the end of the request
Question:
username_0: To avoid race conditions, whenever we change the session, we do `session.reload` & `session.save`.
We also use a custom synchronous session store, so session changes are atomic.
But I noticed that the session is saved again at the end of the request. This leads to the following sequence:
1. session.reload
2. session.save
3. session.save
Any parallel session changes between step 2 and 3 are overwritten by the save in step 3.
This leads again to race conditions.
It seems that session methods are not wrapped after `session.reload` and any further saves are not registered by _express-session_. So it thinks the session is not saved and saves it at the end of the request.
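For context, the update pattern we use looks roughly like this (a sketch; the route and field names are illustrative):
```typescript
import express from 'express';
import session from 'express-session';

const app = express();
app.use(session({ secret: 'keyboard cat' /* plus our custom synchronous store */ }));

app.post('/increment', (req, res) => {
  // 1. Reload the freshest state from the store so we don't clobber
  //    concurrent changes...
  req.session.reload((err) => {
    if (err) return res.sendStatus(500);
    (req.session as any).counter = ((req.session as any).counter ?? 0) + 1;
    // 2. ...and persist our change explicitly.
    req.session.save((err2) => {
      if (err2) return res.sendStatus(500);
      res.sendStatus(204);
      // 3. The reported bug: express-session saves once more when the request
      //    ends, overwriting anything written to the store between 2 and 3.
    });
  });
});
```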
Status: Issue closed |
SeldonIO/seldon-core | 546907195 | Title: seldonio/seldon-core-s2i-python3-tf-gpu:0.15 image default python version is 2.7.17
Question:
username_0:
Answers:
username_1: Thanks for submitting the issue. I can confirm that there is indeed a problem with the `seldon-core-s2i-python3-tf-gpu`.
I was able to pin it to the fact that our regular [python images](https://github.com/SeldonIO/seldon-core/blob/master/wrappers/s2i/python/Dockerfile.tmpl) use as their base e.g. `python:3.7`, whereas the [gpu image](https://github.com/SeldonIO/seldon-core/blob/master/wrappers/s2i/python/Dockerfile.gpu.tmpl) uses `nvidia/cuda:10.0-cudnn7-runtime-ubuntu18.04`.
In `python` images both `pip` and `pip3` point to the same executable, but on Ubuntu 18.04 `pip` and `pip3` unfortunately refer to Python 2 and Python 3 respectively.
While we work on a solution for this, as a workaround you could build your own base image (base it on our GPU image) and `pip3 install` whatever you need. This way you will make sure that it gets installed into the Python 3 site-packages. You can look at the [outlier example](https://github.com/SeldonIO/seldon-core/tree/master/examples/outliers/alibi-detect-combiner) where we do something similar (see [Dockerfile](https://github.com/SeldonIO/seldon-core/blob/master/examples/outliers/alibi-detect-combiner/Dockerfile) and [Makefile](https://github.com/SeldonIO/seldon-core/blob/master/examples/outliers/alibi-detect-combiner/Makefile)).
username_1: @axsaucedo As we build this image as [python3](https://github.com/SeldonIO/seldon-core/blob/c687012257c475471a42ec3d2c7d31662bfe9f83/wrappers/s2i/python/Makefile#L9) why do we even install Python 2 there?
https://github.com/SeldonIO/seldon-core/blob/c687012257c475471a42ec3d2c7d31662bfe9f83/wrappers/s2i/python/Dockerfile.gpu.tmpl#L7
As a fix for this, we could skip installing Python 2 and just symlink `pip` and `python` to `pip3` and `python3`, making it an exclusively Python 3 image. As neither of those is present in the base image, this shouldn't break anything critical.
Another solution would be to set up some environment variables and modify the [assemble](https://github.com/SeldonIO/seldon-core/blob/master/wrappers/s2i/python/s2i/bin/assemble) script accordingly to use `pip3` instead of `pip`.
username_1: I opened a PR that solves it using symlinks.
Status: Issue closed
username_1: @username_0 The fix for this issue has been merged to master.
Updated image has been uploaded to docker hub as `seldonio/seldon-core-s2i-python3-tf-gpu:0.16-SNAPSHOT` and will be part of `0.16` release.
Please, give it a shot and reopen the issue in case it does not solve your problem.
username_0: Looks good to me. Thanks for taking care of this! |
hecrj/wgpu_glyph | 709145479 | Title: Add examples for text-selection, underlines, etc.
Question:
username_0: I'm working on using wgpu_glyph in the context of a code editor, in particular the ability to show that text has been selected. This is proving more difficult than I anticipated, but seems like a common use case, so once I figure it out I intend to make a PR to add examples for how to go about this. If anyone has already figured this out, I would love to hear how they went about it.
Answers:
username_1: I do have a solution, but it is far from ideal. I wanted to look into adding support for this through code changes in wgpu_glyph itself, but haven't had time yet.
This is a simplified version of how I ended up doing it. After each draw I can use self.last_bounds + the index of the glyph to
highlight a rect that covers it like a text selection.
Note: To handle whitespace I ended up replacing it with a "_" and making the color invisible. When I used regular whitespace,
nothing was drawn and the bounding rect was missing, causing the indexes to be wrong.
```rust
fn my_draw_text(&mut self, ...) {
let layout = ...;
let section = ...;
// Avoid re-allocating new vec's every time by storing them internally
self.last_glyphs.clear();
self.last_bounds.clear();
// Get all the glyphs for the current section to calculate their bounds. Due to
// mutable borrow, this must be stored first.
self.last_glyphs
.extend(self.brush.glyphs_custom_layout(&s, &layout).cloned());
// Calculate the bounds of each glyph
self.last_bounds
.extend(self.last_glyphs.iter().map(|glyph| {
let bounds = &fonts[glyph.font_id.0].glyph_bounds(&glyph.glyph);
Rect::new(
Vec2::new(bounds.min.x, bounds.min.y),
Vec2::new(bounds.max.x, bounds.max.y),
)
}));
// Queue the glyphs for drawing
self.brush.queue_custom_layout(s, &layout);
}
```
username_0: This is helpful, thank you for posting your example! |
caojiangxia/caojiangxia.github.io | 448217043 | Title: BPalgorithm - Backpropagation Algorithm | caojiangxia
Question:
username_0: https://username_0.github.io/BP/#more
The BP (backpropagation) algorithm is used to update the parameters of a neural network, and it is absolutely crucial for understanding how neural networks operate. When I first came into contact with neural networks, my immediate impression was that they were black magic: you just build a network, feed it some training data, and it works well??? Such an overly "intelligent" method clashed badly with my original understanding of algorithms. But, sorry, the BP algorithm really can do whatever it wants. It is true that you only need to set up a network structure and supply some data; it can then update the parameters, and in theory a two-layer neural network with a nonlinear activation function is enough to fit
Tillerino/Tillerinobot | 191368344 | Title: move translations to resource files
Question:
username_0: In my opinion, anything translations related should be moved to text-based resource files for easier editing/translating. That way people translating do not have to worry about things like "a string is a thing between double quotes" or whatever. Just translate a freakin file and done.
Usually this would be done with something like i18n, but I feel like that is probably not gonna work for this, for multiple reasons:
* At multiple places it is not just a string, but a collection of stuff where it picks a random one. Even at some place the first part is fixed and the second part is random
* Tsundere isn't exactly "just" a translation, it does lots of other crap... Notably the fake recommendations.
* Probably other stuff I haven't thought about.
As a basic start this should be simple, just a txt-file per language with a key-value pair per line where the value is a string that can be passed to `String.format()` to include runtime info. For example:
```
@Override
public String unknownCommand(String command) {
return "Unknown command \"" + command
+ "\". Type !help if you need help!";
}
```
becomes
```
UnknownCommand=Unknown command "%s". Type !help if you need help!
```
Also we need to include a thing somewhere at the start of the file which has the name of the language, aka the thing you type in chat to enable that language. I propose a reserved `Name` key (which you should always put at the start of the file for clarity). This key can be defined multiple times (like for example the English one is called "English" and "Default", don't wanna break the current system ^^)
Of course empty lines and lines starting with a pre-defined "comment identifier" (`#`? `//`? both?) will be ignored when reading.
One file is set as the "default" one (so that'll be english) and if a file does not have a key defined, it'll use the value of the default one. So no ugly `return new Default().someMessage()` anymore.
Waay simpler to read, no worries about escaping shit, just text.. (well, if you ever wanna use a `%` you need to write `%%`, but that won't happen I guess :stuck_out_tongue:)
Now to the "random" part, I guess that's simple: as soon as a key is defined multiple times, it means pick one at random. If the first part of a message is static, and the second part is random: it is simply two messages.
***First question: Do we need a Random instance per lang or a global one, or something else? In the current implementation there are a lot of Random instances scattered around everywhere...***
Now about the welcomeUser message: Those are just multiple messages, the what-message-when logic should be moved out of the language stuff anyway.
***Second question: What to do with multi-line messages?*** Take the last else-if-clause of the welcomeUser-message: it has 3 lines. Implementing multiple lines by defining a key multiple times is already reserved for the random-feature, so maybe adding a `_#` suffix with a "line number"? I dunno, I feel like I should not worry about which line I am, altho it is needed if you want multiline responses with random support. No idea yet on how to solve.
Now to the next problem: Tsundere translations! From what I can see the biggest difference is it has "more" messages. Like, the default translation has things where it responds with nothing. Also we need a system for the fake recommendations, don't want to have that in both tsundere translation files ofc. Maybe the fake recommendations should be moved out of the language stuff altogether? But ofc that doesn't solve the problem :cry: since the fakes are part of the `invalidChoice` message...
***Third question: how to solve fake recommendations being integrated into the system...***
***Random question to end with: what the heck is that changed thing?***
I probably forgot other stuff....
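To pin down the semantics, the read side of the proposed format could be as small as this (a sketch in TypeScript purely for illustration, since the bot itself is Java; `Name` handling and the `String.format`-style substitution are left out):
```typescript
import { readFileSync } from 'fs';

// Duplicate keys accumulate into a list, which is what backs the
// "defined multiple times means pick one at random" rule described above.
function parseLanguageFile(file: string): Map<string, string[]> {
  const entries = new Map<string, string[]>();
  for (const raw of readFileSync(file, 'utf8').split(/\r?\n/)) {
    const line = raw.trim();
    if (line === '' || line.startsWith('#') || line.startsWith('//')) continue;
    const eq = line.indexOf('=');
    if (eq < 0) continue; // or fail loudly on malformed lines
    const key = line.slice(0, eq).trim();
    const values = entries.get(key) ?? [];
    values.push(line.slice(eq + 1));
    entries.set(key, values);
  }
  return entries;
}
```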
Answers:
username_1: Maybe the best solution would be to externalize all the strings in the default translation and keep the possibility to override methods in a class implementation. I wouldn't try to "generalize" all those Tsundere shenanigans. Basic translations should be simple, freaky translations can mean that you need to code stuff.
The other thing that is huge about coded languages is build time verification (including format strings with findbugs). As soon as you externalize all that, you need to make sure that everything is tested at build time.
A third thing is that the translation must be instantiable very fast. It is constructed all the time. Loading a resource file every time it is constructed is suboptimal.
The changed field tells the `UserDataManager` if the language object has changed, i.e. if it needs to be saved. `UserData` is saved lazily, so this is pretty important.
username_0: About resource loading, it should be done only on app startup, like once. But I can see the point you make about build time verification... I think I'm just gonna try some stuff and see where it goes.
What I would want is something like this:
```
// getLang doesn't actually construct anything, that's already done; it's basically a lookup
Language lang = LanguageManager.getLang("Nederlands");
lang.getMessage("welcomeMessage", "oliebol"); // "Welkom terug, oliebol."
lang = LanguageManager.getLang("Default");
lang.getMessage("welcomeMessage", "oliebol"); // "Welcome back, oliebol."
```
where `getLang` probably should throw if it's not available, `getMessage` maybe as well, not sure (it could just return the key like most i18n implementations do afaik). I do agree that this is less optimal than compile-time checks that every message is actually there, versus just assuming it's there in resource files, but I do believe it is worth it by gaining the much easier way of editing said resource files.
The `LanguageManager.getLang` thing could just go in `UserDataManager`'s `getLanguage`. How it'll work with setting n stuff, I'll figure out as I go.
About the changed field, the only places I see it being used are the Tsundere languages, is that only to preserve the "random" status or something? Or what is actually going on there?
JackFCH/MI-349-html-final-project-pitchboard | 521888942 | Title: Project: Pitchboard
Question:
username_0: ## Final Project Pitchboard
@username_1 Can you take a look at this? It's [hosted here](https://username_0.github.io/MI-349-html-final-project-pitchboard/index.html) and meets the following criteria:
- [x] Pitchboard is delivered as an HTML document
- [x] Pitchboard has basic CSS styles and a layout
- [x] Pitchboard includes three comparable sites
- [x] Pitchboard includes three personas that represent the core audience or user base
- [x] Pitchboard includes a long elevator pitch
- [x] Pitchboard includes a short elevator pitch
<!-- ADD YOUR OWN NOTES, IF ANY, BELOW THIS LINE -->
Answers:
username_1: You need more detail in each of the sections. More detail in the short and long pitches. More specific personas, and links and details about your comps. Please spend more time on this and resubmit. |
zloirock/core-js | 1103478748 | Title: feature: EventTarget
Question:
username_0: `EventTarget` does not exist in Node < 16. [Example](https://github.com/testing-library/user-event/issues/817).
Some existing polyfills:
- https://github.com/benlesh/event-target-polyfill#readme
- https://github.com/mattkrick/event-target-polyfill
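For a sense of the surface involved, a bare-bones sketch (nowhere near spec-complete; the packages above also handle listener options, capture order, and proper `Event` semantics):
```typescript
type Listener = (event: { type: string }) => void;

// Minimal stand-in for the EventTarget trio of methods.
class SimpleEventTarget {
  private listeners = new Map<string, Set<Listener>>();

  addEventListener(type: string, listener: Listener): void {
    let set = this.listeners.get(type);
    if (!set) {
      set = new Set();
      this.listeners.set(type, set);
    }
    set.add(listener);
  }

  removeEventListener(type: string, listener: Listener): void {
    this.listeners.get(type)?.delete(listener);
  }

  dispatchEvent(event: { type: string }): boolean {
    for (const listener of this.listeners.get(event.type) ?? []) listener(event);
    return true; // a real implementation reflects cancellation here
  }
}
```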
Answers:
username_1: I'll think about whether it makes sense or not, thanks.
username_0: It's listed as experimental starting in Node 14.5 but I cannot find the flag to enable it (in Node 14.18). |
GoogleCloudPlatform/functions-framework-nodejs | 452459863 | Title: Google docs still refer to deprecated functions emulator
Question:
username_0: Pages like this https://cloud.google.com/functions/docs/emulator
still refer to the deprecated functions emulator.
Can we push for them to update to point to the functions framework?
Answers:
username_1: Thanks for the feedback -- we have an angry red box at the top of that page now. I suspect we still mention the emulator elsewhere and we'll progressively remove those references.
Status: Issue closed
username_0: Great thx |