repo_name | issue_id | text
---|---|---
TRemigi/the-fyxx-web | 749255260 | Title: Create Models
Question:
username_0: ### - User
* firstName {}
* lastName {}
* email {}
* password {}
* cart []
* faveArtists []
* favePieces []
### - Artist
* firstName {}
* lastName {}
* email {}
* password {}
* bio {}
* pieces [_ArtPiece.schema_]
* favoritedBy [_User.schema_]
### - ArtPiece
* artist {_Artist.schema_}
* image {}
* media {}
* price {} |
codalab/codalab-worksheets | 818024124 | Title: [FEATURE REQUEST] Add a checkbox before allowing to delete a worksheet
Question:
username_0: **Is your feature request related to a problem? Please describe.**
Saw in Slack (codalab channel) that a user accidentally deleted an entire worksheet instead of bundles; recreating the worksheet can be a great cost on the user's end.
**Describe the solution you'd like**
Add a checkbox to the delete-worksheet dialog and disable the worksheet delete button in the dialog until the checkbox is checked. This would add a barrier, and the user would have an extra chance to see that it's a worksheet being deleted, not a bundle, as sketched below.
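A rough illustration of the proposed gating (a hypothetical component sketch, not CodaLab's actual dialog code):
```jsx
// Illustrative only: component and prop names are hypothetical.
import React, { useState } from "react";

export function DeleteWorksheetDialog({ onDelete }) {
  const [confirmed, setConfirmed] = useState(false);
  return (
    <div>
      <label>
        <input
          type="checkbox"
          checked={confirmed}
          onChange={(e) => setConfirmed(e.target.checked)}
        />
        I understand this deletes the entire worksheet, not just bundles.
      </label>
      {/* The delete button stays disabled until the checkbox is checked. */}
      <button disabled={!confirmed} onClick={onDelete}>
        Delete worksheet
      </button>
    </div>
  );
}
```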
**Describe alternatives you've considered**
The freeze/unfreeze feature mentioned in https://github.com/codalab/codalab-worksheets/pull/3295 would be helpful, but it would still be nice to have more friction in the worksheet deletion flow, so we reduce accidental deletions as much as possible. |
exoscale/egoscale | 338984354 | Title: exo firewall remove --all says 0
Question:
username_0: ```
% exo firewall remove my-cluster --all
[+] sure you want to delete all 0 firewall rules [yN]: y
13fb8b54-824a-432c-8b72-a99a3128df18
4a14f26c-a6b0-4042-a8aa-8e281a27acc2
e8521cd7-7851-4f89-b59f-590ac81ed422
7d22e6b2-0075-4b65-a4ee-682efbe2a1bc
78891d34-c586-45a9-8fd5-dcc32b2158cc
```<issue_closed>
Status: Issue closed |
microsoft/accessibility-insights-windows | 814447861 | Title: An element's IsOffScreen property must be false when its clickable point is on screen[BUG]
Question:
username_0: According to MSDN:
If the element has a clickable point that can cause it to be focused, it is considered to be onscreen even when a portion of the element is off screen.
The main thing in the statement above is "that can cause it to be focused". But there is no way an element can be focused if it is hidden. So I see an incorrect implementation of this rule.
```
<Window x:Class="IsOffscreenIssue.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" Title="MainWindow"
Height="450" Width="800">
<Border x:Name="TableBorders" Grid.Column="0" Grid.Row="5" BorderThickness="1" Margin="10" BorderBrush="DarkBlue" CornerRadius="5" Visibility="Collapsed">
<StackPanel Margin="5">
<TextBlock Text="Tables" FontSize="12" FontWeight="Bold" Margin="0,0,0,5"/>
<ListView AutomationProperties.Name="ListView">
<ListView.View>
<GridView AutomationProperties.Name="TableGridView">
<GridView.Columns>
<GridViewColumn Header="Name" DisplayMemberBinding="{Binding Path=TableName}"/>
<GridViewColumn Header="Size" DisplayMemberBinding="{Binding Path=TableSize}"/>
</GridView.Columns>
</GridView>
</ListView.View>
</ListView>
</StackPanel>
</Border>
</Window>
```
In the above code, when the grid has more data than fits on screen, a scrollbar appears; for all the rows that are hidden and can't be seen on screen, we get the "IsOffScreen property must be false when its clickable point is on screen" issue.
The Accessibility Insights tool shouldn't report that error for hidden rows.
Answers:
username_1: Hello, thank you for using Accessibility Insights. We are happy to assist you with this issue, but we will need a little more information to proceed. Can you do the following:
1. While in Live Inspect mode of Accessibility Insights for Windows, hover over your app. The Details pane on the right side should populate with various properties and their corresponding values
2. In Properties settings (the settings gear is on the right side of "Properties"), ensure that "Include all properties that have values" is checked
3. Find the value in the Properties table for "FrameworkId"
What is the value for "FrameworkId" from (3)?
username_0: @username_1 I have attached screenshots below for your reference.


FrameworkId is WPF
username_1: @username_0, thank you for providing the extra information. We will look into this issue and update it accordingly.
username_0: @username_1 any update on this, please?
username_1: @username_0, we will triage this issue as a team, but you can ignore the error for now. We've recognized it as something that end users can't resolve. |
jlippold/tweakCompatible | 306351625 | Title: `CocoaTop` not working on iOS 11.0
Question:
username_0: ```
{
"packageId": "ru.domo.cocoatop",
"action": "notworking",
"userInfo": {
"arch32": false,
"packageId": "ru.domo.cocoatop",
"deviceId": "iPhone7,2",
"url": "http://cydia.saurik.com/package/ru.domo.cocoatop/",
"iOSVersion": "11.0",
"packageVersionIndexed": false,
"packageName": "CocoaTop",
"category": "Utilities",
"repository": "apt.thebigboss.org",
"name": "CocoaTop",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "ru.domo.cocoatop",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.0.6",
"shortDescription": "Process Viewer for iOS GUI",
"latest": "2.0.1",
"author": "Domo",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "not working",
"notes": ""
}
```<issue_closed>
Status: Issue closed |
recharts/recharts | 175139628 | Title: Aniamtion
Question:
username_0: Small typo here: https://github.com/recharts/recharts/blob/dd7b4b19892a22b7bbfec022f701656b67ef8ec6/src/util/AnimationDecorator.js#L5
Answers:
username_1: @username_0 I don't see what the problem is?
username_0: The spelling of `AniamtionDecorator`.
username_0: BTW let me know if you want me to open a PR for this.
username_1: @username_0 Got it! Thank you!
Status: Issue closed
|
mawww/kakoune | 292119019 | Title: Install error on Manjaro/Arch
Question:
username_0: I'm trying to install Kakoune on Manjaro Linux, using AUR.
I have an option to install version `r5975.baf3d82b-1` (last update: 2018-01-02 20:18), but when I run the install I get this error:
```
Resume: 317 tests, 4 failures
make: *** [Makefile:101: test] Error 4
==> ERROR: A failure occurred in check().
Aborting...
==> ERROR: Makepkg was unable to build kakoune-git.
```
1. What is going on?
2. Should AUR package be updated?
3. Any way to bypass this in the mean time and successfully install Kakoune?
Answers:
username_1: You should probably be asking for help on the AUR: https://aur.archlinux.org/packages/kakoune-git/
I have no problems building and installing it on Arch, but it looks like you are hitting failures while running `check()` in the PKGBUILD. My wild guess is that 4 of the tests may have some dependency not listed in the PKGBUILD that you happen to not have installed. Make sure that you report which of the tests are failing so someone can help track down why.
The "forced" workaround is to edit the PKGBUILD to just remove the `check()` function, but that may result in issues with using kakoune.
username_2: Try compiling the project by hand and see if you can reproduce it (and potentially get a more verbose error).
Note that I'm compiling nightly packages for Archlinux on my own server, if you need a package quickly:
https://ironzorg.fr/kakoune-nightly/
username_0: This was successful.
username_3: I think I have already had this test-suite error here; it happens when the test suite runs with a non-UTF-8 locale, leading to some text not being output correctly.
username_2: I think we can close this issue now that the problem has been resolved.
username_4: @username_0 Hi. Do you mind closing this issue which appears to be resolved? Thanks.
Status: Issue closed
|
zaproxy/zaproxy | 224544253 | Title: ProxyDisclosure Rule References Issue
Question:
username_0: It would be great if the ProxyDisclosure scan rule had some actual references instead of `???`. Or the `getReference()` method should be reworked.
<sub>Ref:
https://github.com/zaproxy/zap-extensions/blob/alpha/src/org/zaproxy/zap/extension/ascanrulesAlpha/resources/Messages.properties#L109
https://github.com/zaproxy/zap-extensions/blob/alpha/src/org/zaproxy/zap/extension/ascanrulesAlpha/ProxyDisclosureScanner.java#L158-L160
</sub>
Status: Issue closed
Answers:
username_1: Fixed in zaproxy/zap-extensions#864.
username_1: Released in version 19 of Active scanner rules (alpha) add-on. |
1SSI/Devoirs | 280097466 | Title: For Tuesday, December 12, 2017
Question:
username_0: Skills expected for the test on derivatives:
- Compute the y-coordinate of a point on the graph of a function given its x-coordinate.
- Compute an expression for the derivative of a function:
- Reference functions: constant, affine, power, reciprocal, and square root,
- Using the differentiation rules for a sum, a product by a constant, a product, a reciprocal, and a quotient.
- Compute the slope-intercept equation of a line passing through two points with known coordinates.
- Compute the slope-intercept equation of a tangent line.
- Recover the limit of the rate of change between x and x+h as h tends to 0 for the reciprocal and square-root functions (a worked example follows below).
- Decide whether a line is tangent to a curve.
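For example, for the square-root function, multiplying the difference quotient by its conjugate gives:
```latex
\lim_{h \to 0} \frac{\sqrt{x+h} - \sqrt{x}}{h}
  = \lim_{h \to 0} \frac{(x+h) - x}{h\,(\sqrt{x+h} + \sqrt{x})}
  = \lim_{h \to 0} \frac{1}{\sqrt{x+h} + \sqrt{x}}
  = \frac{1}{2\sqrt{x}}
```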
Answers:
username_1: I still don't know why I did this, but... [all I know is that I did it](https://cdn.discordapp.com/attachments/371704486395379724/389899648816644108/image.jpg) |
aws-amplify/amplify-cli | 473197513 | Title: So how do I actually authenticate and access my graphql api inside a lambda function
Question:
username_0: ** Which Category is your question related to? **
Amplify function and api
** What AWS Services are you utilizing? **
Amplify (So AppSync and Lambda)
** Provide additional details e.g. code snippets **
So I've added a new function in my existing amplify project using [following this blog post](https://aws.amazon.com/blogs/mobile/amplify-framework-adds-support-for-aws-lambda-functions-and-amazon-dynamodb-custom-indexes-in-graphql-schemas/) but instead of `storage` I wanted to access my GraphQL API so I chose `api` and went through the rest of the CLI setup. After pushing it generated a lambda function with 4 environment variables:
- GRAPHQLAPIENDPOINTOUTPUT
- GRAPHQLAPIIDOUTPUT
- ENV
- REGION
So how do I actually connect to my GraphQL endpoint now and start querying? Do I just add Apollo Client? I'm not sure how to authenticate and get access. The blog's only example uses storage, and there is no further documentation for accessing an Amplify GraphQL API.
Answers:
username_1: One option is to add a "service" user to Cognito and log in inside your Lambda before calling the GraphQL endpoint. There is a good writeup here: https://www.floom.app/blog/aws-appsync-with-lambda
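For illustration, a rough Python sketch of that service-user approach (the `SERVICE_*` environment variable names and the example query are assumptions you would configure yourself; `GRAPHQLAPIENDPOINTOUTPUT` is the Amplify-generated variable listed above):
```python
# Hypothetical sketch: log in a Cognito "service" user, then call the
# AppSync GraphQL endpoint with the resulting ID token.
import json
import os
import urllib.request

import boto3

def handler(event, context):
    idp = boto3.client("cognito-idp")
    auth = idp.initiate_auth(
        ClientId=os.environ["SERVICE_APP_CLIENT_ID"],    # assumed env var
        AuthFlow="USER_PASSWORD_AUTH",
        AuthParameters={
            "USERNAME": os.environ["SERVICE_USERNAME"],  # assumed env var
            "PASSWORD": os.environ["SERVICE_PASSWORD"],  # assumed env var
        },
    )
    id_token = auth["AuthenticationResult"]["IdToken"]

    # Example query; replace with whatever your schema defines.
    body = json.dumps({"query": "{ listTodos { items { id } } }"}).encode()
    req = urllib.request.Request(
        os.environ["GRAPHQLAPIENDPOINTOUTPUT"],
        data=body,
        headers={"Authorization": id_token, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as res:
        return json.loads(res.read())
```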
Status: Issue closed
username_0: closing in favor of #1678 .. which will be solved in a cleaner way using secondary auth option (i.e. Cognito main and IAM secondary auth) with #1916 |
duzun/hQuery.php | 328115634 | Title: Is there a way to get the image full path?
Question:
username_0: How to get the full path of image from this?
`<meta content="/logo-og.png" data-react-helmet="true" property="og:image" />`
Answers:
username_1: Given the document `$doc` and the node `$m`:
```php
$doc->url2abs($m->attr('content'));
```
By default `href` and `src` attributes are converted to full path urls.
username_1: Here is an example in [hQuery.Test.php](https://github.com/username_1/hQuery.php/blob/master/tests/hQuery.Test.php#L230..L237).
Status: Issue closed
|
department-of-veterans-affairs/va.gov-team | 879721985 | Title: Form cannot be restarted after submit in prod
Question:
username_0: Observed behavior: Once a form is submitted and a Veteran chooses to restart the form, it cannot be submitted again. This has been observed in production only; in lower environments the form can be restarted.
Desired Behavior: A Veteran should be able to submit a new FSR if they choose to restart, per requirements. DMC has a method to sort based on submit date to work with the latest submitted form.
Answers:
username_1: unable to reproduce
username_2: This is still in limbo due to not being able to reproduce...
username_2: Can re-open if this issue comes back up. It should be handled on the FE with a recent update; we just cannot reproduce it with mock data on a localhost/dev environment.
Status: Issue closed
|
davidtheclark/react-aria-modal | 521003616 | Title: Disable focusTrap on `AriaModal`
Question:
username_0: There's no option to disable the `focus-trap` on the modal.
There's `focusTrapPaused`, but that only pauses it; it doesn't deactivate it.
Answers:
username_1: It definitely would be useful to have a way to disable the focus trap. Maybe something simple like this:
```
focusTrapOptions={{
active: false
}}
```
However, this currently does not work as focus-trap-react [only acknowledges specific props](https://github.com/davidtheclark/focus-trap-react/blob/master/src/focus-trap-react.js#L20).
One use case is a designer mode where a preview of a modal, using react-aria-modal, is displayed on page. In this case, a focus trap would be undesirable. |
wix/react-native-navigation | 184764336 | Title: How to add 3rd analytics ?
Question:
username_0: ### Issue Description
I want add some analytics into my project , and analytics must use onCreate(),onResume(),onPause(). Where can I insert it? in MainApplication.java or MainActivity.java?
* React Native Navigation version: 2.0.0-experimental.133
* React Native version: 0.31
* Platform(s) (iOS, Android, or both?): android
* Device info (Simulator/Device? OS version? Debug/Release?): all
Answers:
username_1: Which analytics package are you using? You should be able to just add your code in MainApplication.java like the examples from FB analytics:
https://github.com/facebook/react-native-fbsdk
username_0: I'm using Umeng analytics, a Chinese package. Can you help me? @username_1
https://github.com/esseak/rn-umeng
username_1: Just follow the rn-umeng instructions and the WIX Navigation instructions here: https://github.com/wix/react-native-navigation/wiki/Installation---Android
The following goes into MainApplication.java
```
@Override
protected void onResume() {
super.onResume();
MobclickAgent.onResume(this);
}
@Override
protected void onPause() {
super.onPause();
MobclickAgent.onPause(this);
}
@NonNull
@Override
public List<ReactPackage> createAdditionalReactPackages() {
// Add the packages you require here.
return Arrays.<ReactPackage>asList(
new UmengPackage()
);
}
```
username_0: no....
username_0: It can't run.
It says it can't find a symbol. @username_1
Users/alex/workflow/guanjiaapp/android/app/src/main/java/com/guanjiaapp/MainApplication.java:42: error: cannot find symbol
super.onResume();
^
symbol: method onResume()
/Users/alex/workflow/guanjiaapp/android/app/src/main/java/com/guanjiaapp/MainApplication.java:40: error: method does not override or implement a method from a supertype
@Override
^
/Users/alex/workflow/guanjiaapp/android/app/src/main/java/com/guanjiaapp/MainApplication.java:48: error: cannot find symbol
super.onPause();
^
symbol: method onPause()
/Users/alex/workflow/guanjiaapp/android/app/src/main/java/com/guanjiaapp/MainApplication.java:46: error: method does not override or implement a method from a supertype
@Override
username_0: I used this on RN 0.28, and it worked well.
username_1: Try this.
MainActivity.java
```
@Override
protected void onResume() {
super.onResume();
MobclickAgent.onResume(this);
}
@Override
protected void onPause() {
super.onPause();
MobclickAgent.onPause(this);
}
```
MainApplication.java
```
@NonNull
@Override
public List<ReactPackage> createAdditionalReactPackages() {
// Add the packages you require here.
return Arrays.<ReactPackage>asList(
new UmengPackage()
);
}
```
There was some Android breaking change at RN0.29. That is why it works for RN0.28. https://github.com/facebook/react-native/releases/tag/v0.29.0
username_1: @username_0 What is your WeChat ID? I think it might be more involved than the above because of how the WIX Android navigation activity works.
username_0: my wechat is 2292390. thank you very much @username_1
username_0: thanks @username_1. It is not an issue; it was my fault.
Status: Issue closed
|
dotnet/project-system | 897614375 | Title: The result in Core is inconsistent with Framework when adding a space for Default namespace
Question:
username_0: **Visual Studio Version**: Version 16.11.0 Preview 1.0 [31319.401.d16.11]
**Steps to Reproduce**:
1. Create a Winforms core application.
2. Right click project to open properties window.
3. Find the Application page, change the Default namespace from WinFormsApp5 to WinForms App5 (there is a space), then press the Enter key.
**Expected Behavior**:
For Framework, an error dialog pops up.

**Actual Behavior**:
For Core, it can be changed successfully.

**More info**:
1. This issue can also repro in .NET Core WPF and console app.
2. This is not a regression issue, it can also repro on 16.7.0 Preview 2.0.
Answers:
username_1: We currently perform no UI validation on input strings. We could add some. In #6907 I proposed adding a `RegexPattern` as optional metadata to string properties, which the UI could use to validate inputs. That approach could be a viable solution to this problem.
username_2: I don't think we will fix this for the current property pages but we should consider how we would deal with this for the new property pages.
username_2: One option is to have a property interceptor that strips out the space (or other unsupported characters). |
davidgiven/wordgrinder | 163602601 | Title: ODT Export issue
Question:
username_0: Hey, I love your word processor. It makes for a great distraction free environment when combined with Ubuntu Server 16.04.
That being said, I've had some issues with ODT Exports via dropbox. It'll state that "something" went wrong and then fail to open in both LibreOffice and Word. I haven't tried the original flavor of OpenOffice yet, but I'm guessing it might be a markup issue. Have you noticed this or is this something new?
Answers:
username_1: (Sorry for the delay; I 've been away.)
I don't use ODT export much myself, so it's quite plausible that there are bugs. Don't suppose you can provide a minimal example of a WordGrinder file that makes a bogus ODT? (The ODT itself would be useful, too.)
username_1: I've just had a comment on the website that this might be related to having two consecutive spaces in the file --- worth a look.
username_1: I still can't reproduce this.
If you have an example I can look at, could you reopen the bug and attach it, please?
Status: Issue closed
username_2: I had this issue exactly once a few months ago on 0.7.0. Unfortunately, I don't have the documents any more. If I can reproduce it, I'll attach the documents I used.
The process I followed when it happened went like this (if I remember correctly):
1. Import from .odt
2. Save to .wg
3. Edit the document
4. Save to .wg
5. Export back to .odt, overwriting the original (which was still in place)
username_1: If you can reproduce it, that'd be great. ODT's painful and I'm sure there are unconsidered edge cases.
username_2: I managed to reproduce it, but GitHub's attachment feature doesn't seem to be working in Chromium or Qupzilla, and I can't get Firefox or Brave working because of some weird "GTK symbols detected and can't both be loaded" issue. Would it be okay if I pushed them to a *temporary* GitHub repo?
username_1: Probably easiest just to email them to me at <EMAIL> and I'll attach them here.
username_2: I emailed the documents. The subject line references this bug. If you want to verify my OpenPGP signature to verify it's from me (since my name & email address don't match my handle here), the fingerprint of the key it should verify with is:
CBB4 4C0B 7794 F668 D486 9B68 02C0 8EE2 7F03 0E92
The -bak file was saved in LibreOffice Writer 6.0.4 then imported into WordGrinder. In the WordGrinder document, you can see I start mashing keys in frustration (I was getting annoyed with WordGrinder for not wanting to reproduce the bug on-demand), and I had to try exporting a bunch of times after that, so I couldn't tell you what I did to make it happen other than repeatedly overwriting the odt file from WordGrinder's export function.
It might be worth looking to see of you can find an ODT library somewhere so you don't have to code it yourself. I know there are a few at http://www.opendocumentformat.org/developers/ (though I don't think any of them are for C or Lua - maybe you could look at the tools at the bottom of the page?).
username_1: Got the attachments --- here they are. I'll have a look; thanks very much!
[Untitled 1-bak.odt.gz](https://github.com/username_1/wordgrinder/files/2027345/Untitled.1-bak.odt.gz)
[Untitled 1.odt.gz](https://github.com/username_1/wordgrinder/files/2027346/Untitled.1.odt.gz)
[Untitled 1.wg.gz](https://github.com/username_1/wordgrinder/files/2027347/Untitled.1.wg.gz)
Status: Issue closed
username_1: Very, very belatedly: I did finally get round to looking at the documents. However, I can't reproduce the problem.
However, I _have_ just fixed a whole pile of bugs in the ODT exporter, so I suspect that whatever it was has been fixed. At least, I hope.
Many apologies for the delay! |
vernemq/vernemq | 431510492 | Title: Adding MFLN (Maximum Fragment Length Negotiation) extension
Question:
username_0: TLS 1.2/RFC6066 includes the MFLN extension, which allows small-memory (IoT) clients to use smaller fragments for transmission. When Mosquitto is built against a recent OpenSSL version, it supports this out of the box. Since Vernemq has its own TLS implementation (CMIIW), this extension is not working.
Example:
```bash
$ testssl -S 127.0.0.1:8883 | grep extensions # vernemq server
TLS extensions (standard) "EC point formats/#11" "renegotiation info/#65281"
$ testssl -S 127.0.0.1:18883 | grep extensions # mosquitto pre-built binary
TLS extensions (standard) "renegotiation info/#65281" "EC point formats/#11" "session ticket/#35" "heartbeat/#15"
$ testssl -S 127.0.0.1:28883 | grep extensions # mosquitto built from source against OpenSSL 1.1
TLS extensions (standard) "renegotiation info/#65281" "EC point formats/#11" "session ticket/#35" "extended master secret/#23" "encrypt-then-mac/#22" "max fragment length/#1"
```
Would it be possible to add support for this?
Answers:
username_1: Hi @username_0
VerneMQ has no TLS implementation, rather Erlang/OTP which VerneMQ is built upon has. So the question is if Erlang/OTP supports this already or will in the future. I tried googling for this `Erlang MFLN TLS support` but came up empty handed, I'm afraid. You could perhaps ask on the Erlang questions mailing list: http://erlang.org/mailman/listinfo/erlang-questions, usually they are quick to respond.
Alternatively you could terminate TLS in HAProxy or similar instead of in VerneMQ.
username_0: Ah, I see now. There is a patch for Erlang/OTP that adds something like this, but it was never accepted: https://github.com/rlipscombe/otp/commit/71c53d20191d3ddf43fc0aa87fabf5ac84ef70f3
I will be closing this issue now.
Status: Issue closed
username_2: Ha, interesting, maybe we should inquire why that patch wasn't accepted back then.
Thanks for digging this up, and thanks for documenting the HAProxy test!
username_3: @username_0 Thanks for sharing the HAProxy config. I have a doubt: will it work with any other broker that does not support MFLN? I am using emqx (based on Erlang too). I have an HAProxy setup too, but when I tested with testssl, I did not get max fragment length in the output.
Can you share something? Am I missing something?
username_0: @username_3 I think it's a bit out of the scope of this issue - but yes, you need to compile HAProxy against OpenSSL 1.1.1 or higher (released in Sep 2018), see https://github.com/openssl/openssl/commit/cf72c7579201086cee303eadcb60bd28eff78dd9 |
benfox1216/Crate | 648648445 | Title: Code annotation
Question:
username_0: Create branch and annotate in files related to Style Feature (deliverable for each team member).
- [ ] Make a list of the specific files of code that will need to be updated in order to add the additional features outlined in your track
- [ ] For each file, walk through the code and add a comment above each line/block that describes what that code is doing.<issue_closed>
Status: Issue closed |
candy-kingdom/mints | 646918662 | Title: Predefined type for parsing files
Question:
username_0: _As a CLI developer, I can use some predefined type to convert a CLI argument that represents a file path into the opened file handle._
## API
`test.py`:
```py
from mints import cli, Arg, File
@cli
def test(x: Arg[File]):
print(x.read())
```
`1.txt`:
```
Hello!
```
`shell`:
```
$ python test.py 1.txt
Hello!
```
## Notes
- This is pretty similar to the [`File`](https://click.palletsprojects.com/en/7.x/api/#click.File) type from Click. A rough sketch of such a type follows below.
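As an illustration, a minimal sketch of what such a type could look like (the `parse` hook name is an assumption about the mints internals, not its actual API):
```python
# Hypothetical sketch of a predefined File type; `parse` is an assumed
# conversion hook, not necessarily what mints uses internally.
class File:
    def __init__(self, mode: str = 'r'):
        self.mode = mode

    def parse(self, path: str):
        # Convert the raw CLI string into an opened file handle.
        return open(path, self.mode)
```
With something like this, `Arg[File]` could open the file before the command body runs.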
## Questions
- Should we import predefined types from a separate module, e.g., `from mints.types import File`?
Answers:
username_1: We definitely should. |
OpenMined/PySyft | 684081839 | Title: Add torch.Tensor.floor_ to allowlist and test suite
Question:
username_0: # Description
This issue is a part of Syft 0.3.0 Epic 2: https://github.com/OpenMined/PySyft/issues/3696
In this issue, you will be adding support for remote execution of the torch.Tensor.floor_
method or property. This might be a really small project (literally a one-liner) or
it might require adding significant functionality to PySyft OR to the testing suite
in order to make sure the feature is both functional and tested.
## Step 0: Run tests and ./scripts/pre_commit.sh
Before you get started with this project, let's make sure you have everything building and testing
correctly. Clone the codebase and run:
```pip uninstall syft```
followed by
```pip install -e .```
Then run the pre-commit file (which will also run the tests)
```./scripts/pre_commit.sh```
If all of these tests pass, continue on. If not, make sure you have all the
dependencies in requirements.txt installed, etc.
## Step 1: Uncomment your method in the allowlist.py file
Inside [allowlist.py](https://github.com/OpenMined/PySyft/blob/syft_0.3.0/src/syft/lib/torch/allowlist.py) you will find a huge dictionary of methods. Find your method and uncomment the line it's on. At the time
of writing this Issue (WARNING: THIS MAY HAVE CHANGED) the dictionary maps from the
string name of the method (in your case 'torch.Tensor.floor_') to the string representation
of the type the method returns, as sketched below.
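For example, the uncommented entry might look like this (assuming the mapping format described above; `floor_` is an in-place op, so it returns a `torch.Tensor`):
```python
# Inside the allowlist dictionary in allowlist.py, uncomment the entry
# for your method; the value is the string name of the return type:
"torch.Tensor.floor_": "torch.Tensor",
```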
## Step 2: Run Unit Tests
Run the following:
```python setup.py test```
And wait to see if some of the tests fail. Why might the tests fail now? I'm so glad you asked!
https://github.com/OpenMined/PySyft/blob/syft_0.3.0/tests/syft/lib/torch/tensor/tensor_remote_method_api_suite_test.py
In this file you'll find the torch method test suite. It AUTOMATICALLY loads all methods
from the allowlist.py file you modified in the previous step. It attempts to test them.
# Step 3: If you get a Failing Test
If you get a failing test, this could be for one of a few reasons:
### Reason 1 - The testing suite passed in non-compatible arguments
The testing suite is pretty dumb. It literally just has a permutation of possible
arguments to pass into every method on torch tensors. So, if one of those permutations
doesn't work for your method (aka... perhaps it tries to call your method without
any arguments but torch.Tensor.floor_ actually requires some) then the test will
fail if the error hasn't been seen before.
If this happens - don't worry! Just look inside the only test in that file and look
[Truncated]
pointer objects to very many remote object types. So, if your method returns anything
other than a single tensor, you probably need to add support for the type it returns
(Such as a bool, None, int, or other types).
*IMPORTANT:* do NOT return the value itself to the end user!!! Return a pointer object
to that type!
*NOTE:* at the time of writing - there are several core pieces of Syft not yet working
to allow you to return any type other than a torch tensor. If you're not comfortable
investigating what those might be - skip this issue and try again later once
someone else has solved these issues.
### Reason 3 - There's something else broken
Chase those stack traces! Talk to friends in Slack. Look at how other methods are supported.
This is a challenging project in a fast moving codebase!
And don't forget - if this project seems too complex - there are plenty of others that
might be easier.<issue_closed>
Status: Issue closed |
ODM2/ODM2StreamingDataLoader | 118209594 | Title: Improve Mapping Page
Question:
username_0: Add some sort of indication as to whether a column in the data file is mapped or not -- especially since there may be 70+ columns to be mapped in some cases.
Change color of column in the table
Status: Issue closed
Answers:
username_0: Added color to mapped columns. |
qdm12/gluetun | 927908965 | Title: Help: No connection after Server reboots
Question:
username_0: <!---
⚠️ If this about a Docker configuration problem or another service:
Start a discussion at https://github.com/username_1/gluetun/discussions/new
OR I WILL INSTA-CLOSE YOUR ISSUE.
-->
<!---
⚠️ Answer the following or I'll insta-close your issue
-->
**Is this urgent?**: No
**Host OS** (approximate answer is fine too): Ubuntu 18.04.5 LTS
**CPU arch** or **device name**: (GNU/Linux 5.4.0-74-generic x86_64)
**What VPN provider are you using**: Windscribe
**What is the version of the program** OpenVPN 2.5 version: 2.5.2 Unbound version: 1.13.0 IPtables version: v1.8.6
```
Running version latest built on 2020-03-13T01:30:06Z (commit d0f678c)
```
**What's the problem** 🤔
I am not sure if this is the right place to ask this question, but after I restart my server, every container attached to Gluetun is no longer accessible; I have to manually restart each container, and then it works again. Since it affects every container, it seems to be a problem with Gluetun. I have tried adding depends_on with a condition for service_healthy, but that has not changed anything. When I run `curl ifconfig.io` it just says "curl: (6) Could not resolve host: ifconfig.io." One of the services, Transmission, keeps printing an error like this after restart: "Couldn't connect socket 86, port 15430 (errno 99 - Address not available) (/home/buildozer/aports/community/transmission/src/transmission-3.00/libtransmission/net.c:339)"
**Share your logs... (careful to remove in example tokens)**
```log
```
**What are you using to run your container?**: Docker Compose
<!---
💡 You can highlight your code with https://docs.github.com/en/github/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks#syntax-highlight
-->
Please also share your configuration file:
```yml
gluetun:
image: qmcgaw/gluetun
container_name: gluetun
volumes:
- ~/docker-services/gluetun:/gluetun
restart: unless-stopped
environment:
- VPNSP=windscribe
- REGION=US East
- OPENVPN_USER=############
- OPENVPN_PASSWORD=##########
- TZ=America/New_York
cap_add:
- NET_ADMIN
ports:
- 9091:9091
[Truncated]
jackett:
image: linuxserver/jackett
container_name: jackett
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
- RUN_OPTS=run options here #optional
volumes:
- ~/docker-services/jackett/config:/config
- /mnt/seagate/Media/torrents/completed:/downloads
network_mode: "service:gluetun"
# depends_on:
# gluetun:
# condition: service_healthy
# ports:
# - 9117:9117
restart: unless-stopped
```
Status: Issue closed
Answers:
username_1: This is expected, unfortunately, because of how Docker handles networking. If you find a solution to this, please let me know though.
The workaround is to have everything in the same docker-compose.yml and `docker-compose down && docker-compose up -d` to restart EVERYTHING. Or have some shell script if your containers are split up in multiple Docker-compose.yml.
username_0: I could not get the command to run on boot with cron, but I found a way to make sure the connected containers have internet access. The way I did this is to have health checks for those two containers and another container called [autoheal](https://github.com/willfarrell/docker-autoheal) that restarts the containers when they are unhealthy. I found this website that had a lot of helpful Docker config files to base my compose file off of: https://www.gitmemory.com/issue/htpcBeginner/docker-traefik/35/742486145 With this method the containers will be restarted if the server reboots, but it unfortunately does not work with Watchtower: when Watchtower updates gluetun it has to recreate it with a different container ID, and the dependent containers can no longer find gluetun unless they are recreated as well. I just turned off updates for gluetun so I do not have this issue. Hope this helps someone!
Here is my updated config file:
```yml
---
version: "2.1"
networks:
web:
external: true
internal:
external: false
services:
gluetun:
image: qmcgaw/gluetun:latest
container_name: gluetun
volumes:
- ~/docker-services/gluetun:/gluetun
restart: unless-stopped
environment:
- VPNSP=windscribe
- REGION=US East
- OPENVPN_USER=****
- OPENVPN_PASSWORD=*****
- TZ=America/New_York
cap_add:
- NET_ADMIN
ports:
- 9091:9091
- 9117:9117
- 49153:49153
- 49153:49153/udp
- 7000:8000/tcp
networks:
- internal
- web
labels:
# com.centurylinklabs.watchtower.depends-on: jackett,transmission
com.centurylinklabs.watchtower.enable: false
transmission:
image: linuxserver/transmission:latest
container_name: transmission
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
- TRANSMISSION_WEB_HOME=/combustion-release/ #optional
volumes:
- ~/docker-services/transmission/config:/config
network_mode: "service:gluetun"
depends_on:
- gluetun
restart: always
labels:
- autoheal=true
healthcheck:
test: ["CMD", "curl", "http://ifconfig.io"] # verify internet access through Gluetun
interval: 30s
[Truncated]
test: ["CMD", "curl", "http://ifconfig.io"] # verify internet access through Gluetun
interval: 30s
timeout: 2s
retries: 1
labels:
- autoheal=true
autoheal:
container_name: autoheal
image: willfarrell/autoheal:latest
restart: always
environment:
- TZ=America/New_York
- AUTOHEAL_START_PERIOD=45
- AUTOHEAL_INTERVAL=30
# - AUTOHEAL_CONTAINER_LABEL=all
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /etc/localtime:/etc/localtime:ro
```
username_1: Another solution not requiring a program to access the docker.sock (⚠️ I'm scared as hell to let anything access it 😄):
Define for each connected container the healthcheck `nslookup github.com || kill 1`, which means kill process ID `1` (the parent process for any container) if the nslookup fails. That will force the container to exit and restart (have `restart: always` set as well, of course).
For example for docker-compose.yml:
```yml
healthcheck:
test: ["CMD-SHELL", "nslookup github.com || kill 1"]
interval: 5s
timeout: 3s
retries: 1
start_period: 40s
```
I haven't tested it yet, but feel free to try it out 😉
username_1: I'll document it in the Wiki 😉
username_0: Thanks!
username_1: Ah interesting, I didn't know that they changed IDs when gluetun is recreated. I'll also do some testing; that's definitely useful for that `Trigger mechanism such that a container restart triggers other restarts`. I'll get back with fixes 😉
username_0: deunhealth does restart the container normally if my server restarts or if it gets unhealthy. That part works great 😜 I have my restart policy set to "always". My problem is when Watchtower pulls down an update and attempts to restart gluetun (with a new Docker ID) as well as the attached containers (which fails, as they can no longer find gluetun because of the changed Docker ID). It seems that when starting a container with its network set to another container's name, Docker binds the network to a specific Docker ID instead of just the name. That ID will only be updated if each dependent container is recreated, but Watchtower does not recreate them.
username_1: Ah I get it. So restarting gluetun isn't the same as pulling a newer image and then re-creating it. I'll see what I can do, there might be a way to get the new gluetun network config and patch other containers with the new hostname, maybe even without a restart. But clearly my TODO I mentioned (trigger restarts from a restart) won't help actually here. |
tokio-rs/axum | 964855848 | Title: Cookie Support
Question:
username_0: ## Feature Request
It would be nice to have support for cookie handling.
### Proposal
Add:
* a well-typed helper for adding cookies to a response
* `extract::Cookies` for getting request cookies as a map
* Add `extract::Cookie` for getting a single cookie
That's a little bit awkward to do without const generics supporting `&str`.
One option would be
```rust
struct extract::Cookie<T: CookieName>;
trait CookieName {
const NAME: &'static str;
}
```
Answers:
username_0: `reqwest` uses [cookie-rs](https://github.com/SergioBenitez/cookie-rs), which should provide most of what's needed.
This should probably be an optional feature for users that don't want to pull in that extra dependency.
username_1: We already provide an integration with [headers](https://docs.rs/headers/0.3.4/headers/index.html) which provides typed headers. It has a [`Cookie`](https://docs.rs/headers/0.3.4/headers/struct.Cookie.html) header that can be extracted using `TypedHeader` like so:
```rust
use axum::extract::TypedHeader;
use headers::Cookie;
async fn handler(TypedHeader(cookie): TypedHeader<Cookie>) {
let value = cookie.get("key");
}
```
Do you think this fits your use-case?
In general I'm trying to be conservative with which crates axum provides specific integrations for. I'd rather provide good tools that users can use to build exactly what fits their app. Specifically I'm talking about `FromRequest`. While I haven't looked into cookie-rs I don't think it would be difficult to build an extractor for it by implementing `FromRequest` for some type you have.
Status: Issue closed
username_2: I think cookie-rs provides a decent bit more functionality than just cookie header parsing / formatting, notably around encrypted cookies. I don't personally have a use case for it, but that might warrant some sort of an integration (which could likely just be its own crate, though).
username_1: Adding an example for it, like we do for tokio-postgres, would be fine.
username_0: The `headers` crate is indeed fine for extracting request cookies.
Sadly it doesn't allow constructing a cookie header, so I guess we'll need some other helper crate for that.
username_1: Sounds like that would make a good contribution to `headers`. If someone is up for implementing it :)
username_3: and I want to ask how to set cookie
username_1: @username_3 You can include a Set-Cookie header in a response like so https://github.com/tokio-rs/axum/blob/main/examples/sessions/src/main.rs. There are more examples of how to return headers here https://docs.rs/axum/0.2.3/axum/#building-responses
How you build the header value itself is up to you. |
fedspendingtransparency/data-act-broker-backend | 318268736 | Title: How to get D1, D2, GTAS data when running local env of data act broker backend
Question:
username_0: When using the local environment of the data act broker where can we get the most current D1, D2, SF133 and GTAS data? Is there a configuration option which allows it to access it directly from treasury? Without these files, the validation on local env will always create the relevant submission errors/warnings |
Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2 | 685632851 | Title: [Feature Request] Implement AddDownstreamWebApi
Question:
username_0: See this [issue in Microsoft Identity Web](https://github.com/AzureAD/microsoft-identity-web/issues/403) for details, but let's update the samples to use the new API in [0.3.0-preview](https://aka.ms/ms-id-web/0.3.0-preview). Here's an [example](https://github.com/AzureAD/microsoft-identity-web/blob/master/tests/WebAppCallsWebApiCallsGraph/Client/Startup.cs#L47) and the [controller](https://github.com/AzureAD/microsoft-identity-web/blob/master/tests/WebAppCallsWebApiCallsGraph/Client/Controllers/TodoListController.cs#L22)
Answers:
username_1: @kalyankrishna1 FYI
Status: Issue closed
|
TiagoUmemura/ES2_Tiago | 174088558 | Title: Run software quality assurance using the checklist (Maisa)
Question:
username_0: Ínicio: 30/08/2016 ás 15h17
-- Registre essa tarefa no controlador de tarefas
-- Coloque o documento sob controle de versão na pasta de tarefas de aula
-- O membro da equipe que tem o papel de testador em um determinado projeto irá fazer o checklist desse projeto.
ckelist no moodle
Answers:
username_0: End: 30/08/2016 at 17:15
Status: Issue closed
|
elsa-workflows/elsa-core | 750545040 | Title: how to retrieve workflow information by api?
Question:
username_0: I need to know the current state of a specific workflow via the API. Is there any way to pass an instanceId to an API to fetch the workflow status?
Answers:
username_1: Yes, in Elsa 1 you would use the `IWorkflowInstanceStore` service and its `GetByIdAsync` method to get a single workflow instance by ID. A rough sketch follows below.
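For illustration, a sketch of exposing that lookup as an endpoint (the controller wiring and the `Elsa.Persistence` namespace are assumptions; `IWorkflowInstanceStore.GetByIdAsync` comes from the answer above):
```csharp
using System.Threading.Tasks;
using Elsa.Persistence;               // assumed namespace for IWorkflowInstanceStore
using Microsoft.AspNetCore.Mvc;

[ApiController]
public class WorkflowStatusController : ControllerBase
{
    private readonly IWorkflowInstanceStore _store;

    public WorkflowStatusController(IWorkflowInstanceStore store) => _store = store;

    // GET /workflow-instances/{id}/status returns the instance's current status.
    [HttpGet("workflow-instances/{id}/status")]
    public async Task<IActionResult> GetStatus(string id)
    {
        var instance = await _store.GetByIdAsync(id);
        return instance == null ? NotFound() : Ok(instance.Status);
    }
}
```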
Status: Issue closed
|
bitcoin/bitcoin | 294943278 | Title: Qt Splash Screen is deleted (and accesses wallet) after Shutdown() (and wallets are deleted).
Question:
username_0: In trying to debug #12337 I added a sleep at the end of AppInitMain() to simulate being able to close the application late in the AppInitMain() process. If I close the splash screen at this point, I get a reliable crash when the splash screen is deleted, as it tries to disconnect itself from the wallet ShowProgress signal after Shutdown() has been called and the wallet has been deleted.
Answers:
username_1: @username_0 Are you sure this is something with the wallet?
I think the issue is that the shutdown process has started in `requestShutdown`, but initialize will happily continue with `initializeResult` and start threads late in the shutdown process. Deleting this running thread will crash the application according to the Qt docs:
https://github.com/qt/qtbase/blob/e5033a5c9b769815112e922d0b224af860afd219/src/corelib/thread/qthread.cpp#L412-L415
username_1: Potentially I am talking about a different issue.
username_2: Should be fixed by #12374.
Status: Issue closed
|
uwerat/qskinny | 287784551 | Title: Platform Settings
Question:
username_0: We should have a class for global settings, that are not covered by Qt/QPA.
F.e. what should be the minimal distance for accepting a pan gesture, or when to abort a pending gesture detection to let the children handle the events. |
nus-cs2103-AY2021S2/pe-dev-response | 859979266 | Title: None
Question:
username_0: # Team's Response
Minor issue, because most people would have come across these terms and be familiar with them. Similar to how an app for software developers does not need to explain what a "PR" is, for example.
## Duplicate status (if any):
-- |
dfernandezperez/ChIPseq-snakemake | 854426247 | Title: Missing slash
Question:
username_0: If trimming is set to false the error does not occur. The problem is in the common rules files I believe. You forgot a slash, see here:
https://github.com/username_1/ChIPseq-snakemake/blob/02771385ab68fcaba3e61e4b32519553d009c9e3/workflow/rules/common.smk#L31-L33
If you change `{tmp}/fastq/trimmed{sample}.{group}.fastq.gz` to `{tmp}/fastq/trimmed/{sample}.{group}.fastq.gz`, then it works.
Thanks one more time for the nice pipeline!
Davide
Answers:
username_1: Hey Davide!
Thanks a lot, I have already fixed the issue and pushed the update. We don't use anymore the trimming so I didn't catch the error, thanks again!
Daniel
Status: Issue closed
|
grzegorz914/homebridge-lgwebos-tv | 701756261 | Title: LG Plex app titel and id
Question:
username_0: It may very well be me missing something very obvious, but I cannot find the correct input name and reference (title and id) for the Plex app on my LG. I suspect that it may be down to the fact that the Plex app was not installed on the LG when I first installed this plugin, so the Plex app does not appear in the app list in the .homebridge/log webosTV/apps_ file. Is there any way to force an update of the contents of this file other than deleting the plugin and re-adding it?
I am running WebOS 4.0 and latest version of both Homebridge and this plugin.
Answers:
username_0: Ok, found the solution myself. The apps file does not update with newly installed apps. But if you delete the file, a new one will be created with the current set of apps installed. It may be worth adding this to the instructions for the plugin.
username_1: I will update the code to check the apps on every TV start.
Status: Issue closed
username_2: @username_0 did you find the appID? I can't see it even in the debug logs when I open the app.
```
"vendor": "Plex",
"largeIcon": "Plex_130x130.png",
"icon": "http://192.168.178.34:3000/resources/1dab52aeecbc6af060e7ff2e6808d64c4ab0fdaf/Plex_80x80.png",
"splashBackground": "Plex_Splash.png",
"title": "Plex",
"bgImage": "Plex_Splash.png",
```
username_1: What is it, this is not a Homebridge log.
username_2: Hi @username_1 So I turned on debug, opened the app on the TV and then downloaded the log. This is the whole section where Plex is mentioned at all
```
{
"defaultWindowType": "card",
"appsize": 141406,
"bgColor": "#000000",
"CPApp": false,
"systemApp": false,
"version": "2.2.0",
"vendor": "Plex",
"hasPromotion": false,
"tileSize": "normal",
"largeIcon": "Plex_130x130.png",
"lockable": true,
"transparent": false,
"icon": "http://192.168.178.34:3000/resources/1dab52aeecbc6af060e7ff2e6808d64c4ab0fdaf/Plex_80x80.png",
"checkUpdateOnLaunch": true,
"spinnerOnLaunch": true,
"handlesRelaunch": false,
"unmovable": false,
"id": "cdp-30",
"inspectable": false,
"inAppSetting": false,
"privilegedJail": false,
"trustLevel": "default",
"mediumLargeIcon": "11121380745650039_16789827_115x115.png",
"splashBackground": "Plex_Splash.png",
"binId": 1006017,
"title": "Plex",
"deeplinkingParams": "",
"visible": true,
"resolution": "1920x1080",
"requiredEULA": "generalTerms",
"noSplashOnLaunch": false,
"accessibility": {
"supportsAudioGuidance": false
},
"folderPath": "/media/cryptofs/apps/usr/palm/applications/cdp-30",
"bgImage": "Plex_Splash.png",
"installTime": 1583346274,
"type": "web",
"main": "index.html",
"iconColor": "#1f1f1f",
"disableBackHistoryAPI": true,
"removable": true
},
```
username_1: Get the complete log during switch to PLEX app
username_2: OK, please excuse my ignorance, I have Plex in now. I misread something in the instructions and confused them with a different plugin.
Seems to have bumped Netflix off though so will carry on playing around.
Thanks for the quick response! |
project-koku/koku-ui | 389290341 | Title: Cost information in cost details for OCP is shown in gb
Question:
username_0: **Describe the bug**
The costs are shown in the charts and lists as GB; they should be in $.
**To Reproduce**
Go to the details page for OCP
**Expected behavior**
Units should be consistent with the input. For money, $ or any other configured currency.
cosmos/sdk-tutorials | 674136985 | Title: [Question] Why there is still the comment-outed code using "cliCtx.EnsureAccountExists()" in nameservice?
Question:
username_0: https://github.com/cosmos/sdk-tutorials/blob/6d737ae233a5e9113082d9bb41424f2f994065f0/nameservice/x/nameservice/client/cli/tx.go#L75-L77
It was commented out in bf4738a6ee06cbc8eae12eac2e667537cbe8a59c.
Why is it not deleted and still here?
Is the validation implemented somewhere else and used from tx.go?
(Suggestion: I think there should be a comment alongside describing the reason, if it still has some meaning.)
Answers:
username_1: The comment is removed; thanks for creating this issue.
Status: Issue closed
|
assistance-online/status | 590853998 | Title: Release 2020.08 on 02-04-2020 at 12:30
Question:
username_0: ## Aankondiging
Donderdagmiddag om 12:30 uur wordt er een update uitgerold. Naar verwachting is de update om 13:15 uur gereed. Gedurende deze periode kunnen fouten optreden en zal de gebruiker enige vertraging ervaren. Zodra de release begint zal dit bericht bijgewerkt worden.
In deze versie zitten de volgende wijzigingen:
* AO-7132 Boekingsdatum wordt nu per factuurregel naar AFAS geëxporteerd
* AO-7110 Factuurvariabelen kunnen nu ook gebruikt worden in 'Mail voor verzenden facturen'-template
* AO-5619 Als chauffeur wil ik in de app opdrachten kunnen accepteren en toewijzen. Dit betreft alleen het gedeelte in de webapplicatie, waarin je vanaf nu de rol 'Nachtplanning' kunt toewijzen aan een gebruiker. Als de nieuwe appversie in de store staat, is deze functionaliteit volledig bruikbaar. Momenteel zijn we nog bezig met het onderzoeken en oplossen van een aantal bugs in de app, waardoor dit naar verwachting nog enige tijd duurt.<issue_closed>
Status: Issue closed |
atom/apm | 86310092 | Title: Run `prepublish` script from package.json on apm publish
Question:
username_0: Applicable `apm` code: https://github.com/atom/apm/blob/6c94d96f5d640a51d49a41e2e1d1aa47310516e8/src/publish.coffee
I would like to use the `prepublish` script in the `package.json` file, as described by `npm`, to generate my documentation before publishing. See https://github.com/username_0/atom-beautify/blob/master/package.json#L150
This currently does not work / is not a feature. Is this something that is of interest to others? Would a pull request be accepted? What other considerations does the community have, before / if I were to develop a Pull Request?
Thanks!
Glavin
Answers:
username_1: There is a `publish-spec.coffee` already so I'd say add new specs there, and add a simple package fixture that includes a `prepublish` script.
Also this will require doing a `git commit` after running the script right? Since the built assets will need to be part of the repository before the tag is pushed and the `atom.io` API is hit.
username_0: - `npm publish` does not deal with `git` and just flows down the lifecycle: https://github.com/npm/npm/blob/180da67c9fa53103d625e2f031626c2453c7ebcd/lib/publish.js#L74-L77
- `npm` stores its own copy of the code and does not use Git/GitHub as reference. Therefore it does not care about committing before `prepublish` and `publish` stages as it never makes a `git commit` anywhere and pushes whatever code is available when `publish` is executed.
- `apm publish` uses `npm version` internally and not `npm publish`.
- See https://github.com/atom/apm/blob/b47423db2dd093404cab80d752793cdf91986111/src/publish.coffee#L56-L57
- Therefore we will need to run `prepublish` and then `git commit` (if applicable), then run `npm version` and the other lifecycle stages, including `postpublish` -- might as well add that, too, as it should not be as complicated.
I would prefer that it be included under the `Prepare # release` commit, however this is handled mostly by `npm version`, which checks for the clean git working directory. See https://github.com/atom/apm/blob/b47423db2dd093404cab80d752793cdf91986111/src/publish.coffee#L56 and https://github.com/npm/npm/blob/cc41475f0e237e07cf436487126b96a893dfcc85/lib/version.js#L152
So let's consider what should happen when running `apm publish`:
- [ ] run `prepublish` script
- [ ] check if it exitted successfully
- [ ] show error message if applicable
- [ ] continue if successful
- [ ] check if changes to staging area and commit if available?
- [ ] What would the commit message be? `Preparing to prepare # release`? :stuck_out_tongue_winking_eye:
- [ ] continue to run `publish` as normal, which includes incrementing version and publishing to `apm` API
- [ ] run `postpublish` if all successful
I'd like to design and implement a Pull Request in these next few days. Any feedback would be greatly appreciated. Thanks!
username_2: +1 for this!
username_3: Huge :+1:
This is preventing me from packing static resources. It seems I have no choice but to download static resources on activation, which is unnecessary extra work and dirty too.
On a related note, the `prepublish` script is executed when I run `apm install myuser/mypackage` where `myuser/mypackage` is the GitHub repo reference of the package, but not when I run `apm install mypackage`, where `mypackage` is the name of the Atom package as published on atom.io. Unfortunately I need the latter to work, because obviously users will install from the package browser, and not by running `apm install github-ref`.
username_4: Is there any update or workaround for this? |
nb-twy/ParkDirectories | 624559615 | Title: Require at least one argument after -a
Question:
username_0: Executing `pd -a` parks the current directory with an empty string as the reference. This cannot be deleted without clearing the entire list or editing `.pd-data` directly.
**This is a bug!**
`pd -a ref` must require at least one argument after `-a` or it should exit with an error code and a message.
```bash
$ pd -a
ERROR: Name required to park the current directory
```
Status: Issue closed
Answers:
username_0: Commit <PASSWORD> fixes this bug. |
gobridge/about-us | 466212135 | Title: Chance to Win up to 15% Scholarship by way of a Free Quiz
Question:
username_0: Now take a very simple and engaging quiz on Data Science that can show you the much-needed mirror of your skills and aptitudes in the field. JanBask Training global e-learning has launched a short quiz that can help you analyze the depth of your knowledge in the field of Data Science and a few lucky winners can also get a 15% Scholarship.
The company came up with this offer to help its wide range of learners to do some self-assessment of their skills. Many times, individuals wonder whether or not they require formal training in a particular field or are just curious to know the level of their knowledge. For these kinds of learners, this quiz is like a blessing. All they have to do is to go to this
https://www.janbasktraining.com/practice-test/data-science-quiz
and register to take the quiz. It is that simple. Along with this, a few lucky winners who would register for the quiz can get an opportunity to be awarded a 15% Scholarship by the company. The winners would be selected by a random draw of lot.
We had a chance to discuss this offer with the Vice President of the company <NAME>. Tarun told us, "Our constant endeavor is always to help our learners grow and become more acquainted with their aptitudes. This is why we have launched this quiz offer and the added offer of a scholarship to motivate more and more learners and visitors of our site to indulge in some constructive self-assessment." Tarun also highlighted a few features of this quiz, and they are as follows-
Few lucky registrants can win a scholarship of up to 15%.
You can take the quiz for a maximum of 5 times.
You cannot leave the quiz in between if you do you will have to restart afresh.
You will be posed with ten questions based on basic DevOps skills.
You will have to select one answer among the four answers given there.
Once you are done with the quiz, your results will be emailed to you immediately.
What are you waiting for? Hurry up! Register for the quiz now and heighten your chances of winning a scholarship of up to 15%.
About JanBask Training: JanBask Training is an online training platform that is committed to providing quality online training at very affordable prices to learners across the world. All the courses are designed after taking into consideration the job patterns of the market, certification guidelines, and industry standards.<issue_closed>
Status: Issue closed |
ant-design/ant-design-pro | 293005635 | Title: How do I add timeout handling to fetch?
Question:
username_0: The project generated by the dva scaffold comes with fetch. Did you install axios yourself, or is axios included in dva? I haven't found it.
Answers:
username_1: There are Promise.all-based versions of this online
username_1: But I use axios, which has built-in timeout handling and is also a bit more extensible
username_0: The project generated by the dva scaffold comes with fetch. Did you install axios yourself, or is axios included in dva? I haven't found it.
username_1: I installed axios myself. The fetch library is a dependency of dva, but not a hard dependency, only an optional one, so you are completely free to choose axios or fetch based on your own habits and needs; there won't be a duplicate HTTP library.
username_1: Of course, if you prefer fetch, you can google how to add timeout handling to fetch, but fetch itself doesn't seem to provide a timeout configuration option; you can look into it further.
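For reference, a common pattern is to race the fetch against a timer (an illustration, not a dva or fetch built-in API):
```js
// Reject if the request doesn't settle within `ms` milliseconds.
function fetchWithTimeout(url, options = {}, ms = 5000) {
  const timer = new Promise((_, reject) =>
    setTimeout(() => reject(new Error(`Request timed out after ${ms} ms`)), ms)
  );
  return Promise.race([fetch(url, options), timer]);
}
```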
username_0: OK, got it. Thanks!
Status: Issue closed
|
Kaggle/kaggle-api | 408356728 | Title: what is the KAGGLE_KEY?
Question:
username_0: i first got 'ValueError: Error: Missing username in configuration.'
then i type ' 7. export KAGGLE_USERNAME=[Your Username]'
and the first error solve...
but then i got 'ValueError: Error: Missing key in configuration.'
what and where is the key,how to fix it ?
Answers:
username_1: Hi, try to see https://github.com/Kaggle/kaggle-api#api-credentials
username_0: @username_1 i tried to export username or kaggle.json to environment variable but both not work fine...
username_0: @username_1 of cause...download it into /.kaggle/ dir ,isn't it?
username_0: opened the file not found any thing like KAGGLE_KEY...
username_1: @username_0 because in json file it has name `key`
username_0: @username_1 no , i can't search name key in the json file
username_0: any one help?
username_3: @username_0 you can try to put kaggle.json under /home/user/.kaggle/
username_4: Please see the [README's API credentials](https://github.com/Kaggle/kaggle-api#api-credentials) section, it is perfectly explained there. |
palominolabs/metrics-guice | 76305248 | Title: Does this not work with @Provides method
Question:
username_0: I am creating a few objects using @Provides methods (https://github.com/google/guice/wiki/ProvidesMethods), and metrics annotations on such objects do not work with metrics-guice.
Is there anything extra I need to do to make this work?
Thanks!
Answers:
username_1: Can you go into more detail about what you're doing, or publish an example that shows the issue?
username_1: Closing due to inactivity.
Status: Issue closed
|
dntAtMe/uselessfs | 528808155 | Title: Create support for basic data mirroring
Question:
username_0: Add data mirroring as one a layer for a filesystem; for example, separately implemented functions for fuse_operations struct.
Example functionality:
-getattr
-create, mknod, open
-read, write, release, flush
-readdir, opendir, releasedir, mkdir
RAID1-Like system |
atoum/atoum | 248423193 | Title: exception asserter : cannot use `isInstanceOf` on interface
Question:
username_0: Hello,
The documentation misses a point about the asserter `exception::isInstanceOf`. The current implementation also checks that the class given as an argument extends the `Exception` class: https://github.com/atoum/atoum/blob/9a063587cc39e47b6bd50b0adb994400d27ca679/classes/asserters/exception.php#L66
But I would like to rely on an interface in this asserter, so I'm wondering whether this extra check is really needed. Or should we add an extra asserter method that really checks `Exception` inheritance, like `isExceptionInstanceOf`?
Or do you have another tip to achieve my test? (I want to ensure the exception is an instance of an interface.)
Answers:
username_1: @username_0 thanks for reporting this!
I added the bug label because `exception::isInstanceOf` should accept anything `instanceof` understands, a class name or an interface.
The code actually has a bug. I'm going to fix it. PR is coming.
**extra note**
The feature you suggest is more or less already available. Here is an example:
```php
<?php
class bar extends atoum {
public function testLogout()
{
$this
->given($this->newTestedInstance())
->then
->exception(function() { $this->testedInstance->logout(); })->isInstanceOf(\Throwable::class)
->object($this->exception)->isInstanceOf(\exception::class) // Here, $this->exception refers to the last captured exception
;
}
}
```
username_0: Thanks !
And I like this notation ! I was using `$this->exception()->getValue()` for now and was not satisfied ;)
Status: Issue closed
|
metrue/blog | 232139086 | Title: Things nobody will tell you about React.js
Question:
username_0: ### Things nobody will tell you about React.js
http://ift.tt/2oz8Hlg
:book: Well if that happened to me I would say: a ok so in the future we will render HTML like we do it today in our file.include.php but with Javascript right? What’s the improvement? This is not even real Javascript right? Doesn’t it throw errors?<br><br>
:clock10: May 30, 2017 at 12:16PM |
GRIDAPPSD/GOSS-GridAPPS-D | 669139258 | Title: Create unit tests for token based auth
Answers:
username_1: @tdtalbot I think that we should just use the integration tests inside gridappsd-python for these and then do some negative testing there.
In develop now there is two functions for the generic login case that needs to work in order for any of these tests to pass.
https://github.com/GRIDAPPSD/gridappsd-python/blob/d02b52e14127100a370c6d06e6d3588042e3d50f/tests/conftest.py#L17 and
https://github.com/GRIDAPPSD/gridappsd-python/blob/d02b52e14127100a370c6d06e6d3588042e3d50f/tests/conftest.py#L29
We could add an invalid password on the client side when failure happens (maybe you have done this). Anyways, that's the easy way to get these tested at least against the platform without having to mock everything up.
I could imagine having a function that creates more gridappsd objects with different username/password combos and such.
Status: Issue closed
|
volks73/cargo-wix | 321665805 | Title: Better error messages than "can't find file"
Question:
username_0: I'm trying to build a project, but all I'm getting as an error is "The system can't find the file (os error 2)". It would be really helpful to know **what** file exactly it can't find or where the error came from.
Answers:
username_1: Yes, that is way too generic, and I can see how that is very frustrating. I will investigate and implement a better error message.
Were you able to eventually build your project and/or discover which file was missing? Did you try to use the sub-command with a higher verbosity, i.e. `> cargo wix -vvv` and if so, what was the output like?
username_1: Fixed as of 075b6cc6638234583393dd752a610279be381fbf
Status: Issue closed
username_0: @username_1 You should maybe add that you have to add the `%WIX%\bin` directory to the `%PATH%`.
So I had to add: `C:\Program Files (x86)\WiX Toolset v3.11\bin` to the `PATH` environment variable. You don't actually need the VS developer console to run Wix, I ran it from the git bash and it ran fine.
username_0: Or do this automatically (checking if `%WIX%` exists and adding it to the `%PATH%` if it isn't already present there). |
azavea/raster-vision | 287555342 | Title: Why move from segmentation to OD?
Question:
username_0: I was wondering why did you guys move from semantic segmentation to object detection for sat imagery prediction? While they may not technically be the same, the final result is similar.
Answers:
username_1: The main reason is that takes much less effort to build a training set for OD, so that's where we're focused at the moment.
Status: Issue closed
|
thrust-bitcodes/http-client | 307808198 | Title: Problem with GET verb parameters
Question:
username_0: Good afternoon,
After the latest update we are having a problem with parameter passing in the GET verb.
For the code below, written inside a file `/app/teste.js`:
```
params: function (params, req, res) {
res.json(params)
},
testarParams: function (params, req, res) {
var resp = httpClient.get('http://localhost:8778/app/teste/params?id=557897').fetch()
res.json(resp.body)
}
```
If we open the url **http://localhost:8778/app/teste/params?id=557897** in the browser, we get the result:
```
{
"id": 557897
}
```
When calling the same URL through httpClient (we created an example, available at the route **http://localhost:8778/app/teste/testarParams**), we get the following result:
```
{
"id": "557897?"
}
```
Before the update the result was the same; after the update the data is being sent in the wrong format (string instead of number) and a question mark is being returned at the end.
Answers:
username_1: As shown in the examples, I created the file:
```javascript
//test.js
var httpClient = require('http-client');
exports = {
params: function (params, req, res) {
res.json(params)
},
testarParams: function (params, req, res) {
var resp = httpClient.get('http://localhost:8778/app/test/params?id=557897').fetch()
res.json(resp.body)
}
}
```
I made both calls through Postman and Chrome.
The error did not appear in any of the environments or calls.
Please check the http-client version you are using and whether it happens for you on the latest version.
If you can reproduce it, reopen the issue with the reproduction details.
Postman images below:


Status: Issue closed
|
RoseContactTracer/covid-tracer-frontend | 866279876 | Title: Positive Pool View
Question:
username_0: A member of health services should be able to insert/view a list of students in positive pools for the day. Student affairs/contact tracers should be able to view this as well
- [ ] View created
- [ ] View is populated with real data
- [ ] Health services can input more entries
- [ ] Day old pools are not shown
Answers:
username_0: Done
Status: Issue closed
|
restsharp/RestSharp | 389438092 | Title: JSON deserialization doesn't work
Question:
username_0: ## Expected Behavior
JSON deserialization works properly
## Actual Behavior
JSON Deserialization fails with System.InvalidCastException: 'Invalid cast from 'System.String' to 'TweetClient.Tweet'.'
## Steps to Reproduce the Problem
1. Run the following code
```
public static string BaseURL = "https://tweetazure.azurewebsites.net/api";
public static string ResourceName = "RestSharp";
var client = new RestClient(BaseURL);
var request = new RestRequest(ResourceName, Method.GET, DataFormat.Json);
var tweetResponse = client.Execute(request);
var restSharpContent = tweetResponse.Content;
// TODO fails
SimpleJson.SimpleJson.DeserializeObject<Tweet>(restSharpContent);
```
Or run https://github.com/username_0/dotnetru-backend/tree/master/TweetClient
## Specifications
RestSharp 106.5.4
Windows 10 x64
Seems that RestSharp incorrectly escapes characters while downloading.
Same request with plain HttpClient or Flurl works properly.
Answers:
username_1: Tried to check, but that url is dead.
What happens if you call client.Execute<Tweet> and check reponse.Data object?
username_0: @username_1 URL is not dead, it's: `GET https://tweetazure.azurewebsites.net/api/RestSharp`
username_1: Now it's working, seems like a disturbance in the force .. I was getting 404.
I've checked that response is "{\"TweetedImage\":\"\"}" .. what you've posted is VS escaped string. I can confirm that there are some wrong settings in RS as if I take minimum code with WebRequest that RS is using, it works flawlessly as well ..
```
string BaseURL = "https://tweetazure.azurewebsites.net/api";
string ResourceName = "RestSharp";
var request = System.Net.WebRequest.Create(BaseURL + "/" + ResourceName);
request.Method = "GET";
var response = request.GetResponse();
var stream = response.GetResponseStream();
var reader = new StreamReader(stream);
var content = reader.ReadToEnd();
response.Close();
```
username_2: I don't think RestSharp does anything with the content when downloading it.
username_2: It is actually wise to try Fiddler first before opening an issue.
Content type parameter that you set to JSON is only used for body parameters.
For requests without a body, we sent requests with
`Accept: application/xml, text/xml, application/json, text/json, text/x-json, text/javascript`, so the server can decide what to send.
Your API returns a JSON string wrapped in XML:
```
<string xmlns="http://schemas.microsoft.com/2003/10/Serialization/">{"TweetedImage":""}</string>
```
The response content type is also set to `application/xml`.
If your API would send the response properly, you'd get it working as expected.
Status: Issue closed
username_2: There are at least three ways to fix this:
- Force JSON on the API side
- Add the `Accepted` header to the request client
- Remove the XML handler `client.RemoveHandler("application/xml")`, which will not add any XML content types to the `Accepted` header, leaving only JSON.
username_0: @username_2 it’s not about Fiddler, it’s about RestSharp not doing a good job of handling such cases while all other tools work as expected.
username_2: @username_0 I am sorry you feel that way but clearly, you are wrong. RestSharp sends the `Accept` header for any response handlers that are registered, in full compliance to the RFC. Your API incorrectly sends an unexpected XML response that wraps your JSON response as a string, instead of at least return a proper XML. It is either a bug in Azure Functions, or something else, I don't know. The response header clearly tells RestSharp that it is XML. If you want to change the request content type to JSON only, you can use one of the methods I pointed to. |
mar10/fancytree | 14237154 | Title: RFC: Specify persistence feature
Question:
username_0: Discuss and finalize this RFC:
https://github.com/username_0/fancytree/wiki/SpecPersistence
Answers:
username_1: Hi!
Restore without extract nodes animation?
Thx.
username_0: @username_1 Thanks for contributing!
you mean *expand* animation? This will be addressed in #730
username_2: Can I restore the new node rather than status that i add so that I can see it after the page refreshing?
username_0: @username_2 sorry, but I do not understand your question
username_3: @username_0
I need help please. Can you help me find a solution for my problem. I need to know how I can achieve this with fancytree. See below
- I am using fancytree in [electron ](https://electronjs.org)application based on node.js
- My requirements are store and load persistence information in local configuration file instead of using cookie or storage.
**Reference:**
To explain, I will be using [layout.jquery](http://layout.jquery-dev.com) as reference because it has options for what I exactly need to do as they support persistence too by name stateManagement.
[demo_saved_state]( http://layout.jquery-dev.com/demos/saved_state.html)
**DEFAULT OPTIONS**
```js
stateManagement: {
enabled: false,
// true = enable state-management, even if not using cookies
// needed when autoSave or autoLoad are functions not boolean
autoSave: true,
// autoSave:boolean ==> Save state using cookie
// autoSave:function ==> use your own method to save it
autoLoad: true,
// autoLoad:boolean ==> Load state using cookie at init
// autoLoad:function ==> use your own method to load state options at init and return it
}
```
**PRACTICAL EXAMPLE**
```js
// Save and Load state from file instead of cookies-built-in-method
// in node.js/electron environment using library nconf
stateManagement: {
enabled: true,
autoLoad: function () {
// load instance_state at init
// get instance_state from anywhere like a file (if in electron),cookie,store.js whatever
// get from local file using nconf/node.js
return nconf.get('tree');
},
autoSave: function (instance, instance_state, instance_options) {
// save instance_state whereever you want
// save to local file using nconf/node.js
nconf.set('tree', instance_state).save();
}
}
```
username_3: By looking at the source code in `jquery.fancytree.persist`, where
```js
if (typeof Cookies === "function") {
    cookieSetter = Cookies.set;
    cookieGetter = Cookies.get;
    cookieRemover = Cookies.remove;
} else {
    cookieSetter = cookieGetter = $.cookie;
    cookieRemover = $.removeCookie;
}
```
It would be nice if it had object support so that I could use it like
```js
// object support
if (typeof Cookies === "function" || typeof Cookies === "object") {
    cookieSetter = Cookies.set;
    cookieGetter = Cookies.get;
    cookieRemover = Cookies.remove;
} else {
    cookieSetter = cookieGetter = $.cookie;
    cookieRemover = $.removeCookie;
}
```
**Usage:**
```js
var customObject = {
    get: myGetter,
    set: mySetter,
    remove: myRemover
};

$("#tree").fancytree({
    extensions: ["persist"],
    checkbox: true,
    persist: {
        cookie: customObject, // use custom object
        store: "cookie" // use custom function
    }
});
```
username_3: Can you please make it configurable so that custom `cookieSetter`, `cookieGetter`, `cookieRemover` can be defined?
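For illustration, a hypothetical adapter backed by `localStorage` that satisfies the proposed `get`/`set`/`remove` interface (all names here are assumptions, not the plugin's API):
```js
// Hypothetical localStorage-backed adapter; each function takes the
// key name first, mirroring the js-cookie call shape.
var localStorageAdapter = {
    get: function (key) { return window.localStorage.getItem(key); },
    set: function (key, value) { window.localStorage.setItem(key, value); },
    remove: function (key) { window.localStorage.removeItem(key); }
};
```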
username_0: @username_3 Good suggestion.
Would you like to open a separate issue for this feature?
(this one is for general extension discussion)
Thanks!
username_3: @username_0
Ok I will open new issue as improvement |
databricks/intellij-jsonnet | 484317331 | Title: UnsupportedOperationException with 2019.2
Question:
username_0: ```
java.lang.UnsupportedOperationException
at java.base/java.util.AbstractList.add(AbstractList.java:153)
at java.base/java.util.AbstractList.add(AbstractList.java:111)
at com.jsonnetplugin.JsonnetCompletionContributor$1.addCompletions(JsonnetCompletionContributor.java:54)
at com.intellij.codeInsight.completion.CompletionProvider.addCompletionVariants(CompletionProvider.java:40)
at com.intellij.codeInsight.completion.CompletionContributor.fillCompletionVariants(CompletionContributor.java:150)
at com.intellij.codeInsight.completion.CompletionService.getVariantsFromContributors(CompletionService.java:63)
at com.intellij.codeInsight.completion.CompletionResultSet.runRemainingContributors(CompletionResultSet.java:148)
at com.intellij.codeInsight.completion.CompletionResultSet.runRemainingContributors(CompletionResultSet.java:141)
at com.intellij.codeInsight.template.impl.LiveTemplateCompletionContributor$1.addCompletions(LiveTemplateCompletionContributor.java:77)
at com.intellij.codeInsight.completion.CompletionProvider.addCompletionVariants(CompletionProvider.java:40)
at com.intellij.codeInsight.completion.CompletionContributor.fillCompletionVariants(CompletionContributor.java:150)
at com.intellij.codeInsight.completion.CompletionService.getVariantsFromContributors(CompletionService.java:63)
at com.intellij.codeInsight.completion.CompletionService.performCompletion(CompletionService.java:119)
at com.intellij.codeInsight.completion.impl.CompletionServiceImpl.performCompletion(CompletionServiceImpl.java:55)
at com.intellij.codeInsight.completion.CompletionProgressIndicator.calculateItems(CompletionProgressIndicator.java:824)
at com.intellij.codeInsight.completion.CompletionProgressIndicator.runContributors(CompletionProgressIndicator.java:809)
at com.intellij.codeInsight.completion.CodeCompletionHandlerBase.lambda$null$5(CodeCompletionHandlerBase.java:325)
at com.intellij.openapi.application.impl.ApplicationImpl.tryRunReadAction(ApplicationImpl.java:1106)
at com.intellij.codeInsight.completion.AsyncCompletion.tryReadOrCancel(CompletionThreading.java:170)
at com.intellij.codeInsight.completion.CodeCompletionHandlerBase.lambda$startContributorThread$6(CodeCompletionHandlerBase.java:317)
at com.intellij.codeInsight.completion.AsyncCompletion.lambda$null$0(CompletionThreading.java:95)
at com.intellij.openapi.progress.impl.CoreProgressManager.lambda$runProcess$2(CoreProgressManager.java:169)
at com.intellij.openapi.progress.impl.CoreProgressManager.registerIndicatorAndRun(CoreProgressManager.java:591)
at com.intellij.openapi.progress.impl.CoreProgressManager.executeProcessUnderProgress(CoreProgressManager.java:537)
at com.intellij.openapi.progress.impl.ProgressManagerImpl.executeProcessUnderProgress(ProgressManagerImpl.java:59)
at com.intellij.openapi.progress.impl.CoreProgressManager.runProcess(CoreProgressManager.java:156)
at com.intellij.codeInsight.completion.AsyncCompletion.lambda$startThread$1(CompletionThreading.java:91)
at com.intellij.openapi.application.impl.ApplicationImpl$1.run(ApplicationImpl.java:294)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
```
Answers:
username_1: Ok the problem still exists.
Plugin version: 0.5
Idea version: IntelliJ IDEA 2019.2.3 (Ultimate Edition)
Build #IU-192.6817.14, built on September 24, 2019
Runtime version: 11.0.4+10-b304.69 amd64
VM: OpenJDK 64-Bit Server VM by JetBrains s.r.o
Linux 5.3.7-arch1-1-ARCH
GC: ParNew, ConcurrentMarkSweep
Memory: 1894M
Cores: 12
Registry:
Non-Bundled Plugins: Key Promoter X, com.databricks, Jetbrains TeamCity Plugin, com.intellij.plugins.watcher, name.kropp.intellij.makefile, org.intellij.plugins.hcl, org.jetbrains.plugins.go-template, Karma, org.jetbrains.plugins.go, com.intellij.kubernetes, com.vladsch.idea.multimarkdown, org.sonarlint.idea, org.toml.lang
can i do something to help debugging?
username_1: mhh, ok the build is not working and throw a error that, according to jetbrains is fixed.
but i knowledge is not good enough to fix it.
so i need to wait for the next databricks release
username_0: You probably need to set the project SDK to the IDEA Platform SDK rather than some random JDK on your machine.
username_1: @username_2 i have the same errors with the new verison 0.7
username_2: I've seen this randomly, but haven't managed to reliably reproduce. Any one of you want to submit a PR to fix it?
username_1: In general yes, but i don't have the knowledge to fix it.
username_1: any hints were to look?
username_2: @username_1 should be this line in the stack trace `JsonnetCompletionContributor.java:54`
It's adding to a `java.util.List` it received from the IntelliJ AST API, and apparently sometimes the list cannot be added to. Copying the elements to our own list before adding should fix the crash
username_1: sorry did not understand enough to fix it.
but it mostly happens on completion when the object is imported:
```
local openshift = import "../lib/openshift.libsonnet";
```
then `openshift.` and it throws the error.
maybe that helps?
Status: Issue closed
|
SharePoint/sp-dev-docs | 241333676 | Title: 🐞 Link control in property pane doesn't open popup
Question:
username_0: #### Category
- [ ] Question
- [ ] Typo
- [x] Bug
- [ ] Additional article idea
#### Expected or Desired Behavior
When specifying the correct values according the provided interfaces for the link control, a popup window should load the link.
#### Observed Behavior
The link opens in the same window. Looking at the source, I see nothing extra added to the `<a>` tag.
#### Steps to Reproduce
- create a new spfx web part project
- within the webpart, add the following import statements to the top of the page
```typescript
import {
PropertyPaneLink,
IPropertyPaneLinkProps,
} from '@microsoft/sp-webpart-base';
// @HACK because these aren't exported
// see: https://github.com/SharePoint/sp-dev-docs/issues/707
import {
IPopupWindowProps,
PopupWindowPosition
} from '@microsoft/sp-webpart-base/lib/propertyPane/propertyPaneFields/propertyPaneLink/IPropertyPaneLink';
```
- replace the default text field control for the `description` property with this one:
```typescript
{
groupName: 'Popup window properties',
groupFields: [
PropertyPaneLink('', <IPropertyPaneLinkProps>{
text: 'Voitanos',
href: 'https://www.voitanos.io',
popupWindowProps: <IPopupWindowProps>{
title: 'Voitanos',
width: 200,
height: 200,
positionWindowPosition: PopupWindowPosition.center
}
}),
]
}
```
- start local workbench & test using `gulp serve`
Answers:
username_0: Related to #707
username_1: Thanks for reporting this issue. The "popupWindowProps" feature was originally part of **office-ui-fabric-react**, but it was removed in [September](https://github.com/OfficeDev/office-ui-fabric-react/pull/293). It was an oversight that this option was not removed from the SPFx Property Pane before GA. We will remove this setting in the next release.
If this feature is important to you, I would encourage you to file it as a feature request for **office-ui-fabric-react** since they own this part of the user experience. Thanks!
Status: Issue closed
username_1: @dzearing FYI
username_0: Thanks for the followup @username_1 |
Electron-Cash/Electron-Cash | 892009603 | Title: Appearing offline
Question:
username_0: I don't know what is wrong with my account.

Answers:
username_0:
```
[electrum.imaginary.cash] failed to connect timed out
[fulcrum.devops.cash] failed to connect timed out
[bitcoincash.quangld.com] failed to connect timed out
[bch.soul-dev.com] failed to connect timed out
[bch2.electroncash.dk] failed to connect [Errno 101] Network is unreachable
[electron.jochen-hoenicke.de] failed to connect [Errno 101] Network is unreachable
[greedyhog.mooo.com] failed to connect timed out
[Network] connecting to bitcoincash.quangld.com:50002:s as new interface
[bch.loping.net] failed to connect [Errno 101] Network is unreachable
[bch.cyberbits.eu] failed to connect timed out
[jktsologn7uprtwn7gsgmwuddj6rxsqmwc2vaug7jwcwzm2bxqnfpwad.onion] cannot resolve hostname
[j2tjfxntnsqpojaamnndgmfrc6lh3thattnlpc2xx53h2ojoi7agccid.onion] cannot resolve hostname
[electrumx-cash.1209k.com] failed to connect [Errno 101] Network is unreachable
[bitcoincash.quangld.com] failed to connect timed out
[electroncash.de] failed to connect [Errno 101] Network is unreachable
[Network] connecting to bitcoincash.quangld.com:50002:s as new interface
[electrs.bitcoinunlimited.info] failed to connect timed out
[blackie.c3-soft.com] failed to connect timed out
[electrumx.hillsideinternet.com] failed to connect timed out
[crypto.mldlabs.com] failed to connect timed out
[bch0.kister.net] failed to connect timed out
[electroncash.dk] failed to connect [Errno 101] Network is unreachable
[electrumx-bch.cryptonermal.net] failed to connect timed out
[ec-bcn.criptolayer.net] failed to connect timed out
[bitcoincash.quangld.com] failed to connect timed out
[Network] connecting to bitcoincash.quangld.com:50002:s as new interface
[bitcoincash.network] failed to connect timed out
[bch.imaginary.cash] failed to connect timed out
[bch.crypto.mldlabs.com] failed to connect timed out
[bitcoincash.quangld.com] failed to connect timed out
[Network] connecting to bitcoincash.quangld.com:50002:s as new interface
[bitcoincash.quangld.com] failed to connect timed out
[Network] connecting to bitcoincash.quangld.com:50002:s as new interface
[Network] network: retrying connections
[jh3jgcrwweh6yvmprtjnp72u2hqn34nlftlg3msrr4vmlapft4yvt2id.onion] cannot resolve hostname
[bitcoincash.quangld.com] failed to connect timed out
[Network] connecting to bitcoincash.quangld.com:50002:s as new interface
[bch.imaginary.cash] failed to connect timed out
[fulcrum.devops.cash] failed to connect timed out
[bch.cyberbits.eu] failed to connect timed out
[bitcoincash.network] failed to connect timed out
[electrs.bitcoinunlimited.info] failed to connect timed out
[bch2.electroncash.dk] failed to connect [Errno 101] Network is unreachable
[electrumx.hillsideinternet.com] failed to connect timed out
[bch.loping.net] failed to connect [Errno 101] Network is unreachable
[electroncash.de] failed to connect [Errno 101] Network is unreachable
[j2tjfxntnsqpojaamnndgmfrc6lh3thattnlpc2xx53h2ojoi7agccid.onion] cannot resolve hostname
[bch.crypto.mldlabs.com] failed to connect timed out
[jktsologn7uprtwn7gsgmwuddj6rxsqmwc2vaug7jwcwzm2bxqnfpwad.onion] cannot resolve hostname
[bitcoincash.quangld.com] failed to connect timed out
[Network] connecting to bitcoincash.quangld.com:50002:s as new interface
[electrumx-bch.cryptonermal.net] failed to connect timed out
[greedyhog.mooo.com] failed to connect timed out
[electrumx-cash.1209k.com] failed to connect [Errno 101] Network is unreachable
[ec-bcn.criptolayer.net] failed to connect timed out
[electron.jochen-hoenicke.de] failed to connect [Errno 101] Network is unreachable
[electroncash.dk] failed to connect [Errno 101] Network is unreachable
[electrum.imaginary.cash] failed to connect timed out
[blackie.c3-soft.com] failed to connect timed out
[fulcrum.fountainhead.cash] failed to connect timed out
[bitcoincash.quangld.com] failed to connect timed out
[bch0.kister.net] failed to connect timed out
[Network] connecting to bitcoincash.quangld.com:50002:s as new interface
[crypto.mldlabs.com] failed to connect timed out
[bch.soul-dev.com] failed to connect timed out
[bitcoincash.quangld.com] failed to connect timed out
[Network] connecting to bitcoincash.quangld.com:50002:s as new interface
[bitcoincash.quangld.com] failed to connect timed out
[Network] connecting to bitcoincash.quangld.com:50002:s as new interface
[bitcoincash.quangld.com] failed to connect timed out
[Network] connecting to bitcoincash.quangld.com:50002:s as new interface
[Network] network: retrying connections
[electrumx-cash.1209k.com] failed to connect [Errno 101] Network is unreachable
[bitcoincash.quangld.com] failed to connect timed out
[Network] connecting to bitcoincash.quangld.com:50002:s as new interface
[bitcoincash.quangld.com] failed to connect timed out
[Network] connecting to bitcoincash.quangld.com:50002:s as new interface
```
username_1: Sorry, I didn't notice this issue before because it didn't use the word "Android".
It looks like your device was unable to reach the network. Did this affect other apps, or only Electron Cash? And did you manage to fix the problem?
Status: Issue closed
username_1: If this is still a problem, please answer the questions above and I'll reopen the issue. |
concourse/concourse | 409978218 | Title: build page scrolls to the top
Question:
username_0: ## Bug Report
In at least one case on prod we have visited the build page for a failed build, scrolled to the bottom and found the browser autoscrolling upwards, quite some distance...
https://ci.concourse-ci.org/teams/main/pipelines/prs/jobs/unit/builds/528
Answers:
username_1: We see the same with 5 RC74.
Status: Issue closed
|
microformats/mf2py | 339271016 | Title: print("here") in implied_properties.py
Question:
username_0: implied_properties.py has:
```python
if not mf2_classes.root(poss_a.get('class', [])):
    print("here")
    return poss_a
```
This is printed by running mf2py on the contents of:
https://paleotronic.com/2018/07/08/the-jackintosh-a-real-gem-remembering-the-atari-st/
BTW, much easier debugging if you "raise".
Answers:
username_1: oops! somehow overlooked that. Luckily it doesn't break any actual parsing!
username_0: Ha ha well, on the plus side it did identify a test for that case! Compiler writers do this a lot, they sprinkle their code with "not implemented yet" calls that cause customers to send them examples for that particular part of the code. |
MicrosoftDocs/cpp-docs | 1091418244 | Title: wrong text for pragma
Question:
username_0: ---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: ce8f0a4b-8eeb-158f-3995-66fa674e67d1
* Version Independent ID: d9e7ec65-7238-a9fb-6ab3-dc71144e3a34
* Content: [Pragma directives and the __pragma and _Pragma keywords](https://docs.microsoft.com/en-us/cpp/preprocessor/pragma-directives-and-the-pragma-keyword?view=msvc-170)
* Content Source: [docs/preprocessor/pragma-directives-and-the-pragma-keyword.md](https://github.com/Microsoft/cpp-docs/blob/master/docs/preprocessor/pragma-directives-and-the-pragma-keyword.md)
* Product: **visual-cpp**
* Technology: **cpp-language**
* GitHub Login: @username_1
* Microsoft Alias: **corob**
Status: Issue closed
Answers:
username_1: @username_0
Good catch, and thanks for the report. We've since published a fix. |
donmccurdy/glTF-Transform | 972217713 | Title: Various Khronos samples broken by quantization
Question:
username_0: Examples:
- BoxTextured
- BrainStem
- CesiumMan
- MorphStressTest
- DragonAttenuation
- RiggedFigure
- RiggedSimple
- TextureSettingsTest
- TextureTransformMultiTest






Answers:
username_0: Volume attenuation distance is a distinct problem, already reported at https://github.com/mrdoob/three.js/issues/22343.
The rest — particularly texture settings, skinning, and morph targets — are likely to represent bugs in the `quantize()` function implemented here.
username_0: Many issues fixed in https://github.com/username_0/glTF-Transform/pull/333.
A few remaining:
- [ ] Broken morph targets (screenshots above)
- [ ] Need to update volume extension (https://github.com/mrdoob/three.js/issues/22343)
- [ ] Skinned mesh offsets (80% sure this is a model-viewer bbox issue; models where static meshes define bbox are fine)
username_0: Not a bug, but quantization currently skips any texture coordinates extending beyond the [0,1] range. Could improve completeness by quantizing, adding `KHR_texture_transform` when those are encountered.
username_0: Closing this and filing the remaining tasks as new issues:
- https://github.com/username_0/glTF-Transform/issues/335
- https://github.com/username_0/glTF-Transform/issues/336
Status: Issue closed
|
LeetCode-Feedback/LeetCode-Feedback | 1088662593 | Title: Solutions don't run - 336. Palindrome Pairs
Question:
username_0: <!--
Note - Any content mention below in `<!-- ->` blocks are just comments
to help you fill-up the issue. It won't be visible in the actual issue after
you click on submit.
-->
#### Your LeetCode username
YoungShaw
#### Category of the bug
- [ ] Question
- [x] Solution
- [ ] Language
- [ ] Missing Test Cases
#### Description of the bug
<!-- A clear and concise description of what the bug is. -->
In question 336, Palindrome Pairs, **Approach 3: Using a Trie**, the Java solution won't pass the last test case (#136); it throws a **Time Limit Exceeded** error.
#### Code you used for Submit/Run operation
<!--
Please make sure you wrap your code with ``` tags.
Otherwise we may reject your request.
-->
- I used the original code from Approach 3: Using a Trie
```
class TrieNode {
public int wordEnding = -1; // We'll use -1 to mean there's no word ending here.
public Map<Character, TrieNode> next = new HashMap<>();
public List<Integer> palindromePrefixRemaining = new ArrayList<>();
}
class Solution {
// Is the given string a palindrome after index i?
// Tip: Leave this as a method stub in an interview unless you have time
// or the interviewer tells you to write it. The Trie itself should be
// the main focus of your time.
public boolean hasPalindromeRemaining(String s, int i) {
int p1 = i;
int p2 = s.length() - 1;
while (p1 < p2) {
if (s.charAt(p1) != s.charAt(p2)) return false;
p1++; p2--;
}
return true;
}
public List<List<Integer>> palindromePairs(String[] words) {
TrieNode trie = new TrieNode();
// Build the Trie
for (int wordId = 0; wordId < words.length; wordId++) {
String word = words[wordId];
String reversedWord = new StringBuilder(word).reverse().toString();
TrieNode currentTrieLevel = trie;
[Truncated]
// Move down to the next trie level.
int charIndex = word.charAt(j) - 'a';
currentTrieLevel = currentTrieLevel.next[charIndex];
if (currentTrieLevel == null) break;
}
if (currentTrieLevel == null) continue;
// Check for pairs of case 1. Note the check to prevent non distinct pairs.
if (currentTrieLevel.wordEnding != -1 && currentTrieLevel.wordEnding != wordId) {
pairs.add(Arrays.asList(wordId, currentTrieLevel.wordEnding));
}
// Check for pairs of case 2.
for (int other : currentTrieLevel.palindromePrefixRemaining) {
pairs.add(Arrays.asList(wordId, other));
}
}
return pairs;
}
}
```
Answers:
username_1: Hi @username_0
Thank you for reaching out to us. I've relayed this issue to our team to investigate. |
amaiya/ktrain | 841074262 | Title: failing to run at multi label problem and treating it as binary problem
Question:
username_0:
```python
(X_train, y_train), (X_test, y_test), preprocess = text.texts_from_df(
    train_df=data_train,
    text_column="texts",
    label_columns="label",
    val_df=data_test,
    maxlen=100,
    preprocess_mode="bert")
```
The label column contains multi-class labels, but I am getting this:
```
Is Multi-Label? False
```
`print(len(np.unique(y_train)))` gives a length of 2 when there are originally 3 labels in the column.
Answers:
username_1: As demonstrated in the tutorial, multi-label problems require multiple columns that are many-hot-encoded:
```
Loads text data from Pandas dataframe file. Class labels are assumed to be
one of the following formats:
1. one-hot-encoded or multi-hot-encoded arrays representing classes:
Example with label_columns=['positive', 'negative'] and text_column='text':
text|positive|negative
I like this movie.|1|0
I hated this movie.|0|1
Classification will have a single one in each row: [[1,0,0], [0,1,0]]]
Multi-label classification will have one more ones in each row: [[1,1,0], [0,1,1]]
2. labels are in a single column of string or integer values representing class labels
Example with label_columns=['label'] and text_column='text':
text|label
I like this movie.|positive
I hated this movie.|negative
3. labels are a single column of numerical values for text regression
NOTE: Must supply is_regression=True for integer labels to be treated as numerical targets
wine_description|wine_price
Exquisite wine!|100
Wine for budget shoppers|8
```
You've only specified one column. The returned `y_train` variable should be one-hot-encoded (binary or multi-classification) or multi-hot encoded (multilabel) where each column will only contain a 1 or 0.
Note also that there is a difference between multi-label and having more than 2 classes (multiclassification). Examples in multi-label problems can have multiple targets, whereas in multi-classification each example is assigned one of possibly many targets.
Please follow tutorial for more information.
P.S. If you switch to DistilBert (which returns objects instead of Numpy arrays), make sure you switch to something like this:
```
trn, val, preproc = text.texts_from_df(...
```
Status: Issue closed
username_0: Hello Amaiya,
Sorry, I was a bit confused in the beginning; my problem involves multi-class, not multi-label (as you mentioned). I followed the tutorial notebook and, with your comments, I was able to fix it easily.
As you said, when printing the shape of y_train I can see it has 3 columns now (which is what was expected).
Thanks mate 👍 |
rijkvanzanten/luaus | 225250175 | Title: It's not clear what this project will do when it's finished
Question:
username_0: You claim it will keep track of your game scores; but how? You haven't explained how this will be done with a physical button at all 😛
I can't make out from the description or the wishlist what this thing will do. What lobby? What box? 😕
I want to give you guys feedback but I can't really at this point. Which you probably already know. Might be a good idea to explain your app on monday and get some feedback on it then.
Answers:
username_1: Our final README.md will be self-explanatory. Apologies for the inconvenience!
Status: Issue closed
|
Polymer/polymer-analyzer | 213981499 | Title: Add a general feature for tracking classes
Question:
username_0: Similar to Namespace, and a superclass of both Element and ElementMixin.
Would be useful in the `call-super-in-callbacks` lint pass.
Needed to display docs for Dom API.
Answers:
username_1: We really need this for the docs. Might bump to high.
username_0: Taking a shot at this now.
Status: Issue closed
|
johnewart/gearman-java | 158569219 | Title: No Response to Gearman Admin Protocol
Question:
Answers:
username_1: It should; status at least should work -- let me take a look. What commands were you expecting to work that didn't?
username_0: Status does indeed work, but I needed the 'Workers' and 'Version' commands to internally keep track of workers in my program. Great work on this project btw. It's awesome!
username_1: Thanks! Should be easy enough to add. I'll take a look this week.
username_2: Hi! Somebody did this? It is useful functionality for gearman ui too http://rodolforipado.net/gearmanui/
username_3: Check this - https://github.com/username_1/gearman-java/pull/15 |
Swiss-Polar-Institute/project-application | 503926476 | Title: create master call model
Question:
username_0: All calls should belong to a master call or similar (possible name: funding instrument?). This would allow for overall statistics for each type of call to be created.
Answers:
username_0: in SPI application diagram, search required by instrument.
username_0: created FundingInstrument model
Status: Issue closed
|
davidofwatkins/ge-cancellation-checker | 169953324 | Title: Multiple TypeError issues
Question:
username_0: Just getting this setup, and I'm getting a few TypeError issues...
<pre>
Please wait...
On GOES login page...
Logging in...
Bypassing human check...
Error on page: TypeError: undefined is not an object (evaluating 'document.document.ApplicationActionForm')
phantomjs://code/ge-cancellation-checker.phantom.js:54 in onError
Entering appointment management...
Entering rescheduling selection page...
Choosing Location: Washington Dulles International Global Entry EC - 22685 International Arrivals- Main Terminal, Washington Dulles International Airport, Sterling , VA 20041, US
Error on page: TypeError: null is not an object (evaluating 'document.querySelector('.date table tr:first-child td:first-child').innerHTML')
phantomjs://code/ge-cancellation-checker.phantom.js:54 in onError
</pre>
Answers:
username_0: Looks like I needed the fix referenced in #11 - works now.
username_1: Cool, closing as duplicate.
Status: Issue closed
|
cerner/terra-core | 328583638 | Title: [terra-form-select] Allow custom onBlur
Question:
username_0: # Issue Description
<!-- Describe the issue as best you can. We love screenshots! -->
The Select should allow a custom onBlur.
## Issue Type
<!-- Is this a new feature request, enhancement, bug report, other? -->
- [ ] New Feature
- [X] Enhancement
- [ ] Bug
- [ ] Other
## Expected Behavior
<!-- Tell us how it should work -->
The Select should honor a custom onBlur provided by consumers.
## Current Behavior
<!-- Tell us what happens instead of the expected behavior -->
<!-- Leave a comment "N/A" if there is no current behavior -->
The Select does not honor a custom onBlur
<!----------------------------------------------------------------------------------->
<!-- If you are reporting a bug, please fill out the sections below. -->
<!-- Otherwise, the sections below can be deleted. -->
<!----------------------------------------------------------------------------------->
## Steps to Reproduce
<!-- Provide a link to a live example, or an unambiguous set of steps to -->
<!-- reproduce this bug. Include code to reproduce, if relevant -->
1. Create a select with a custom onBlur
2. Focus the Select by clicking or tabbing to it.
3. Click away or press tab to lose focus.
4. Observe the custom onBlur is not invoked.
Status: Issue closed |
diningphil/PyDGN | 1139705714 | Title: Specify internal and external sizes of validation sets
Question:
username_0: **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
It is not possible to use a different validation set ratio for model selection and model assessment.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
Add a new parameter in the data config files that allows to specify the external validation ratio
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
Answers:
username_0: (yeah, I'm replying to myself) This has been worked on and will be released in the next big PyDGN release.
deepset-ai/FARM | 695642592 | Title: Add Electra Specific Classification Head
Question:
username_0: FARM has a generic classification head: https://github.com/deepset-ai/FARM/blob/d7b5cb61f062436789aa9081a85ade4f56e2db6f/farm/modeling/prediction_head.py#L240
Electra from Transformers seems to have a specific head with a gelu activation: https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_electra.py#L397
Should this also be added in FARM?
Answers:
username_1: Differences in PredictionHeads are definitely something to watch out for, especially when converting trained models between FARM and transformers.
About gelu I found: https://github.com/deepset-ai/FARM/pull/364
I would not like to add a special Electra Prediction Head, since it seems rather random. Only the interoperability with transformers for doc classification is an issue. What about creating error messages in the conversion functions for these cases as is done for roberta model, e.g. [here](https://github.com/deepset-ai/FARM/blob/master/farm/modeling/adaptive_model.py#L537)?
Status: Issue closed
username_1: Seems inactive - closing this for now. |
prometheus/prometheus | 376767687 | Title: Error 500 when querying prometheus with "~" character in filters
Question:
username_0: ## Bug Report
**What did you do?**
I set up Grafana to be able to query prometheus. I use grafana's variables in my query
```
60*rate(kong_hit{api_name=~"[[client]]"}[5m])
```
**What did you expect to see?**
A grapsh, and the ability to change filters with grafana
**What did you see instead? Under which circumstances?**
Request error:
*Request*
Request details
Url | /api/datasources/proxy/2/api/v1/query_range?query=60*rate(kong_hit%7Bapi_name%3D~%22https-mock-service%22%7D%5B5m%5D)&start=1541150601&end=1541151201&step=2
-- | --
Method | GET
Accept | application/json, text/plain, */*
*Response*
```
{
"status": "error",
"errorType": "execution",
"error": "server returned HTTP status 500 Internal Server Error",
"message": "server returned HTTP status 500 Internal Server Error"
}
```
**Environment**
* System information:
`Linux 4.15.0-24-generic x86_64`
* Prometheus version:
```
prometheus, version 2.4.3 (branch: HEAD, revision: 167a4b4e73a8eca8df648d2d2043e21bdb9a7449)
build user: root@1e42b46043e9
build date: 20181004-08:42:02
go version: go1.11.1
```
* Prometheus configuration file:
```
# Sample config for Prometheus.
global:
scrape_interval: 15s # By default, scrape targets every 15 seconds.
evaluation_interval: 15s # By default, scrape targets every 15 seconds.
# scrape_timeout is set to the global default (10s).
# Attach these labels to any time series or alerts when communicating with
[Truncated]
# Override the global default and scrape targets from this job every 5 seconds.
scrape_interval: 5s
scrape_timeout: 2s
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
static_configs:
- targets: ['localhost:9090']
remote_write:
- url: http://prometheus-fast-remote:8080/write
remote_read:
- url: http://prometheus-fast-remote:8080/read
```
* Logs:
```
level=error ts=2018-11-02T11:03:14.085343885Z caller=engine.go:498 component="query engine" msg="error selecting series set" err="server returned HTTP status 500 Internal Server Error"
```
Answers:
username_1: It looks like you've a remote read backend that doesn't support that matcher.
username_0: @username_1 Thx, for your answer I will look on that way
Status: Issue closed
|
nliautaud/gedcom-svg-fanchart | 743168404 | Title: Save the image
Question:
username_0: Can you please add a feature to download the image with the current pallette?
Answers:
username_1: Hi,
You should be able to export the chart in PDF using *print* in the browser :
https://github.com/username_1/gedcom-svg-fanchart#saving--printing
That's a vector export that you can convert to any other format if needed.
username_1: Three saving / export options added & readme updated
Status: Issue closed
|
skill-collectors/incubator | 562074379 | Title: "7 Languages in 7 Weeks" book club
Question:
username_0: ## Why should people participate?
This book covers seven languages that each have a different [programming paradigm](https://en.wikipedia.org/wiki/Programming_paradigm). The book's curriculum covers each language in a week, with 3 days of exercises for each. This exposes you only to the beginnings of how to use each language, but highlights the key differences in how they think about problem solving.
If you want to expand the way you think about programming and/or find a new language to dive in to this is a great way to do that.
Even if you never use the languages the book exposes you to, you can take these new approaches to problem solving to your day-to-day language of choice.
## What would it mean to lead?
## What would it mean to participate?
## Outcome
Status: Issue closed |
pact-foundation/pact_broker | 277996894 | Title: what is Pact Broker CI Nerf Gun ?
Question:
username_0: The ReadMe talks about a magic gun in provider CI verification which would determine the person who caused the failure. That sounds very interesting, but the wiki entry has nothing but the text **'I wish'** 🌵.
Link to wiki https://github.com/pact-foundation/pact_broker/wiki/pact-broker-ci-nerf-gun
Answers:
username_1: I'm afraid it is just a figment of my imagination. However, if you wish to work on an implementation, I'd be happy to collaborate with you ;)
Status: Issue closed
|
cashshuffle/cashshuffle | 386604258 | Title: Maybe return server host/port in the HTTP stats?
Question:
username_0: https://github.com/cashshuffle/cashshuffle/blob/fdd68f03af3eabc3008f5c16daab44ba88453674/server/stats.go#L23
It might be useful to have the info http server thingie return host/port combo for the cashshuffle protocol service it's associated with.
Rationale: UI-wise it might be easier for users to just remember the host/info port of the shuffle server and "behind the scenes" clients can query the info port to get the real cash shuffle protocol port to connect to.
I was actually thinking that for the EC plugin it would eliminate UI noise to have that.
Let me know!
Answers:
username_1: We can easily expose the port, but not the host as it might be behind a NAT and I don't want to introduce that complexity to the stats endpoint. However I think the port is all you need. =)
username_1: So I think this should work for you: https://github.com/cashshuffle/cashshuffle/commit/4f2482d301c0665e9b8b813e281052ab306d156e
Just added a new key `shufflePort` so you can get the port you need. You should already have the host. =)
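For anyone wiring this up, a hedged client-side sketch (it assumes the stats endpoint serves JSON that now includes the `shufflePort` key; the helper name is hypothetical):
```ts
// Query the HTTP stats endpoint and read the advertised shuffle port,
// so clients only need to remember the host and info port.
async function getShufflePort(statsUrl: string): Promise<number> {
  const res = await fetch(statsUrl);
  const stats = await res.json();
  return stats.shufflePort;
}
```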
Status: Issue closed
username_0: Awesome, thanks man! |
kubernetes/kubernetes | 431447428 | Title: Certificates problems in kubeadm HA
Question:
username_0: **What happened**:
Hi all. I'm facing a problem in an HA k8s cluster bootstrapped by kubeadm. Everything works OK, but `kubectl logs` does not work: *net/http: TLS handshake timeout*. If I'm not wrong, for *kubectl logs* the kube-apiserver forwards the request *to the kubelet on the node where the pod is running*. It appears that the kubelet certificate on that node is rejected. Why?
If I make a curl from the master to the node where the pod is running (the same request that *kubectl logs* makes), the error is *Peer's certificate issuer has been marked as not trusted by the user.* So, if in theory *kubeadm copies all certificates to all nodes* and *trusts* these certificates, why was the certificate marked as untrusted? (edited)
**What you expected to happen**:
kubectl logs work properly
**How to reproduce it (as minimally and precisely as possible)**:
Deploy an HA kubernetes cluster with kubeadm
**Anything else we need to know?**:
k8s version 1.14
**Environment**:
- Kubernetes version (use `kubectl version`):
1.14.0
Answers:
username_0: /sig kubeadm
username_1: i'm not seeing this particular problem in a local HA test setup.
but i do see a spam of:
```
I0410 21:40:51.463549 1 log.go:172] http: TLS handshake error from 172.17.0.5:51654: tls: client offered only unsupported versions: [300]
I0410 21:40:53.465046 1 log.go:172] http: TLS handshake error from 172.17.0.5:51678: tls: client offered only unsupported versions: [300]
I0410 21:40:55.467707 1 log.go:172] http: TLS handshake error from 172.17.0.5:51706: tls: client offered only unsupported versions: [300]
I0410 21:40:57.471854 1 log.go:172] http: TLS handshake error from 172.17.0.5:51736: tls: client offered only unsupported versions: [300]
I0410 21:40:59.471560 1 log.go:172] http: TLS handshake error from 172.17.0.5:51770: tls: client offered only unsupported versions: [300]
I0410 21:41:01.473058 1 log.go:172] http: TLS handshake error from 172.17.0.5:51796: tls: client offered only unsupported versions: [300]
I0410 21:41:03.474616 1 log.go:172] http: TLS handshake error from 172.17.0.5:51824: tls: client offered only unsupported versions: [300]
```
when calling:
```
kubectl logs <kube-api-server-pod> -n kube-system
```
@kubernetes/sig-cluster-lifecycle
username_0: @username_1 and what about *kubectl logs -f* .This will trigger the error.
username_1: `-f` is just the flag to follow the log stream, i don't think this will cause a `net/http: TLS handshake timeout`.
username_2: /remove-lifecycle rotten |
usnistgov/dastard | 467579547 | Title: crash on start file writing
Question:
username_0: ```
panic: runtime error: index out of range
goroutine 171 [running]:
github.com/usnistgov/dastard.(*AnySource).writeControlStart(0xc0001d4098, 0xc00021e000, 0x5, 0x1)
/home/pcuser/go/src/github.com/usnistgov/dastard/data_source.go:507 +0x1141
github.com/usnistgov/dastard.(*AnySource).WriteControl(0xc0001d4098, 0xc00021e000, 0x0, 0x0)
/home/pcuser/go/src/github.com/usnistgov/dastard/data_source.go:450 +0x4a4
github.com/usnistgov/dastard.(*SourceControl).WriteControl.func1()
/home/pcuser/go/src/github.com/usnistgov/dastard/rpc_server.go:394 +0x48
github.com/usnistgov/dastard.CoreLoop(0xbd6740, 0xc0001d4000, 0xc0001aa120)
/home/pcuser/go/src/github.com/usnistgov/dastard/data_source.go:152 +0x141
created by github.com/usnistgov/dastard.Start
/home/pcuser/go/src/github.com/usnistgov/dastard/data_source.go:134 +0x13a
exit status 2
```
Status: Issue closed |
itinero/routing | 452697702 | Title: Questionable travel time calculation
Question:
username_0: The route calculated with the same program seen in #267 seems to have the correct distance, but the reported travel time feels unusually high.
|Provider|Reported travel time|
|-|-|
|Project OSRM|[175 minutes](http://map.project-osrm.org/?z=9¢er=39.453161%2C-118.307648&loc=39.489762%2C-117.065927&loc=39.502923%2C-119.788225&hl=en&alt=0)|
|Google|[168 minutes or so](https://goo.gl/maps/VvCjJApvU3v)|
|Bing|[206 minutes **with traffic**](https://binged.it/2GcnSwY)|
|Itinero (fastest, not contracted)|254 minutes|
Is there something we can learn from [Project OSRM's own car profile](https://github.com/Project-OSRM/osrm-backend/blob/86aebc0812e7183edf2f43e6f67083bd6fbdae86/profiles/car.lua)?
Answers:
username_0: Another sample route that seems way too high (times are low because I'm using a custom `IArrayFactory` that creates `NativeMemoryMappedArray<T>` instances backed by files in a temp folder that I manually clear out, so that I don't have to micromanage my page file as much):
```csharp
// replace these in Program.cs:
const float SrcLat = 42.329434f;
const float SrcLon = -83.038549f;
const float DstLat = 33.806898f;
const float DstLon = -118.146133f;
```
```
Loading router DB from disk... done after 13.176 seconds.
Starting calculation for car (not contracted)... done after 401.520 seconds.
GeoJSON: << REDACTED, see attachment >>
Total distance: 2,480.597 miles
Total time: 2,735.515 minutes
```
Same kind of table as above:
|Provider|Reported travel time|
|-|-|
|Project OSRM|[2,100 minutes](http://map.project-osrm.org/?z=5¢er=39.283294%2C-97.976074&loc=42.329707%2C-83.037926&loc=33.807352%2C-118.146821&hl=en&alt=0)|
|Google|[1,980 minutes](https://goo.gl/maps/wJew7DQRySw)|
|Bing|[2,431 minutes **with traffic**](https://binged.it/2Gbr9wG)|
Route GeoJSON: [route.txt](https://github.com/itinero/routing/files/3263156/route.txt) |
jan-warchol/selenized | 580703985 | Title: Vim mapping colours to highlight groups
Question:
username_0: The current mapping from the colours to their highlight groups does not seem to be optimal. I would have expected Selenized to look very similar to Solarized (just with better contrast and slightly different shades of the colours), but the syntax colours are completely shuffled around.

(left: Solarized8, right: Selenized)
Also, something weird is going on with the status line.
Answers:
username_1: Hello @username_0,
this opens a very interesting topic. To be honest, I don't particularly like Solarized's choice of token-color mappings (in particular I think using green for keywords is a bad choice). However, I'm not completely satisfied with current selenized coloring and I indeed intend to change it at least a bit.
There is also the issue with number of colors available: solarized uses 8 "accent" colors, but it requires [an ugly hack that selenized aims to avoid](https://github.com/username_1/selenized/blob/master/whats-wrong-with-solarized.md#problems-with-implementation). So, full solarized compatibility is impossible (at least with coloring based on terminal's ANSI codes).
My question is: is having selenized mimic solarized's syntax coloring a must-have for you (and any other people)? Would you be okay with some kind of middle ground?
(PS I'm not talking about the status line, that one is definitely a bug.)
username_1: PS could you send me the source used to make the screenshots? I'm surprised that class name was colored green, it's supposed to be pink. I'd like to inspect it.
username_0: @username_1: Thanks for the quick response.
I think Solarized's colour mappings are generally pretty well chosen. Something approaching this at least should be welcome by most people I think.
I map Python class names to `Type` in my .vimrc because, by default, Vim does not distinguish between class names and function names. I think this is a pretty common thing to do. Unfortunately, in Selenized, `Type` has the same colour as strings.
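For reference, a minimal sketch of that kind of mapping (the match pattern is illustrative, not my exact config):
```vim
" Match the identifier after the `class` keyword and link it to Type.
autocmd Syntax python syntax match pythonClassName "\v(class\s+)@<=\h\w*"
autocmd Syntax python highlight link pythonClassName Type
```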
username_1: Ah, ok. Thanks for explaining.
username_0: Sure.
username_2: No I have not yet, been meaning to get to it this week along with `:set spell` highlights.
username_1: Okay, so I suggest the following plan:
* I'll rework highlighting of syntax groups to be more like solarized
* I like current syntax highlighting so I'll use it as a basis for another theme, most probably full-color version of [limestone](https://github.com/username_1/limestone-colors) (I need to add some screenshots there...)
* unstyled elements mentioned in #67 are almost exclusively interface elements, so I'll leave them to @username_2
* we'll need some code samples for testing. I'll setup a repo and I count on @username_0's help here :-)
username_1: @username_0 Here's the repo with samples for testing; feel free to comment or submit your own samples. https://github.com/username_1/highlighting-code-samples
username_2: I'll get on that this week.
username_1: Closed via #70, but part of the discussion is continued in #74.
Status: Issue closed
|
postmanlabs/postman-app-support | 722056567 | Title: OpenAPI Examples are not respected in generated documentation
Question:
username_0: **Describe the bug**
Postman API to Collection generation does not support examples in the objects. The strategies are described here: https://swagger.io/docs/specification/adding-examples/
**To Reproduce**
Steps to reproduce the behavior:
1. Go to "Browse"
2. Click on "Create new API" with "Open API 3.0"
3. Enter test names and open "Define"
4. A sample schema should be present. Replace the schema with
```yaml
openapi: 3.0.2
info:
title: My API
version: 0.0.0
paths:
'/users/{someOtherString}':
post:
operationId: CreateUser
parameters:
- name: someOtherString
in: path
description: property-two-examples description
required: true
schema:
type: string
examples:
property-example-one:
value: stringOne
property-example-two:
value: stringTwo
- name: someString
in: header
description: Header for XYZ ABC Purposes
required: true
schema:
type: string
examples:
property-example-one:
value: VALUE X
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/CreateUserRequest'
required: true
responses:
'201':
description: 201 response
content:
application/json:
schema:
$ref: '#/components/schemas/CreateUserResponse'
components:
schemas:
CreateUserRequest:
type: object
properties:
firstName:
type: string
lastName:
[Truncated]
required:
- firstName
- lastName
- role
```
5. Save
6. Click on "Develop"
7. Create Documentation
8. Open documentation
**Expected behavior**
The example request and response should use the example values provided in the specification.
**Screenshots**
<img width="572" alt="Screen Shot 2020-10-14 at 11 53 54 PM" src="https://user-images.githubusercontent.com/5751414/96087412-a9a59e80-0e78-11eb-8170-4e812e04de3c.png">
**App information (please complete the following information):**
App Type: Chrome App
Postman Version: Latest on browser
OS: [e.g. macOS Mojave 10.14.6]
Answers:
username_1: Hi, @username_0 Thanks for highlighting this behavior. I suppose you want to generate the `request body` as per the specified schema. There are a few steps I followed to get the desired outcome.
- Describe the `examples` in the target `schema`, like:
```
schemas:
CreateUserRequest:
type: object
properties:
firstName:
type: string
lastName:
type: string
required:
- firstName
- lastName
example:
firstName: 'mickey'
lastName: 'mouse'
```
- Select `example` at the time of `Creating a new documentation` for `Request/Response` parameter generation.
<img width="698" alt="Screenshot 2020-10-15 at 2 03 00 PM" src="https://user-images.githubusercontent.com/23141842/96098101-5e7aa280-0eef-11eb-9254-e8df986cda17.png">
- Open the `Documentation`
<img width="406" alt="Screenshot 2020-10-15 at 2 06 40 PM" src="https://user-images.githubusercontent.com/23141842/96098368-a994b580-0eef-11eb-99d1-10ab17caa2a5.png">
Hopefully this resolves your issue. Feel free to raise concerns otherwise.
username_0: Hey @username_1
Thank you for the details!
I did try that for the schema. However, I believe that is only one way of defining an example per the OpenAPI documentation. The YAML provided above has the following snippet:
```yaml
parameters:
- name: someOtherString
in: path
description: property-two-examples description
required: true
schema:
type: string
examples:
property-example-one:
value: stringOne
property-example-two:
value: stringTwo
- name: someString
in: header
description: Header for XYZ ABC Purposes
required: true
schema:
type: string
examples:
property-example-one:
value: VALUE X
```
This is a valid mechanism for defining an example as well per the OpenAPI Documentation:
**Swagger OpenAPI 3.0**: https://swagger.io/docs/specification/adding-examples/
**OpenAPI Docs**: https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.2.md#example-object-examples
Would there be a way to access those example properties to match the full spec? Is there something incorrect with the above openAPI spec? I believe it has been validated correctly as well.
username_2: This is a complicated part of the spec, for sure. This will also need consideration when we move to support OAS 3.1.x. See discussion here: https://github.com/OAI/OpenAPI-Specification/pull/2336#discussion_r490453096
username_0: Hey @username_3 / @abhijitkane was just wondering if there was any update here?
username_0: Hey @username_3 / @username_1 was just wondering if there was any update here?
username_0: @username_3 / @username_1 Checking in :)
username_3: @username_0 No updates yet. Will get back to you on this as soon as there's an update
username_4: Heads up, examples in OpenAPI are bonkers and there's a lot of similar but different approaches. https://phil.tech/2020/openapi-examples/ |
influxdata/telegraf | 200387776 | Title: Change prefix strategy for input plugins
Question:
username_0: Hi
Input plugins can add a prefix before the metric name.
The metric name is transformed to prefix_name.
When I override the prefix with an empty string via "name_prefix", the underscore is not deleted and the metric name becomes: _name
It is not very clean.
Furthermore, I am using Dropwizard metrics and Grafana; if I change my collection strategy from Jolokia to the metrics OpenTSDB exporter, for example, all my Grafana templates break, because Dropwizard metrics work only with dots.
It would be nice to align the syntax and use only dots, not underscores.
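For illustration, this is the kind of override I mean (the plugin section here is just an example):
```toml
[[inputs.jolokia]]
  # Overriding the default prefix with an empty string still leaves
  # the separator behind, so metrics come out as "_name".
  name_prefix = ""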
Status: Issue closed
Answers:
username_1: Please only request a single feature in a single issue.
Also please use the template because I don't understand what you're asking for exactly otherwise. |
mitodl/open-discussions | 421584352 | Title: When removing a post from the post list view, the post isn't removed
Question:
username_0: ### Steps to Reproduce
1. Make a post with a non-moderator user
2. Log out and log in as a moderator
3. Remove the post
### Expected Behavior
The list of posts should refresh and the removed post should be gone.
### Actual Behavior
The removed post remains until the whole page is refreshed.
### Related Issues
A UX issue: moderators are getting confused between the meaning of "remove" and "delete". Perhaps we should display both options on the same posts. In other words, we shouldn't give moderators the option to remove their own posts. And we should allow all users to delete their own posts from the channel listing page.
(Optional)<issue_closed>
Status: Issue closed |
Josscii/micro-blog | 842528019 | Title: 2021-03-27
Question:
username_0: Over the past few days I finished the Flexbox Zombies course in scattered sessions of learning (and playing). A few thoughts 👇:
1. In essence, it deepens your memory through constant practice until you reach proficiency.
2. The story and game flow ease the tedium of the practice process, and the graphics probably also help with memorization.
3. The adaptation to small-screen devices is poor; on a 13-inch MBP, all the vertical relationships basically had to be guessed.
4. The game's guidance is in English, but the subtitles cannot be selected for dictionary lookup; there were many words I did not know, so some parts were not very clear and I only got the general idea.
5. After finishing, I have some muscle memory for flexbox layout, as practice makes perfect, but as far as I know there are still quite a few flexbox techniques left to learn.
https://mastery.games/flexboxzombies/ |
leo424y/heysiri.ml | 775344062 | Title: With Google, you can search with site:yourapp.com to check whether the parts you don't want indexed have already been indexed.
Question:
username_0: With Google, you can search with site:yourapp.com to check whether the parts you don't want indexed have already been indexed. |
PennyLaneAI/pennylane | 1140723819 | Title: [BUG] Multi GPU Usage Breaks qml.qnn.TorchLayer
Question:
username_0: Name: PennyLane
Version: 0.21.0
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/XanaduAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Location: /home/miniconda3/envs/py9/lib/python3.9/site-packages
Requires: autoray, retworkx, cachetools, semantic-version, scipy, pennylane-lightning, networkx, numpy, toml, appdirs, autograd
Required-by: PennyLane-Lightning
Platform info: Linux-4.18.0-348.7.1.el8_5.x86_64-x86_64-with-glibc2.28
Python version: 3.9.7
Numpy version: 1.22.2
Scipy version: 1.8.0
Installed devices:
- default.gaussian (PennyLane-0.21.0)
- default.mixed (PennyLane-0.21.0)
- default.qubit (PennyLane-0.21.0)
- default.qubit.autograd (PennyLane-0.21.0)
- default.qubit.jax (PennyLane-0.21.0)
- default.qubit.tf (PennyLane-0.21.0)
- default.qubit.torch (PennyLane-0.21.0)
- lightning.qubit (PennyLane-Lightning-0.21.0)
```
### Existing GitHub issues
- [X] I have searched existing GitHub issues to make sure the issue does not already exist.
Answers:
username_1: Hi @username_0 thanks a lot for filing this report! :slightly_smiling_face:
At the moment it's a bit challenging for us, unfortunately, to get devices with multiple GPUs and it can take a bit of time for us to investigate the issue here more deeply. Would you be interested in helping us with narrowing down where the issue could be and potentially with the fix?
Let us know and we could provide more pointers. :slightly_smiling_face:
username_2: Does it work if you explicitly limit all computations to a single GPU?
Try replacing
```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```
with
```python
device = torch.device("cuda:0")
```
username_0: Hi @username_1 I'm more than happy to run code when I can, however my help may be limited on this topic as I'm not a CUDA expert (though I'm sure I'll learn more as this bug gets resolved).
username_0: @username_2 :
I did two things:
1) Running:
```
#device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device = torch.device("cuda:0")
```
yields
```
Let's use 4 GPUs!
Traceback (most recent call last):
File "/home/GPU_PL_error_lightning.py", line 81, in <module>
loss_evaluated = loss(model(xs), ys)
File "/home/miniconda3/envs/py9/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/miniconda3/envs/py9/lib/python3.9/site-packages/torch/nn/parallel/data_parallel.py", line 168, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/home/miniconda3/envs/py9/lib/python3.9/site-packages/torch/nn/parallel/data_parallel.py", line 178, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/home/miniconda3/envs/py9/lib/python3.9/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
output.reraise()
File "/home/miniconda3/envs/py9/lib/python3.9/site-packages/torch/_utils.py", line 434, in reraise
raise exception
RuntimeError: Caught RuntimeError in replica 1 on device 1.
Original Traceback (most recent call last):
File "/home/miniconda3/envs/py9/lib/python3.9/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/home/miniconda3/envs/py9/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/miniconda3/envs/py9/lib/python3.9/site-packages/torch/nn/modules/container.py", line 141, in forward
input = module(input)
File "/home/miniconda3/envs/py9/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/miniconda3/envs/py9/lib/python3.9/site-packages/pennylane/qnn/torch.py", line 277, in forward
reconstructor.append(self.forward(x))
File "/home/miniconda3/envs/py9/lib/python3.9/site-packages/pennylane/qnn/torch.py", line 281, in forward
return self._evaluate_qnode(inputs)
File "/home/miniconda3/envs/py9/lib/python3.9/site-packages/pennylane/qnn/torch.py", line 296, in _evaluate_qnode
return self.qnode(**kwargs).type(x.dtype)
File "/home/miniconda3/envs/py9/lib/python3.9/site-packages/pennylane/qnode.py", line 560, in __call__
res = qml.execute(
File "/home/miniconda3/envs/py9/lib/python3.9/site-packages/pennylane/interfaces/batch/__init__.py", line 342, in execute
cache_execute(batch_execute, cache, return_tuple=False, expand_fn=expand_fn)(tapes)
File "/home/miniconda3/envs/py9/lib/python3.9/site-packages/pennylane/interfaces/batch/__init__.py", line 173, in wrapper
res = fn(execution_tapes.values(), **kwargs)
File "/home/miniconda3/envs/py9/lib/python3.9/site-packages/pennylane/interfaces/batch/__init__.py", line 125, in fn
return original_fn(tapes, **kwargs)
File "/home/miniconda3/envs/py9/lib/python3.9/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/miniconda3/envs/py9/lib/python3.9/site-packages/pennylane/_qubit_device.py", line 289, in batch_execute
res = self.execute(circuit)
File "/home/miniconda3/envs/py9/lib/python3.9/site-packages/pennylane/devices/default_qubit_torch.py", line 233, in execute
return super().execute(circuit, **kwargs)
File "/home/miniconda3/envs/py9/lib/python3.9/site-packages/pennylane/_qubit_device.py", line 201, in execute
self.apply(circuit.operations, rotations=circuit.diagonalizing_gates, **kwargs)
File "/home/miniconda3/envs/py9/lib/python3.9/site-packages/pennylane/devices/default_qubit.py", line 216, in apply
self._state = self._apply_operation(self._state, operation)
[Truncated]
File "/home/miniconda3/envs/py9/lib/python3.9/site-packages/torch/functional.py", line 327, in einsum
return _VF.einsum(equation, operands) # type: ignore[attr-defined]
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:2 and cuda:1! (when checking argument for argument mat2 in method wrapper__bmm)
```
2) Running
```python
#if torch.cuda.device_count() > 1:
#    print("Let's use", torch.cuda.device_count(), "GPUs!")
#    # dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
#    model = torch.nn.DataParallel(model)
#device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device = torch.device("cuda:0")
```
yields the desired output:
```
Average loss over epoch 1: 0.4943
Average loss over epoch 2: 0.4226
Accuracy: 75.5%
```
username_1: Hi @username_0, we had more of a look into this and it looks like a fix could take time as it would require explicit support for `torch.nn.DataParallel`.
The issue is related to the fact that `torch.nn.DataParallel` will attempt to access the state in the PennyLane device from multiple GPUs in parallel. When using `default.qubit` with `diff_method="backprop"`, we are using the native Torch device `default.qubit.torch` internally. This device assumes that device executions happen sequentially.
The [`execute` method of the device](https://github.com/PennyLaneAI/pennylane/blob/master/pennylane/devices/default_qubit_torch.py#L205) does the transition in between Torch devices, if necessary. It infers the Torch device to use based by checking what Torch device the input parameters to gates were using.
The steps in `execute` can be summarized as:
1. We gather the operations and observables in the circuit;
2. We check: did the user specifiy the Torch device explicitly?
a) [If not](https://github.com/PennyLaneAI/pennylane/blob/master/pennylane/devices/default_qubit_torch.py#L210), we check which Torch device the state vector is on and if need be, we place it to the Torch device where the gate parameters are;
b) [If yes](https://github.com/PennyLaneAI/pennylane/blob/e1b15d27cc5a4a882b552c936580dd903fe3b988/pennylane/devices/default_qubit_torch.py#L217), we warn the user in case they are mixing Torch devices;
3. [We execute the circuit](https://github.com/PennyLaneAI/pennylane/blob/master/pennylane/devices/default_qubit_torch.py#L233).
The error that we see comes from the fact that step 2.a) may be executed more than once before 3 is executed once.
A basic logging was carried out by modifying step 2.a) as:
```python
if self._state.device != self._torch_device:
print("Changing the state device from:", self._state.device)
print("Changing the state device to:", self._torch_device)
self._state = self._state.to(self._torch_device)
```
And adding a `try`-`except` block around the execution:
```python
try:
sup = super().execute(circuit, **kwargs)
except RuntimeError:
print("Error at op: ", ops_and_obs)
return sup
```
The raw log is:
```
Let's use 4 GPUs!
Changing the state device from: cpu
Changing the state device to: cuda:0
Changing the state device from: cuda:0
Changing the state device to: cuda:1
Changing the state device from: cuda:1
Changing the state device to: cuda:2
Error at op: [RX(tensor(0.6368, device='cuda:0', grad_fn=<SelectBackward0>), wires=[0]), RX(tensor(0.1965, device='cuda:0', grad_fn=<SelectBackward0>), wires=[1]), RX(tensor(5.5435, device='cuda:0', grad_fn=<SelectBackward0>), wires=[0]), RX(tensor(5.7491, device='cuda:0', grad_fn=<SelectBackward0>), wires=[1]), CNOT(wires=[0, 1]), RX(tensor(2.4056, device='cuda:0', grad_fn=<SelectBackward0>), wires=[0]), RX(tensor(6.0275, device='cuda:0', grad_fn=<SelectBackward0>), wires=[1]), CNOT(wires=[0, 1]), RX(tensor(2.4533, device='cuda:0', grad_fn=<SelectBackward0>), wires=[0]), RX(tensor(3.7755, device='cuda:0', grad_fn=<SelectBackward0>), wires=[1]), CNOT(wires=[0, 1]), RX(tensor(1.6121, device='cuda:0', grad_fn=<SelectBackward0>), wires=[0]), RX(tensor(4.9866, device='cuda:0', grad_fn=<SelectBackward0>), wires=[1]), CNOT(wires=[0, 1]), RX(tensor(5.9110, device='cuda:0', grad_fn=<SelectBackward0>), wires=[0]), RX(tensor(0.8368, device='cuda:0', grad_fn=<SelectBackward0>), wires=[1]), CNOT(wires=[0, 1]), RX(tensor(5.8723, device='cuda:0', grad_fn=<SelectBackward0>), wires=[0]), RX(tensor(3.7296, device='cuda:0', grad_fn=<SelectBackward0>), wires=[1]), CNOT(wires=[0, 1]), expval(PauliZ(wires=[0])), expval(PauliZ(wires=[1]))]
Changing the state device from: cuda:2
Changing the state device to: cuda:1
```
What this seems to tell is that we've changed the device of the state to `cuda:2` by the time `super().execute(circuit, **kwargs)` has been called using parameters on `cuda:0`.
Potential solutions could include:
* Copying the underlying state or even the device itself (at the expense of having it multiple times in the memory) such that there's a separate state vector;
* Creating a lock such that the logic in `execute` can only be run by a single GPU at once. This does seem to defeat the advantage of parallelization.
Could potentially `torch.nn.parallel.DistributedDataParallel` be a solution? PyTorch seems to recommend using that. |
StanfordHCI/bang | 485573748 | Title: Update user chat display settings based on pre-survey
Question:
username_0: Context:
We currently have the functionality to deliver a pre-survey to users prior to the start of all rounds.
Change:
Based on the user's pre-survey answers we want to modify the display of their chat presence across all rounds of the batch.
Specific modification:
Specifically, we'd like to make the user's avatar gendered based on the user’s answer to a gender pre-survey question.
Answers:
username_0: @username_1 looking at https://bang-dev.username_1.ru/ I see the avatars checkbox in the batches page, but I don't see the pre-survey functionality in the template. The avatar genders are supposed to be chosen based on a gender question in the pre-survey of a template so I'm not sure how the functionality works. Could you please explain? Thanks!
username_1: Hi! As previously mentioned, there is a specific question:
Status: Issue closed
username_0: Gotcha thanks! that's clear. Now I think we just need to see the pre-survey in the template! |
Taiki-Ishigaki/robotics-paper | 546186194 | Title: Capture Point: A Step toward Humanoid Push Recovery
Question:
username_0: ## Summary
For stabilization control against disturbances, the authors investigated how the Capture Region changes under the torque and angle limits imposed when controlling the flywheel of a flywheel-equipped LIPM, a model chosen so that angular momentum can be taken into account.
## Paper information
author : <NAME>, <NAME>, <NAME>, <NAME>
conference or journal : 2006 6th IEEE-RAS International Conference on Humanoid Robots
link : https://ieeexplore.ieee.org/document/4115602
## Robot information
A bipedal walking robot with a flywheel (simulation)
## Problems with existing methods, and the response
For humanoid robots, it is difficult to compute the Capture Region, the set of capture points (foot placement points that let the robot come to a complete stop). It can be computed for the LIPM, but that model cannot account for the changes in angular momentum caused by arm swing, leaning forward, and so on.
→ The authors studied how the Capture Region changes for a LIPM with an attached flywheel, taking the flywheel's angle and torque limits into account.
## Key points of the technique and method
They showed how the capture point region changes when the flywheel's angle or angular velocity changes abruptly.
Taking bang-bang control as the flywheel control law, they derived the transfer function from the equations of motion of the flywheel-equipped LIPM and obtained the Capture Region.
They replaced the variables of the equations of motion and the torque and angle limits with nondimensionalized variables, which makes it possible to compare models based on the nondimensionalized torque and angle limit values.
## Validation method
They showed that increasing the torque and angle limits enlarges the Capture Region.
They ran simulations with a simple flywheel-equipped bipedal walking model for
* the case where a disturbance causes an abrupt change in the CoM velocity
* single-support walking
and showed in each experiment that controlling the flywheel makes the orbital energy converge close to zero.
## Things to consider
The effect when the CoM height changes.
What happens if the flywheel control law is changed from bang-bang to something else. |
ccxt/ccxt | 661120481 | Title: Error on binance.fetchOrders, 'dict' object has no attribute 'encode'
Question:
username_0: I'm getting the error ``` 'dict' object has no attribute 'encode' ``` while using fetchOrders on Binance. Same with fetchOpenOrders, fetchClosedOrders, fetchTrades. I can however query my balance through fetchBalance, which indicates that there's nothing wrong with my API keys. I usually have my symbol saved in a variable `symbol`, whose type is 'str', not 'dict'.
Am I missing something obvious like misreading the error or is the problem something else?
- OS: MacOS Catalina
- Programming Language version: Python 3.8
- CCXT version: 1.31.54
```
order = ccxtClient.fetch_orders('BNB/BTC')
print(order)
```
```
File "/Users/davidbjorkstrom/Doc/CODEZ/PYTHON/SPYDER/SPYDERBOT/Binance-Trailing-Stop-Loss-master/CCxtBreadMaker.py", line 108, in <module>
order= ccxtClient.fetch_orders('BNB/BTC')
File "/Users/db/opt/anaconda3/envs/ccxtBreadMaker/lib/python3.8/site-packages/ccxt/binance.py", line 1310, in fetch_orders
response = getattr(self, method)(self.extend(request, query))
File "/Users/db/opt/anaconda3/envs/ccxtBreadMaker/lib/python3.8/site-packages/ccxt/base/exchange.py", line 463, in inner
return entry(_self, **inner_kwargs)
File "/Users/db/opt/anaconda3/envs/ccxtBreadMaker/lib/python3.8/site-packages/ccxt/binance.py", line 1994, in request
response = self.fetch2(path, api, method, params, headers, body)
File "/Users/db/opt/anaconda3/envs/ccxtBreadMaker/lib/python3.8/site-packages/ccxt/base/exchange.py", line 481, in fetch2
request = self.sign(path, api, method, params, headers, body)
File "/Users/db/opt/anaconda3/envs/ccxtBreadMaker/lib/python3.8/site-packages/ccxt/binance.py", line 1925, in sign
signature = self.hmac(self.encode(query), self.encode(self.secret))
File "/Users/db/opt/anaconda3/envs/ccxtBreadMaker/lib/python3.8/site-packages/ccxt/base/exchange.py", line 1197, in encode
return string.encode()
AttributeError: 'dict' object has no attribute 'encode'
```
Answers:
username_1: @username_0 please follow the docs very carefully:
- https://github.com/ccxt/ccxt/wiki/FAQ#what-is-required-to-get-help
- https://github.com/ccxt/ccxt/wiki/Manual#authentication
The error says that your `apiKey` and `secret` should be strings. You're passing a `dict` instead.
Your code for initializing the exchange should look like this:
```Python
import ccxt
exchange = ccxt.binance({
'apiKey': 'YOUR_API_KEY',
'secret': 'YOUR_SECRET',
'enableRateLimit': True,
})
symbol = 'BNB/BTC'
order = exchange.fetch_orders(symbol)
print(order)
```
I would highly recommend reading the entire Manual from top to bottom at least once, and following it very carefully, literally to the word – that will save your time, really.
Let us know if the above does not answer your question.
Status: Issue closed
username_0: Hi!
I've read the docs in search of answers and have initialised my exchange as above. But when I changed the place from where I import my keys, it started working! So thanks a lot for pointing that out! I'm fairly new to this.
username_1: @username_0 thx for reporting back, good to know that you've figured it out. |
deanpettinga/rnaseq_workflow | 484554280 | Title: ensembl mart always human
Question:
username_0: need to incorporate the config.yaml species information into the .Rmd so it will download the correct mart for annotation (ens2gene, etc.)
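A minimal sketch of the idea, assuming the species is stored under a `species` key in config.yaml and maps directly onto an Ensembl dataset name:
```r
# Read the species from config.yaml and build the Ensembl dataset name.
library(yaml)
library(biomaRt)

config <- yaml::read_yaml("config.yaml")
# e.g. species: "mmusculus" gives the dataset "mmusculus_gene_ensembl"
mart <- useMart("ensembl", dataset = paste0(config$species, "_gene_ensembl"))
ens2gene <- getBM(attributes = c("ensembl_gene_id", "external_gene_name"),
                  mart = mart)
```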
Answers:
username_0: this issue will be nullified by a new script that runs edgeR utilizing the [vari-bbc/bbcRNA](https://github.com/vari-bbc/bbcRNA) R package
Status: Issue closed
|
FlowCrypt/flowcrypt-android | 1116068374 | Title: version 1.2.8
Question:
username_0: @username_1 I think we are ready to make a new release. Just need to review the current issues for 1.2.8 on your side. While you are doing that I'm working on some issues for 1.2.9 and will move some of them(completed) to 1.2.8 if needed.
Answers:
username_1: I'll go ahead and do a release now
Status: Issue closed
|
microsoft/fluentui | 1143883439 | Title: Implement SpinButton component
Question:
username_0: 1. [ ] Implement behavior according to technical spec
2. [ ] Styling according to Figma design spec
1. [ ] High Contrast
3. [ ] Add jest tests
4. [ ] Add Cypress tests (for focus behavior)
5. [ ] Accessibility checklist
1. [ ] Resolve any blockers
Answers:
username_0: Things to consider:
1. [ ] Use `react-input` for the `input` slot of `SpinButton`. Seems technically possible and the viability of this approach likely depends on how SpinButton will look relative to `react-input`. |
AvaloniaUI/Avalonia | 1129060116 | Title: System.Numerics.Vector Bug not loading in applications that also reference System.Numerics.Vector properly(strongly named assignment problem)
Question:
username_0: **Describe the bug**
We have several applications whose UI is built with Avalonia; we develop plugins for other software and use Avalonia to pop up our own UI. Recently we've run into a referencing problem with Avalonia and System.Numerics.Vectors.dll: the assembly is strongly named and the reference fails to load. We have resolved this for some other connectors but cannot resolve it for this app, and we have tried multiple solutions online to no avail. The programs themselves do not have a dependency on System.Numerics.
**To Reproduce**
Steps to reproduce the behavior:
1. Download SAP2000 from [CSI products](https://www.csiamerica.com/products/sap2000)
2. Pull our speckle-sharp repo and switch to the structural SAP branch
3. Debug with the executable launching and with the startup project set to ConnectorSAP2000v23
4. Put a breakpoint around the exception in the
**Expected behavior**
The assembly should be force-loaded properly without exceptions, and the UI should pop up accordingly.
**Screenshots**
<img width="1267" alt="SAP2000PluginException" src="https://user-images.githubusercontent.com/29717150/153296115-fde13964-8749-4171-89f5-a7dc19e4691b.png">
**Desktop (please complete the following information):**
- OS: Windows
- Version [e.g. 0.10.0-rc1 or 0.9.12]
**Additional context**
Add any other context about the problem here.
Answers:
username_1: It seems that your application has already loaded a custom-built version of System.Numerics.Vectors that doesn't have a strong name.
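If it helps while a proper fix is found, a hypothetical workaround is to hook assembly resolution in the plugin host and redirect the probe to a known-good copy (the path handling and the failing assembly identity below are assumptions):
```csharp
using System;
using System.IO;
using System.Reflection;

// Redirect failed probes for System.Numerics.Vectors to the copy
// shipped next to the plugin assembly.
static class AssemblyRedirect
{
    public static void Install()
    {
        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            var requested = new AssemblyName(args.Name);
            if (requested.Name != "System.Numerics.Vectors")
                return null;

            var pluginDir = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
            var candidate = Path.Combine(pluginDir ?? ".", "System.Numerics.Vectors.dll");
            return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
        };
    }
}
```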
username_0: 
username_2: As far as I remember, this package isn’t strongly named only because ReactiveUI is not.
If you need strong naming, you can’t use reactive ui.
I don’t know how is it related to this issue though. |
RJTveit/Crossroads | 714465340 | Title: Revise database and data model
Question:
username_0: - [ ] Required to revise assigned primary and foreign key constraints
- [ ] Edit data model to link all tables
- [ ] Populate database with 10 entries per table
- [ ] Create database for image and video file storage<issue_closed>
Status: Issue closed |
apache/iceberg | 686085005 | Title: Flink: Support Flink streaming reading
Question:
username_0: Flink is famous for its streaming computation.
- Iceberg can serve as a message bus for stream computing; even at near-real-time latency, it can meet many requirements.
- Compared with Kafka, Iceberg can keep all the historical data, while Kafka only keeps the data of recent days. Ad-hoc queries can access all the historical data. Moreover, Iceberg has efficient query performance and storage efficiency.
After https://github.com/apache/iceberg/pull/1346 , it is easy to build Flink streaming reading based on it.
Unlike Spark, Flink streaming continuously monitors new files of the table and directly sends the splits to operators. The source doesn't need to take care of micro-batch size, because the operator stores incoming splits into state and consumes them one by one.
Monitor ----(Splits)-----> ReaderOperator
Monitor (Single task):
- Monitoring snapshots of the Iceberg table.
- Creating the splits corresponding to the incremental files using `FlinkSplitGenerator`. (Actually using `TableScan.appendsBetween`).
- Assigning them to downstream tasks for further processing.
ReaderOperator (multiple tasks):
- Put received splits into state (A splits queue).
- Read splits using `FlinkInputFormat` in a checkpoint cycle.
- Let the main thread complete the snapshot for checkpoint.
- After that, the task continues to consume the remaining splits in the state.
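For illustration, here is a rough sketch of the monitor described above as a single-parallelism source. The class name, the `FlinkInputSplit` type and the `FlinkSplitGenerator.incrementalSplits` call follow the names mentioned above, but their exact signatures are assumptions, not a final API:
```java
import org.apache.flink.streaming.api.functions.source.SourceFunction;
import org.apache.iceberg.Snapshot;
import org.apache.iceberg.Table;

public class MonitorFunction implements SourceFunction<FlinkInputSplit> {

    private final Table table;
    private Long lastSnapshotId; // checkpointed state in a real implementation
    private volatile boolean running = true;

    public MonitorFunction(Table table) {
        this.table = table;
    }

    @Override
    public void run(SourceFunction.SourceContext<FlinkInputSplit> ctx) throws Exception {
        while (running) {
            table.refresh();
            Snapshot current = table.currentSnapshot();
            if (current != null
                    && (lastSnapshotId == null || current.snapshotId() != lastSnapshotId)) {
                // Plan splits only for the files appended since the last snapshot.
                for (FlinkInputSplit split : FlinkSplitGenerator.incrementalSplits(
                        table, lastSnapshotId, current.snapshotId())) {
                    ctx.collect(split);
                }
                lastSnapshotId = current.snapshotId();
            }
            Thread.sleep(10_000L); // monitor interval
        }
    }

    @Override
    public void cancel() {
        running = false;
    }
}
```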
Answers:
username_1: @username_0 we should use the new FLIP-27 source interface, right?
* enumerator/monitor runs in jobmanager
* enumerator tracks discovered splits
* enumerator assigns a split when a readers requests for one
We probably don't want the enumerator to statically assign all discovered splits up front. Dynamic assignment is better for load balancing with straggler/outlier reader nodes.
username_0: Hi @username_1 , yes, the advantage is that the assignment will be more dynamically balanced.
It depends on the progress of FLIP-27.
We are trying to implement Filesystem/Hive source on FLIP-27 in FLINK 1.12. And in order to achieve this goal, we are modifying the interfaces of FLIP-27 too. (FLIP-27 in Flink 1.11 is not ready)
If the time is not urgent, we can wait for FLINK 1.12.
username_2: @username_1 , we have implemented an internal version of the Flink streaming reader, which is not built on top of FLIP-27 for now. Here is the pull request https://github.com/generic-datalake/iceberg-poc/pull/3/files for our own branch. As Jingsong described, once FLIP-27 is ready, we'd be happy to switch the current implementation to FLIP-27.
username_1: @username_0 @username_2 thx. We are currently implementing an Iceberg source based on the FLIP-27 interface. Our initial goal is for backfill purposes. It is bounded but with streaming behavior, meaning app code stays with the DataStream API, just switching the source from Kafka to Iceberg. We are also very interested in the streaming/continuous read pattern. It is not urgent. We can probably collaborate. Would love to see building blocks being pushed upstream slowly.
username_1: regarding `TableScan.appendsBetween`, we might need more flexibility of fine-grained control. E.g. if Flink job is lagging behind or bootstrap from an old snapshot, we probably don't want to eagerly plan all the unconsumed `FileScanTask`. That might blow up Flink checkpoint state if the enumerated list of `FileScanTask` is too big.
I am thinking about two level of enumerations to keep the enumerator memory footprint in check.
* first, enumerate the list of unconsumed `DataOperations.APPEND` snapshots. It is cheap to track and checkpoint this list
* second, enumerate `FileScanTask` up to a configurable number of oldest snapshots (e.g. 6) from the first step
if the job is keeping up with the ingestion, we should only have one unconsumed snapshot.
username_2: @username_1 , what is the maximum size of a table in your production environment? I'm thinking about whether it's worth implementing the two-phase enumerators in the first version.
If we have 1PB of data and each file has a size of 128MB, then we will have 8388608 files. If every `FileScanTask` consumes 1KB, then the state is ~8GB. That should be acceptable for the Flink state backend.
username_0: Hi @username_1 , about `TableScan.appendsBetween`, we can limit the snapshot number of scan, even scan only one at a time. Because `TableScan.appendsBetween` seems to be just a combination of single incremental snapshots, we can handle only one snapshot at a time.
- For the FLIP-27 source, it is easy to do, because tasks come to ask for splits, and within the coordinator, the coordinator can completely control how the splits are generated.
- For the old API, the downstream reading operator should have a max split queue size to back-pressure the enumerator, and then do the above.
username_1: I am mainly talking about in the context of FLIP-27 source where enumerator runs in jobmanager and needs to track the unconsumed splits.
@username_2 note that this is not keyed state (like after keyBy user_id) etc. 8 GB of operator state can be problematic. I vaguely remember RocksDB can't handle a list larger than 1 GB. The bigger the list, the slower it gets.
@username_0 how would the enumerator/coordinator track which snapshot is planned/enumerated using? maybe I didn't understand how to use `TableScan.appendsBetween`. I was thinking the idea was to run `planFiles` or `planTasks` between last planned snapshot and the latest table snapshot. that is what I was referring earlier as eager discovery/planning of all unconsumed splits. how are we scanning one snapshot at a time?
I was mainly thinking about using `TableScan.useSnapshot(long snapshotId)`. we can use a snapshot blocking queue (with configurable size) to back pressure the enumerator.
username_0: NIT: I think we still need to use `appendsBetween(snapshot-1, snapshot)` since we want to get incremental data.
True, I think this is easy to do. |
jokergoo/ComplexHeatmap | 797437175 | Title: Save complexheatmap
Question:
username_0: Hi ,
I would like to export the ComplexHeatmap as text or CSV for further analysis.
Which is the best way?
Thanks a lot in advance!
G

Answers:
username_1: I think what you want is to save a reordered table where the order is the same as in the heatmap, right?
You can get the new orders by:
```
ht = Heatmap(mat, ...)
ht = draw(ht) ## this draws the heatmap
row_order = row_order(ht)
column_order = column_order(ht)
# since you applied splitting on rows, `row_order` is a list, thus you need to unlist it
row_order = unlist(row_order)
# reorder `mat`
mat2 = mat[row_order, column_order]
write.csv(mat2, ...)
```
username_0: ABSOLUTELY THANKS A LOT,
IT WORKS!
G
Status: Issue closed
|
redux-observable/redux-observable | 339547923 | Title: Updating with-redux-observable example in nextjs repo.
Question:
username_0: I'm working on updating the `with-redux-observable` example in the NextJS repository. I would like some comments or a green light from you guys. Most of the things stay the same and I'm just resolving breaking changes; however, there is one thing that I'm not sure of, which is [creating StateObservable](https://github.com/username_0/next.js/blob/feature/update-with-redux-observable-example/examples/with-redux-observable/pages/index.js). I used a solution from one of the issues here to do it.
Here is a link to my changes: https://github.com/username_0/next.js/tree/feature/update-with-redux-observable-example/examples/with-redux-observable
Answers:
username_1: Cool! It's not clear why you're calling the rootEpic directly inside your Counter component. Can you clarify the intent?
username_0: It is called inside `getInitialProps`, which is a hook that allows fetching initial data on the server.
username_1: @username_0 Gotcha. The idiomatic pattern of redux-observable is to have your components and epics have no direct knowledge of each other, only communicating via dispatched actions. Calling an epic directly in your component wouldn't be a pattern I recommend.
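For illustration, a hedged sketch of that pattern in a Next.js page (the `fetchCharacters` action creator and the state shape are made up):
```js
// Dispatch an action the epic listens for, then wait for the store to
// reflect the success action before rendering on the server.
class CounterPage extends React.Component {
  static async getInitialProps({ store }) {
    const done = new Promise((resolve) => {
      const unsubscribe = store.subscribe(() => {
        if (store.getState().characters.fetched) { // assumed state shape
          unsubscribe()
          resolve()
        }
      })
    })
    store.dispatch(fetchCharacters()) // the epic reacts to this action
    await done
    return {}
  }
}
```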
username_0: @username_1 I see this approach is not ideal. Though, that's the original idea of the example to somehow make use of `getInitialProps` to fetch the data in the server... Do you have a better idea how to accomplish something like that? As of redux-saga there is a library that does it: https://github.com/bmealhouse/next-redux-saga. Maybe it is possible to do something similar... |
sportstimes/f1 | 629907182 | Title: Can you add multilang?
Question:
username_0: Can you add multilang?
Answers:
username_1: What languages would you like to support? We'd need to crowd-source the translations as well which could be difficult to coordinate.
username_0: Russian
username_1: Would you be able to help provide translations if we can work out a solution?
username_1: 
Looking at recent analytics, the most popular local languages are en-GB, en-US, nl-NL and pt-PT.
username_0: Yes of course
username_1: Let's see what @username_2 has to think on it as he's the brains behind the latest implementation.
username_2: @username_0 @username_1 Will have a think and do some research on how best to approach this today.
Would it be useful if the race names were also localized or just the app itself?
username_2: Will keep working on it but it should be fairly easy. I'll get the string files created soon so they can be populated while I continue to add support to the rest of the site. https://share.buffer.com/ApuA6jYj
username_0: I think that the name of the race should also be localized.
username_0: How can the language be selected now if I create locale/ru files? Maybe a drop-down menu? Country flags? Or automatically by timezone, or maybe use https://developer.mozilla.org/ru/docs/Web/API/NavigatorLanguage/language
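For what it's worth, a tiny detection sketch (the supported list and the fallback are assumptions):
```js
// Map the browser language onto one of the site's locales.
const supported = ['en', 'ru']
const detectLanguage = () => {
  const lang = (navigator.language || 'en').split('-')[0].toLowerCase()
  return supported.includes(lang) ? lang : 'en'
}
```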
username_2: @username_0 For now it’ll be a drop down. I’ve pushed that in the i18n branch. It’ll show in the top right when more than 1 language is available.
If you have the files for RU I can merge them in and it should then configure the app with both EN and RU. Later on we can look at auto detection.
For now the site will have English at / and /en and any additional languages served in /ru /de /es directories etc.
I’m looking at making the generated calendar files localized next.
username_0: In components\OptionsBar.js - string 88 (Showing times for) - not translated for common.json
username_2: Resolved in latest commits 👍
username_0: Maybe need translate - timezone too?
username_2: For now we'll leave those out, I'm looking into the libraries we use for date formatting and the timezone picker to enable localization based on the current selected language. 👍
username_0: Hmm, I tried generating in Russian, but there are no localized names in the ics and the other calendar files.
username_2: That’s the next step :) Hoping to get working on that today. |
spirit-js/spirit | 215205460 | Title: Question: How is the API of Spirit better than koa2?
Question:
username_0: I really appreciate the nice little project over here. But besides performance, isn't the API of spirit essentially similar to koa2? In spirit, middleware looks like
`(handler) => (request) => handler(request)`
which is similar to, though smoother than, koa2's
`(ctx/* == request*/, next) => next()`
I can't see advantages in the route definition, since if you're returning a response map, you're transmitting HTTP information too.
e.g. a route in spirit might be
```js
route .define([
  route .get ("/", [], () => { return "Hello World!" })
])
```
whereas in koa2 i might do
```
var route = () => { return "Hello World!" };
app .use ((ctx, next) => { ctx .body = route (/*or whatever dependencies need be injected*/); return next (); })
```
I really enjoy the simple API of spirit, but I feel it might have made a bad distinction between routes and middleware; when routes have to return response maps, I feel that would've more elegantly been made as a middleware by default. Even more impure seems to be the dependency injection in the request; once again I feel like it's the job of a middleware. If spirit aims to be pure, let's go all pure!
Answers:
username_1: I don't really like the middleware part, but for me personally the reason to use spirit is that it's predictable, easily testable and "without magic". I usually use the `return {status, body, headers}` variant of the API since IMO it's the most obvious way to respond to an HTTP request.
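For example, a route written in that style (the keys follow spirit's documented response map):
```js
const route = require("spirit-router")

// A handler that returns a full response map instead of a bare string.
const hello = () => ({
  status: 200,
  headers: { "Content-Type": "text/plain" },
  body: "Hello World!"
})

route.define([
  route.get("/", [], hello)
])
```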
username_2: I don't think you should feel that way. I don't think of 'response maps' as a `res` object in express or koa, I think of it as just data being passed. Since it isn't tied directly to `res` or some context, you have a lot of options to refactor and keep things DRY depending on how you want it.
Hope it helps, sorry for it being long.
username_1: This isn't true anymore, `koa` now just passes `ctx` to all functions. |
Kodcentrum/Scratchuppgifter-v3 | 437674913 | Title: Fotbollspelet (the football game) - simplify the instruction for drawing the goal?
Question:
username_0: Fotbollspelet feels a bit needlessly fiddly when it comes to performing color detection when a goal is scored: different shades of white are hard to tell apart. This could be done more simply with, for example, a different color or another solution. Correct this in the instructions? |
pgjdbc/pgjdbc | 294441843 | Title: allowEncodingChanges not working in 42.2.x
Question:
username_0: Our application uses SQL_ASCII encoding. With versions 41.1.4 and earlier we were able to add "?allowEncodingChanges=true" to the connection parameters and "set client_encoding = sql_ascii" in our connection validator. Testing with pgjdbc 42.2.x this now throws a protocol violation.
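For context, a minimal sketch of the setup (host, database and credentials are placeholders):
```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class EncodingChangeRepro {
    public static void main(String[] args) throws Exception {
        // allowEncodingChanges is passed as a connection parameter.
        String url = "jdbc:postgresql://localhost/mydb?allowEncodingChanges=true";
        try (Connection conn = DriverManager.getConnection(url, "app", "secret");
             Statement st = conn.createStatement()) {
            // Worked on 42.1.4 and earlier; throws a protocol violation on 42.2.x.
            st.execute("SET client_encoding = 'sql_ascii'");
        }
    }
}
```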
Looking at the 42.2.1 source for QueryExecutorImpl.receiveParameterStatus(), the check around line 2588 correctly tests the allowEncodingChanges setting:
```
if (name.equals("client_encoding") && !value.equalsIgnoreCase("UTF8")
&& !allowEncodingChanges) {
close(); // we're screwed now; we can't trust any subsequent string.
throw new PSQLException(GT.tr(
"The server''s client_encoding parameter was changed to {0}. The JDBC driver requires client_encoding to be UTF8 for correct operation.",
value), PSQLState.CONNECTION_FAILURE);
}
```
Unfortunately the code around line 2629 was changed to not honor this parameter, and fails unconditionally for any non-UTF8 client encoding:
```
} else if ("client_encoding".equals(name)) {
if (!"UTF8".equals(value)) {
throw new PSQLException(GT.tr("Protocol error. Session setup failed."),
PSQLState.PROTOCOL_VIOLATION);
}
pgStream.setEncoding(Encoding.getDatabaseEncoding("UTF8"));
```<issue_closed>
Status: Issue closed |
SerenityOS/serenity | 911226432 | Title: Kernel hang when pressing Ctrl+C inside PHP
Question:
username_0: Reproducible using PHP from PR #7757:
1. Run `php` in a terminal
2. Press `Ctrl + C`
```
[#0 Terminal(27:27)]: /dev/pts/0: VINTR pressed!
[php(32:32)]: ASSERTION FAILED: !(action.flags & SA_SIGINFO)
[php(32:32)]: ../../Kernel/Thread.cpp:717 in Kernel::DispatchSignalResult Kernel::Thread::dispatch_signal(u8)
[#0 php(32:32)]: 0xc05d64b6 abort +0x4b
[#0 php(32:32)]: 0xc05c97ff __assertion_failed(char const*, char const*, unsigned int, char const*) +0xd5
[#0 php(32:32)]: 0xc050f0ba Kernel::Thread::dispatch_signal(unsigned char) +0xee
[#0 php(32:32)]: 0xc0510500 Kernel::Thread::dispatch_one_pending_signal() +0x1de
[#0 php(32:32)]: 0xc02957df Kernel::Thread::BlockResult Kernel::Thread::block<Kernel::Thread::ReadBlocker, Kernel::FileDescription&, Kernel::Thread::FileBlocker::BlockFlags&>(Kernel::Thread::BlockTimeout const&, Kernel::FileDescription&, Kernel::Thread::FileBlocker::BlockFlags&) +0x25c9
[#0 php(32:32)]: 0xc048633b Kernel::Process::sys$read(int, AK::Userspace<unsigned char*>, long) +0x94b
[#0 php(32:32)]: 0xc03ec1f2 syscall_handler +0x1e85
[#0 php(32:32)]: 0xc03ea364 syscall_asm_entry +0x31
```
Answers:
username_0: This is a FIXME to implement `SA_SIGINFO` (https://man7.org/linux/man-pages/man2/sigaction.2.html)
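For reference, a minimal userland sketch of the pattern that trips this assertion; it mirrors a SIGINT handler installed with `SA_SIGINFO`, not PHP's actual code:
```c
#include <signal.h>
#include <string.h>
#include <unistd.h>

/* A handler installed with SA_SIGINFO takes the three-argument form. */
static void on_sigint(int sig, siginfo_t* info, void* ucontext)
{
    (void)sig;
    (void)info;
    (void)ucontext;
    write(STDOUT_FILENO, "caught SIGINT\n", 14);
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_sigaction = on_sigint;
    sa.sa_flags = SA_SIGINFO; /* this flag hits the VERIFY in Thread::dispatch_signal */
    sigaction(SIGINT, &sa, NULL);
    for (;;)
        pause();
}
```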
username_1: I started working on SA_SIGINFO a while back but I got lost in the signal stack weeds and gave up. It's wip here for when someone wants to debug signal handlers all weekend 🙈
https://github.com/SerenityOS/serenity/pull/4340 |
blockframes/blockframes | 590894355 | Title: Include Comments to Contracts
Question:
username_0: ## Description
Parties may have custom comments to add to the contract.
## Next steps
- [ ] Data: add a comment parameter
- [ ] UI: update screens with a comment section
- [ ] Dev: look for a reusable text component for contract<issue_closed>
Status: Issue closed |