repo_name (stringlengths 4-136) | issue_id (stringlengths 5-10) | text (stringlengths 37-4.84M) |
---|---|---|
MicrosoftDocs/azure-docs | 462827266 | Title: Confusing comments in the first paragraph
Question:
username_0: The first paragraph in the article is a little confusing (copied below). Is POA required for non-Azure hosted clusters, or not?
"the Patch Orchestration Application (POA) is a wrapper around Service Fabrics RepairManager Systems service that enables configuration based OS patch scheduling for non-Azure hosted clusters. POA is not required for non-Azure hosted clusters, but scheduling patch installation by Upgrade Domains, is required to patch Service Fabric clusters hosts without downtime."
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a3a16900-c967-ccf9-2d87-1c994013bf0e
* Version Independent ID: 474287be-47ca-50b3-1240-8cca745eca9e
* Content: [Azure Service Fabric patch orchestration application](https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-patch-orchestration-application#feedback)
* Content Source: [articles/service-fabric/service-fabric-patch-orchestration-application.md](https://github.com/Microsoft/azure-docs/blob/master/articles/service-fabric/service-fabric-patch-orchestration-application.md)
* Service: **service-fabric**
* GitHub Login: @khandelwalbrijeshiitr
* Microsoft Alias: **brkhande**
Answers:
username_1: @username_0 no, POA is not a requirement when using non-Azure hosted clusters, as the comment mentions. However, if you want to ensure you are able to patch your cluster without downtime, you would need to update the nodes one upgrade domain at a time, so that at least part of the cluster remains active while the other nodes are updating.
Personally, I find that using POA makes patching simpler and ensures your services remain up. But it is not a requirement.
username_0: Thank you @username_1 for the clarification. The wording and punctuation in that paragraph are confusing, so it would be good to update the documentation to provide the clarity you just provided.
Status: Issue closed
|
ant-design/ant-design | 480468022 | Title: Maybe it is time to update most of the components that use 'ComponentWillReceiveProps'.
Question:
username_0: - [ ] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate.
### What problem does this feature solve?
We use ant-design version 3.14.2.
Use Menu or Form or any other component that relies on this lifecycle method… you know what happens.
### What does the proposed API look like?
It should not contain 'ComponentWillReceiveProps'; please update it in the next version.
<!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
Answers:
username_1: Using the latest ant-design with React 16.9.0 shows the following deprecation warning:
<img width="884" alt="Screenshot 2019-08-14 at 11 16 45 AM" src="https://user-images.githubusercontent.com/1837792/62997193-13957300-be85-11e9-94a4-96aacfdccedd.png">
Status: Issue closed
username_2: Duplicate of #9792 |
keptn/keptn | 706381217 | Title: Lighthouse-service: include_result_with_score works on SLO- not on SLI-level
Question:
username_0: **Current behavior:**
The lighthouse-service filters out the objectives (SLI) that failed; see: https://github.com/keptn/keptn/blob/master/lighthouse-service/event_handler/evaluate_sli_handler.go#L371
This can lead to the situation that the quality gate opens even though the measured value is still not in an acceptable range. Please see this example:

**Changing behavior:**
- The `include_result_with_score: "pass_or_warning"` parameter has no impact on the individual SLIs of an SLO evaluation, but only on the overall SLO result.
- If an SLO has a status pass/warning, then consider all SLIs for comparison regardless of their status.
_Example:_
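A minimal sketch of where this parameter lives in an SLO file (field names follow the keptn SLO spec; the values here are illustrative):
```yaml
comparison:
  compare_with: "single_result"
  include_result_with_score: "pass_or_warning"   # the parameter discussed here
  aggregate_function: "avg"
objectives:
  - sli: "response_time_p95"
    pass:
      - criteria:
          - "<=+10%"    # compared against the previous pass/warning evaluation
```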

**Additional task:**
- Extend the `sh.keptn.events.evaluation-done` event to contain the event-id of the evaluation results taken for comparison.
**Definition of Done:**
- `include_result_with_score: "pass_or_warning"` works on the SLO level, not the SLI level.
- The event-id of the compared evaluation results is part of the payload of the `sh.keptn.events.evaluation-done` event.
Status: Issue closed |
olafurpg/scalafmt | 161129262 | Title: Two open parens on one line screw up indentation
Question:
username_0: Original:
```scala
mediator.expectMsg(PubSubMediator.Publish(
  className[MessageEvent],
  MessageAdded(`flowName`, Message("Akka rocks!", time))
))
```
Scalafmt:
```scala
mediator.expectMsg(PubSubMediator.Publish(
        className[MessageEvent],
        MessageAdded(`flowName`, Message("Akka rocks!", time))
))
```
I would like the output to look like this:
```scala
mediator.expectMsg(PubSubMediator.Publish(
  className[MessageEvent],
  MessageAdded(`flowName`, Message("Akka rocks!", time))
))
```
Using:
* 0.2.5
* --style defaultWithAlign --maxColumn 120 --spacesInImportCurlyBraces true --alignStripMarginStrings true
Answers:
username_1: I agree that scalafmt's output looks weird and the expected output looks better. The reason the indentation is 8 spaces is that scalafmt bumps up the indentation for each `(` in a function application. This prevents confusing output like this
```scala
Initialize(config(
"port.http"),
settings
))
```
where `settings` is at the same indentation as `"port.http"` but they are at different nesting levels. It appears that other formatters like clang-format special-case when there is only one argument. Scalafmt already does this for blocks wrapped in curly braces `{}` and could probably do the same with parentheses `()`.
username_0: The example above looks pretty ugly to me: Shouldn't that be rewritten to the following anyway:
```
Initialize(
config("port.http"),
settings
)
```
username_1: It was a synthetic example just to highlight what could happen when the indentation is inconsistent with the nesting level. Scalafmt would actually format it like this
```scala
Initialize(config("port.http"), settings)
```
The inconsistency won't be a problem when there is only one argument.
username_2: I have:
```scala
def map[B](f: A => B): AsyncValidation[E, B] =
  AsyncValidation(asyncValid map { valid =>
    valid.fold(
      error => Xor.Left(error),
      success => Xor.Right(f(success))
    )
  })
```
Scalafmted to:
```scala
def map[B](f: A => B): AsyncValidation[E, B] =
  AsyncValidation(
    asyncValid map { valid =>
      valid.fold(
        error => Xor.Left(error),
        success => Xor.Right(f(success))
      )
    })
```
Do you think it's related to this issue?
username_1: @username_2 On master branch that input produces:
```scala
def map[B](f: A => B): AsyncValidation[E, B] =
  AsyncValidation(asyncValid map { valid =>
    valid.fold(
      error => Xor.Left(error),
      success => Xor.Right(f(success))
    )
  })
```
I recommend you run from master branch or wait until 0.2.6 is released, which I hope to get out by tomorrow. A lot of bugs have been fixed since 0.2.5.
username_2: @username_1 you are awesome
Is there any way to run from master with SBT?
`addSbtPlugin("com.geirsson" % "sbt-scalafmt" % "0.2.6-SNAPSHOT")` ?
username_1: @username_2 https://gitter.im/username_1/scalafmt?at=576a4fe42554bbe049ba6ac8
username_1: @username_0 Using `--danglingParentheses true` with open-paren alignment disabled now produces this output for your example in this issue. Releasing 0.2.9 soon.
```scala
mediator.expectMsg(
  PubSubMediator.Publish(
    className[MessageEvent],
    MessageAdded(
      `flowName`,
      Message(
        "Akka rocks!",
        time
      )
    )
  )
)
```
username_0: Now other things are broken:
With 0.2.8:
``` scala
Props(new Actor {
  context.stop(self)
  override def receive = Actor.emptyBehavior
})
```
becomes with 0.2.9:
``` scala
Props(
  new Actor {
    context.stop(self)
    override def receive = Actor.emptyBehavior
  }
)
```
username_1: Oh shoot, I forgot to exclude {} blocks. That should be a simple fix. Did you notice any other issues @username_0?
username_1: I've fixed that example, if there's no other regression I can publish a quick release tonight. I leave for a 3 week vacation tomorrow morning 😄
username_0: I have seen no other issue. Please publish and then have a good vacation! ;-)
username_1: Released 0.2.10! Enjoy 😉
username_0: Thanks!
username_1: Is this still an issue @username_0? Here's the output on master:
```scala
// default style
mediator.expectMsg(
  PubSubMediator.Publish(
    className[MessageEvent],
    MessageAdded(`flowName`, Message("Akka rocks!", time))
  ))
// dangling parentheses
mediator.expectMsg(
  PubSubMediator.Publish(
    className[MessageEvent],
    MessageAdded(`flowName`, Message("Akka rocks!", time))
  )
)
```
Note that you can get the dangling parentheses layout with the default style if you insert a newline between the closing `))` in the end (i.e., opt into config style).
username_0: No longer an issue. Thanks!
username_1: Cool, closing then.
Status: Issue closed
|
SiLab-Bonn/online_monitor | 461421768 | Title: DeprecationWarning: ConfigParser
Question:
username_0: Message:
```bash
DeprecationWarning: The SafeConfigParser class has been renamed to ConfigParser in Python 3.2. This alias will be removed in future versions. Use ConfigParser directly instead.
```
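The fix the warning suggests, as a minimal sketch (illustrative; not the project's actual patch):

```python
# Python 3: use ConfigParser directly instead of the deprecated SafeConfigParser
from configparser import ConfigParser

config = ConfigParser()
config.read("settings.ini")
value = config.get("section", "option")
```
|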
valor-software/ng2-charts | 174221465 | Title: combine two charts
Question:
username_0: Is it possible to combine two or three charts in the same graph, like a line chart + bar chart?
Answers:
username_1: This may help http://stackoverflow.com/questions/25811425/chart-js-how-to-get-combined-bar-and-line-charts
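For a quick illustration, a hedged sketch of the Chart.js 2.x approach from that answer (each dataset can declare its own type; this is plain Chart.js, not an ng2-charts-specific API):
```js
// A bar chart whose second dataset renders as a line on the same graph
var ctx = document.getElementById('myChart').getContext('2d');
new Chart(ctx, {
  type: 'bar',
  data: {
    labels: ['Jan', 'Feb', 'Mar'],
    datasets: [
      { label: 'Sales', data: [12, 19, 14] },
      { label: 'Trend', data: [10, 20, 15], type: 'line' }
    ]
  }
});
```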
Status: Issue closed
username_1: ng2-charts is just a port & modification of Chart.js component for Angular 2
Please consider using Stack Overflow first for questions next time. Thanks |
containers/podman | 705287620 | Title: Can't upgrade podman from version 1.6.4 to newer version.
Question:
username_0: **Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)**
@greenpau
/kind bug
**Description**
I'm using an old podman version that contains various bugs, but I can't upgrade podman using dnf on CentOS 8 in any way.
**Steps to reproduce the issue:**
I have podman ver. 1.6.4 installed on CentOS 8.1.
1. My installed podman version check:
dnf info podman.x86_64 shows
Version : 1.6.4
Release : 10.module_el8.2.0+305+5e198a41
Architecture : x86_64
Source : podman-1.6.4-10.module_el8.2.0+305+5e198a41.src.rpm
Repository : @System
2. as is podman version - 1.6.4
3. dnf search podman.x86_64
No matches found.
**Describe the results you received:**
Can't upgrade to a newer version that contains many fixes, especially ones related to container networking and the firewall backend.
**Describe the results you expected:**
podman upgraded to the latest release for CentOS8.
**Additional information you deem important (e.g. issue happens only occasionally):**
**Output of `podman version`:**
```yaml
debug:
  compiler: gc
  git commit: ""
  go version: go1.13.4
  podman version: 1.6.4
host:
  BuildahVersion: 1.12.0-dev
  CgroupVersion: v1
  Conmon:
    package: conmon-2.0.6-1.module_el8.2.0+305+5e198a41.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.6, commit: <PASSWORD>e0b4eb1a834bbf0aec3e'
  Distribution:
    distribution: '"centos"'
    version: "8"
```
**Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?**
No
**Additional environment details (AWS, VirtualBox, physical, etc.):**
Podman on CentOS8 WSL distro upgraded to:
centos-gpg-keys-8.2-2.2004.0.2.el8.noarch centos-release-8.2-2.2004.0.2.el8.x86_64 centos-repos-8.2-2.2004.0.2.el8.x86_64
My enabled distros are
dnf repolist enabled:
repo id repo name
AppStream CentOS-8 - AppStream
BaseOS CentOS-8 - Base
extras CentOS-8 - Extras
The GitHub podman repository contains later versions for CentOS.
Answers:
username_1: please try this
```
dnf -y module disable container-tools
dnf -y install 'dnf-command(copr)'
dnf -y copr enable rhcontainerbot/container-selinux
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/CentOS_8/devel:kubic:libcontainers:stable.repo
dnf -y install podman
```
Status: Issue closed
username_0: Thanks a lot! Upgraded to 2.0.6 |
dart-lang/sdk | 203180056 | Title: regression in the snapshot format?
Question:
username_0: When running the pub snapshot in checked mode (which we do when running unit tests from IntelliJ), we're now seeing this issue:
```
Wrong features in snapshot, expected 'release asserts type-checks x64-sysv' found 'release no-asserts no-type-checks x64-sysv'
```
You can repro it in the latest dev sdk (`1.22.0-dev.9.1`) by:
```
./bin/dart --checked ./bin/snapshots/pub.dart.snapshot
```
@username_1 I'm assuming you're the right person to look at this, but please re-assign if not.
/cc @a-siva
Answers:
username_1: Duplicate of dart-lang/pub#1504.
Status: Issue closed
|
05-YujiAbe/kadai | 108471586 | Title: About the assignment for September 27
Question:
username_0: Sorry for the delay.
I have now completed a first full version.
I have committed the design documents and the SQL, but you can also view the result at the following URLs:
http://fun-gs.sakura.ne.jp/0927/index.php
Admin screen (admin / password)
http://fun-gs.sakura.ne.jp/0927/admin/index.php
The article ranking in the sidebar is not functional yet and is hard-coded.
The DB design changed from my original plan:
I did not create the images table.
I was able to add paging and the ability to change the number of articles displayed.
Answers:
username_1: This is 増子, in charge of the review.
I tried the actual behavior via the URLs. It looks clean and works so well that I would believe it was a real production site.
The admin screen is also well done; I ended up playing around with it quite a bit (lol).
No problems at all! |
rancher/fleet | 692196176 | Title: RFE: Update the name for the backup file when it's encrypted
Question:
username_0: If the backend named encrypted backups with, e.g., an `.encrypted` suffix, then the UI could look at the filename entered and definitively tell you that you need the encryption config, instead of making the user guess from the warning message.
MicrosoftDocs/azure-docs | 474033418 | Title: Unable to recover from error after entering SAML config
Question:
username_0: After entering the SAML configuration in https://samltoolkit.azurewebsites.net the page (https://samltoolkit.azurewebsites.net/SAMLSSOconfig) keeps returning an error. I'm unable to proceed with the configuration, view it or change it. I suspect I forgot to click the Upload file button for the XML federation file or the service is just down?
"
Error.
An error occurred while processing your request.
"
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 0800632e-a9b3-5f96-f290-207c7ef6be88
* Version Independent ID: eaa3c7a0-76d5-433f-9ac3-bc72638c0ff3
* Content: [Tutorial: Azure Active Directory integration with Azure AD SAML Toolkit](https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/saml-toolkit-tutorial)
* Content Source: [articles/active-directory/saas-apps/saml-toolkit-tutorial.md](https://github.com/Microsoft/azure-docs/blob/master/articles/active-directory/saas-apps/saml-toolkit-tutorial.md)
* Service: **active-directory**
* GitHub Login: @username_2
* Microsoft Alias: **jeedes**
Answers:
username_1: @username_0
Thanks for your feedback! We will investigate and update as appropriate.
username_2: @username_0 I have verified that the app is working, and I am able to log in and see the SAML configuration. For creating the SAML configuration, uploading the file is a must.
Can you please try again in a new browser InPrivate window and see how it goes?
username_0: @username_2 I tried an inprivate window but as soon as I log on with the registered account, I get the error page and I am unable to complete the SAML configuration.
username_2: @username_0 Can you please send your config details to our alias and then we can help resolve the issue <EMAIL>.com
username_2: #please-close
Status: Issue closed
|
wxWidgets/Phoenix | 460045382 | Title: Cant set NumCtrl to None if fractionWidth is used
Question:
username_0: **Windows 10**
**4.0.6 msw (phoenix) wxWidgets 3.0.5**
**Python 3.7.3**
**Description of the problem**:
Hi guys,
if I use both fractionWidth and allowNone for one control, the control can't be set to None anymore. Without fractionWidth it is still possible. Even trying to call SetAllowNone after initialization won't work:
self.txtDummy.SetAllowNone(True)
If i try to read the value of the NumCtrl, it returns 0.0:
self.txtDummy.GetValue()
My example:
self.txtDummy = NumCtrl(self.panel, fractionWidth = 1, allowNone = True)
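Expanded into a minimal runnable repro, as a sketch (the frame/panel scaffolding here is assumed; only the NumCtrl line comes from the report):
```python
import wx
from wx.lib.masked.numctrl import NumCtrl

app = wx.App(False)
frame = wx.Frame(None, title="NumCtrl repro")
panel = wx.Panel(frame)

# With fractionWidth set, allowNone seems to have no effect:
ctrl = NumCtrl(panel, fractionWidth=1, allowNone=True)
ctrl.SetAllowNone(True)
ctrl.SetValue(None)      # expected: empty (None) value
print(ctrl.GetValue())   # reported to return 0.0 instead of None

frame.Show()
app.MainLoop()
```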
cheers
username_0
Answers:
username_0: Tested it with the wxWidget Demo -> same result :-(. |
wasm3/wasm3 | 537024638 | Title: Saturated float conversions
Question:
username_0: https://github.com/WebAssembly/nontrapping-float-to-int-conversions
Answers:
username_0: Test script is now prepared to run tests for this proposal:
```sh
./run-spec-test.py ./proposals/nontrapping-float-to-int-conversions/*.json
```
username_1: This may prove to be not as easy to implement as #22, because the proposal introduces 2-byte opcodes while, AFAICT, wasm3 only supports 1-byte opcodes.
For future reference, the new instructions and their opcodes are as follows:
| Name | Opcode | Immediate | Description |
| ---- | ---- | ---- | ---- |
| `i32.trunc_sat_f32_s` | `0xfc` `0x00` | | :bowling: saturating form of `i32.trunc_f32_s` |
| `i32.trunc_sat_f32_u` | `0xfc` `0x01` | | :bowling: saturating form of `i32.trunc_f32_u` |
| `i32.trunc_sat_f64_s` | `0xfc` `0x02` | | :bowling: saturating form of `i32.trunc_f64_s` |
| `i32.trunc_sat_f64_u` | `0xfc` `0x03` | | :bowling: saturating form of `i32.trunc_f64_u` |
| `i64.trunc_sat_f32_s` | `0xfc` `0x04` | | :bowling: saturating form of `i64.trunc_f32_s` |
| `i64.trunc_sat_f32_u` | `0xfc` `0x05` | | :bowling: saturating form of `i64.trunc_f32_u` |
| `i64.trunc_sat_f64_s` | `0xfc` `0x06` | | :bowling: saturating form of `i64.trunc_f64_s` |
| `i64.trunc_sat_f64_u` | `0xfc` `0x07` | | :bowling: saturating form of `i64.trunc_f64_u` |
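For orientation, a rough sketch of what two-byte opcode dispatch involves (the identifiers here are illustrative, not wasm3's actual internals):
```c
#include <stdint.h>
#include <stddef.h>

typedef void (*OpHandler)(void);

static OpHandler primaryOps[256];   /* existing 1-byte opcode table  */
static OpHandler extendedOps[8];    /* 0xFC-prefixed trunc_sat table */

/* Decode one (possibly two-byte) opcode and return its handler. */
static OpHandler decodeOp(const uint8_t **pc) {
    uint8_t opcode = *(*pc)++;
    if (opcode != 0xFC)
        return primaryOps[opcode];
    uint8_t ext = *(*pc)++;                      /* second byte: 0x00..0x07 */
    return (ext < 8) ? extendedOps[ext] : NULL;  /* NULL => unknown opcode  */
}
```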
username_0: @username_1 you're right. I'll start with implementing the parsing part. This will later be a good example for SIMD and other extensions.
username_1: @username_0 - I think I now understand the compilation logic well enough to tackle the parsing part. It looks like I will need to write a custom `M3Compiler` function to handle extended opcodes. I can probably tackle this over the weekend unless you've started writing it already.
I also noticed that @username_2 has submitted #81 where he implemented the execution part. I can reuse that code.
username_1: Also, would it be ok to use C99 static array initialization syntax for `c_operations`?
In other words, instead of
```c
...
M3OP( "else", 0, none, d_emptyOpList(), Compile_Nop ), // 0x05
M3OP_RESERVED, M3OP_RESERVED, M3OP_RESERVED, M3OP_RESERVED, M3OP_RESERVED, // 0x06 - 0x0a
M3OP( "end", 0, none, d_emptyOpList(), Compile_End ), // 0x0b
...
```
I would like to write
```c
...
[0x05] = M3OP( "else", 0, none, d_emptyOpList(), Compile_Nop ),
[0x0b] = M3OP( "end", 0, none, d_emptyOpList(), Compile_End ),
...
```
otherwise, we would need over 50 `M3OP_RESERVED` placeholders to fill the gap between `0xc4` and `0xfc` opcodes.
username_0: - Will most probably fail to compile on MSVC.
username_2: What about define more preserved macros:
```c
#define M3OP_RESERVED2 M3OP_RESERVED M3OP_RESERVED
#define M3OP_RESERVED4 M3OP_RESERVED2 M3OP_RESERVED2
#define M3OP_RESERVED8 M3OP_RESERVED4 M3OP_RESERVED4
#define M3OP_RESERVED16 M3OP_RESERVED8 M3OP_RESERVED8
#define M3OP_RESERVED32 M3OP_RESERVED16 M3OP_RESERVED16
...
etc
```
and later if we need skip `50` op codes just use something like:
```c
M3OP_RESERVED32 M3OP_RESERVED16 M3OP_RESERVED2
```
username_0: @username_1 no, it appears to be working on MSVC ;)
Let's do it this way: just use this syntax for the `0xfc` entry.
username_0: @username_2 Yup, it could be done this way, but I think using this syntax for a single entry won't hurt.
We can fix it when (and if) it causes any trouble.
username_0: @username_1 just wondering if you have had any progress on this, or should I continue working on it? Thanks ;)
Status: Issue closed
username_0: Multi-byte opcodes are implemented, but the saturated float-to-int conversion operators only work on linux/gcc. So this still needs some improvements.
username_2: I found how to fix this for Clang, but it only works with trunk =(
https://godbolt.org/z/-Jm923
Status: Issue closed
username_0: @username_2 actually fixed this.
We still have a small (and unrelated) issue with the Win32 x86 build, but `non-trapping float to int conversions` are good now. Thanks!
void-linux/void-packages | 862354359 | Title: some basic need
Question:
username_0: <!-- Don't request update of package. We have a script for that. https://alpha.de.repo.voidlinux.org/void-updates/void-updates.txt . However, a quality pull request may help. -->
### System
* xuname:
*output of ``xuname`` (part of xtools)*
* package:
*affected package(s) including the version*: ``xbps-query -p pkgver <pkgname>``
### Expected behavior
### Actual behavior
### Steps to reproduce the behavior
Answers:
username_1: Could you verify these?
1. Is your user in `video` group?
2. `brillo` works for me here. But you need to store the brightness value first (`-O` option) then use `-I` to restore it. There is no need for a service though you can easily run this at boot manually.
3. "dbus does not work in xorg automatically" --> Have you enabled the dbus service? Not really necessary with some xorg WM/DE but you can always use `dbus-launch` or `dbus-run-session` as normal user.
4. "polkit agent won't work" --> Have you installed 1 of the agents from the repo?
username_0: yes am on video group
sorry i didn,t check brillo man but i got a runit service for that from someone github it save brightness at var and restore at boot
yes i use mate-polkit and its working with
"if which dbus-launch >/dev/null && test -z "$DBUS_SESSION_BUS_ADDRESS";
then
eval "$(dbus-launch --sh-syntax --exit-with-session)"
fi"
if i put this line in .xinitrc for startx or /etc/lightdm/xsession file for lightdm
without above line am not able to mount usb and andriod device in pcmanfm
i think polkit agent depend upon dbus also
thanks for taking time !
Status: Issue closed
|
skorch-dev/skorch | 445674818 | Title: BatchNorm with Batch Size of 1
Question:
username_0: Batch normalization (e.g., `BatchNorm1d`) cannot support batch sizes of 1. When using the `fit` command, you can run into the batch size of 1 problem. When it occurs with batch norm, an error like the one below occurs.
`ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 64])`
It would be nice if there was a way we could specifically disable batches of size 1. Perhaps it could be an optional feature where a batch of size one is skipped or joined to a previous batch. It may already be supported and I do not know it.
Here is a gist where I got the issue with:
https://gist.github.com/username_0/ba4685d1b49563bc8d76c7ee6f45cef5
* `torch`: 1.1.0
* `skorch`: 0.5.0.post0
* Python: 3.7.1
Answers:
username_1: As a quick fix, PyTorch offers the option to drop the last batch if it's incomplete. To pass this option via skorch, use:
`net = NeuralNet(..., iterator_train__drop_last=True, iterator_valid__drop_last=True)`
Of course, this is not an ideal situation. But since we delegate sampling to PyTorch, I don't think there should be a skorch-specific solution; the solution needs to be delegated to PyTorch.
It appears that the way to go is PyTorch's [`BatchSampler`](https://pytorch.org/docs/stable/data.html#torch.utils.data.BatchSampler). If you implement a sampler with the option to drop batches of size 1, I could see it being included in `skorch.helper`. It may even be worth including in PyTorch itself.
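As a sketch of that idea (illustrative only; this class is not part of skorch or PyTorch):
```python
from torch.utils.data import BatchSampler

class SkipSizeOneBatchSampler(BatchSampler):
    """A BatchSampler that silently drops any batch of size 1."""
    def __iter__(self):
        for batch in super().__iter__():
            if len(batch) > 1:   # size-1 batches would break BatchNorm in train mode
                yield batch

# Hypothetical usage via skorch's iterator arguments (the kwarg routing is assumed):
# net = NeuralNet(..., iterator_train__batch_sampler=SkipSizeOneBatchSampler(
#     SequentialSampler(train_dataset), batch_size=128, drop_last=False))
```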
username_0: Thanks @username_1. These are good ideas. I verified they basically worked in the simple test case I provided.
I did run into an issue when there is only a single validation batch. The error message you get (see below) is perhaps a bit cryptic.
`TypeError: __call__() missing 1 required positional argument: 'y_true'`
When I tried to use this technique with sklearn's `CalibratedClassifierCV`, I got the runtime error below. It seems that since batch data is missing, sklearn's self-checks fail.
```
Traceback (most recent call last):
...
File "XXXXX.py", line 264, in fit
self._g.fit(merged_x, merged_y)
File "<python 3.7.1>/sklearn/calibration.py", line 196, in fit
calibrated_classifier.fit(X[test], y[test])
File "<python 3.7.1>/sklearn/calibration.py", line 356, in fit
calibrator.fit(this_df, Y[:, k], sample_weight)
File "<python 3.7.1>/sklearn/isotonic.py", line 323, in fit
X, y = self._build_y(X, y, sample_weight)
File "<python 3.7.1>/sklearn/isotonic.py", line 242, in _build_y
check_consistent_length(X, y, sample_weight)
File "<python 3.7.1>/sklearn/utils/validation.py", line 235, in check_consistent_length
" samples: %r" % [int(l) for l in lengths])
ValueError: Found input variables with inconsistent numbers of samples: [832, 891]
```
Is this a salvageable error? If you need an MWE, I can try to construct one.
username_1: So I'm not quite sure how the two errors you reported are related. Did you solve the first only to get to the second?
Regarding the sklearn error, I suspect that because values are dropped, the number of predictions is not equal to the number of inputs or targets. I don't see an easy solution to that. On the one hand, you could try to avoid having batches dropped by choosing a sufficiently large batch size, so that a batch of 1 just does not happen (though I guess you would have already done so if possible). Or you could restrict your training data size to be a multiple of the batch size.
On the other hand, you could try to fiddle with sklearn to prune arrays that are too long because of the dropped batches by digging into the code and overriding the corresponding methods.
username_1: @username_0 Any updates?
username_2: @username_0 The first problem is that you instantiated `Module` class inside the constructor. However, the class should be passed by name to the `NeuralNet` or `NeuralNetClassifier`. So, line 26:
```python
mod = NeuralNetClassifier(Module(), batch_size=bs)
```
should be changed to
```python
mod = NeuralNetClassifier(Module, batch_size=bs)
```
Or are there any advantages to instantiating the module inside the constructor that I don't know about?
username_0: @username_1 My apologies for dragging on this. I had some end of the term projects that forced me to sideline this for a time. I will get back to this at the end of next week.
username_1: No worries :)
username_0: @username_2 I looked into what you referenced. My reading of the `skorch` source is that the `module` parameter can be either a `Callable` or a `torch.nn.Module`. When the `fit` method is initially called, `skorch` runs an initializer function, [`initialize_module`](https://github.com/skorch-dev/skorch/blob/51000d1e6f33e486d4586f91fd321e5e13c15ad3/skorch/net.py#L458).
This function checks if `module` is already a `nn.Module` object and if not initializes it.
I did verify that I can get the error even if I let `skorch` initialize `module`.
username_1: So first let me confirm that it is possible (though not recommended) to pass an initialized module, so that should not be the problem. Coming back to the issue at hand, let me try to summarize what the problem is:
For one reason or another, it is desired to drop some batches from the training and validation data. However, when `predict(_proba)` is called, this leads to a lower number of predictions. When sklearn checks the length of the prediction and the length of the target, it notices the inconsistency and raises an error. Is that the correct state?
If this is indeed the problem, here are some proposals to solve it:
* Try to avoid dropping batches (see discussion above)
* If you have `n` samples and you know in advance that `m` samples will be dropped, pass only `n-m` samples to the scoring function. E.g.: `accuracy(net, X[:n-m], y[:n-m])`. This approach should hopefully fix the example using `CalibratedClassifierCV`.
* If you encounter the error during scoring: Write your own scoring function that automatically truncates the data:
```python
# assume we want to measure accuracy
from sklearn.metrics import accuracy_score
def my_accuracy(estimator, X, y):
y_pred = estimator.predict(X)
n = len(y_pred)
y = y[:n]
return accuracy_score(y, y_pred)
```
Both of these solutions are not particularly satisfying, but I currently see no other way.
username_3: Is this issue still open? Has your question been answered? Do you still have questions?
Status: Issue closed
username_0: This can be closed. Sorry for leaving it open. |
nemac/fswms | 78538575 | Title: Remove current product and the updates would be assigned a calendar date
Question:
username_0: Per <NAME>:
" On the list, Current Drought Monitor works fine, as do the older archive, but the most recent date does not. That is, drought doesn’t map for 05/19/2015. This is a bad thing as no drought is mapped as nothing, and we render nothing.
http://forwarn.forestthreats.org/fcav2?theme=CONUS_Vegetation_Monitoring_Tools&layers=DRTAZV,AAB&mask=&alphas=1,1&accgp=G04&basemap=Streets&extent=-15629230.346047,1728699.1024938,-6887280.2951306,7193029.380543
I’ve never liked that we have to put our trust in “current product” being current when the most recent dated product doesn’t render. Preferably, there would be no “current product” and the updates would be assigned a calendar date, just as we have with ForWarn products. Is there a way to rename the feed such that we don’t have this ambiguity and unmapped date. I’d think this is an easy fix.
"
JDM:
This would be changes to https://github.com/nemac/fswms/blob/master/msconfig/makeviewerconfig#L143
Status: Issue closed
Answers:
username_0: Looks like folks want to leave this alone. |
locationtech/rasterframes | 451580786 | Title: Create mechanism for efficiently reading slices from tiles.
Question:
username_0: ... And then is there a way to "take" from the tile directly rather than the `data`?
_Originally posted by @vpipkt in https://github.com/locationtech/rasterframes/pull/124/files/8a54e5662f5f425c71002164b1c1f57c3d0ef245_ |
KhronosGroup/SPIRV-Tools | 280189861 | Title: Add option in spirv-opt to skip validation.
Question:
username_0: There might be scenarios where it is useful to skip validation in spirv-opt.
The main one is when you have SPIR-V that is technically illegal, but in a way that is intended. If you are not able to change the SPIR-V, then this is your only option. For example, if you want to run the legalization passes to make illegal code from an HLSL front-end legal.
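A usage sketch of such an option (hedged: this assumes the flag is spelled `--skip-validation`, as it is in spirv-opt's CLI):
```
spirv-opt --skip-validation -O input.spv -o output.spv
```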
Status: Issue closed |
stratis-storage/project | 491665612 | Title: a2x (asciidoc) has an orphaned dependency
Question:
username_0: We use a2x to generate our manpages. The dependency is python-which, which the original developer is eager for someone to take over: https://github.com/trentm/which/issues/7.
Answers:
username_0: We need to investigate this a bit.
username_0: asciidoc depends on dblatex which in turn provides python2-which. https://src.fedoraproject.org/rpms/dblatex/. Somebody is currently doing something to the Fedora packaging which must be a response to this problem, but which I can't comprehend at all. So, we should wait and see a bit.
Status: Issue closed
username_0: They confirmed that they were fixing it in an email, and the build seems to have succeeded, so I believe that we need take no further action. |
emory-libraries/Pattern-Library | 250055922 | Title: Move Font Awesome fonts folder to root of source folder
Question:
username_0: - Allows other fonts to be stored with those.
- Will need to add fonts folder to deploy process.
- Solves issue of deployment of production CSS and fonts to web server for future use.
Status: Issue closed
Answers:
username_1: Closing this issue since the change was implemented. |
Shopify/skeleton-theme | 64359189 | Title: shop.js - Infinite loop in Firefox 36.0
Question:
username_0: It appears that the code block in shop.js.liquid starting at line ~62, which loads full-width article images, results in an infinite loop in Firefox 36.0 (it works as expected in Chrome and Safari).
To test, just log something to the browser's JS console and then load a blog or article page in Shopify:
```
var images = $('.article img').load(function() {
var src = $(this).attr('src').replace(/_grande\.|_large\.|_medium\.|_small\./, '.');
  console.log('Loaded: ' + src);
});
```
A screenshot from my Firefox console is attached on an article page with a single image. In the time it took me to post this the image was loaded 70,000+ times and counting:

Answers:
username_1: Same issue for me. And really, do we need to load 2048x2048 images on mobile? Even at 3x, that seems excessive.
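A possible mitigation, sketched (an assumption about the loop's cause, not the theme's actual fix): bind the handler once per image with jQuery's `.one()`, so that changing `src` inside the handler cannot re-fire it:
```js
// Fires at most once per image, even if the handler changes the src
$('.article img').one('load', function() {
  var src = $(this).attr('src').replace(/_grande\.|_large\.|_medium\.|_small\./, '.');
  $(this).attr('src', src);
});
```
|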
jlippold/tweakCompatible | 773737011 | Title: `Rocket for Instagram` working on iOS 13.5
Question:
username_0: ```
{
"packageId": "me.alfhaily.rocket",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "me.alfhaily.rocket",
"deviceId": "iPhone10,6",
"url": "http://cydia.saurik.com/package/me.alfhaily.rocket/",
"iOSVersion": "13.5",
"packageVersionIndexed": false,
"packageName": "Rocket for Instagram",
"category": "Tweaks",
"repository": "BigBoss",
"name": "Rocket for Instagram",
"installed": "3.7.16",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "me.alfhaily.rocket",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.5",
"shortDescription": "Save posts and view stories anonymously and do much more",
"latest": "3.7.16",
"author": "<NAME>",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": ""
}
``` |
gctools-outilsgc/gccollab-mobile | 305908171 | Title: Terms and conditions of use redirect to an external page
Question:
username_0: The terms and conditions of use in the FAQ redirect to an external link instead of the page in the app.
Status: Issue closed |
StreakYC/node-api-wrapper | 148292859 | Title: Box Field Creation not setting Fields data
Question:
username_0: This code could not generate values for the field with key 1002; it only set data for name and notes.
I tried double-checking the field key, but no success.
```js
var datai = {name: "NewlyNamedBox2", notes: "AdditionalNotesText",
             "fields": {"key": "1002", "value": "Some string"}};

streak.Boxes.create(pipelineKey, datai).then(function(data) {
  console.log("pipeline4 :" + util.inspect(data, {showHidden: false, depth: null}));
});
```
Answers:
username_1: Currently you need to use the `streak.Pipelines.Fields.*` methods to manipulate the fields sub-object of a box.
Status: Issue closed
|
XX-net/XX-Net | 62576555 | Title: xxnet 1.3.6 cannot upload a personal ID
Question:
username_0: I get the following error message; please take a look at what to do:
Application: shdgsj123
Host: appengine.google.com
Rolling back the update.
upload fail: <urlopen error [Errno 8] _ssl.c:507: EOF occurred in violation of protocol>
Retry again.
Answers:
username_0: Adding to the error log above:
Application: shdgsj123
Host: appengine.google.com
Rolling back the update.
upload fail: <urlopen error [Errno 8] _ssl.c:507: EOF occurred in violation of protocol>
username_0: The system is Mac 10.10.1.
username_1: Got the report; I'll take a look.
username_0: Thanks. It's fairly urgent; I can't upload the ID.
username_1: If you're in a hurry, you can find a Windows machine and deploy the appid there.
You can also run Windows in a virtual machine on the Mac.
Asking a friend to deploy it for you works too.
Using a public appid is also an option.
If you want to watch YouTube: appids not starting with username_1 are not restricted for YouTube.
username_0: OK, thanks.
username_0: But why does the Mac system consistently fail to deploy its own ID?
I hope this problem gets solved!
username_1: I just tested deploying the server side on Mac 10.8 and Mac 10.10.2 and found no problems.
I don't know where the issue is.
username_2: mac 10.10.2
Sys Platform darwin
OS System Darwin
OS version Darwin Kernel Version 14.1.0: Thu Feb 26 19:26:47 PST 2015; root:xnu-2782.10.73~1/RELEASE_X86_64
OS release 14.1.0
OS detail Release:10.10.2; Version:('', '', '') Machine:x86_64
architecture 64bit
Browser Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36
XX-Net Version 1.3.6
Launcher Version 1.0.8
GoAgent Version 3.1.40
Python Version 2.7.6
Proxy Listen 127.0.0.1:8087
Host: appengine.google.com
Rolling back the update.
upload fail: <urlopen error [Errno 8] _ssl.c:507: EOF occurred in violation of protocol>
username_1: What network are you on? Great Wall Broadband users have reported being unable to deploy, and campus-network users have reported that deployment stops working after a few attempts.
Could you find a Windows or Linux machine, or test deploying the server side in a virtual machine?
I have tested on Mac 10.8 and 10.10 here, and deployment is normal.
Try different approaches and look for a pattern to see what the cause is.
username_2: I'm on China Telecom broadband. Deployment succeeds under Windows, but on the Mac I tried multiple times and it always fails with the same error:
Host: appengine.google.com
Rolling back the update.
upload fail: <urlopen error [Errno 8] _ssl.c:507: EOF occurred in violation of protocol>
When accessing YouTube, it says: please redeploy the server side: http://127.0.0.1:8085/?module=goagent&menu=deploy
ps: I already deployed successfully under Windows, so why does it ask me to redeploy?
Thanks
username_1: Got it. The Mac deployment problem still needs more investigation; my tests here all go smoothly, so the problem still needs to be pinpointed.
Since you have deployed under Windows, you should not need to redeploy.
You can test whether your own appid works under Windows, then test it on the Mac.
Status: Issue closed
username_1: Version 1.7 has greatly improved the GAE deployment success rate. |
apple/coremltools | 775039386 | Title: ValueError: Incompatible dim 1 in shapes (is14, 18, 32, 512) vs. (is14, 17, 31, 512)
Question:
username_0: `_ = coremltools.convert(keras_model, inputs=[coremltools.ImageType()])` When I run this line to convert a Keras model to a CoreML Model, I get this error message:
```
Running TensorFlow Graph Passes: 100%|██████████| 5/5 [00:00<00:00, 5.10 passes/s]
Converting Frontend ==> MIL Ops: 90%|████████▉ | 600/670 [00:00<00:00, 838.61 ops/s]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-321-a1bf8a0e7b7a> in <module>
----> 2 _ = coremltools.convert(model, inputs=[coremltools.ImageType()])
~/opt/anaconda3/envs/multimodel/lib/python3.6/site-packages/coremltools/converters/_converters_entry.py in convert(model, source, inputs, outputs, classifier_config, minimum_deployment_target, convert_to, **kwargs)
181 outputs=outputs,
182 classifier_config=classifier_config,
--> 183 **kwargs
184 )
185
~/opt/anaconda3/envs/multimodel/lib/python3.6/site-packages/coremltools/converters/mil/converter.py in mil_convert(model, convert_from, convert_to, **kwargs)
127 """
128 proto = mil_convert_to_proto(model, convert_from, convert_to,
--> 129 ConverterRegistry, **kwargs)
130 if convert_to == 'mil':
131 return proto
~/opt/anaconda3/envs/multimodel/lib/python3.6/site-packages/coremltools/converters/mil/converter.py in mil_convert_to_proto(model, convert_from, convert_to, converter_registry, **kwargs)
169 frontend_converter = frontend_converter_type()
170
--> 171 prog = frontend_converter(model, **kwargs)
172 common_pass(prog)
173
~/opt/anaconda3/envs/multimodel/lib/python3.6/site-packages/coremltools/converters/mil/converter.py in __call__(self, *args, **kwargs)
73
74 tf2_loader = TF2Loader(*args, **kwargs)
---> 75 return tf2_loader.load()
76
77
~/opt/anaconda3/envs/multimodel/lib/python3.6/site-packages/coremltools/converters/mil/frontend/tensorflow/load.py in load(self)
78 )
79
---> 80 program = self._program_from_tf_ssa()
81 logging.debug("program:\n{}".format(program))
82 return program
~/opt/anaconda3/envs/multimodel/lib/python3.6/site-packages/coremltools/converters/mil/frontend/tensorflow2/load.py in _program_from_tf_ssa(self)
176
177 converter = TF2Converter(self._tf_ssa, **self.kwargs)
--> 178 return converter.convert()
179
180 def _populate_sub_graph_input_shapes(self, graph, graph_fns):
~/opt/anaconda3/envs/multimodel/lib/python3.6/site-packages/coremltools/converters/mil/frontend/tensorflow/converter.py in convert(self)
405 for g_name in self.graph_stack[1:]:
406 self.context.add_graph(g_name, self.tfssa.functions[g_name].graph)
--> 407 self.convert_main_graph(prog, graph)
408
409 # Apply TF frontend passes on Program. These passes are different
~/opt/anaconda3/envs/multimodel/lib/python3.6/site-packages/coremltools/converters/mil/frontend/tensorflow/converter.py in convert_main_graph(self, prog, graph)
[Truncated]
44 return types.tensor(primitive_type, ret_shape)
45
~/opt/anaconda3/envs/multimodel/lib/python3.6/site-packages/coremltools/converters/mil/mil/ops/defs/_utils.py in broadcast_shapes(shape_x, shape_y)
42 raise ValueError(
43 "Incompatible dim {} in shapes {} vs. {}".format(
---> 44 i, shape_x, shape_y
45 )
46 )
ValueError: Incompatible dim 1 in shapes (is14, 18, 32, 512) vs. (is14, 17, 31, 512)
```
What is the reason for the problem, and how can I solve it?
Versions:
- Python 3.6.12
- TensorFlow 2.3.1
- Keras 2.2.4
- Coremltools 4.0
Status: Issue closed
Answers:
username_1: Since we have not received steps to reproduce this problem, I'm going to close this issue. If we get steps to reproduce the problem, I will reopen the issue. |
npm/cli | 727014836 | Title: [BUG] npm does not create --prefix dir
Question:
username_0: ### Current Behavior:
npm 7.0.3 errors because `--prefix` directory does not exist.
This appears to be a regression, as it does not happen with npm 6, and I was unable to find a ticket or a mention in the npm 7 release notes that `--prefix` no longer creates the destination directory.
```
$ npm install <ANY_PACKAGE_NAME> --no-audit --no-save --production --prefix /Users/joe/foo
```
```
npm install exited (code 254)
npm ERR! code ENOENT
npm ERR! syscall lstat
npm ERR! path /Users/joe/foo
npm ERR! errno -2
npm ERR! enoent ENOENT: no such file or directory, lstat '/Users/joe/foo'
npm ERR! enoent This is related to npm not being able to find a file.
```
If I manually create the prefix directory, then run `npm install`, it works fine.
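A workaround until this is fixed, sketched from the manual step above:
```
# create the prefix directory yourself, then install into it
mkdir -p /Users/joe/foo
npm install <ANY_PACKAGE_NAME> --no-audit --no-save --production --prefix /Users/joe/foo
```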
### Expected Behavior:
The package should install into the directory specified by the `--prefix` argument and the output should look similar to:
```
npm install exited (code 0)
<PACKAGE_NAME>@<VERSION> install /Users/joe/foo
```
### Steps To Reproduce:
1. Run `npm install <some_package> --prefix /some/path/that/does/not/exist`
2. See error
### Environment:
- OS: macOS 10.15.7 (catalina)
- Node: 14.13.0
- npm: 7.0.3
Answers:
username_1:
```
------
failed to solve with frontend dockerfile.v0: failed to build LLB: executor failed running [/bin/sh -c npm install -g yarn]: runc did not terminate sucessfully
```
Status: Issue closed
username_3: @username_0 Thanks for the report! This should have been fixed in `[email protected]` 😊 |
plotly/plotly.py | 273395037 | Title: Y axis labels have wrong formatting
Question:
username_0: I wanted to use custom Y axis labels, and have found that the labels with numbers are shifted somehow.
Below is a sample code that demonstrates the strange behaviour. Is there any built-in formatting based on the label names, or some secret tags for formatting?
```python
import plotly.plotly as py
import plotly.graph_objs as go
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
import datetime

yTickValues = list(range(8))
yTickNames = ['StartSendingSsp0', 'OrganizedTxSsp0', 'I2cRwHandler', 'GpsParser',
              'SSP1_IRQn', 'USART0_IRQn', 'RTC_IRQn', 'I2C_IRQn']
xaxis = [datetime.datetime(2000, 1, 1, 0, 0, 0, 686002),
         datetime.datetime(2000, 1, 1, 0, 0, 0, 686008),
         datetime.datetime(2000, 1, 1, 0, 0, 0, 686045),
         datetime.datetime(2000, 1, 1, 0, 0, 0, 686176),
         datetime.datetime(2000, 1, 1, 0, 0, 0, 686192),
         datetime.datetime(2000, 1, 1, 0, 0, 0, 686523),
         datetime.datetime(2000, 1, 1, 0, 0, 0, 686530)]
yaxis = list(range(8))

# Create a trace
trace = go.Scatter(
    x=xaxis,
    y=yaxis,
    mode='markers'
)
layout = dict(title='Tracer',
              yaxis=dict(
                  tickvals=yTickValues,
                  ticktext=yTickNames,
                  showticklabels=True,
              ))
data = [trace]
fig = dict(data=data, layout=layout)
plot(fig, filename='example')
```
Answers:
username_1: Hi there,
Running the code you've provided above, I see the intended behaviour that the labels are right aligned. You can increase the margin size to avoid labels getting cut off (https://plot.ly/python/reference/#layout-margin). If that doesn't address the issue you're seeing can you include a screenshot highlighting the issue?
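For example, a hedged sketch of that margin suggestion applied to the layout from the question (the value 160 is arbitrary):
```python
layout = dict(
    title='Tracer',
    margin=dict(l=160),  # widen the left margin so long y tick labels fit
    yaxis=dict(
        tickvals=yTickValues,
        ticktext=yTickNames,
        showticklabels=True,
    ),
)
```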
username_0: In the picture above you can see that the formatting is different across the labels:

username_1: Great, so that example looks different than the code you've included above:

In order for us to be able to tell what the issue is we'll need a reproducible example.
Status: Issue closed
|
ehendrix23/tesla_dashcam | 436090959 | Title: program crashes when 0 folder to process in v0.1.9b1
Question:
username_0: Version
--------
v0.1.9b1
What happens?
--------------
Running against an empty folder crashes the application:
```shell
$ python tesla_dashcam/tesla_dashcam.py --no-notification --monitor_once --delete_source --output /destination /mnt/TeslaCam/SavedClips
Monitoring for TeslaCam Drive to be inserted. Press CTRL-C to stop
TeslaCam folder found on /mnt.
Discovered 0 folders with 0 clips to process.
Traceback (most recent call last):
File "tesla_dashcam/tesla_dashcam.py", line 1630, in <module>
sys.exit(main())
File "tesla_dashcam/tesla_dashcam.py", line 1608, in main
args.delete_source)
File "tesla_dashcam/tesla_dashcam.py", line 797, in process_folders
movie_name is None
UnboundLocalError: local variable 'movie_name' referenced before assignment
```
What do I expect?
-------------------
The application should exit cleanly when there is no folder to process.
Answers:
username_1: Your exceptions are high. :-)
Thanks for the report!!!!
Issue is fixed in the dev branch. You can grab it from here if you want:
https://raw.githubusercontent.com/username_1/tesla_dashcam/dev/tesla_dashcam/tesla_dashcam.py
Status: Issue closed
username_0: Thanks !
I'm running your code in a Kubernetes job pod; an unclean exit makes the pod re-spawn again and again.
username_1: No problem. Going to give it a few more days, let me know if you find any other issues. :-) |
Hammerspoon/hammerspoon.github.io | 252442881 | Title: Layout change event on other-display dock exposure
Question:
username_0: I don't know if this is a bug or not, though if it's expected, I'd love to hear suggestions for workarounds.
Let's say you have multiple displays and you have a `hs.screen.watcher.new()` that reacts to layout changes, e.g.:
```
hs.screen.watcher.new(layout_change):start()
```
Now, `layout_change` is a function that checks a bunch of conditions and, depending on those, moves and resizes your windows. Generally your layout isn't going to change very often, but there's one annoying case where it can happen at unexpected times.
Say you've set your System Preferences to auto-hide the dock, and further that the dock shows up at the bottom of the screen. Now say a window on screen 1 has focus and you expose the dock. So far so good. Now give focus to a window on another screen and expose the dock on *that* screen by moving the cursor to the bottom of the other screen. The dock will unhide on that other screen *and* Hammerspoon will trigger a layout change. This means that when you unhide the dock on a new screen all your windows move around! That's not so good. ;)
I'm not sure why you'd get a layout change in that case since clearly the screen layouts aren't changing, only the dock is getting exposed on different screens. I definitely want to avoid moving my windows around in that case.
Answers:
username_1: So the way the screen watcher works is to just register with macOS for the NSApplicationDidChangeScreenParametersNotification event, which does actually get triggered when the Dock moves, because that changes the area of the screen on which apps should be drawing.
I think it probably shouldn't trigger when the dock is set to autohide, because then apps aren't supposed to be actively avoiding it, but for whatever reason, Apple either wants that notification, or hasn't noticed that it's happening.
The only workaround I can think of would be to fetch the fullFrame() for each hs.screen object, cache them and then in your layout_change() callback, check to see if the same number of screens are in the same order with the same sizes :/
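A sketch of that caching workaround (the helper names are assumed; `hs.screen.allScreens()` and `fullFrame()` are real Hammerspoon APIs):
```lua
-- Cache a signature of the screen layout and only call the real handler
-- when the geometry actually changed (i.e. not on Dock auto-hide events).
local cachedLayout = nil

local function layoutSignature()
  local parts = {}
  for _, screen in ipairs(hs.screen.allScreens()) do
    local f = screen:fullFrame()
    table.insert(parts, string.format("%.0f,%.0f,%.0f,%.0f", f.x, f.y, f.w, f.h))
  end
  return table.concat(parts, "|")
end

hs.screen.watcher.new(function()
  local sig = layoutSignature()
  if sig ~= cachedLayout then
    cachedLayout = sig
    layout_change()  -- the user's existing handler
  end
end):start()
```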
(also fwiw, you filed this issue on our website repo, rather than the main github.com/Hammerspoon/hammerspoon repo :)
Status: Issue closed
|
craftcms/cms | 203762483 | Title: Validation isn't performed when enabling entries from entry index page.
Question:
username_0: #### Environment:
- Craft version: 3.x
- PHP version: 7.x
- Database driver & version: MySQL & PostgreSQL
- Plugins & versions:
#### Description:
When enabling an entry from the bulk action dropdown on the entry index page, entry validation doesn't run, making it possible for enabled entries to go live with required fields that have no data in them.
#### Steps to reproduce:
Go to the entry index page. Enable entries that have required fields with no data in them.
Status: Issue closed |
ori2111/ori2111 | 821076126 | Title: test - Opening a bug
Question:
username_0: **Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.
Status: Issue closed
|
qutip/qutip | 755091681 | Title: How to plot trace distance as a function of time?
Question:
username_0: I am simulating two dynamics via two mesolves. I want to plot the dynamics of the trace distance between the underlying two density matrices. Can someone kindly suggest a way to do that?
Answers:
username_1: QuTiP provides a function `qutip.tracedist` to calculate this, so if you have your density matrices, you just need to use that function. You need to make sure you store the states during the evolution (QuTiP does this by default unless you specify any `e_ops`). For example,
```python
times = np.linspace(0, 1, 101)
opts = qutip.Options(store_states=True)
result_1 = qutip.mesolve(H, state_1, times, c_ops=..., e_ops=..., options=opts)
result_2 = qutip.mesolve(H, state_2, times, c_ops=..., e_ops=..., options=opts)
distances = np.array([qutip.tracedist(a, b) for a, b in zip(result_1.states, result_2.states)])
```
username_0: Thanks, Jakelishman, for this. I will try it first and then get back to you. Kindly don't close the issue yet.
username_0: Thanks for the help, and you can close the issue.
Status: Issue closed
|
Azure/autorest | 108958399 | Title: Make autorest generated files compatible with Xamarin
Question:
username_0: It would be nice to enable support for the profile "portable-net403+win+wpa81+MonoTouch10+MonoAndroid10+xamarinmac20+xamarinios10" in order to use generated files in Xamarin.
Answers:
username_1: We do support Xamarin via portable-net45+win8+wpa81 (profile 111) and netcore, which are based on net45. However, we do not currently have plans to provide backwards compatibility for net403.
Status: Issue closed
|
yeuchi/DecoderExercise | 188447641 | Title: Stl save.
Question:
username_0: HI ,
This is a good library.I really liked among all libraries.
Is there a way to save a WPF viewport to Stl file in this library.
I actually tried doing reverse engineering to your library...but no luck..
Could you please give some hints.
Thanks & Regards,
Tony |
webdriverio/webdriverio | 184469959 | Title: Session not found exception when trying to run a WebDriverCSS script
Question:
username_0: Hi All,
I am getting a Session not found exception when I try to run the below script using the command "node webCSS.js".
Steps that I followed
1) Installed node.js and GraphicsMagick Display on the system
2) Installed and started the Selenium Standalone Server using the below commands
npm install selenium-standalone@latest -g
selenium-standalone install
selenium-standalone start
3) Installed WebdriverIO and WebdriverCSS using the below command
npm install webdriverio@2 webdrivercss
4) After that, I ran the script WebCSS.js (script provided below).
Script
```js
var assert = require('assert');

// init WebdriverIO
var client = require('webdriverio').remote({desiredCapabilities: {browserName: 'firefox'}});

// init WebdriverCSS
require('webdrivercss').init(client);

client
    .init()
    .url('https://www.google.com/')
    .webdrivercss('startpage', [
        {
            name: 'header',
            elem: '#header'
        }, {
            name: 'hero',
            elem: '//*[@id="hero"]/div[2]'
        }
    ], function(err, res) {
        assert.ifError(err);
        assert.ok(res.header[0].isWithinMisMatchTolerance);
        assert.ok(res.hero[0].isWithinMisMatchTolerance);
    })
    .end();
```
Error that I am getting
17:00:52.533 INFO - Executing: [get: https://www.google.com/])
17:00:59.759 INFO - Done: [get: https://www.google.com/]
17:01:00.079 INFO - Executing: [delete session: 376fc292-6d51-4986-9bd0-56fa59877af9])
17:01:00.192 INFO - Executing: [execute script: return (function () {
/**
* remove scrollbars
*/
// reset height in case we're changing viewports
document.documentElement.style.height = 'auto';
document.documentElement.style.height = document.documentElement.scrollHeight + 'px';
document.documentElement.style.overflow = 'hidden';
/**
* scroll back to start scanning
*/
[Truncated]
Build info: version: '2.53.1', revision: 'a36b8b1', time: '2016-06-30 17:37:03'
at org.openqa.selenium.firefox.FirefoxDriver$LazyCommandExecutor.execute(FirefoxDriver.java:377)
at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:644)
at org.openqa.selenium.remote.RemoteWebDriver.executeScript(RemoteWebDriver.java:577)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.openqa.selenium.support.events.EventFiringWebDriver$2.invoke(EventFiringWebDriver.java:103)
at com.sun.proxy.$Proxy1.executeScript(Unknown Source)
at org.openqa.selenium.support.events.EventFiringWebDriver.executeScript(EventFiringWebDriver.java:217)
at org.openqa.selenium.remote.server.handler.ExecuteScript.call(ExecuteScript.java:54)
at java.util.concurrent.FutureTask.run(Unknown Source)
at org.openqa.selenium.remote.server.DefaultSession$1.run(DefaultSession.java:176)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
17:01:02.201 WARN - Exception: The FirefoxDriver cannot be used after quit() was called.
Build info: version: '2.53.1', revision: 'a36b8b1', time: '2016-06-30 17:37:03'
Driver info: driver.version: EventFiringWebDriver
Status: Issue closed
Answers:
username_1: First of all, please format your issues; don't just copy-paste code in there. It's unreadable and hard to figure out what the problem is.
Second, as you can see in the logs, there is a race condition: the end command gets called too early for whatever reason:
```
17:00:52.533 INFO - Executing: [get: https://www.google.com/])
17:00:59.759 INFO - Done: [get: https://www.google.com/]
17:01:00.079 INFO - Executing: [delete session: 376fc292-6d51-4986-9bd0-56fa59877af9])
17:01:00.192 INFO - Executing: [execute script: return (function () {
```
Last, see http://blog.kevinlamping.com/whats-up-with-webdrivercss/ |
naser44/1 | 154201361 | Title: Compulsory military service in Kuwait, mid-2017
Question:
username_0: <a href="http://ift.tt/1On2hfL">Compulsory military service in Kuwait, mid-2017</a> |
DomT4/homebrew-chromium | 227530838 | Title: README feedback
Question:
username_0: When reading the README, I had some questions that you may want to add to the README.
1. Which Chromium channel are you tracking? Stable, Beta, Dev, Canary, latest?
2. Is the preferred way to install using the regular (non-cask) method? It seemed like the regular method could do automated updates? There were no instructions on how to do that in the README, for newbies. Something like:
```
brew install domt4/chromium/chromium
brew linkapps chromium
brew upgrade
```
Status: Issue closed
Answers:
username_1: Pushed some updates to the README. Let me know if things still aren't clear. Thanks for raising this! 😃 |
cupy/cupy | 474422703 | Title: Support new random number generator in NumPy 1.17
Question:
username_0: https://docs.scipy.org/doc/numpy/release.html#new-features
https://docs.scipy.org/doc/numpy/reference/random/index.html#module-numpy.random
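For context, a minimal sketch of the NumPy 1.17 `Generator` API that this request asks CuPy to mirror:
```python
import numpy as np

rng = np.random.default_rng(seed=42)  # new-style Generator (NumPy >= 1.17)
samples = rng.standard_normal(3)      # replaces the legacy np.random.randn(3)
```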
Answers:
username_1: I am interested in working on this for GSoC 2020 and have submitted a draft proposal. Can I get some suggestions on how I can improve it?
https://drive.google.com/file/d/1LXLU31iCgrZrysRtEQ1766iC1hFPQ-dh/view?usp=sharing
username_0: This feature was implemented in #4177 with a limited set of distributions. See also #4557.
Status: Issue closed
|
titipata/pubmed_parser | 211844600 | Title: Parsers cannot read the xml file.
Question:
username_0: Error: it was not able to read a path, a file-like object, or a string as an XML
```
Traceback (most recent call last):
File "C:\Program Files\Python36\lib\site-packages\pubmed_parser-0.1-py3.6.egg\pubmed_parser\utils.py", line 14, in read_xml
tree = etree.parse(path)
File "src\lxml\lxml.etree.pyx", line 3427, in lxml.etree.parse (src\lxml\lxml.etree.c:81101)
File "src\lxml\parser.pxi", line 1811, in lxml.etree._parseDocument (src\lxml\lxml.etree.c:117832)
File "src\lxml\parser.pxi", line 1837, in lxml.etree._parseDocumentFromURL (src\lxml\lxml.etree.c:118179)
File "src\lxml\parser.pxi", line 1741, in lxml.etree._parseDocFromFile (src\lxml\lxml.etree.c:117091)
File "src\lxml\parser.pxi", line 1138, in lxml.etree._BaseParser._parseDocFromFile (src\lxml\lxml.etree.c:111637)
File "src\lxml\parser.pxi", line 595, in lxml.etree._ParserContext._handleParseResultDoc (src\lxml\lxml.etree.c:105093)
File "src\lxml\parser.pxi", line 706, in lxml.etree._handleParseResult (src\lxml\lxml.etree.c:106801)
File "src\lxml\parser.pxi", line 633, in lxml.etree._raiseParseError (src\lxml\lxml.etree.c:105612)
OSError: Error reading file 'medline16n0902.xml': failed to load external entity "medline16n0902.xml"
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "C:\Program Files\Python36\lib\site-packages\pubmed_parser-0.1-py3.6.egg\pubmed_parser\medline_parser.py", line 354, in parse_medline_xml
tree = read_xml(path)
File "C:\Program Files\Python36\lib\site-packages\pubmed_parser-0.1-py3.6.egg\pubmed_parser\utils.py", line 17, in read_xml
tree = etree.fromstring(path)
File "src\lxml\lxml.etree.pyx", line 3213, in lxml.etree.fromstring (src\lxml\lxml.etree.c:78994)
File "src\lxml\parser.pxi", line 1848, in lxml.etree._parseMemoryDocument (src\lxml\lxml.etree.c:118325)
File "src\lxml\parser.pxi", line 1729, in lxml.etree._parseDoc (src\lxml\lxml.etree.c:116883)
File "src\lxml\parser.pxi", line 1063, in lxml.etree._BaseParser._parseUnicodeDoc (src\lxml\lxml.etree.c:110870)
File "src\lxml\parser.pxi", line 595, in lxml.etree._ParserContext._handleParseResultDoc (src\lxml\lxml.etree.c:105093)
File "src\lxml\parser.pxi", line 706, in lxml.etree._handleParseResult (src\lxml\lxml.etree.c:106801)
File "src\lxml\parser.pxi", line 635, in lxml.etree._raiseParseError (src\lxml\lxml.etree.c:105655)
File "<string>", line 1
lxml.etree.XMLSyntaxError: Start tag expected, '<' not found, line 1, column 1
```
Answers:
username_1: Hi @username_0, thanks for the report and sorry for the late reply! It seems the problem is in reading the XML file (`etree.fromstring(path)`). Pubmed parser uses [this snippet](https://github.com/username_1/pubmed_parser/blob/master/pubmed_parser/utils.py#L9-L21) to read XML files. Can you check real quick whether `lxml` can read the example file, or the file you have a problem with? Also, for the MEDLINE one you have to use the `parse_medline_xml` function instead of `parse_pubmed_xml`; `parse_pubmed_xml` is actually for the PubMed Open-Access subset XML files.
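For example, something along these lines, with the path adjusted to wherever the file actually lives:
```python
from lxml import etree

tree = etree.parse("medline16n0902.xml")  # raises OSError if the path is wrong
print(tree.docinfo.root_name)
```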
username_2: @username_0 @username_1 It seems that the problem is that it cannot find the file? As @username_1 mentions, `pubmed_parser` tries to read the given string as if it were a file path and if that fails it tries to read it as a XML string. So it first fails to read the file, and then it tries to read it as an XML. Can you please check that the file exists at that location?
Status: Issue closed
username_2: I am closing this for now |
metatron-app/metatron-discovery | 360755103 | Title: No value name in multi selected filter where pre-selected value exists
Question:
username_0: **Describe the bug**
In the multi-select filter, when pre-selected values exist, they are displayed as [Object] as shown in the screenshot.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'Filter'
2. Click on 'Check multiple' And then select some values. Click OK.
3. See error
**Expected behavior**
The selected values should be displayed.
**Screenshots**
<img width="310" alt="screen shot 2018-09-17 at 3 37 19 pm" src="https://user-images.githubusercontent.com/321821/45610828-ec290100-ba8f-11e8-9873-0e581f16d8b9.png">
**Desktop (please complete the following information):**
- OS: OsX
- Browser chrome
- Version 69.0.3497.92
**Additional context**
The filter-multi-select.component.ts code should be modified as follows:
```
public ngOnInit() {
  // array check
  if (this.selectedArray == null || this.selectedArray.length === 0) {
    this.selectedArray = [];
    this.viewText = this.unselectedMessage;
  } else {
    // was: this.viewText = this.selectedArray.join(',');
    this.viewText = this.selectedArray.map(item => item.name).join(',');
  }
  // Init
  super.ngOnInit();
}
```
Answers:
username_1: @username_0 We currently create a branch per release (currently the 3.0.3 branch), merge changes into it, and once final testing is done, merge the 3.0.3 branch into master.
Status: Issue closed
username_0: Closed as a duplicate of #215
rust-lang/rust | 767907001 | Title: rustdoc only writes one file for multiple items exported as `_`
Question:
username_0: <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
For example, generating documentation for this code:
```rust
mod traits {
pub trait TraitA {
fn a();
}
pub trait TraitB {
fn b();
}
}
pub use traits::{TraitA as _, TraitB as _};
```
results in two traits named `_`, which is correct, but, no matter which one you click on, you'll only ever see the `a()` xor the `b()` function since rustdoc tries to write the documentation for both traits to the same file. At least, I think that's what's happening.
### Meta
`rustc --version --verbose`:
```
rustc 1.48.0 (7eac88abb 2020-11-16)
binary: rustc
commit-hash: 7eac88abb2e57e752f3302f02be5f3ce3d7adfb4
commit-date: 2020-11-16
host: x86_64-unknown-linux-gnu
release: 1.48.0
LLVM version: 11.0
```
Answers:
username_1: Duplicate of https://github.com/rust-lang/rust/issues/61592
Status: Issue closed
|
graphql-python/graphene | 239393614 | Title: documention on Union
Question:
username_0: Hi guys,
Quite new graphene, I read on the graphql.org website, http://graphql.org/learn/schema/#union-types. There is this way to define a union type
```union SearchResult = Human | Droid | Starship```
It seems that there is code about `Union` in the source, but not the documentation, just want to know Is this already supported in graphene and just lacking of doc?
Thanks!
Answers:
username_0: Any one can help? cc @username_3
username_1: Hi,
you can implement it that way (simplified) :
```
class Anything(graphene.types.union.Union):
    class Meta:
        types = (Human, Droid, Starship)

class Query(graphene.ObjectType):
    search = graphene.List(Anything)
```
Then write a resolver for the search that will concatenate results for each type :
```
def resolve_search(self, args, context, info):
    humans, droids, starships = [], [], []
    # search for each type, then concatenate the results
    return humans + droids + starships
```
username_2: I am also having difficulty here.
```python
import graphene
from graphene.types.union import Union
class Contract(graphene.Interface):
    id = graphene.Int(required=True)
    basis = graphene.Field(DollarAmount)
    contract_date = graphene.Field(Date)
    contract_price = graphene.Field(DollarAmount)
    notes = graphene.String()
    quantity = graphene.Int()
    underlying = graphene.Field(Underlying)

class CashContract(graphene.ObjectType):
    contract_status = graphene.Field(ContractStatus)
    contract_type = graphene.Field(CashContractType)
    delivery_date = graphene.Field(Date)

    class Meta:
        interfaces = (Contract, )

class FuturesContract(graphene.ObjectType):
    expiration_date = graphene.Field(Date)

    class Meta:
        interfaces = (Contract, )

class ContractUnion(Union):
    class Meta:
        types = (CashContract, FuturesContract)
```
I get the following error when I try and request a common attribute, `id`, in my query:
```
"message": "Cannot query field \"id\" on type \"ContractUnion\". Did you mean to use an inline fragment on \"Contract\", \"CashContract\" or \"FuturesContract\"?"
```
=(
username_1: @username_2 Does your query contain an inline Fragment http://graphql.org/learn/queries/#inline-fragments ?
Since the union returns different types you need a query like this :
```
{
search {
__typename
... on CashContract{
id
contract_status
}
... on FuturesContract{
expiration_date
}
}
}
```
username_2: @username_1 that makes sense, but I also see that in the official guide you can ask for shared attributes between the two different fragments: http://graphql.org/learn/queries/#inline-fragments
```
query HeroForEpisode($ep: Episode!) {
hero(episode: $ep) {
name
... on Droid {
primaryFunction
}
... on Human {
height
}
}
}
```
In this case `name` is on both, so you can ask for it without using a fragment.
username_3: Closing the issue
Status: Issue closed
username_2: @username_3 thanks for the response. I actually don't even want to use a union type; I want to pass the interface class into the List for my query field, but that does not work. I found that graphene wasn't happy about putting an interface somewhere an ObjectType was expected. Is this intentional?
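In the meantime, the error message above hints at a workaround: within a union selection, shared fields can still be requested through an inline fragment on the `Contract` interface itself, e.g. (assuming a hypothetical `contracts` field returning the union):
```graphql
{
  contracts {
    __typename
    ... on Contract {
      id
    }
    ... on CashContract {
      deliveryDate
    }
  }
}
```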
username_0: @username_3
Hi Syrus, actually I opened this issue because there are no docs about using Union on our `http://graphene-python.org/` website; maybe we could keep this open?
Status: Issue closed
|
dividab/tsconfig-paths | 332072243 | Title: Q: Can I glob paths?
Question:
username_0: ```
"paths": {
"@ent/*": [
"server/src/entities/**/*"
]
}
```
The above doesn't work, but can I get this globbing behavior somehow in my paths?
Thx
Answers:
username_1: This package follows the rules that the TypeScript compiler uses for the paths section in tsconfig.json. Those rules do not allow globbing, so this is not possible. For more information see the "Path mapping" section in the [typescript docs](https://www.typescriptlang.org/docs/handbook/module-resolution.html).
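For reference, a mapping that works without globs: `paths` patterns allow at most a single `*` wildcard, and the substituted portion may itself contain slashes, so nested folders are already matched during module resolution.
```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@ent/*": ["server/src/entities/*"]
    }
  }
}
```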
Status: Issue closed
|
NMGRL/pychron | 310636791 | Title: pipeline node indicator
Question:
username_0: distinguish enabled and disabled nodes
Answers:
username_0: The options seem to have changed. I don't see enable/disable; instead there is disable autoconfigure/enable autoconfigure. This option seems to toggle, but I am not sure of its function, and the node is seemingly not marked as disabled.
Status: Issue closed
|
storyblok/storyblok | 643850531 | Title: Add option for setting a default real path in the schema options
Question:
username_0: **The feature would affect:** (check one with "x")
- [x] *app.storyblok.com (CMS - Interface)*
- [ ] *api.storyblok.com (CMS - Content Delivery API)*
- [ ] *mapi.storyblok.com (CMS - Management API)*
- [ ] *capi.storyblok.com (Commerce - API)*
- [ ] *Commerce - Interface*
- [ ] *Other*
**Is your feature request related to a problem? Please describe.**
At the moment it isn't possible to set a default real path for a certain content type which requires to enter it every time again.
**Describe the solution you'd like**
By adding the possibility to define a default real path in the schema setting a large number of stories up would be an easier task when the environment configuration requires to set the same real path configuration for a certain type of content for every instance of that particular type.
|
awesomeo184/java-study | 968851580 | Title: Week 12 assignment: Annotations
Question:
username_0: ## Goal
Learn about Java annotations.
## Topics to study (required)
* How to define an annotation
* @Retention
* @Target
* @Documented
* Annotation processors
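For reference, a minimal sketch of a custom annotation that uses the three meta-annotations listed above:
```java
import java.lang.annotation.*;

@Retention(RetentionPolicy.RUNTIME) // kept in the class file, visible via reflection
@Target(ElementType.METHOD)         // may only annotate methods
@Documented                         // shows up in generated Javadoc
public @interface Benchmark {
    int warmups() default 0;
}
```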
Answers:
username_0: https://wisdom-and-record.tistory.com/54?category=907462
If you'd like to try writing an annotation processor yourself, the following article should help.
http://hannesdorfmann.com/annotation-processing/annotationprocessing101/
username_1: https://cotak.tistory.com/163
username_2: https://velog.io/@username_2/%EC%95%A0%EB%84%88%ED%85%8C%EC%9D%B4%EC%85%98
username_3: https://blog.naver.com/jaewhan232/222476548279
username_4: https://blog.naver.com/ss_1634/222476598038
Status: Issue closed
|
andypicke/WeatherComparer | 646006127 | Title: Update to use in R 4.0.2
Question:
username_0: Hi there,
Are you planning to update so this can be used in R version 4.0.2?
Thanks!
Status: Issue closed
Answers:
username_1: Hi, the main reason the app is not working is that the weather data I was using is no longer freely available (see 'Current Status' at the bottom of the README). If I have time I might try to find an alternative source of weather data and adapt the code to use it. |
aicis/fresco | 119054331 | Title: File-based storage option
Question:
username_0: Currently, the SCE provides a key/value store to suites. This is fine for some suites, but others rely on massive data to be preprocessed. For these a key value store is not optimal, because it is a random-access thing. We should therefore also support some kind of streamed storage, where reading and writing is on a readNext and writeNext basis. One sequential store per thread, perhaps. This can be efficiently implemented by a file stream.<issue_closed>
Status: Issue closed |
nickbabcock/boxcars | 602013662 | Title: Support for RL1.76 replays
Question:
username_0: Just letting you know that the new RL 1.76 update came with some new classes/properties for the heatseeker mode.
Commit with changes needed for my replay parser: https://github.com/username_0/CPPRP/commit/16f248e0619cb058d9512d06c65e6cc770b57b67
Based on summary given by CantFly in the ballchasing discord here: https://discordapp.com/channels/577096078843707392/577564876000460821/700580447750717531
Attached 2 replays from new gamemode [heatseekerreplays.zip](https://github.com/username_1/boxcars/files/4493484/heatseekerreplays.zip)
Answers:
username_1: Wow much appreciated, thanks! 🙇
Status: Issue closed
|
async-email/async-imap | 708202947 | Title: Crashes on doing select
Question:
username_0: * 995 RECENT
* OK [UNSEEN 7]
* OK [UIDVALIDITY 1] UIDs valid
* OK [UIDNEXT 1386] Predicted next UID
* OK [HIGHESTMODSEQ 1000000000000002229]
* FLAGS (\Answered \Flagged \Deleted \Seen \Draft \*)
* OK [PERMANENTFLAGS (\* \Answered \Flagged \Deleted \Seen \Draft)] Permanent flags
A0003 OK [READ-WRITE] SELECT completed
flags: [], exists: 995, recent: 995, unseen: Some(7), permanent_flags: [],uid_next: Some(1386), uid_validity: Some(1)
< A0004
<
< FETCH 1:* FLAGS
<
0
< A0005
<
< LOGOUT
<
```
I don't know exactly what is going on. What I did was write a little Ruby script to decode the bytes in the Io error, which looks like this:
```
* FLAGS (\Answered \Flagged \Deleted \Seen \Draft \*)
* OK [PERMANENTFLAGS (\* \Answered \Flagged \Deleted \Seen \Draft)] Permanent flags
A0003 OK [READ-WRITE] SELECT completed
```
I suppose it is choking on the output of the select call or something?
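For anyone reproducing the decoding step: assuming the bytes from the Io error were captured as an integer array, a one-liner like this is enough:
```ruby
bytes = [42, 32, 70, 76, 65, 71, 83] # sample prefix of the captured bytes
puts bytes.pack("C*")                # => "* FLAGS"
```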
Answers:
username_1: `FLAGS` are invalid, they can't contain `*`.
Same issue as https://github.com/deltachat/deltachat-core-rust/issues/1834
The "bug" is already fixed in `imap-proto` master, but it is not released yet: https://github.com/djc/tokio-imap/pull/91
I have opened an issue about the `imap-proto` release: https://github.com/djc/tokio-imap/issues/93
You can switch to using their git repo or wait for release.
Out of curiosity, which mail server it is?
username_0: It's the IMAP server of Zoho. I figured they were doing something funky, but I don't know enough about IMAP to understand what was going on. They are pretty big, I believe, though.
username_1: Since you are their customer, could you ping their support and ask them to fix the bug?
See here, they replied they'll fix it in a "month or two": https://github.com/deltachat/deltachat-core-rust/issues/1834#issuecomment-692442748
Maybe if more people report that they are affected by this issue, they will fix the server sooner. |
ungoogled-software/ungoogled-chromium-macos | 459572273 | Title: clearing extended file attributes needs the '-s' switch when building
Question:
username_0: When building,
`xattr -cr out/Default/Chromium.app`
output:
`xattr: No such file: out/Default/Chromium.app/Contents/Frameworks/Chromium Framework.framework/Libraries`
The "Libraries" is a symbolic link to "Versions/Current/Libraries", so `xattr` needs the `-s` switch to avoid this error.
`xattr -csr out/Default/Chromium.app`
Status: Issue closed |
DINA-Web/agent-specs | 730387414 | Title: Cleanup agent-specs
Question:
username_0: - agent.yaml should be the only file with the section before paths since all the other files are included from paths
- All verbs should use the same tense and start with a capital letter (e.g. Get metadata)
- `Get` without id (Get persons) should use the verb `List` and get with id should use the verb `Get` (not `Find`)<issue_closed>
Status: Issue closed |
mhenrixon/sidekiq-unique-jobs | 505292504 | Title: incorrect `:while_executing` behavior
Question:
username_0: [Newrelic::CustomMetricService] record Custom/Sidekiq::Queue::TestQueue/size, 1
[Newrelic::CustomMetricService] record Custom/Sidekiq::Queue::TestQueue/latency, 0.0014061927795410156
=> "2b145d5d10fb3dd9523c6f4c"
2019-10-10T13:39:42.642Z pid=11435 tid=gq64bagkv class=TestJob jid=35c69d21c26b4385ac0f1e14 INFO: start
2019-10-10T13:39:44.146Z pid=11435 tid=gq64bah33 class=TestJob jid=9b8a444211a6287952cb03d6 INFO: start
2019-10-10T13:39:44.666Z pid=11435 tid=gq64bagkv class=TestJob jid=35c69d21c26b4385ac0f1e14 uniquejobs=server =uniquejobs:c3bd986d71c5f4ff21ed14cbafae3b28:RUN DEBUG: Executed queue.lua in 1ms
2019-10-10T13:39:44.668Z pid=11435 tid=gq64bagkv class=TestJob jid=35c69d21c26b4385ac0f1e14 uniquejobs=server =uniquejobs:c3bd986d71c5f4ff21ed14cbafae3b28:RUN DEBUG: Executed lock.lua in 2ms
2019-10-10T13:39:44.668Z pid=11435 tid=gq64bagkv class=TestJob jid=35c69d21c26b4385ac0f1e14 uniquejobs=server =uniquejobs:c3bd986d71c5f4ff21ed14cbafae3b28:RUN INFO: !!! start with ["foo", 1]
2019-10-10T13:39:54.411Z pid=11435 tid=gq64bagkv class=TestJob jid=35c69d21c26b4385ac0f1e14 uniquejobs=server =uniquejobs:c3bd986d71c5f4ff21ed14cbafae3b28:RUN INFO: !!! finish
2019-10-10T13:39:54.412Z pid=11435 tid=gq64bagkv class=TestJob jid=35c69d21c26b4385ac0f1e14 uniquejobs=server =uniquejobs:c3bd986d71c5f4ff21ed14cbafae3b28:RUN DEBUG: Executed unlock.lua in 2ms
2019-10-10T13:39:54.413Z pid=11435 tid=gq64bagkv class=TestJob jid=35c69d21c26b4385ac0f1e14 elapsed=11.771 INFO: done
2019-10-10T13:39:54.414Z pid=11435 tid=gq64bagkv class=TestJob jid=2b145d5d10fb3dd9523c6f4c INFO: start
2019-10-10T13:39:55.272Z pid=11435 tid=gq64bah33 class=TestJob jid=9b8a444211a6287952cb03d6 uniquejobs=server =uniquejobs:c3bd986d71c5f4ff21ed14cbafae3b28:RUN DEBUG: Executed queue.lua in 0ms
2019-10-10T13:39:55.274Z pid=11435 tid=gq64bagkv class=TestJob jid=2b145d5d10fb3dd9523c6f4c uniquejobs=server =uniquejobs:c3bd986d71c5f4ff21ed14cbafae3b28:RUN DEBUG: Executed queue.lua in 1ms
2019-10-10T13:39:55.275Z pid=11435 tid=gq64bah33 class=TestJob jid=9b8a444211a6287952cb03d6 uniquejobs=server =uniquejobs:c3bd986d71c5f4ff21ed14cbafae3b28:RUN DEBUG: Executed lock.lua in 0ms
2019-10-10T13:39:55.275Z pid=11435 tid=gq64bah33 class=TestJob jid=9b8a444211a6287952cb03d6 uniquejobs=server =uniquejobs:c3bd986d71c5f4ff21ed14cbafae3b28:RUN INFO: !!! start with ["foo", 2]
2019-10-10T13:39:55.275Z pid=11435 tid=gq64bagkv class=TestJob jid=2b145d5d10fb3dd9523c6f4c uniquejobs=server =uniquejobs:c3bd986d71c5f4ff21ed14cbafae3b28:RUN WARN: Timed out after 0s while waiting for primed token (digest: uniquejobs:c3bd986d71c5f4ff21ed14cbafae3b28:RUN, job_id: 2b145d5d10fb3dd9523c6f4c)
2019-10-10T13:39:55.470Z pid=11435 tid=gq64bagkv class=TestJob jid=2b145d5d10fb3dd9523c6f4c uniquejobs=server =uniquejobs:c3bd986d71c5f4ff21ed14cbafae3b28:RUN DEBUG: Executed unlock.lua in 98ms
2019-10-10T13:39:55.830Z pid=11435 tid=gq64bagkv class=TestJob jid=2b145d5d10fb3dd9523c6f4c elapsed=1.416 INFO: done
2019-10-10T13:40:04.356Z pid=11435 tid=gq64bah33 class=TestJob jid=9b8a444211a6287952cb03d6 uniquejobs=server =uniquejobs:c3bd986d71c5f4ff21ed14cbafae3b28:RUN INFO: !!! finish
2019-10-10T13:40:04.357Z pid=11435 tid=gq64bah33 class=TestJob jid=9b8a444211a6287952cb03d6 uniquejobs=server =uniquejobs:c3bd986d71c5f4ff21ed14cbafae3b28:RUN DEBUG: Executed unlock.lua in 1ms
2019-10-10T13:40:04.358Z pid=11435 tid=gq64bah33 class=TestJob jid=9b8a444211a6287952cb03d6 elapsed=20.212 INFO: done
```
Answers:
username_1: First of all, the while executing job was completely broken in 5.x.
Also, some things have changed between v5 and v7.
```ruby
sidekiq_options(
  queue: 'test_queue',
  lock: :while_executing,
  lock_timeout: 5,
  unique_args: -> (args) do
    [
      args[0]
    ]
  end
)
```
The default `lock_timeout` (seconds to wait for a lock) is zero and you are lucky any of your jobs are processed except the first one. :)
It looks like everything is working fine to me but there is a lot of information missing for me to truly be helpful:
1. What does your `sidekiq.yml` look like?
2. What does your `sidekiq.rb` look like?
3. How do you start sidekiq?
username_1: Thanks a bunch @username_0. It is quite possible `until_and_while_executing` isn't completely working yet.
I'm really happy while executing works for you guys on the older version. I think the locking was done thread locally back then so multiple threads would create problems but I might be wrong on that one. It has been a long time.
What I do know for sure is that you should avoid v6 and aim for v7 instead.
Might I ask what result you get instead on `until_and_while_executing` on `v7.beta2`? I have it tested but there might be some condition I didn't get right yet.
username_0: so, what I've found so far:
I'm using this file for testing
```ruby
class AJob
  include Sidekiq::Worker

  sidekiq_retry_in { |count| 3 }

  @@counter = 0

  sidekiq_options(
    queue: 'test_queue',
    lock: :while_executing,
    lock_timeout: 60 * 5,
    retry: 2,
    unique_args: -> (args) do
      [
        args[0]
      ]
    end
  )

  def perform(*arguments)
    puts "!!! start #{arguments.inspect}"
    sleep 3
    if @@counter < 2
      puts "!!! raise #{arguments.inspect}"
      raise
    end
    puts "!!! finish #{arguments.inspect}"
  ensure
    @@counter += 1
  end
end
```
for `while_executing`, when I run 5 jobs as
```ruby
5.times { |i| puts AJob.perform_async('a', i); sleep 1 }
```
only 2 or 3 of them finishing,
and for `until_and_while_executing`:
```ruby
class AJob
  include Sidekiq::Worker

  sidekiq_options(
    queue: 'test_queue',
    lock: :until_and_while_executing,
    lock_timeout: 60 * 5,
    unique_args: -> (args) do
      [
        args[0]
      ]
    end
  )

  def perform(*arguments)
    puts "!!! start #{arguments.inspect}"
    sleep 10
    puts "!!! finish #{arguments.inspect}"
  end
end
```
when I'm trying to push jobs as
`5.times { |i| puts AJob.perform_async('a', i); sleep 1 }`
i'm getting 3 jobs finished instead of 2, and, what is more interesting, there is a delay after pushing 3th and 4th job
username_1: Really nice of you to help debug this.
What happens if you set the concurrency either higher or lower? Do you get different results? What I discovered on v6 which led me to refactor away towards v7 was that given too high concurrency my jobs wouldn't complete. I would, in fact, run into complete lockdowns where everything stopped working.
All the workers were busy but not a single one processing jobs.
With a too low concurrency, everything worked perfectly and this is how I developed and sort of end to end tested v6.
I would be keen on learning if changing the concurrency changes the result.
username_0: for `until_and_while_executing` - tried concurrency 1 and 10 - still the same delay on job pushing,
for `while_executing` - yep, seems on concurrency 1 all jobs start, retry and finish properly, for 2 or more - some jobs disappear
username_1: @username_0 the delay on push will be because you are having a lock timeout of 5 minutes. I strongly recommend against that but it shows the need for separating the lock timeout for client and server middlewares.
username_1: @username_0 can you try to upgrade `sidekiq-unique-jobs` and let me know if you are still having trouble? I've released quite a few bug fixes since you posted this.
username_0: @username_1 hi! I've tested a latest version of `sidekiq-unique-jobs` `7.0.0.beta9` with `sidekiq` `6.0.3` with this slightly modified script
```ruby
class AJob
  include Sidekiq::Worker

  sidekiq_retry_in { 3 }

  sidekiq_options(
    queue: 'test_queue',
    lock: :while_executing,
    lock_timeout: 60,
    retry: 2,
    on_conflict: :log,
    unique_args: -> (args) do
      [
        args[0]
      ]
    end
  )

  def perform(*arguments)
    log(arguments, :start)
    sleep 3
    if $redis.get('counter').to_i < 2
      log(arguments, :raise)
      raise 'Need retry!'
    end
    log(arguments, :finish)
  ensure
    $redis.incr 'counter'
  end

  private

  def log(arguments, action)
    puts " !!! #{action} #{arguments.inspect} at #{Time.now.to_i - $redis.get("start").to_i} sec, counter is #{$redis.get("counter")}"
  end
end
```
so, for the concurrency 1 it seems to work correctly:
```
$redis.set("start", Time.now.to_i); $redis.set("counter", 0); 5.times { |i| puts AJob.perform_async('a', i); sleep 1 }
526bba6b0a21febf20b4557a
41933d0529b39f2dba651396
10131f08909d3942d238f36a
2bdd9905f4938f6bbc132566
7928e1a60b3e7762fe5cce95
```
```
2019-12-09T11:28:01.831Z pid=22249 tid=gnt3gmbgt INFO: Booting Sidekiq 6.0.3 with redis options {:url=>"redis://localhost:6379", :namespace=>"sidekiq", :network_timeout=>5, :id=>"Sidekiq-server-PID-22249"}
2019-12-09T11:28:06.344Z pid=22249 tid=gnt3gmbgt INFO: Running in ruby 2.6.2p47 (2019-03-13 revision 67232) [x86_64-linux]
2019-12-09T11:28:06.344Z pid=22249 tid=gnt3gmbgt INFO: See LICENSE and the LGPL-3.0 for licensing details.
2019-12-09T11:28:06.344Z pid=22249 tid=gnt3gmbgt INFO: Upgrade to Sidekiq Pro for more features and support: http://sidekiq.org
2019-12-09T11:28:06.348Z pid=22249 tid=gnt3gmbgt DEBUG: Executed update_version.lua in 1ms
2019-12-09T11:28:06.348Z pid=22249 tid=gnt3gmbgt uniquejobs=upgrade_locks INFO: Already upgraded to 7.0.0.beta9
2019-12-09T11:28:06.349Z pid=22249 tid=gnt3gmbgt uniquejobs=reaper INFO: Starting Reaper
2019-12-09T11:28:06.349Z pid=22249 tid=gnt3gmbgt DEBUG: Client Middleware: SidekiqUniqueJobs::Middleware::Client
2019-12-09T11:28:06.350Z pid=22249 tid=gnt3gmbgt DEBUG: Server Middleware: SidekiqUniqueJobs::Middleware::Server, SidekiqMarginaliaIntegration
2019-12-09T11:28:06.350Z pid=22249 tid=gnt3gmbgt INFO: Starting processing, hit Ctrl-C to stop
2019-12-09T11:28:06.350Z pid=22249 tid=gnt3gmbgt DEBUG: {:queues=>["test_queue"], :labels=>[], :concurrency=>1, :require=>".", :environment=>nil, :timeout=>25, :poll_interval_average=>nil, :average_scheduled_poll_interval=>5, :error_handlers=>[#<Sidekiq::ExceptionHandler::Logger:0x0000557fb32d77c8>, #<Method: Airbrake::Sidekiq::ErrorHandler#notify_airbrake>], :death_handlers=>[], :lifecycle_events=>{:startup=>[], :quiet=>[], :shutdown=>[#<Proc:0x0000557fb42735d8@/home/freeman/.rvm/gems/ruby-2.6.2/gems/sidekiq-unique-jobs-7.0.0.beta9/lib/sidekiq_unique_jobs/middleware.rb:42>], :heartbeat=>[]}, :dead_max_jobs=>10000, :dead_timeout_in_seconds=>15552000, :reloader=>#<Sidekiq::Rails::Reloader @app=Calendly::Application>, :max_retries=>10, :config_file=>"./config/sidekiq.yml", :strict=>true, :tag=>"calendly", :identity=>"laptop:22249:c6ff28850cb2"}
[Truncated]
2019-12-09T11:33:01.004Z pid=23127 tid=gpsk6vsh7 class=AJob jid=31ba2ab728fd7685ed549cf4 elapsed=5.327 INFO: done
2019-12-09T11:33:01.005Z pid=23127 tid=gpsk6vqhj WARN: /mnt/sdd/encrypted/calendly/app/jobs/a_job.rb:23:in `perform'
2019-12-09T11:33:02.147Z pid=23127 tid=gpsk6voi3 class=AJob jid=a231762cf661b2e2bccab528 INFO: start
2019-12-09T11:33:02.147Z pid=23127 tid=gpsk6vp4v DEBUG: enqueued retry: {"class":"AJob","args":["a",0],"retry":2,"queue":"test_queue","lock":"while_executing","lock_timeout":60,"on_conflict":"log","unique_args":["a"],"jid":"a231762cf661b2e2bccab528","created_at":1575891171.6642435,"lock_ttl":null,"unique_prefix":"uniquejobs","unique_digest":"uniquejobs:cc8d44d29605ffd70a159069ab3d8fcc:RUN","enqueued_at":1575891171.6648395,"error_message":"Need retry!","error_class":"RuntimeError","failed_at":1575891176.4917407,"retry_count":0}
2019-12-09T11:33:02.153Z pid=23127 tid=gpsk6voi3 class=AJob jid=a231762cf661b2e2bccab528 uniquejobs=server while_executing=uniquejobs:cc8d44d29605ffd70a159069ab3d8fcc:RUN:RUN:RUN DEBUG: Executed queue.lua in 1ms
2019-12-09T11:33:02.155Z pid=23127 tid=gpsk6voi3 class=AJob jid=a231762cf661b2e2bccab528 uniquejobs=server while_executing=uniquejobs:cc8d44d29605ffd70a159069ab3d8fcc:RUN:RUN:RUN DEBUG: Executed lock.lua in 1ms
!!! start ["a", 0] at 11 sec, counter is 2
!!! finish ["a", 0] at 14 sec, counter is 2
2019-12-09T11:33:05.161Z pid=23127 tid=gpsk6voi3 class=AJob jid=a231762cf661b2e2bccab528 uniquejobs=server while_executing=uniquejobs:cc8d44d29605ffd70a159069ab3d8fcc:RUN:RUN:RUN DEBUG: Executed unlock.lua in 1ms
2019-12-09T11:33:05.162Z pid=23127 tid=gpsk6voi3 class=AJob jid=a231762cf661b2e2bccab528 elapsed=3.015 INFO: done
2019-12-09T11:33:06.230Z pid=23127 tid=gpsk6vs5r class=AJob jid=97c4ea12ce7b6962fd820741 INFO: start
2019-12-09T11:33:06.231Z pid=23127 tid=gpsk6vp4v DEBUG: enqueued retry: {"class":"AJob","args":["a",1],"retry":2,"queue":"test_queue","lock":"while_executing","lock_timeout":60,"on_conflict":"log","unique_args":["a"],"jid":"97c4ea12ce7b6962fd820741","created_at":1575891172.667428,"lock_ttl":null,"unique_prefix":"uniquejobs","unique_digest":"uniquejobs:cc8d44d29605ffd70a159069ab3d8fcc:RUN","enqueued_at":1575891172.6677046,"error_message":"Need retry!","error_class":"RuntimeError","failed_at":1575891181.0016289,"retry_count":0}
2019-12-09T11:33:06.236Z pid=23127 tid=gpsk6vs5r class=AJob jid=97c4ea12ce7b6962fd820741 uniquejobs=server while_executing=uniquejobs:cc8d44d29605ffd70a159069ab3d8fcc:RUN:RUN:RUN DEBUG: Executed queue.lua in 1ms
2019-12-09T11:33:06.239Z pid=23127 tid=gpsk6vs5r class=AJob jid=97c4ea12ce7b6962fd820741 uniquejobs=server while_executing=uniquejobs:cc8d44d29605ffd70a159069ab3d8fcc:RUN:RUN:RUN DEBUG: Executed lock.lua in 1ms
!!! start ["a", 1] at 15 sec, counter is 3
!!! finish ["a", 1] at 18 sec, counter is 3
2019-12-09T11:33:09.244Z pid=23127 tid=gpsk6vs5r class=AJob jid=97c4ea12ce7b6962fd820741 uniquejobs=server while_executing=uniquejobs:cc8d44d29605ffd70a159069ab3d8fcc:RUN:RUN:RUN DEBUG: Executed unlock.lua in 1ms
2019-12-09T11:33:09.245Z pid=23127 tid=gpsk6vs5r class=AJob jid=97c4ea12ce7b6962fd820741 elapsed=3.014 INFO: done
```
jobs a0 and a1 have raised an exceptions, and have been retried, but jobs a2, a3 and a4 have not been started at all
username_1: @username_0 is it a rails app? If so, for some reason Sidekiq doesn't do what it should... It is a Sidekiq bug related to rails reloading feature. The block that is supposed to execute your worker code isn't executed.
The problem is here: https://github.com/mperham/sidekiq/blob/master/lib/sidekiq/processor.rb#L131 where `@reloader.call` is never called. That whole block is skipped UNLESS... you add `config.cache_classes = true` and `config.eager_load = true`.
username_0: @username_1 I've tried with `config.cache_classes = true` and `config.eager_load = true` - and got the same result
username_2: @username_1 so what are our options here? Any change this will be fixed?
username_3: @username_1 bumping this again. Anything we can do to help?
username_1: @username_3 @username_2 @username_0 i will take another stab at this tonight when my daughters are in bed.
username_1: I can't replicate this problem in V7, let me know if I missed something that would make it fail. I basically just copied the worker `AJob` and renamed it to something more specific and tested with the same thing and I can't make the jobs not finish.
username_1: I'm not sure this is possible to solve for v6. @username_3, @username_2, @username_0 which version (if any) are you on at the moment?
username_3: @username_1 , these are our current sidekiq versions:
```
gem 'sidekiq', '5.2.5'
gem 'sidekiq-unique-jobs', '5.0.10'
```
Last time we tried to upgrade these gems and ran into these problem.
@username_0 , can you try re-creating?
Status: Issue closed
username_0: @username_1 👍🏼 confirming correct work in 7.0.4 version
username_0: @username_1 tested again on sidekiq 6.2 and latest 7.0.7 version, with concurrency 20
please correct me if I'm wrong: with the `while_executing` policy, all jobs can be placed in the queue, but only one will be executed at a time, right? All placed jobs should still be executed one by one, and none should be lost?
I see this in my test logs:
```
put 1 at 0.00 sec c0a0fbd95166ca857e708dea
put 2 at 2.01 sec d73b362d2b8105e9d19b6e75
put 3 at 4.01 sec 2cc606df8830787593fcdf76
start 1 at 5.30 sec (I started sidekiq server intentionally later so that it starts processing on ~6 second after 1 job was put in queue)
put 4 at 8.02 sec 8a743745aea7ec6e40bbd184
raise 1 at 11.30 sec (for test purpose job 1 raises an error after 6 second after it was started)
put 5 at 13.03 sec f798577af08b481e240537b3
start 5 at 13.04 sec
finish 5 at 19.04 sec
start 1 at 19.76 sec
finish 1 at 25.76 sec
```
seems 2, 3 and 4 jobs are lost
If I put jobs when the server is running, I see this:
```
put 1 at 0.00 sec deaa33469dc42a22d180e922
start 1 at 1.31 sec
put 2 at 2.01 sec fb53af3ab6067a34fd0c8439
put 3 at 4.01 sec d0d78ca930eda4734433e30c
raise 1 at 7.31 sec
put 4 at 8.02 sec b27c183eae2035669e9d528e
start 2 at 8.12 sec
put 5 at 13.03 sec 27d263d3f5c8e408fdbeb3c1
finish 2 at 14.12 sec
start 1 at 16.60 sec
finish 1 at 22.60 sec
```
again, job 3, 4 and 5 are lost
username_1: @roman-melnor it depends on your configuration. If you have a lock_timeout set, it only waits that long from when the job is picked off the queue. I think there should be some log entry for that. Perhaps add `on_conflict: :log` to debug.
If you want to make sure no job is lost you can use `on_conflict: :raise` which will allow the job to be retried. You could also reschedule it. |
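For example, a sketch using the options discussed in this thread:
```ruby
class AJob
  include Sidekiq::Worker

  sidekiq_options(
    lock: :while_executing,
    lock_timeout: 10,    # seconds to wait for the lock once picked up
    on_conflict: :raise  # failed lock -> exception -> normal Sidekiq retry
  )
end
```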
arakawatomonori/covid19-surveyor | 605449968 | Title: Kagawa and Miyazaki show 0 entries, so there may be a problem preventing responses
Question:
username_0: At the moment, Kagawa and Miyazaki prefectures appear to have 0 entries. Possible causes include:
- no one available to answer
- missed data collection
- a sorting mistake at display time
Please investigate.
Answers:
username_1: [](https://gyazo.com/6ee42397594609bfd9676d331c0c111e)
username_1: Gifu prefecture now shows something.
Could it simply be that there are none?
 |
m93a/filtrex | 877552538 | Title: Add number suffixes (support for units)
Question:
username_0: If a `NUMBER` is followed by a `SYMBOL`, we should treat it as a suffixed number and transformed to something like `options.suffixedNumber( num, sfx )`. If there is no `suffixedNumber` method, we should throw an error that makes sense to the *user* as well as the developer.
This would allow us to write expressions like:
```
width of screen < 800px
```
Answers:
username_0: This will be fixed by #38 and #39.
Status: Issue closed
|
tiberiuzuld/angular-gridster2 | 455531894 | Title: Is there any property for gridster item to hidden it
Question:
username_0: Hello @username_2
Is there any property on a gridster item to show / hide the selected item?
Thanks
Answers:
username_1: also would like to know if there is a feature to minimize or maximize the widgets programmatically?
username_2: Hello,
No to both questions.
If you hide one item and then you drag something over it, what will happen then?
You need to make both things on your own outside of the library. |
reduxjs/redux-toolkit | 619326007 | Title: Custom action type resolution
Question:
username_0: Our team has been looking into adding RTK to our project, but we have found a very minor nuisance: our action names use `CONSTANT_CASING`, while RTK enforces `/` as separators and favors (at least from `createSlice`) `another/letter/casing`. We believe fixing this in RTK, rather than renaming hundreds of actions in our app, would help us in gradual adoption.
While this is very minor, I believe fixing from RTK would be extremely simple. All we'd have to do is allow clients to pass an optional `getType` function to `createSlice` and `createAsyncThunk` that we could call from [places](https://github.com/reduxjs/redux-toolkit/blob/master/src/createSlice.ts#L202) [like](https://github.com/reduxjs/redux-toolkit/blob/master/src/createAsyncThunk.ts#L244) [these](https://github.com/reduxjs/redux-toolkit/blob/master/src/createAsyncThunk.ts#L254).
I'd be glad to create a PR to implement this, but I thought I should run the ideia by the RTK team first. Let me know what you think.
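For concreteness, a sketch of what the opt-in might look like (the `getType` option is hypothetical; it is the proposal, not an existing API):
```ts
const todosSlice = createSlice({
  name: "TODOS",
  initialState: [],
  reducers: { addTodo(state, action) { /* ... */ } },
  // hypothetical: produce "TODOS_ADD_TODO" instead of "TODOS/addTodo"
  getType: (sliceName: string, actionName: string) =>
    `${sliceName}_${actionName.replace(/([a-z])([A-Z])/g, "$1_$2").toUpperCase()}`,
});
```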
Answers:
username_1: Appreciate the suggestion, but:
RTK is deliberately meant to be opinionated about most things. One of those opinions is [structuring action types as `"domain/eventName"`](https://redux.js.org/style-guide/style-guide#write-action-types-as-domaineventname). I don't intend to change that, and I don't see a need to make it configurable.
I understand that that may make it a bit more difficult to adopt in your case, and if so, I'm sorry.
Is there any specific technical reason why you need to stick with the `CONSTANT_CASING` convention, or is it just a matter of familiarity?
username_0: I read this as meaning that, while `sliceName/actionName` was the arbitrary choice, other options would also be acceptable and RTK would be fine with `SLICE_NAME.ACTION_NAME`.
That said, there's no technical reason forcing us to use the latter, and we could also change this from our side. The only reason I wanted to make the change here was so we didn't need educate the whole company in using the new format, which could be a hassle in deciding for RTK.
If there's no value for RTK in adding this change, let's close this issue.
Thanks for your time!
username_1: Yeah, the intent of that phrasing was:
There's nothing different from a technical perspective between `"ADD_TODO"`, `"todos/addTodo"`, `"[Todos] Todo Added"`, and `"todos/todoAdded"`.
So, _for the purposes of the style guide and our official recommendations_, we picked a reasonable choice and went with it.
But, having picked that as a choice for the style guide in general, and having _specifically_ set up RTK to use that, I don't see sufficient benefit in making that configurable.
FWIW, I settled on that choice for a few reasons:
- It helps disambiguate action types, so that there's less likelihood of name clashes
- It goes along nicely with `createSlice` having a "name" field to it that gets prefixed with the reducer names to generate the action type strings
- It clarifies which slice the action relates to
- I've seen folks complain about the use of `CONSTANT_CASING` being less readable
Status: Issue closed
|
ying32/govcl | 696679060 | Title: My program needs administrator privileges, so I added nac.syso to the program directory, but I get "too many .rsrc sections"
Question:
username_0: github.com/username_1/govcl/pkgs/winappres(.rsrc): too many .rsrc sections
How can this be solved elegantly?
Administrator privileges are definitely required, and I'm definitely not going to touch the registry or anything like that.
Answers:
username_1: 
Read the comment inside the red box carefully.
Status: Issue closed
|
davidgiven/ack | 273929167 | Title: infinite rebuilds after editing input to llgen
Question:
username_0: If I build ack, then edit any file that llgen takes as input, then the build system gets stuck in infinite rebuilds. These commands reproduce the problem:
```
$ gmake
$ touch lang/pc/comp/statement.g
$ gmake
$ gmake
```
I'm using ninja. The first `gmake` builds everything. After touching statement.g, the second `gmake` seems to succeed. It runs llgen and rebuilds everything that depends on llgen's output. The third `gmake` repeats the actions of the second `gmake`, by running llgen again and rebuilding the same things.
To stop the infinite rebuilds and work around this problem, I run `gmake clean`. The problem recurs whenever I edit a .g file, or when I edit a .c file and the build system generates a .g file from the .c file.
I observe that llgen succeeds without touching some of its output files; this might be confusing ninja.
```
$ ls -l lang/pc/comp/statement.g
-rw-r--r-- 1 username_0 username_0 9276 Nov 14 14:53 lang/pc/comp/statement.g
$ cd ../ack-build/obj
$ ls -l lang/pc/comp/llgen/
total 312
-rw-r--r-- 1 username_0 username_0 14207 Nov 13 22:15 Lpars.c
-rw-r--r-- 1 username_0 username_0 959 Nov 13 22:15 Lpars.h
-rw-r--r-- 1 username_0 username_0 62723 Nov 13 22:15 declar.c
-rw-r--r-- 1 username_0 username_0 24979 Nov 13 22:15 expression.c
-rw-r--r-- 1 username_0 username_0 5335 Nov 13 22:15 program.c
-rw-r--r-- 1 username_0 username_0 29180 Nov 13 22:15 statement.c
-rw------- 1 username_0 username_0 1728 Nov 13 22:15 tokenfile.c
```
Answers:
username_1: Ugh. I loathe build systems...
It looks like llgen has some logic where it doesn't update file timestamps unless the content changes, and this is confusing ninja. (ninja -t browse and ninja -d explain are really useful here.)
Changing the util/LLgen/build.lua to:
"cd %{dir} && rm -f %{outs} && %{abspath(ins)}"
...appears to solve the problem.
Status: Issue closed
username_1: Fixed --- thanks for producing a test case for this (I'd noticed something weird here, but never tracked it down). |
Baenker/Servicemeldungen-Homematic | 787313375 | Title: Hide the device ID in service messages
Question:
username_0: Hi,
would it be possible to add a function in the next version that lets you turn off writing the device ID into the service message? Especially when the messages are output via TTS, e.g. on an Amazon Echo, it is quite annoying to have the ID read aloud.
Regards, Jeremy
Answers:
username_1: It would be possible, but I won't build it in; the effort is too high for me. I would like to rewrite the script completely and change quite a few things, but I currently lack the time and motivation.
You can change the script yourself at any time. All you have to remove is the +' ('+id_name +')' part.
It starts at lines 720 and 721:
```js
formatiert_servicemeldung.push(common_name +' ('+id_name +')' + ' - <font color="red">Spannung Batterien/Akkus gering.</font> '+Batterie +datum_seit);
servicemeldung.push(common_name +' ('+id_name +')' + ' - Spannung Batterien/Akkus gering. '+Batterie);
```
That means: wherever a line starts with formatiert_servicemeldung or servicemeldung, remove that part. You could also tweak the wording there so it sounds better in the speech output...
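The edited lines would then look like this (sketch):
```js
formatiert_servicemeldung.push(common_name + ' - <font color="red">Spannung Batterien/Akkus gering.</font> ' + Batterie + datum_seit);
servicemeldung.push(common_name + ' - Spannung Batterien/Akkus gering. ' + Batterie);
```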
Status: Issue closed
|
openfoodfoundation/ofn-install | 433837632 | Title: Database encoding
Question:
username_0: Whilst working on #393 I discovered that the encoding names for `LC_COLLATE` and `LC_CTYPE` we have been setting are incorrect. They're set by the following script as `en_US.utf8`, but the correct value is `en_US.UTF-8`:
https://github.com/openfoodfoundation/ofn-install/blob/96badf275642293086baf5e04bd88e863b305b81/roles/dbserver/files/fix_pg_encoding.sh#L12
In order to set the proper encoding on each of our databases, we would have to:
- Dump the database
- Kill all connections
- Drop and recreate the database with the the correct encoding values
- Restore the dumped data
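Roughly, per database (names illustrative):
```sh
pg_dump mydb > mydb.sql
dropdb mydb
createdb --template=template0 --encoding=UTF8 \
         --lc-collate=en_US.UTF-8 --lc-ctype=en_US.UTF-8 mydb
psql mydb < mydb.sql
```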
This needs some discussion obviously. We could write a role to handle it smoothly, but the process would take a small amount of downtime.
Answers:
username_0: I'm getting this with the new plugin which uses recommended values:
```
Database query failed: new collation (en_US.UTF-8) is incompatible with the collation of the template database (en_US.utf8)
```
username_0: It looks like maybe we can stick with the alternate spelling, and just force the new role to use it instead.
https://superuser.com/a/999151
Status: Issue closed
username_1: :see_no_evil: thank God
username_0: Yeah, it looks like it'll be ok. We can specify it with the plugin. Here's some production values:
 |
Azure/azure-cli | 439775575 | Title: Add support to cancel in-flight arm deployments via CLI
Question:
username_0: **Is your feature request related to a problem? Please describe.**
Currently, the only way to cancel in-flight arm deployments programmatically is via the REST api. Adding support to cancel in-flight deployments would alleviate complexity required when scripting deployments.
**Describe the solution you'd like**
Add an option to cancel in-flight arm deployments via the az cli.
**Describe alternatives you've considered**
The only alternative is the REST api, which requires a service principal. Logging into the portal to cancel requires manual intervention.
**Additional context**
None
Answers:
username_1: Command coverage gap we need to fill.
username_2: This feature request has been completed by PR #12915
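For later readers, the shipped command looks roughly like the following (verify against `az deployment group cancel --help`, since the exact command group and flags may differ by CLI version):
```sh
az deployment group cancel --resource-group myRG --name myDeployment
```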
Status: Issue closed
|
vknet/vk | 115288382 | Title: Authorization requirement for the Wall.Get / Wall.GetExtended methods
Question:
username_0: Good day.
The documentation states that a token is required only for `filter=suggests` and `filter=postponed`:
https://vk.com/dev/wall.get
Perhaps we could add `skipAuthorization = (filter != WallFilter.Suggests && filter != WallFilter.Postponed)` to the `_vk.Call` invocation?
Answers:
username_1: @username_0 I'm afraid that's not the cleanest approach architecturally. Imagine what the method would turn into if every special case were handled there.
username_1: Yes, it might be worth passing what you described above as a third parameter. You are welcome to contribute: make the change in the code and send a pull request.
username_0: @username_1 Without the special cases it wouldn't be interesting. :)
VK itself presumably handles this somewhere in the same way.
OK, I'll try it now. Thanks.
Status: Issue closed
username_1: Closed by pull request #146
CGATOxford/UMI-tools | 514259066 | Title: dedup with paired end reads - need to sort by name first?
Question:
username_0: Hi, I used STAR to produce alignments for paired end reads with the '--outSAMtype BAM SortedByCoordinate' setting, followed by 'samtools index ...', then called dedup with the --paired flag. Since 'samtools sort' sorts by coordinates rather than by name, it splits up read pairs. Is this issue accounted for by including the --paired flag when I call dedup, or do I need to call 'samtools sort -n' first?
Answers:
username_1: Closing due to inactivity. Please re-open if required.
Status: Issue closed
|
blockframes/blockframes | 509928895 | Title: Welcome to Bigger Boat landing page
Question:
username_0: ## Description
Landing Page should be the welcome to Bigger Boat homepage instead of the Main app home.
## Objective
- [ ] Link the landing page to the Welcome to bigger boat homepage (https://projects.invisionapp.com/d/main#/console/18708905/389335190/preview)
- [ ] Update the landing page to match this

- [ ] The button should lead to the sign in / sign up page
Answers:
username_1: A similar page already exists in libs/auth/src/lib/pages/welcome-view
username_0: Subtitle text should be:
"The only marketplace offering package deals for TV and Film rights."
Status: Issue closed
|
Piker-Alpha/ssdtPRGen.sh | 212112764 | Title: CPUPstates always stay in 800MHz-4000MHZ
Question:
username_0: MB: X99
CPU: 6800k (1200MHz - 4000MHz, turbo disabled in BIOS)
OS: 10.12.3
X86Platformplugin works fine but there's a problem:
No matter how I change my SSDT, the CPU P-states in IOReg always show the same 33 values, with a maximum of 4000MHz and a minimum of 800MHz.
I'm sure my SSDT is loaded, because when I removed it, X86PlatformPlugin didn't work.

PS: I only use FakeCPUID 0306D0 or 040674 (I've tried both) plus your reboot patch, and I've tried every model for the frequency vectors.
Answers:
username_1: This is fine. Open the generated SSDT and search for **Name (APLF, some value)**. This ACPI object points to the lowest possible frequency i.e. it won't (try to) run on any lower frequencies. |
aio-libs/aiokafka | 1056602945 | Title: aiokafka some time fails, possibly because no support of ZSTD compression
Question:
username_0: Periodically kafka consumer fails with traceback. 80% of time everything works fine, but sometimes kafka consumer fails with `UnboundLocalError`. Possibly because ZSTD compression is not available.
May be related to #501 and #708
See traceback below:
```python
Traceback (most recent call last):
File "/home/user/projects/my-kafka-consumer/venv/bin/my-kafka-consumer", line 11, in <module>
load_entry_point('my-kafka-consumer==0.0.1', 'console_scripts', 'my-kafka-consumer')()
File "/home/user/projects/my-kafka-consumer/venv/lib/python3.9/site-packages/typer/main.py", line 214, in __call__
return get_command(self)(*args, **kwargs)
File "/home/user/projects/my-kafka-consumer/venv/lib/python3.9/site-packages/click/core.py", line 1128, in __call__
return self.main(*args, **kwargs)
File "/home/user/projects/my-kafka-consumer/venv/lib/python3.9/site-packages/click/core.py", line 1053, in main
rv = self.invoke(ctx)
File "/home/user/projects/my-kafka-consumer/venv/lib/python3.9/site-packages/click/core.py", line 1395, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/user/projects/my-kafka-consumer/venv/lib/python3.9/site-packages/click/core.py", line 754, in invoke
return __callback(*args, **kwargs)
File "/home/user/projects/my-kafka-consumer/venv/lib/python3.9/site-packages/typer/main.py", line 500, in wrapper
return callback(**use_params) # type: ignore
File "/home/user/projects/my-kafka-consumer/venv/lib/python3.9/site-packages/my_kafka_consumer/app.py", line 87, in main
consumer.start()
File "/home/user/projects/my-kafka-consumer/venv/lib/python3.9/site-packages/my_kafka_consumer/my_consumer.py", line 151, in start
asyncio.run(self.consume())
File "/usr/lib/python3.9/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
return future.result()
File "/home/user/projects/my-kafka-consumer/venv/lib/python3.9/site-packages/my_kafka_consumer/my_consumer.py", line 135, in consume
async for msg in self.consumer:
File "/home/user/projects/my-kafka-consumer/venv/lib/python3.9/site-packages/aiokafka/consumer/consumer.py", line 1248, in __anext__
return (await self.getone())
File "/home/user/projects/my-kafka-consumer/venv/lib/python3.9/site-packages/aiokafka/consumer/consumer.py", line 1136, in getone
msg = await self._fetcher.next_record(partitions)
File "/home/user/projects/my-kafka-consumer/venv/lib/python3.9/site-packages/aiokafka/consumer/fetcher.py", line 1030, in next_record
message = res_or_error.getone()
File "/home/user/projects/my-kafka-consumer/venv/lib/python3.9/site-packages/aiokafka/consumer/fetcher.py", line 117, in getone
msg = next(self._partition_records)
File "/home/user/projects/my-kafka-consumer/venv/lib/python3.9/site-packages/aiokafka/consumer/fetcher.py", line 198, in __next__
return next(self._records_iterator)
File "/home/user/projects/my-kafka-consumer/venv/lib/python3.9/site-packages/aiokafka/consumer/fetcher.py", line 241, in _unpack_records
for record in next_batch:
File "aiokafka/record/_crecords/default_records.pyx", line 354, in aiokafka.record._crecords.default_records.DefaultRecordBatch.__iter__
File "aiokafka/record/_crecords/default_records.pyx", line 227, in aiokafka.record._crecords.default_records.DefaultRecordBatch._maybe_uncompress
UnboundLocalError: local variable 'uncompressed' referenced before assignment
```
I think the code should be modified to show why this happens, instead of failing with an `UnboundLocalError`.
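A guard along these lines in the batch decoder would make the failure self-explanatory (a sketch only; the names are illustrative, not the actual aiokafka internals):
```python
# inside the decompression path, before selecting a codec (sketch)
if compression_type == CODEC_ZSTD and not has_zstd_support():
    raise RuntimeError(
        "Broker sent a zstd-compressed batch, but zstd support is not "
        "installed; install the 'zstandard' package to consume this topic."
    )
```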
Status: Issue closed |
UBC-MDS/tidyplusR | 305807659 | Title: tests not passing
Question:
username_0: Loading tidyplusR
Loading required package: testthat
Testing tidyplusR
✔ | OK F W S | Context
Error in x[[method]](...) : attempt to apply non-function
══ Results ═══════════════════════════════════════════════════════════════════════════════════════════════════════════
OK: 0
Failed: 2
Warnings: 1
Skipped: 0
Keep trying!
```
Answers:
username_0: Also - why does it look like only 3 tests are being run? You have more than 3 tests in your `tests` directory...
username_0: ── 1. Error: (unknown) (@test_impute.R#5) ───────────────────────────────────────────────────────────────────────────
`path` does not exist
1: testthat::test_file("impute.R") at testthat/test_impute.R:5
2: stop("`path` does not exist", call. = FALSE)
══ testthat results ═════════════════════════════════════════════════════════════════════════════════════════════════
OK: 127 SKIPPED: 0 FAIL
```
username_0: You should also run `goodpractice::gp()` to identify likely sources of errors and style issues. Once you fix the tests so they run, try these functions to identify ways to further improve your package.
username_1: @username_0
Thanks a lot, will test these with `goodpractice::gp()`. I suppose most path, function call errors have occurred for other users and didn't show up for us. We'll do thorough testing this time
Status: Issue closed
username_2: Package check passed
549975f56f286dd88e40f688e574204860d2960f |
JetBrains/TeamCity.VSTest.TestAdapter | 537034003 | Title: No output when run by TeamCity agent
Question:
username_0: We are running xUnit tests using `dotnet vstest` as a part of our build process. The tests are being run by the agent but the results are not being reported. Looking at the raw agent logs it seems that no system messages are being emited.
However, I can log onto the build agent's machine as the user that the agent runs as, copy and paste the `dotnet vstest` command into `cmd.exe` and it will run the tests and emit system messages as expected.
Any suggestions on what is wrong in our setup gratefully received.
Team City version: TeamCity 2017.1.2 (build 46812)
Example DotNet command being used:
```
"C:\Program Files\dotnet\dotnet.EXE" vstest "C:\TeamCity\Agent.1\buildAgent\work\6cbcd640b5d0f18b\build\work\Test.Assembly.dll" /Logger:TeamCity "/TestAdapterPath:C:\TeamCity\Agent.1\buildAgent\work\6cbcd640b5d0f18b\build\work"
```
(I have tried running this command as the agent user in `cmd.exe` from various working directories and it always works as expected.)
Output of `dotnet --info`:
```
.NET Core SDK (reflecting any global.json):
Version: 3.1.100
Commit: cd82f<PASSWORD>
Runtime Environment:
OS Name: Windows
OS Version: 10.0.17763
OS Platform: Windows
RID: win10-x64
Base Path: C:\Program Files\dotnet\sdk\3.1.100\
Host (useful for support):
Version: 3.1.0
Commit: 65f04fb6db
.NET Core SDKs installed:
2.1.508 [C:\Program Files\dotnet\sdk]
2.1.509 [C:\Program Files\dotnet\sdk]
2.2.104 [C:\Program Files\dotnet\sdk]
3.1.100 [C:\Program Files\dotnet\sdk]
.NET Core runtimes installed:
Microsoft.AspNetCore.All 2.1.12 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.1.13 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.1.14 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.2.2 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.App 2.1.12 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.1.13 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.1.14 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.2.2 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.1.0 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.NETCore.App 2.1.12 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.13 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.14 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.2.2 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.1.0 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.WindowsDesktop.App 3.1.0 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
To install additional .NET Core runtimes or SDKs:
https://aka.ms/dotnet-download
```
Answers:
username_1: xunit tests **for .net core only** send test statistics to TeamCity via service messages without any additional adapters, so you could avoid passing `/Logger:TeamCity "/TestAdapterPath:C:\TeamCity\Agent.1\buildAgent\work\6cbcd640b5d0f18b\build\work"` there.
But there is [an issue](https://github.com/xunit/xunit/issues/1706), so you should set the verbosity level to at least `normal` by adding the argument `/logger:console;verbosity=normal`, like in the following:
`"C:\Program Files\dotnet\dotnet.EXE" vstest "C:\TeamCity\Agent.1\buildAgent\work\6cbcd640b5d0f18b\build\work\Test.Assembly.dll" /logger:console;verbosity=normal`
Or you could update the TeamCity version and the dotnet CLI runner will do it for you.
username_1: I've updated the [readme file](https://github.com/JetBrains/TeamCity.VSTest.TestAdapter#known-issues).
username_0: Thanks @username_1, removing the Team City logger from `dotnet vstest` has done the trick.
To clarify, when you say that .Net Core Xunit tests send statistics without any additional adapters, does this mean that they do not require a reference to `TeamCity.VSTest.TestAdapter` in the test project?
username_1: @username_0 Yes, you could remove that reference if you are planning to have only .NET Core xunit test projects. For instance, xunit for the full .NET Framework does not provide TeamCity integration out of the box.
username_0: Sounds good, thanks.
Status: Issue closed
|
openatx/uiautomator2 | 275316441 | Title: Locator speed issue in 2.0
Question:
username_0: I changed my Python uiautomator test script to 2.0.
It was originally on Android 6.0, and with syntax like the following, version 1.0 located the element in about 1-2 seconds:
d(text="下一頁").click()
But after upgrading Android to 7.0 and switching uiautomator to 2.0, locating takes 5-6 seconds or even longer. What could be the problem?
Answers:
username_1: ```
12:32:47.182 $ curl -X POST -d '{"jsonrpc": "2.0", "id": "b80d3a488580be1f3e9cb3e926175310", "method": "deviceInfo", "params": {}}' 'http://127.0.0.1:54179/jsonrpc/0'
12:32:47.225 Response >>>
{"jsonrpc":"2.0","id":"b80d3a488580be1f3e9cb3e926175310","result":{"currentPackageName":"com.android.mms","displayHeight":1920,"displayRotation":0,"displaySizeDpX":360,"displaySizeDpY":640,"displayWidth":1080,"productName"
:"odin","screenOn":true,"sdkInt":25,"naturalOrientation":true}}
<<< END
```
You can print out the debug logs like this and take a look.
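To compare 1.0 and 2.0 concretely, here is a minimal Python timing sketch (the bare `u2.connect()` call and the selector are assumptions based on your snippet, not a verified setup):
```python
import time

import uiautomator2 as u2

d = u2.connect()  # assumes a single attached device; pass a serial otherwise

start = time.time()
d(text="下一頁").click()  # same selector as in the report above
print("locate + click took %.2f seconds" % (time.time() - start))
```
Posting the measured time together with the debug log would help show where the 5-6 seconds are spent.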
username_1: No reply, so I am closing this for now @username_0
Status: Issue closed
|
SeleniumHQ/selenium | 82047427 | Title: Windows reports "Firefox has stopped working" when using Java "driver = new FirefoxDriver()"
Question:
username_0: ```java
public void setUp() throws Exception {
    driver = new FirefoxDriver();
}
```
When running the Java code above (with NetBeans / Maven Run, JDK 1.8), Windows 7 64-bit reports "Firefox has stopped working" (Firefox version 38.0.1).
Work-around: Close the error box. Everything continues running fine after closing the Windows error box.
Status: Issue closed
Answers:
username_1: Duplicate of #437
Also, in the future, please be sure to include which version of Selenium you are using. |
godotengine/godot | 280868407 | Title: Cannot inspect nodes with remote scene tree
Question:
username_0: Godot 2.1.4
Windows 10 64 bits
I tried using the remote scene tree, and I noticed I couldn't inspect remote nodes with it, and the selected node gets deselected every second:

Answers:
username_1: Look at the Inspector dock :eye:
username_0: Oh, it was moved here?
But... what is the right panel used for then? Is it object property dictionary? (I remember there was a request to be able to edit that one in general)
username_1: The right panel was not removed when the improvements to the remote inspector were made (I think it is gone now on 2.1/HEAD); there is a similar panel when you get errors and breakpoints on the debugger tab (with variables, etc.).
Status: Issue closed
|
ljessendk/easybinder | 266790227 | Title: Accessing error messages from binder or notification
Question:
username_0: Currently it is not possible to get the list of error messages from a binder or a notification. As far as I can see, the BasicBinder has all ConstraintViolations (but they are not accessible from the outside), and the bindings each store conversion and validation error messages. Ideally, the BinderStatusChangeEvent would carry the actual error messages, and they would also be accessible from the binder in an accumulated way.
Answers:
username_1: As a first step, I've added a public method to get the constraint violations (4b0f766edcd782b53ec6af1b5b2da41c01d0d4a1).
So you can get the field level conversion and validation errors using:
```java
binder.getBindings().stream().map(e -> e.validate()).collect(Collectors.toList());
```
And the bean level validation errors using:
```java
binder.getConstraintViolations().stream().filter(e -> e.getPropertyPath().toString().equals(""))
    .map(e -> ValidationResult.error(e.getMessage())).collect(Collectors.toList());
```
Or, alternatively, use the BinderAdapter that exposes a Binder compatible interface including BinderValidationStatus handling.
I'll try to add some logic similar to the BinderValidationStatus handling in Binder to BasicBinder in the near future.
username_0: Excellent. Thank you for the quick reply and the good work.
username_1: Part of release 0.5 |
mojohaus/jaxb2-maven-plugin | 240516603 | Title: ignores xjcSourceExcludeFilter
Question:
username_0: I am trying to convince xjc to ignore the CVS files. My pom plugin settings are:
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>jaxb2-maven-plugin</artifactId>
<version>2.3.1</version>
<executions>
<execution>
<configuration>
<outputDirectory>target/generatedSources</outputDirectory>
<xjbExcludeFilters>
<filter
implementation="org.codehaus.mojo.jaxb2.shared.filters.pattern.PatternFileFilter">
<patterns>
<pattern>CVS\Entries</pattern>
<pattern>CVS\Repository</pattern>
<pattern>CVS\Root</pattern>
</patterns>
</filter>
</xjbExcludeFilters>
<xjcSourceExcludeFilters>
<filter
implementation="org.codehaus.mojo.jaxb2.shared.filters.pattern.PatternFileFilter">
<patterns>
<pattern>CVS</pattern>
<pattern>CVS\Repository</pattern>
<pattern>CVS\Root</pattern>
<pattern>CVS\Entries</pattern>
</patterns>
</filter>
</xjcSourceExcludeFilters>
</configuration>
</execution>
</executions>
</plugin>
The xjbExclude filter is honoring the settings, but the xjcExclude filter is not. Here is a snippet from mvn -X jaxb2:xjc:
[DEBUG]
+=================== [18 XJC Arguments]
|
| [0]: -xmlschema
| [1]: -encoding
| [2]: Cp1252
| [3]: -d
| [4]: C:\Users\username_0\Workspaces\release-2.0.2\cot-rssmanager\target\generated-sources\jaxb
| [5]: -extension
| [6]: -episode
| [7]: C:\Users\username_0\Workspaces\release-2.0.2\cot-rssmanager\target\generated-sources\jaxb\META-INF\sun-jaxb.episode
| [8]: /C:/Users/username_0/Workspaces/release-2.0.2/cot-rssmanager/src/main/xsd/Alert.xsd
| [9]: /C:/Users/username_0/Workspaces/release-2.0.2/cot-rssmanager/src/main/xsd/CVS/Entries
| [10]: /C:/Users/username_0/Workspaces/release-2.0.2/cot-rssmanager/src/main/xsd/CVS/Repository
| [11]: /C:/Users/username_0/Workspaces/release-2.0.2/cot-rssmanager/src/main/xsd/CVS/Root
| [12]: /C:/Users/username_0/Workspaces/release-2.0.2/cot-rssmanager/src/main/xsd/Camera.xsd
| [13]: /C:/Users/username_0/Workspaces/release-2.0.2/cot-rssmanager/src/main/xsd/DMS.xsd
| [14]: /C:/Users/username_0/Workspaces/release-2.0.2/cot-rssmanager/src/main/xsd/Global.xsd
| [15]: /C:/Users/username_0/Workspaces/release-2.0.2/cot-rssmanager/src/main/xsd/RoadConditions.xsd
| [16]: /C:/Users/username_0/Workspaces/release-2.0.2/cot-rssmanager/src/main/xsd/Speed.xsd
| [17]: /C:/Users/username_0/Workspaces/release-2.0.2/cot-rssmanager/src/main/xsd/WeatherStation.xsd
|
+=================== [End 18 XJC Arguments]
Answers:
username_1: Could be the effect of using "\" which is being interpreted as an escape char.
Will move to ant patterns instead; the Java Regexps cause confusion. |
google-developer-training/android-fundamentals-apps-v2 | 728845471 | Title: Android Java Fundamentals codelab: Android fundamentals 04.2: Input controls
Question:
username_0: **Describe the problem**
There is an error in the code: the variable `spinnerLabel` is not declared.

The line should be:
`String spinnerLabel = adapterView.getItemAtPosition(i).toString();`
**In which lesson and step of the codelab can this issue be found?**
Android fundamentals 04.2: Input controls
6. Task 3: Use a spinner for user choices
Point 3.3
**Additional information**
The code on GitHub is OK; the error only appears in the web version of the code (https://codelabs.developers.google.com/codelabs/android-training-input-controls/index.html?index=..%2F..%2Fandroid-training#5).
**codelab:** android-fundamentals |
terasolunaorg/terasoluna-tourreservation | 328925413 | Title: add profile for tomcat9-postgresql
Question:
username_0: ## Description
create profile to use tomcat 9.0.6 and PostgreSQL 10.2
## Possible Solutions
TBD
## Affects Version/s
* 5.X.X.RELEASE
## Fix Version/s
- [ ] 5.5.0(master)
- [ ] 5.4.2(5.4.x)
## Final Solution
TBD
## Issue Links
* #XXX
Status: Issue closed
Answers:
username_0: tomcat9+postgresql profile is not used for 1.5.x |
facebook/react-native | 328822446 | Title: MaskedView support for Android
Question:
username_0: ## For Discussion
I've worked on the view demonstrated below today; using [MaskedViewIOS](https://facebook.github.io/react-native/docs/maskedviewios.html) was a very pleasant experience.
In essence it masks the Orc graphic with that dark cloud / smoky mask. On Android, however, I was not able to find anything similar from the community, and after 3h of trying to create a native component bridge for this I came nowhere close :/
Hence I wanted to start this discussion: shall we make `MaskedView` more general and support it for Android as well, or are there some limitations that don't allow this (hence the iOS-only implementation at the moment)?
<img width="370" alt="screen shot 2018-06-03 at 14 53 24" src="https://user-images.githubusercontent.com/3154053/40886303-e568ec54-673d-11e8-886c-af3ab45d022c.png">
Answers:
username_1: This would be great to have for Android!
username_2: The React Native core team should make MaskedView more general and support it for Android as well. It would be great to have MaskedView for Android :+1:
username_3: Yes! MaskedView is much needed on Android as well.
username_4: To achieve a similar effect on Android you can use my [react-native-image-filter-kit](https://github.com/username_4/react-native-image-filter-kit) package:
```js
import { Image } from 'react-native'
import { DstATopComposition } from 'react-native-image-filter-kit'
const style = { width: 320, height: 320 }
const masked = (
<DstATopComposition
dstImage={
<Image
style={style}
source={{ uri: 'https://i.ytimg.com/vi/_ebwvrrBfUc/hqdefault.jpg' }}
/>
}
srcImage={
<Image
style={style}
source={{ uri: 'https://pluspng.com/img-png/download-smoke-effect-png-images-transparent-gallery-advertisement-advertisement-336.png' }}
/>
}
/>
)
```

Text mask [example](https://stackoverflow.com/a/53982429/4134913)
username_5: This issue has been moved to react-native-community/react-native-masked-view#3.
Status: Issue closed
|
kovetskiy/curlbash | 167567035 | Title: Security considerations.
Question:
username_0: Please stop encouraging people to use bad security practices by telling them to execute arbitrary code.
Piping from CURL to Bash is a [really bad idea](https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-bash-server-side/) and everyone should stop doing that.
Please consider closing this project or finding another (_more secure_) alternative to piping curl to bash.
Answers:
username_1: _<NAME>, 1983_
username_0: @username_1 Heh. Good to know then.
Maybe you should specify that this is a joke as an addendum somewhere in the readme. |
department-of-veterans-affairs/va.gov-team | 741916136 | Title: Remove the links and message to Storybook and redirect to Storybook documentation
Question:
username_0: ## Issue Description
About 1 month after Storybook is live, remove the links and message to Storybook and redirect to Storybook documentation.
---
## Tasks
- [ ] remove the links and message to Storybook
- [ ] redirect to Storybook documentation
## Acceptance Criteria
- [ ] message is no longer visible on gatsby and design.va.gov
- [ ] Links for components go to Storybook
---<issue_closed>
Status: Issue closed |
mrlesmithjr/ansible-mariadb-galera-cluster | 851820903 | Title: cannot figure out how to run the repo
Question:
username_0: I am sorry to ask this basic question, but I didn't find the inventory file.
How can I run this cluster?
Answers:
username_0: TASK [ansible-mariadb-galera-cluster : setup_cluster | configuring settings for mariadb and galera] *******************************************************************************************************
changed: [hello] => (item=etc/mysql/debian.cnf)
changed: [hardening] => (item=etc/mysql/debian.cnf)
changed: [hello] => (item=etc/mysql/my.cnf)
changed: [hardening] => (item=etc/mysql/my.cnf)
failed: [hello] (item=etc/mysql/conf.d/galera.cnf) => {"ansible_loop_var": "item", "changed": false, "item": "etc/mysql/conf.d/galera.cnf", "msg": "AnsibleUndefinedVariable: {% set _galera_cluster_nodes = [] %}{% for host in groups[ galera_cluster_nodes_group ] %}{{ _galera_cluster_nodes.append( host ) }}{% endfor %}{{ _galera_cluster_nodes }}: 'dict object' has no attribute 'galera-cluster-nodes'"}
failed: [hardening] (item=etc/mysql/conf.d/galera.cnf) => {"ansible_loop_var": "item", "changed": false, "item": "etc/mysql/conf.d/galera.cnf", "msg": "AnsibleUndefinedVariable: {% set _galera_cluster_nodes = [] %}{% for host in groups[ galera_cluster_nodes_group ] %}{{ _galera_cluster_nodes.append( host ) }}{% endfor %}{{ _galera_cluster_nodes }}: 'dict object' has no attribute 'galera-cluster-nodes'"}
username_1: Hi @username_0. The role defaults to the group name galera-cluster-nodes for the Galera server hosts. You should put the Galera servers into such a group, or change the variable 'galera_cluster_nodes_group' to match the group you actually use.
username_0: Thanks a lot
username_2: @username_0 - was @username_1's answer OK? If so... can you close this?
username_2: Thanx! |
rapidsai/dask-cuda | 983214657 | Title: [BUG] Failure when performing ORDER BY desc query with JIT_UNSPILL enabled
Question:
username_0: I get an unexpected error when performing an `ORDER BY desc` operation using dask-sql on a dask-cuda cluster with JIT unspilling enabled.
For example:
```python
from dask_cuda import LocalCUDACluster
from dask.distributed import Client
cluster = LocalCUDACluster(n_workers=16, device_memory_limit="15GB", enable_tcp_over_ucx=True, enable_nvlink=True, rmm_pool_size="29GB", jit_unspill=True)
client = Client(cluster)
import cudf, dask_cudf
from dask_sql import Context
c = Context()
df = cudf.DataFrame({"id":[1,4,4,5,3], "val":[4,6,6,3,8]})
ddf = dask_cudf.from_cudf(df, npartitions=1)
c.create_table("df", ddf)
query = "SELECT * FROM df ORDER BY id desc"
c.sql(query).compute()
```
returns:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/tmp/ipykernel_75923/4184803532.py in <module>
16 query = "SELECT * FROM df ORDER BY id desc"
17
---> 18 c.sql(query).compute()
~/anaconda3/envs/cudf-gpu-bdb/lib/python3.8/site-packages/dask_sql/context.py in sql(self, sql, return_futures, dataframes)
420 rel, select_names, _ = self._get_ral(sql)
421
--> 422 dc = RelConverter.convert(rel, context=self)
423
424 if dc is None:
~/anaconda3/envs/cudf-gpu-bdb/lib/python3.8/site-packages/dask_sql/physical/rel/convert.py in convert(cls, rel, context)
54 f"Processing REL {rel} using {plugin_instance.__class__.__name__}..."
55 )
---> 56 df = plugin_instance.convert(rel, context=context)
57 logger.debug(f"Processed REL {rel} into {LoggableDataFrame(df)}")
58 return df
~/anaconda3/envs/cudf-gpu-bdb/lib/python3.8/site-packages/dask_sql/physical/rel/logical/project.py in convert(self, rel, context)
25 ) -> DataContainer:
26 # Get the input of the previous step
---> 27 (dc,) = self.assert_inputs(rel, 1, context)
28
29 df = dc.df
~/anaconda3/envs/cudf-gpu-bdb/lib/python3.8/site-packages/dask_sql/physical/rel/base.py in assert_inputs(rel, n, context)
79 from dask_sql.physical.rel.convert import RelConverter
80
---> 81 return [RelConverter.convert(input_rel, context) for input_rel in input_rels]
82
83 @staticmethod
~/anaconda3/envs/cudf-gpu-bdb/lib/python3.8/site-packages/dask_sql/physical/rel/base.py in <listcomp>(.0)
79 from dask_sql.physical.rel.convert import RelConverter
80
---> 81 return [RelConverter.convert(input_rel, context) for input_rel in input_rels]
82
[Truncated]
572 Call the corresponding method based on type of argument.
573 """
--> 574 meth = self.dispatch(type(arg))
575 return meth(arg, *args, **kwargs)
576
~/anaconda3/envs/cudf-gpu-bdb/lib/python3.8/site-packages/dask/utils.py in dispatch()
566 lk[cls] = lk[cls2]
567 return lk[cls2]
--> 568 raise TypeError("No dispatch for {0}".format(cls))
569
570 def __call__(self, arg, *args, **kwargs):
TypeError: No dispatch for <class 'dask_cuda.proxify_device_objects._register_cudf.<locals>.FrameProxyObject'>
```
Environment:
dask - 2021.8.1
dask-sql - 0.3.10
cudf - 21.10
dask-cudf - 21.10
dask-cuda - 21.10
Answers:
username_1: Thanks @username_0 I can also reproduce. Here is perhaps a simpler reproducer:
```python
from dask_cuda import LocalCUDACluster
from dask.distributed import Client
cluster = LocalCUDACluster(n_workers=1, jit_unspill=True)
client = Client(cluster)
import cudf, dask_cudf
from dask_sql import Context
c = Context()
df = cudf.DataFrame({"id":[1,4,4,5,3], "val":[4,6,6,3,8]})
ddf = dask_cudf.from_cudf(df, npartitions=1)
c.create_table("df", ddf)
query = "SELECT * FROM df ORDER BY id desc"
c.sql(query).compute()
```
Seeing `_percentile` come up in the traceback, I'm wondering if we need to add additional dispatches in Dask. @username_2 do you have thoughts here?
username_2: Looking into it
username_2: ```
id val
4 5 3
3 4 6
2 4 6
1 3 8
0 1 4
```
username_3: Yes, that is as expected. The proxy objects _leak_ into user space unless you set `DASK_JIT_UNSPILL_COMPATIBILITY_MODE=False`.
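For readers hitting the same thing, a minimal sketch of the workaround described above (the variable name and value are copied from the comment; exporting it before the workers start is just one way to apply it):
```python
import os

# Must be set before the dask-cuda workers are created so they pick it up.
os.environ["DASK_JIT_UNSPILL_COMPATIBILITY_MODE"] = "False"

from dask_cuda import LocalCUDACluster
from dask.distributed import Client

cluster = LocalCUDACluster(n_workers=1, jit_unspill=True)
client = Client(cluster)
```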
votinginfoproject/vip-specification | 116121917 | Title: Specify a convention for candidate ordering and office ordering for CandidateSelection and CandidateContest
Question:
username_0: A CandidateSelection can list multiple candidates in case they are part of a ticket. There should be a convention specified to say that they must be in the same order as the officeIds in the CandidateContest that they are associated with. The candidates within each selection should be in the same order (e.g. c1[0] = 'Obama', c1[1] = 'Biden', c2[0] = 'Romney', c2[1] = 'Ryan').
This order should also be the same as that of the officeIds specified in the CandidateContest element.
This seems much simpler than trying to explicitly describe the candidate->office mapping per ticket/selection.
Answers:
username_1: Hi all,
In the NIST spec, we mention this, but I suspect I could make it more
clear.
For the CandidateContest element, I write:
"For cases when the contest is associated with multiple offices, e.g., if
Governor and Lt. Governor are both separate offices, it is expected that
the generating application will list the multiple references to <Office>
according to any ordering scheme in place."
For the CandidateSelection element, I write:
"When multiple candidates are referenced for a ticket and the ordering of
the candidates is important to preserve, it is expected that the generating
application will list the occurrences of *<CandidateId>* according to the
ordering scheme in place."
Looking at this now, I can see I was attempting to be general, but it comes
at a cost to this specific issue, which is keeping the order of the
CandidateId elements the same as the order of OfficeId elements. I will
re-write the documentation to be more clear - I'll get back in touch with
some new language hopefully later today and I'd be appreciative of any
comments on it.
Cheers, John
*---*
*<NAME>*
<EMAIL>
301.640.6626 - cell
301.975.3411 - office
username_2: I will update the best practices document and the sample feed to make this explicit.
Status: Issue closed
username_2: Closed by #309. |
pardeike/Achtung2 | 723835616 | Title: some kind of error
Question:
username_0: The game refuses to open pawn controls when you select one with the mod installed. I tried it both at the top and bottom of the load order.
Please advise?
hugs log: https://git.io/JTC0e
Answers:
username_1: The only way this error can happen is when some other mod messes with world components. Achtung saves all its state in such a component, and it must be available at all times. It is impossible for me to know why this happens.
Status: Issue closed
|
pnp/pnpjs | 621478354 | Title: pnpjs and jest / enzyme tests
Question:
username_0: ### Category
- [ ] Enhancement
- [ ] Bug
- [x] Question
- [ ] Documentation gap/issue
### Version
Hello,
When I reference pnpjs in my React TypeScript project, I am not able to run my tests anymore.
The TypeScript does not get compiled. A thread about that can be found here:
https://github.com/pnp/pnpjs/issues/1058
My Question:
Is it a valid solution to use @pnp/common instead of pnpjs?
When I reference @pnp/common then my tests are running fine and the code also works.
What is the exact difference between @pnp/common and pnpjs? Is it just a subset of pnpjs functionality, or is it a copy of the functionality?
Thanks :-)
Answers:
username_1: I can't really say as I don't really know what that means.
Really sorry we can't be more helpful, but without really knowing Jest or having worked with it, we don't have a lot to go on.
username_0: Sorry for the confusing terminology.
By "using pnp common" I meant exactly:
Is it valid to write imports like so
import { sp,..... } from "@pnp/sp-commonjs";
instead of
import { sp, .... } from "@pnp/sp";
When using imports the way they were in version 1, the Jest tests run fine.
As far as I understood @pnp/sp references all packages but we are still able to import sp from @pnp/sp-commonjs
username_1: Well... that's a huge difference! @pnp/sp-commonjs != @pnp/common.
Yes, the commonjs libraries were built to use with nodejs (because nodejs doesn't play well with es modules) - https://pnp.github.io/pnpjs/nodejs-support/.
If you've managed to coordinate everything I don't personally know of any reason that you can't use the commonjs modules vs the es modules.
username_0: Thanks, that reply is very useful. I will close this question.
Status: Issue closed
|
SUSE/salt-netapi-client | 434269733 | Title: Using the asynchronous run method, the test found that the actual task was less than the original task.
Question:
username_0: SaltClient.class
When using the following asynchronous run method concurrently, SaltStack ends up executing fewer tasks than were submitted. What should I do? Is there any configuration that I have not set up correctly?
```java
public <T> CompletionStage<Map<String, Object>> run(String username, String password, AuthModule eauth,
        String client, Target<T> target, String function, List<Object> args, Map<String, Object> kwargs) {
    Map<String, Object> props = new HashMap();
    props.put("username", username);
    props.put("password", password);
    props.put("eauth", eauth.getValue());
    props.put("client", client);
    props.putAll(target.getProps());
    props.put("fun", function);
    props.put("arg", args);
    props.put("kwarg", kwargs);
    List<Map<String, Object>> list = Collections.singletonList(props);
    String payload = this.gson.toJson(list);
    CompletionStage<Map<String, Object>> result = this.asyncHttpClient
            .post(this.uri.resolve("run"), payload, JsonParser.RUN_RESULTS)
            .thenApply((s) -> {
                return (Map) ((List) s.getResult()).get(0);
            });
    return result;
}
```
Answers:
username_1: Hey @username_0, what exactly do you mean when you say "saltstack has fewer actual tasks than it does in concurrent situations"? How exactly do you use this method and what is the unexpected outcome vs. the expected?
username_0: Test purpose:
I want to generate 100 files in /tmp/test, but in fact it only generates around 75 files (the number is not exactly the same every time).
Test result:
```
[root@rhel72 test]# pwd
/tmp/test
[root@rhel72 test]# ls -l |grep '^-'|wc -l
75
```
Test code:
```java
public static String ayscExecute(String ip, final String command) {
    try {
        CloseableHttpAsyncClient httpClient = HttpClientUtils.defaultClient();
        SaltClient client = new SaltClient(URI.create(url), new HttpAsyncClientImpl(httpClient));
        IPCidr target = new IPCidr(ip);
        List<Object> list = new ArrayList();
        list.add(command);
        Runnable cleanup = () -> {
            try {
                httpClient.close();
            } catch (Exception e) {
                logger.error("close error", e);
            }
        };
        CompletionStage<Map<String, Object>> result = client.run(ConstUtils.SALT_API_USER,
                ConstUtils.SALT_API_PASSWORD, AuthModule.PAM, "local",
                target, "cmd.run", list, new LinkedHashMap<>());
        result
                .thenAccept(t -> System.out.println(t))
                .thenRun(cleanup);
    } catch (Exception e) {
        logger.error("aysc exec failed", e);
        return "fail";
    }
    return "ok";
}

public static void main(String[] args) {
    ConstUtils.SALT_API_USER = "user";
    ConstUtils.SALT_API_PASSWORD = "<PASSWORD>";
    ConstUtils.SALT_API_URL = "http://192.168.6.138:8000";
    String ip = "192.168.6.138";
    ExecutorService executorService = Executors.newFixedThreadPool(1);
    AtomicInteger count = new AtomicInteger();
    executorService.submit(() -> {
        for (int i = 0; i < 100; i++) {
            int num = count.getAndIncrement();
            String command = String.format("touch /tmp/test/%s;echo %s", num, num);
            System.out.println(command);
            String result = SaltUtil.ayscExecute(ip, command);
            System.out.println(result);
        }
    });
}
```
Thank you for your help!
regen-network/regen-ledger | 1168781862 | Title: Add data module to stable app config
Question:
username_0: ## Summary
<!-- Short, concise description of the proposed feature -->
We are including the data module in the next upgrade and need to wire up the module in the stable app config.
____
#### For Admin Use
- [ ] Not duplicate issue
- [ ] Appropriate labels applied
- [ ] Appropriate contributors tagged
- [ ] Contributor assigned/self-assigned<issue_closed>
Status: Issue closed |
amjadafanah/Hotels-App-Test-Project | 338362087 | Title: Spring-REST-Example-2 : example_v1_hotels_@randominteger_get_auth_invalid
Question:
username_0: Project : Spring-REST-Example-2
Job : Dev
Env : Dev
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 404
Headers : {X-Application-Context=[application:8090], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Wed, 04 Jul 2018 19:30:56 GMT]}
Endpoint : http://172.16.31.10:8090/example/v1/hotels/example/v1/hotels/1425718801
Request :
Response :
null
Logs :
Assertion [@StatusCode == 401] failed, expected value [401] but found [404]
Answers:
username_0: Project : Spring-REST-Example-2
Job : Dev
Env : Dev
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 404
Headers : {X-Application-Context=[application:8090], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Wed, 04 Jul 2018 20:30:56 GMT]}
Endpoint : http://172.16.31.10:8090/example/v1/hotels/example/v1/hotels/703366469
Request :
Response :
null
Logs :
Assertion [@StatusCode == 401] failed, expected value [401] but found [404]
username_0: Project : Spring-REST-Example-2
Job : Dev
Env : Dev
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 404
Headers : {X-Application-Context=[application:8090], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Wed, 04 Jul 2018 21:30:56 GMT]}
Endpoint : http://172.16.31.10:8090/example/v1/hotels/example/v1/hotels/1423587919
Request :
Response :
null
Logs :
Assertion [@StatusCode == 401] failed, expected value [401] but found [404]
username_0: Project : Spring-REST-Example-2
Job : Dev
Env : Dev
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 404
Headers : {X-Application-Context=[application:8090], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Wed, 04 Jul 2018 22:30:56 GMT]}
Endpoint : http://172.16.31.10:8090/example/v1/hotels/example/v1/hotels/1012573966
Request :
Response :
null
Logs :
Assertion [@StatusCode == 401] failed, expected value [401] but found [404]
username_0: Project : Spring-REST-Example-2
Job : Dev
Env : Dev
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 404
Headers : {X-Application-Context=[application:8090], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Wed, 04 Jul 2018 23:30:56 GMT]}
Endpoint : http://172.16.31.10:8090/example/v1/hotels/example/v1/hotels/1487387878
Request :
Response :
null
Logs :
Assertion [@StatusCode == 401] failed, expected value [401] but found [404]
username_0: Project : Spring-REST-Example-2
Job : Dev
Env : Dev
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 404
Headers : {X-Application-Context=[application:8090], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Thu, 05 Jul 2018 00:30:56 GMT]}
Endpoint : http://172.16.31.10:8090/example/v1/hotels/example/v1/hotels/17780390
Request :
Response :
null
Logs :
Assertion [@StatusCode == 401] failed, expected value [401] but found [404]
username_0: Project : Spring-REST-Example-2
Job : Dev
Env : Dev
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 404
Headers : {X-Application-Context=[application:8090], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Thu, 05 Jul 2018 00:33:08 GMT]}
Endpoint : http://172.16.31.10:8090/example/v1/hotels/example/v1/hotels/1697158118
Request :
Response :
null
Logs :
Assertion [@StatusCode == 401] failed, expected value [401] but found [404]
username_0: Project : Spring-REST-Example-2
Job : Dev
Env : Dev
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 404
Headers : {X-Application-Context=[application:8090], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Thu, 05 Jul 2018 01:30:56 GMT]}
Endpoint : http://172.16.31.10:8090/example/v1/hotels/example/v1/hotels/226801789
Request :
Response :
null
Logs :
Assertion [@StatusCode == 401] failed, expected value [401] but found [404] |
slimphp/Twig-View | 99365701 | Title: Twig_EnvironmentInterface does not exist
Question:
username_0: [This interface](https://github.com/slimphp/Twig-View/blob/master/src/Twig.php#L135) does not exist in Twig.
Where is it defined? The Twig project only has [Environment](https://github.com/twigphp/Twig/blob/1.x/lib/Twig/Environment.php) and that one has no interface.
Answers:
username_1: No idea :) Feel free to PR a fix!
Status: Issue closed
username_0: Ok, submitted #32
Status: Issue closed
|
desktop/desktop | 575128165 | Title: Expandable Commit Description
Question:
username_0: ### Make the description box expandable

When you are writing a long, detailed description for a commit, the tiny box makes it very hard to read what you are writing. This was reported as an issue over two years ago and has since been marked closed with no changes.
### Proposed solution

We can already scale this section of the window horizontally, which helps a little, but we need to be able to scale the description part vertically.

Alternatively, you could have a _detail_ mode that goes full screen and gives you all the editing options you have in the browser.
Answers:
username_0: You can change the size of the field manually:

username_1: Thanks for the feedback @username_0. I'm going to group this into #1646 where we are tracking improvements to the commit writing experience.
Status: Issue closed
|
ellisdg/3DUnetCNN | 310204538 | Title: about predict and evaluate.
Question:
username_0: Thank you for sharing. I have downloaded the pre-trained model and the BraTS 2017 dataset.
When I use predict.py:
`UserWarning:Error in loading the saved optimizer state. As a result, your model is starting with a freshly initialized optimizer.
warnings.warn('Error in loading the saved optimizer `
but it still generates the prediction file.
And when I use evaluate.py:
` FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
= <class 'numpy.ndarray'>
Traceback (most recent call last):
File "evaluate.py", line 66, in <module>
main()
File "evaluate.py", line 47, in main
scores[score] = values[np.isnan(values) == False]
TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''`
How can I solve this error? Thank you! @username_1
Answers:
username_1: Were you able to get this working?
username_2: I have just started looking into this, so I am quite new. I am using the pre-trained model mentioned in the README.
When I run predict.py from brats, I get an error that validation_ids.pkl does not exist (no such file or dir).
Can you please shed some light? I want to see the prediction part before I deep dive.
username_3: @username_2 I have the same problem as you. How did you solve the "validation_ids.pkl no such file or dir" problem? Thank you very much; waiting for your reply~
username_4: Hi @username_2 and @username_3
You must run the preprocessing script to generate validation_ids.pkl, or you need to prepare the file yourself.
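If you only want to try prediction quickly, here is a minimal sketch for preparing the file yourself (this assumes it is simply a pickled list of validation case indices; check the preprocessing script for the exact expected contents):
```python
import pickle

# Hypothetical hold-out indices; replace with the cases you want to predict on.
validation_ids = [0, 1, 2]

with open("validation_ids.pkl", "wb") as handle:
    pickle.dump(validation_ids, handle)
```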
Best,
-Azam.
username_3: @username_4 I have trained the model and now I use the test dataset of BRATs2018, I want to predict it, I don't know how to modify the code, only change the prediction.py? |
lampepfl/dotty | 159642446 | Title: Remove JLine
Question:
username_0: Should it be done? We're not really depending on it anymore thanks to Ammonite.
Answers:
username_1: Like that? I'm gonna send a PR if the direction is fine.
https://github.com/username_1/dotty/commit/24c7ef7dfc44625de6f76f51f8ee838ec2956296
username_1: On -Xnojline, doesn't it make sense to just use `System.console()` instead of having the flag?
http://docs.oracle.com/javase/7/docs/api/java/lang/System.html#console%28%29
username_0: Hi @username_1 - yes like that :) Would you mind making PR out of it?
Status: Issue closed
|
jqhph/dcat-admin | 735177210 | Title: Error on submit when editing a one-to-many relation
Question:
username_0: - Laravel Version: 5.8.38
- PHP Version: 7.1.3
- Dcat Admin Version: 1.7.7
Because subsidy_ratio needs to be formatted, I added the following code, but clicking submit raises an error, and I don't know how to handle it.


Status: Issue closed
Answers:
username_0: I found the solution: writing a setAttribute mutator in the model solves it.
microsoft/CsWinRT | 871728454 | Title: Address and improve light-up scenarios for Windows APIs
Question:
username_0: ## Background
With removal of WinRT support and the new TFMs in .NET 5, we need to clarify the story for light-up support and how we advise both library and app developers to write adaptive code. The scenarios involve C# libraries and apps in .NET 5+ that want to call Windows SDK projection APIs (WinRT APIs).
We can use this deliverable-sized issue to track light-up support work.
## Open Issues
- .NET 5 library authors want to light-up with WinRT APIs, but this imposes a TFM requirement on customers (referencing applications)
- Improve integration of ApiInformation checks with projected SupportedOSPlatform attributes from C#/WinRT, and improve guidance on using ApiInformation checks
- Project SupportedOSPlatform attributes downlevel, e.g. Windows 8
- Clarify combinations of the TFM, SupportedOSPlatformVersion, TPV/TPMV |
anyways-open/routing-api-test-frontend | 723180492 | Title: Add turn-by-turn
Question:
username_0: Add turn-by-turn instructions with their details/locations for debugging.
Answers:
username_0:  [More test frontend debugging features.](https://trello.com/c/TE9BkXHd/10-more-test-frontend-debugging-features) |
kubernetes/kubeadm | 928523554 | Title: re-order the imports in kubeadm source code files
Question:
username_0: the kubeadm code base under kubernetes/kubernetes/cmd/kubeadm has inconsistent Go imports in source code files.
for example we have:
```
import (
"fmt"
"strings"
"github.com/spf13/pflag"
bootstrapapi "k8s.io/cluster-bootstrap/token/api"
bootstraptokenv1 "k8s.io/kubernetes/cmd/kubeadm/app/apis/bootstraptoken/v1"
kubeadmapiv1 "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta3"
kubeadmconstants "k8s.io/kubernetes/cmd/kubeadm/app/constants"
)
```
this is not ideal and we should have better separation and order.
```
import (
// standard library imports are first (do not include these comments)
"fmt"
"strings"
// cmd/kubeadm imports are second (as these are local)
bootstraptokenv1 "k8s.io/kubernetes/cmd/kubeadm/app/apis/bootstraptoken/v1"
kubeadmapiv1 "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta3"
kubeadmconstants "k8s.io/kubernetes/cmd/kubeadm/app/constants"
// other *.k8s.io imports are third
bootstrapapi "k8s.io/cluster-bootstrap/token/api"
// external repositories are fourth
"github.com/spf13/pflag"
)
```
if you like to work on this:
- [use this Go version, currently 1.16.5](https://github.com/kubernetes/kubernetes/blob/master/build/dependencies.yaml#L101-L113)
- do not update generated files, usually starting with `z....`
- after you make the changes make sure that you run `/hack/update-gofmt.sh`
- create a single PR with a single commit that updates the kubeadm code base.
Answers:
username_1: /assign
username_2: @username_0 Is there a linter that we can enable to enforce this?
username_0: i don't know whether there is something that granular.
username_3: @username_0 i have a skeleton for a linter here https://github.com/kubernetes/kubernetes/pull/103418 in case someone wants to whip that into shape. the current output looks like this - http://paste.openstack.org/raw/807121/
username_0: that's great @username_3
one difference in the output seems to be that we add empty lines as separators between sections.
perhaps @username_4 you'd want to implement it as part of kubernetes/kubernetes/hack as a verify-* check?
username_4: Maybe we should add empty lines as separators between sections, otherwise `gofmt` will reformat these import lines.
before `gofmt`
```
"sort"
"strings"
"time"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
bootstrapapi "k8s.io/cluster-bootstrap/token/api"
bootstraputil "k8s.io/cluster-bootstrap/token/util"
bootstrapsecretutil "k8s.io/cluster-bootstrap/util/secrets"
"github.com/pkg/errors"
```
after `gofmt`
```
"github.com/pkg/errors"
"k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1"
metav1 "k8s.io/client-go/applyconfigurations/meta/v1"
bootstrapapi "k8s.io/cluster-bootstrap/token/api"
bootstraputil "k8s.io/cluster-bootstrap/token/util"
bootstrapsecretutil "k8s.io/cluster-bootstrap/util/secrets"
"sort"
"strings"
"time"
```
username_5: Re-stating from the linked PR - I do not see ANY value in doing anything other than what `goimports` does by default. This just does not matter very much, and it's a big waste of energy. |
Saiyato/volumio-kodi-plugin | 215294817 | Title: Unable to install Kodi
Question:
username_0: Hi,
Thanks for your plugin.
I've a problem during the install:
2017-03-19T20:57:18.614Z - info: Install script return the error Error: Command failed: echo volumio | sudo -S sh /data/plugins//miscellanea/Kodi Krypton RC1/install.sh > /tmp/installog
[sudo] password for volumio: sh: 0: Can't open /data/plugins//miscellanea/Kodi
I tried with:
volumio-kodi-plugin_jarvis_1.3.0.zip
volumio-kodi-plugin_krypton_rc1_2.1.4.zip
My version of volumio is 2.041.
Any idea?
Thanks.
Answers:
username_1: Hey Mathieu,
I've just uploaded new versions, can you re-check please?
Status: Issue closed
username_1: I have tried different versions of Volumio images, can't reproduce the issue. It's probably fixed in the previous commit. |
daisysanson/Game-Marketplace | 893642091 | Title: Virtual environments
Question:
username_0: You're missing a step in your README when setting up virtual environments, which I'm sure you know, but it would be worth being explicit about.
#### You say:
To install your virtual environment:
`python3 -m pip install --user virtualenv`
To run your virtual environment, type in the command line:
`source venv/bin/activate`
#### Correction:
To install your virtual environment:
`python3 -m pip install --user virtualenv`
To _create_ your virtual environment:
`virtualenv venv`
To run your virtual environment, type in the command line:
`source venv/bin/activate`
#### Point of note:
The command `source venv/bin/activate` only works if you use `bash` as your shell. If you don't, you need to change this to be `source venv/bin/activate.fish` or `source venv/bin/activate.zsh` etc. (Although to be fair, most people who know what it means to change your shell, would know this anyway!)
Answers:
username_1: Thanks for pointing this out - I have changed it now!
Yes, absolutely - I didn't state that! I was actually unsure whether to include the other options, but didn't want to end up writing out all the different possibilities, as per https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/
I guess I presumed that if they read 'you will need a virtual environment' and didn't panic, they would probably be familiar with shells and know to make the changes. I could have included the link above in the README as a solution, I suppose.
renpy/renpy | 108440498 | Title: Treat all screen prediction as slow.
Question:
username_0: Right now, we're calling predict_screen when predicting a say statement, and thus predicting it immediately - even when fast prediction is going on. We should defer this until slow prediction is allowed.<issue_closed>
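A rough sketch of the idea, for context (all names and signatures here are illustrative, not Ren'Py's actual internals):
```python
# Queue screen predictions during fast prediction instead of running them.
pending_screen_predictions = []

def on_predict_say(who, what):
    # Previously this called the screen prediction immediately.
    pending_screen_predictions.append((who, what))

def on_slow_prediction_allowed():
    # Only once slow prediction is allowed are the screens actually predicted.
    while pending_screen_predictions:
        who, what = pending_screen_predictions.pop(0)
        predict_screen("say", who=who, what=what)  # hypothetical signature
```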
Status: Issue closed |
s92025592025/checkstyle-idea | 421320020 | Title: Repository peer review
Question:
username_0: (1/4 pts) You can follow the instructions to reproduce the results in the paper.
The instructions in week10/report.pdf say to use Python 3 but I found that generate_figures.sh actually uses Python 2.7. Attempting to run the original with python 2.7 and changing the command to use python 3 yielded the same error: i.e. that jpg files are not supported.
```
Traceback (most recent call last):
File "./reports/week10/evaluation/generate_figures.py", line 209, in <module>
main()
File "./reports/week10/evaluation/generate_figures.py", line 146, in main
plot_time_to_complete(time_to_complete_dict(ttc), path.join(directory, 'figures'))
File "./reports/week10/evaluation/generate_figures.py", line 71, in plot_time_to_complete
plt.savefig(path.join(out_dir, 'average_completion_time.jpg'))
File "/home/eric/.local/lib/python2.7/site-packages/matplotlib/pyplot.py", line 695, in savefig
res = fig.savefig(*args, **kwargs)
File "/home/eric/.local/lib/python2.7/site-packages/matplotlib/figure.py", line 2062, in savefig
self.canvas.print_figure(fname, **kwargs)
File "/home/eric/.local/lib/python2.7/site-packages/matplotlib/backend_bases.py", line 2173, in print_figure
canvas = self._get_output_canvas(format)
File "/home/eric/.local/lib/python2.7/site-packages/matplotlib/backend_bases.py", line 2105, in _get_output_canvas
.format(fmt, ", ".join(sorted(self.get_supported_filetypes()))))
ValueError: Format 'jpg' is not supported (supported formats: eps, pdf, pgf, png, ps, raw, rgba, svg, svgz)
```
After changing all occurrences of jpg to png, the figures were reproduced on both python 2.7 and python 3, except for average_completion_time which appears to be different from the one present in the report:

(2/2 pts) These instructions are fully automated.
Besides the jpg error, the script does produce the results automatically. No further input is needed.
(4/4 pts) The code and experimental results are well documented.
All of the code appears to be well documented with ample javadoc style comments for methods and classes as well as comments for fields and within methods, when necessary. The experiments are well documented and it is clear who the participants are and what questions/tasks were used. The analysis of the results is also good: there are many graphs to clearly show the results, and the discussion focuses on both the supporting and opposing evidence and draws conclusions from the results appropriately, as well as mentioning future improvements based on the results.
Answers:
username_1: My mistake on the shell script, I incorrectly assumed that the command to run `Python 3` was the same on Windows and Unix!
With some research I discovered that `matplotlib` has an additional package dependency on `PIL` for the `jpg` export to work without error, so running `pip install pillow` or `pip3 install pillow` should do the trick.
I am not sure why the figure you got for `average_completion_time.jpg` is different than the one we have in the report, but it appears to have missed some of the data... |
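For anyone else reproducing the figures, a minimal sketch of the two fixes discussed in this thread (data and file name are placeholders):
```python
import matplotlib

matplotlib.use("Agg")  # headless backend, as when run from the shell script
import matplotlib.pyplot as plt

plt.bar([0, 1, 2], [10, 20, 15])  # placeholder data
# PNG is supported without matplotlib's optional pillow dependency:
plt.savefig("average_completion_time.png")
# To keep .jpg output instead, run `pip install pillow` first.
```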