repo_name (stringlengths 4-136) | issue_id (stringlengths 5-10) | text (stringlengths 37-4.84M) |
---|---|---|
elastic/apm-agent-rum-js | 837825090 | Title: Add session id and transaction number to transaction docs
Question:
username_0: Whilst working through the RUM Session implementation (https://github.com/elastic/apm-agent-rum-js/issues/634), we should at least add the session id to all transaction documents, so that we (and/or users) can manually report on the session data available (albeit likely with less than efficient queries). Similarly, it should be possible to identify the order of the transactions from just the transaction documents themselves (whilst we don’t have anything for sessions). This can also be used to compare the performance of first page hits to other pages.
ACs:
* Add the session id to transaction documents (TBC does this require an APM Server update?)
* Add a transaction number identifier to transaction documents for that session
* e.g. transactionNumber=0 for the first
* This will be related to the active session (e.g. if not browsing the site and the session has expired, the next page hit will generate a new session id, and the transaction number will be reset back to zero again)
Consider: what's the precedent for this kind of value: do we index from `0` or `1` for the first transaction (page) in the session?
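For illustration only, a transaction document carrying both values might look something like the sketch below (field names and values are placeholders, not a final schema):
```json
{
  "transaction": {
    "id": "ab23c5d90f1e77a2",
    "type": "page-load",
    "session": {
      "id": "8f4c1d2e-9a3b-4c6d",
      "transactionNumber": 0
    }
  }
}
```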
Answers:
username_1: Suggestion: `transaction.sequence`
username_2: @username_1 , session id will be added only to transaction docs.
Status: Issue closed
|
lampepfl/dotty | 588502836 | Title: Change syntax of given whitebox macros
Question:
username_0: ### Status quo
```Scala
inline def whiteBox1 <: T = ???
inline given whiteBox2 as _ <: T = ???
```
### Critique
- Syntax is not consistent
- You could argue that the first syntax is too quiet and also misleading since at first glance `whiteBox1` is in the position where you would expect to see a type.
- `_ <: T` is technically wrong (or, rather, old style) since it suggests a lambda. We are after an existential instead.
### Proposal
Write `? <: T` (analogous to a wildcard type argument) to express both forms of whitebox macros. I.e.
```Scala
inline def whiteBox1: ? <: T = ???
inline given whiteBox2 as ? <: T = ???
```
Opinions?
Answers:
username_1: `? <: T` makes me think I can put something other than `?` on the left which is not the case. If we need to change the syntax I would rather use an extra keyword to denote whiteboxity (I think `whitebox` is fine since it's well-established Scala jargon).
username_0: Or `transparent`?
username_2: I actually like the `? <: T` syntax - it expresses the "unknown subtype of `T`" concept well.
Reg. trying to replace `?` with another type - to me doing that would result in something that reads like nonsense, so I'm not sure if very many people will try to do that. Consider:
`inline def foo : ? <: String` === "define an inline `foo` that returns some subtype of String"
`inline def foo : "a" <: String` == "define an inline `foo` that returns `"a"`, which is a subtype of String"
username_1: The problem is that:
```scala
inline def foo : ? <: String
```
is semantically very different from:
```scala
inline def foo : List[? <: String]
```
The latter is not whitebox at all, even though it uses a very similar syntax.
username_2: Hm, you may be right and it may be too cute. How would the keyword syntax look? Like this?
```scala
inline whitebox def foo : String = …
```
username_1: `whitebox inline def` I'd say, `whitebox` is a modifier of `inline`.
username_2: The part that I am slightly miffed about is that we would lose the "this is not an actual type annotation" suggestion that the current syntax `foo <: String` expresses. Might not be a big loss, all things considered.
username_0: I'd go for `transparent inline`. `whitebox` is too much expert speak. Also, it works better in conjunction with `blackbox` but we do not have that one as a modifier.
username_3: How about ?
```scala
inline def whiteBox1 : ?String
inline given whiteBox2 as ?T = ...
```
This was proposed and rejected for `null`able types.
username_4: Finally, I have always thought that this feature deserved a flag. We currently have no way to know if an inline method is whitebox or not after it is typed.
username_5: Scala 2 accidentally treats an empty refinement as "do not widen" indicator. Could we make that official and use it here,
```scala
inline def foo : String {} ...
```
username_4: We should have a syntax where not widening would not happen by mistake.
username_0: I am not sure about the flag. We are currently trying hard to reduce the number of flags that we pickle. Even without a flag, there could be a robust predicate on symbols indicating whether an inline function is blackbox or whitebox. Namely, an inline function `f` such as
```scala
inline def f(): T = rhs
```
is blackbox if `rhs` is of the form `E: T1` where `T =:= T1` (we only need to test `T <:< T1`, the other direction holds anyway). If that's the case, the expansion of `f` cannot change the type, which is what
characterizes a blackbox macro. Now it's true that if someone writes
```scala
transparent inline def f(): T = E: T
```
that also gives a blackbox macro according to our definition, despite the `transparent`. But I'd argue that's OK, and even desirable: we capture that way the semantics of a blackbox macro, not the syntactic representation.
username_6: Why though? That sounds like a useless target metric. What's important is to pickle the semantics of the language, and not a particular encoding of the language. It should be as simple as possible, sure, but not simpler.
You can argue that the semantics of blackbox macros is a whitebox macro with an ascription. But then that's also how you have to specify the language. And that means that you cannot explain blackbox macros without explaining whitebox macros. That doesn't seem like a good deal to me.
username_4: Note that we will need this as a flag (or at least something in the TASTy file) for #7825.
Status: Issue closed
username_0: What I had in mind was a method `isBlackboxInline` on symbols that implements the logic I outlined before. That method should give valid results for methods coming from Tasty as well. So we would not need a flag.
username_0: Here's the ruleset:
- A blackbox macro is a macro that on inline expansion keeps its type
- A normal inline method always keeps its type since its inline expansion is ascripted with its result type. That's a desugaring step, not relevant for Tasty.
- A `transparent` inline does not get ascripted, so it _might be_ a whitebox macro (and probably will be, in most cases). |
purescript/purescript | 565306845 | Title: Compiler bug
Question:
username_0: Hi,
I am looping through a RowList and when I forget to add the instance for `Nil` case I get the error below.
Code: https://github.com/username_0/purescript-swerve/blob/compiler-error-example/src/Swerve/Server/Internal.purs#L101
This is the command I use that produces the error:
`spago bundle-app -p examples/BasicServer.purs -m Examples.BasicServer`
```
purs: An internal error occurred during compilation: unifyTypes: unspecified skolem scope
Please report this at https://github.com/purescript/purescript/issues
CallStack (from HasCallStack):
error, called at src/Language/PureScript/Crash.hs:24:3 in purescript-0.13.6-4NEubueYkHcBs11fQO4ga7:Language.PureScript.Crash
internalError, called at src/Language/PureScript/TypeChecker/Unify.hs:106:28 in purescript-0.13.6-4NEubueYkHcBs11fQO4ga7:Language.PureScript.TypeChecker.Unify
purs: thread blocked indefinitely in an MVar operation
```
Answers:
username_1: Thanks for the report, regardless of cause whenever there's an error reported from the compiler like this it's definitely a bug 🙂
username_2: I've triggered this before by having `forall` in type synonyms for rows in instance constraints. I'd be interested to see what happens if you remove the `forall`s from your `Connection` synonym.
username_0: @username_2 I removed all the `forall`s and it compiled. I also tested without the `Nil` instance and it worked. Does this help?
This is how I removed the `forall`s:
```purescript
type Connection bdy prams
= ( body :: bdy
, params :: prams
)
```
username_3: The link to the reproducing code is broken. This is the same error message as #3681; given that and my inability to reproduce I'm closing in favor of that issue.
Status: Issue closed
|
loot/oblivion | 855094122 | Title: Reevaluate unaliased messages - xVersion
Question:
username_0: [Main Issue](https://github.com/loot/oblivion/issues/311)
Please see the main issue for more information.
---
```yaml
- name: '<NAME> DV.esp'
msg:
- type: say
content:
- lang: en
text: 'German version.'
- lang: de
text: 'Deutsche Version.'
- name: 'Q-Core - Icons.esp'
url: [ 'https://www.nexusmods.com/oblivion/mods/19378' ]
msg:
- type: say
content:
- lang: en
text: 'German version?'
- lang: de
text: 'Deutsche Version?'
- name: 'Oblivion WarCry.esp'
url: [ 'https://www.nexusmods.com/oblivion/mods/45570' ]
inc: [ 'FCOM_Convergence.esm' ]
msg:
# doNotClean
- <<: *useInstead
subs: [ '[Oblivion WarCry EV.esp](https://tesalliance.org/forums/index.php?/files/file/1294-fcom-convergence/)' ]
condition: *FCOMCond
- type: say
content:
- lang: en
text: 'German version.'
- lang: de
text: 'Deutsche Version.'
- name: 'AzurasServices_de.esp'
url: [ 'https://www.nexusmods.com/oblivion/mods/12187' ]
msg:
- type: say
content:
- lang: en
text: 'German Version'
- lang: de
text: 'Deutsche Version'
```
Status: Issue closed |
LiskHQ/lisk-sdk | 448083072 | Title: Don't assign a transaction type as a private variable
Question:
username_0: ### Expected behavior
The transaction type to compare against (for example `const TRANSACTION_TRANSFER_TYPE = 0;` for a type 0 transaction) is assigned in a way that can be overridden by the transactions that inherit from `BaseTransaction`.
An example would be implementing the valid type as a static field:
```
class TransferTransaction {
protected static TYPE = 10;
...
```
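For illustration, a sketch of how the base class could read an overridable static type (the class names and the value `10` mirror the example above; the method body is hypothetical, not the actual SDK code):
```typescript
class BaseTransaction {
  protected static TYPE = -1; // overridden by each concrete transaction class

  constructor(public readonly type: number) {}

  validate(): boolean {
    // Compare against the subclass's static TYPE instead of a private constant,
    // so validate() does not need to be re-implemented in every subclass.
    return this.type === (this.constructor as typeof BaseTransaction).TYPE;
  }
}

class TransferTransaction extends BaseTransaction {
  protected static TYPE = 10;
}

// new TransferTransaction(10).validate() === true
```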
### Actual behavior
The valid type is assigned as a private variable. Each function comparing a type (`validate`) needs to be re-implemented and cannot be overridden if `BaseTransaction` cannot access a valid transaction type.
### Steps to reproduce
### Which version(s) does this affect? (Environment, OS, etc...)
Status: Issue closed |
vuetifyjs/vuetify | 775032691 | Title: [Bug Report] New nested grid gutters in v2.4 is breaking change
Question:
username_0: ### Environment
**Vuetify Version:** 2.4.0
**Last working version:** 2.3.23
**Vue Version:** 3.0.4
**Browsers:** Google Chrome, Mozilla Firefox, Safari, Microsoft Edge
**OS:** Mac OS 10.15.6, Windows, Android, iOS, Mac OSX, Linux
### Steps to reproduce
This feature request #11408 was added to solve nested grid gutters. See supplied Codepen and previous issue
### Expected Behavior
Should still work with old behaviour. If not this should be flagged as a breaking change.
### Actual Behavior
Old grid styling is overridden by the new changes.
### Reproduction Link
<a href="https://codepen.io/thomaskientz/pen/qBOMdpY" target="_blank">https://codepen.io/thomaskientz/pen/qBOMdpY</a>
### Other comments
Please add this as a breaking change and supply either a migration path or a breaking-change notice in the release notes. For now, we and everyone else relying on the old behaviour are unable to update to the latest release.
<!-- generated by vuetify-issue-helper. DO NOT REMOVE -->
Answers:
username_1: That codepen is using v2.2, can you explain what the problem is?
username_0: I've added the Codepen from the original feature #11408 request. Therefore this reflects the change request from the old state to the new.
This is the pull request from the release notes https://github.com/vuetifyjs/vuetify/commit/d0f25fc59b29c385a6910dc48a111df811da3bdf
These were changes on the Grid layout. Specifically the gutters when nesting. This causes problems for anyone who used the nesting of row/cols recursively and used the no-gutters prop and padding/margin to correct unwanted behaviour.
I hope this makes sense. If not, I will try to produce a better Codepen to demonstrate the issue.
username_1: Bug fixes generally tend to break the workarounds for the bugs, remove the workaround and it won't be broken any more. If we avoided changes like this nothing would ever get fixed.
Status: Issue closed
username_0: No problem. I appreciate bugs being fixed, which might cause an issue on the workaround. To be clear the issue was marked a feature not a bug. So we were not expecting any breaking changes when upgrading. For now we will stay on v2.3.x
Do you suggest a feature is allowed to make a breaking change on a minor revision? If so, would it not be right to add a notice in the release note? As I understand from semantic versioning this is not the expected behaviour for a feature on a minor change. But you might be using a more liberal approach to semantic versioning.
Thanks for your quick reply.
username_1: Can do. The beta release had a section on potentially breaking changes, not sure why it was removed.
username_0: Thanks again for clarifying!
This resolves the issue for us. We will update our code base. I have closed the issue accordingly.
username_2: In case anyone comes here looking for what exactly is the problem.
Before (v2.3.22)

After (v2.4.0)

username_3: They removed vertical padding of row @@
username_4: Actually they added top and bottom margin of -12px for nested rows
username_5: @username_4 how to deal with this? Because of the top and bottom margins on `.row` now my rows overlapping each other.
username_6: Solution without `!important`.
Add to your `App.vue`:
```
<style>
.row {
margin-top: 0;
margin-bottom: 0;
}
.row + .row {
margin-top: 0;
}
</style>
``` |
egoist/eventstop | 201560550 | Title: Support wildcard?
Question:
username_0: ```js
const {on, emit} = eventstop()
on((type, msg) => {
if (type === 'hey') console.log(msg)
else // something else
})
emit('hey', 'foo')
```
might be useful but currently I don't have such a use case.
Status: Issue closed |
cmsdaq/DAQAggregator | 236975716 | Title: Negative throughputs from BUs
Question:
username_0: Hi,
there is a problem with mapping the unsigned long values from the BUs for the throughput in the aggregator. The values in the flashlist show e.g. '3286449865' Bytes/s, while the aggregator snapshot contains number like '-1132031326'.
Remi
Answers:
username_1: taking the BU flashlist and the aggregator snapshot at the same point in time:
- `/daqexpertflashlists/flashlists-dev/persistence/testbed/flashlists/BU/2017/6/19/19/1497899617932.smile`
- `/daqexpert/snapshots/dev/testbed/2017/6/19/19/1497899617932.smile`
one sees for an example `bu-c2e18-29-01` the following:
- "throughput" : 3294648689 in the flashlist .smile file
- "throughput" : -1000318607 in the DAQAggregator snapshot .smile file
in 32 bit two's complement hex these are:
- 3294648689 = 0xc4605971
- -1000318607 = 0xc4605971
so there seems to be (as you probably guessed) some restriction to 32 bits going on, even though the field `BU.throughput` is long (as well as the corresponding setter and getter).
username_1: the problem is probably here: https://github.com/cmsdaq/DAQAggregator/blob/6acb38bf07f9d7a3283969f62add774f4608e6ba/src/main/java/rcms/utilities/daqaggregator/data/BU.java#L124 which should be `.asLong()` instead of `.asInt()` (the same also holds for the field `rate`).
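For illustration, here is a minimal sketch of the 32-bit wrap described above, assuming the flashlist value is read through a Jackson `JsonNode` as the linked `BU.java` suggests (the class name below is made up):
```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ThroughputWrapDemo {
    public static void main(String[] args) throws Exception {
        JsonNode bu = new ObjectMapper().readTree("{\"throughput\": 3294648689}");
        // asInt() narrows the value to 32 bits, reproducing the negative number
        // seen in the aggregator snapshot (0xc4605971 -> -1000318607)
        System.out.println(bu.get("throughput").asInt());
        // asLong() preserves the full value from the flashlist (3294648689)
        System.out.println(bu.get("throughput").asLong());
    }
}
```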
username_2: @username_0 thank you for reporting this problem, @username_1 thank you for pinpointing the bug. The hotfix is ready to be deployed as 1.13.1. Right now we have stable beams; @username_0, should I deploy it now, or do you want me to wait for the interfill period?
username_0: Please wait for the next interfill period.
username_2: The hotfixed DAQAggregator 1.13.1 was successfully deployed to production at 11:23. Closing the issue.
Status: Issue closed
|
Corosauce/ZombieAwareness | 1110598996 | Title: crashes when loading the game
Question:
username_0: When I boot up the game it crashes.
The mod worked before but it doesn't work anymore.
I don't know if it was because I changed something in the config or something like that, but it is really annoying.
I have the other mod you need, btw.
the game crash report:
The game crashed whilst rendering overlay
Error: net.minecraftforge.fml.config.ConfigFileTypeHandler$ConfigLoadingException: Failed loading config file zombieawareness\PlayerRulesAndLists.toml of type COMMON for modid zombieawareness
Exit Code: -1
and here's the crash report:
---- Minecraft Crash Report ----
// Everything's going to plan. No, really, that was supposed to happen.
Time: 21.01.2022 16.25
Description: Rendering overlay
net.minecraftforge.fml.config.ConfigFileTypeHandler$ConfigLoadingException: Failed loading config file zombieawareness\PlayerRulesAndLists.toml of type COMMON for modid zombieawareness
at net.minecraftforge.fml.config.ConfigFileTypeHandler.lambda$reader$1(ConfigFileTypeHandler.java:61) ~[fmlcore-1.18.1-39.0.5.jar%2357!:?] {}
at net.minecraftforge.fml.config.ConfigTracker.openConfig(ConfigTracker.java:74) ~[fmlcore-1.18.1-39.0.5.jar%2357!:?] {}
at net.minecraftforge.fml.config.ConfigTracker.lambda$loadConfigs$1(ConfigTracker.java:64) ~[fmlcore-1.18.1-39.0.5.jar%2357!:?] {}
at java.lang.Iterable.forEach(Iterable.java:75) ~[?:?] {}
at java.util.Collections$SynchronizedCollection.forEach(Collections.java:2131) ~[?:?] {}
at net.minecraftforge.fml.config.ConfigTracker.loadConfigs(ConfigTracker.java:64) ~[fmlcore-1.18.1-39.0.5.jar%2357!:?] {}
at net.minecraftforge.fml.core.ModStateProvider.lambda$new$3(ModStateProvider.java:48) ~[forge-1.18.1-39.0.5-universal.jar%2360!:?] {re:classloading}
at net.minecraftforge.fml.ModLoader.lambda$dispatchAndHandleError$20(ModLoader.java:199) ~[fmlcore-1.18.1-39.0.5.jar%2357!:?] {}
at java.util.Optional.ifPresent(Optional.java:178) ~[?:?] {}
at net.minecraftforge.fml.ModLoader.dispatchAndHandleError(ModLoader.java:199) ~[fmlcore-1.18.1-39.0.5.jar%2357!:?] {}
at net.minecraftforge.fml.ModLoader.lambda$loadMods$14(ModLoader.java:183) ~[fmlcore-1.18.1-39.0.5.jar%2357!:?] {}
at java.lang.Iterable.forEach(Iterable.java:75) ~[?:?] {}
at net.minecraftforge.fml.ModLoader.loadMods(ModLoader.java:183) ~[fmlcore-1.18.1-39.0.5.jar%2357!:?] {}
at net.minecraftforge.client.loading.ClientModLoader.lambda$startModLoading$5(ClientModLoader.java:136) ~[forge-1.18.1-39.0.5-universal.jar%2360!:?] {re:classloading,pl:runtimedistcleaner:A}
at net.minecraftforge.client.loading.ClientModLoader.lambda$createRunnableWithCatch$4(ClientModLoader.java:127) ~[forge-1.18.1-39.0.5-universal.jar%2360!:?] {re:classloading,pl:runtimedistcleaner:A}
at net.minecraftforge.client.loading.ClientModLoader.startModLoading(ClientModLoader.java:136) ~[forge-1.18.1-39.0.5-universal.jar%2360!:?] {re:classloading,pl:runtimedistcleaner:A}
at net.minecraftforge.client.loading.ClientModLoader.lambda$onResourceReload$2(ClientModLoader.java:118) ~[forge-1.18.1-39.0.5-universal.jar%2360!:?] {re:classloading,pl:runtimedistcleaner:A}
at net.minecraftforge.client.loading.ClientModLoader.lambda$createRunnableWithCatch$4(ClientModLoader.java:127) ~[forge-1.18.1-39.0.5-universal.jar%2360!:?] {re:classloading,pl:runtimedistcleaner:A}
at java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1804) ~[?:?] {}
at java.util.concurrent.CompletableFuture$AsyncRun.exec(CompletableFuture.java:1796) ~[?:?] {}
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373) ~[?:?] {}
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182) ~[?:?] {}
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655) ~[?:?] {re:computing_frames}
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622) ~[?:?] {re:computing_frames}
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165) ~[?:?] {}
Caused by: com.electronwill.nightconfig.core.io.ParsingException: Invalid value: True
at com.electronwill.nightconfig.toml.ValueParser.parseNumber(ValueParser.java:102) ~[toml-3.6.4.jar%238!:?] {}
at com.electronwill.nightconfig.toml.ValueParser.parse(ValueParser.java:64) ~[toml-3.6.4.jar%238!:?] {}
at com.electronwill.nightconfig.toml.ValueParser.parse(ValueParser.java:69) ~[toml-3.6.4.jar%238!:?] {}
at com.electronwill.nightconfig.toml.TableParser.parseNormal(TableParser.java:57) ~[toml-3.6.4.jar%238!:?] {}
at com.electronwill.nightconfig.toml.TomlParser.parse(TomlParser.java:88) ~[toml-3.6.4.jar%238!:?] {}
at com.electronwill.nightconfig.toml.TomlParser.parse(TomlParser.java:37) ~[toml-3.6.4.jar%238!:?] {}
at com.electronwill.nightconfig.core.io.ConfigParser.parse(ConfigParser.java:113) ~[core-3.6.4.jar%237!:?] {}
at com.electronwill.nightconfig.core.io.ConfigParser.parse(ConfigParser.java:219) ~[core-3.6.4.jar%237!:?] {}
at com.electronwill.nightconfig.core.io.ConfigParser.parse(ConfigParser.java:202) ~[core-3.6.4.jar%237!:?] {}
at com.electronwill.nightconfig.core.file.WriteSyncFileConfig.load(WriteSyncFileConfig.java:73) ~[core-3.6.4.jar%237!:?] {}
at com.electronwill.nightconfig.core.file.AutosaveCommentedFileConfig.load(AutosaveCommentedFileConfig.java:85) ~[core-3.6.4.jar%237!:?] {}
at net.minecraftforge.fml.config.ConfigFileTypeHandler.lambda$reader$1(ConfigFileTypeHandler.java:57) ~[fmlcore-1.18.1-39.0.5.jar%2357!:?] {}
... 24 more
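(Note on the root cause: the `ParsingException: Invalid value: True` above usually means a boolean in `zombieawareness\PlayerRulesAndLists.toml` was hand-edited to `True` with a capital letter; TOML only accepts lowercase booleans. A minimal sketch, with a made-up key name:)
```toml
# invalid - the TOML parser rejects capitalized booleans
someSetting = True

# valid
someSetting = true
```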
[Truncated]
object_holder_definalize PLUGINSERVICE
runtime_enum_extender PLUGINSERVICE
capability_token_subclass PLUGINSERVICE
accesstransformer PLUGINSERVICE
runtimedistcleaner PLUGINSERVICE
mixin TRANSFORMATIONSERVICE
OptiFine TRANSFORMATIONSERVICE
fml TRANSFORMATIONSERVICE
FML Language Providers:
[email protected]
javafml@null
Mod List:
client-1.18.1-20211210.034407-srg.jar |Minecraft |minecraft |1.18.1 |COMMON_SET|Manifest: a1:d4:5e:04:4f:d3:d6:e0:7b:37:97:cf:77:b0:de:ad:4a:47:ce:8c:96:49:5f:0a:cf:8c:ae:b2:6d:4b:8a:3f
zombieawareness-1.18.1-1.12.1.jar |Zombie Awareness |zombieawareness |1.18.1-1.12.1 |COMMON_SET|Manifest: NOSIGNATURE
forge-1.18.1-39.0.5-universal.jar |Forge |forge |39.0.5 |COMMON_SET|Manifest: 22:af:21:d8:19:82:7f:93:94:fe:2b:ac:b7:e4:41:57:68:39:87:b1:a7:5c:c6:44:f9:25:74:21:14:f5:0d:90
worldedit-mod-7.2.8.jar |WorldEdit |worldedit |7.2.8+6008-1246d61 |COMMON_SET|Manifest: NOSIGNATURE
coroutil-1.18.1-1.2.37.jar |CoroUtil |coroutil |1.18.1-1.2.37 |COMMON_SET|Manifest: NOSIGNATURE
Crash Report UUID: 638c8fdb-ceed-4ad6-bccb-d64fb60dce57
FML: 39.0
Forge: net.minecraftforge:39.0.5 |
chenyinkai/blog | 299986452 | Title: Adding copyright info to copied text
Question:
username_0: On some websites, in order to protect copyright, users find that after copying, the pasted content often carries extra copyright information such as the author's name. So how is this feature implemented? The principle is actually very simple: just listen for the clipboard (copy) event.
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no">
<meta http-equiv="X-UA-Compatible" content="ie=edge">
<title>Document</title>
</head>
<body>
<div>Front-end is so hard to learn</div>
<script>
let oDiv = document.querySelector('div');
oDiv.oncopy = function(e) { // copy event
e.preventDefault();
let copyMsg = window.getSelection() + ' Please credit the source for commercial reposts.'; // window.getSelection() returns the selected content
e.clipboardData.setData("Text", copyMsg); // put the modified text on the clipboard
}
</script>
</body>
</html>
``` |
ansible/workshops | 559418094 | Title: broke aws_check_setup : make sure workshop_type is set to a correct value
Question:
username_0: `
TASK [aws_check_setup : make sure workshop_type is set to a correct value] *************************************************************************************************************************************
fatal: [localhost]: FAILED! =>
msg: Incorrect type for fail_msg or msg, expected string and got <type 'list'>`
Answers:
username_1: Hey Chris. What branch are you on? Post your extra vars please.
username_0: fixed, needed to use a version of ansible that was newer than 2.8
Status: Issue closed
|
keptn/keptn | 755915762 | Title: Build Automation: Execute integration tests before merging changes into master
Question:
username_0: Currently, integration tests are executed right after a pull request has been merged into the master branch.
This has the risk of breaking the master branch any time a new feature or bug fix gets merged and is in contrast to our goal of having a stable master branch.
Let's discuss possible solutions. Integration tests shall be executed before the actual merge e.g. when opening a pull request.
Answers:
username_1: Ideally the test would run "as if the PR was merged". With GH Actions and Travis-CI this is currently not 100% possible; we would only run the test based on the commit history of the branch (so it depends on when the branch was created). Forcing users to rebase would get us most of the way there, but I'm not 100% sure this is a viable option.
username_0: agree, this is an inherent characteristic of this workflow. I think it is okay to advise the PR contributor to do a rebase/merge in order to not deviate too much from the current master branch.
username_1: One thing we could introduce right now would be to run the integration test "Linking Stages" as shown in the workflow here:
https://github.com/keptn/keptn/blob/26b57f50cd53578ddda51e1c2fb03e4163ba767b/.github/workflows/integration_tests.yml#L316-L322
We could run that with GH actions via K3s. It's reasonably fast (setup + Keptn install via K3s on GH Actions takes roughly 4 minutes, the test itself takes 30 seconds), and it would show whether something major was broken.
One thing though: it requires a full CLI + Docker build, which takes 40 minutes (or longer, when macOS builds are hanging...).
As a prerequisite we could work on getting the tests and our Helm chart compatible with partial builds (so we don't need full builds).
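As a sketch only (the workflow, job, and step names below are illustrative, not the actual keptn workflow), a pre-merge trigger could look roughly like this:
```yaml
name: integration-tests-pr
on:
  pull_request:
    branches: [ master ]
jobs:
  linking-stages:
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v2
      # placeholder step: install K3s, install Keptn on it, then run the
      # "Linking Stages" integration test from the existing test scripts
      - name: Run Linking Stages test on K3s
        run: echo "install K3s + Keptn, run Linking Stages test here"
```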
username_1: This might be fixed with https://github.com/keptn/keptn/issues/2986 |
bournemouth/static-location-api | 57444653 | Title: Add a GitHub key for Composer
Question:
username_0: We can't deploy as Symfony pulls from GitHub and we've exceeded their rate. Add a key for GitHub so Composer can pull and deploy.
```
Could not fetch https://api.github.com/repos/symfony/HttpKernel/zipball/27abf3106d8bd08562070dd4e2438c279792c434, enter your GitHub credentials to go over the API rate limit
The credentials will be swapped for an OAuth token stored in /app/.composer/auth.json, your password will not be stored
To revoke access to this token you can visit https://github.com/settings/applications
```
Answers:
username_1: I think it might be all packages pulled in by composer :(. We need to set up an auth token stored in the env and get viaduct to create an `auth.json` file on build with `github-oauth`.
http://seld.be/notes/authentication-management-in-composer
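For reference, the token can be supplied either through an `auth.json` (which is what the build would need to generate) or via `composer config -g github-oauth.github.com <token>`; a minimal `auth.json` sketch with a placeholder token:
```json
{
    "github-oauth": {
        "github.com": "<oauth-token-from-env>"
    }
}
```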
username_2: If you deploy with Bosh then you shouldn't have this issue :-)
username_1: :trollface: |
openstreetmap/operations | 537985916 | Title: Http error 500 upon sign up
Question:
username_0: I've tried this on Firefox 71.0 (Slackware), Firefox (Android), and Chromium (Slackware).
When I attempt to sign up for an account on http://openstreetmap.org, I enter an email, a username, and a password, and then click the SIGN UP button. In response, I'm taken to https://www.openstreetmap.org/user/terms and see this error message:
```
Application error
The OpenStreetMap server encountered an unexpected condition that prevented it from fulfilling the request (HTTP 500)
Feel free to contact the OpenStreetMap community if your problem persists. Make a note of the exact URL / post data of your request.
This may be a problem in our Ruby On Rails code. 500 occurs with exceptions thrown outside of an action (like in Dispatcher setups or broken Ruby code)
```
The error is, so far, 100% reproducible for me (although a user in osm-dev IRC was able to sign up while I was asking about the error, so 0% reproducible for some).
Answers:
username_1: Actually I think this is an operational issue with one server.
username_1: I think it should be fixed now.
Status: Issue closed
|
pytorch/pytorch | 613032325 | Title: Let future expose a then() API
Question:
username_0: ## 🚀 Feature
A future `.then()` API that satisfies the following contract would be useful:
```
fut = rpc.rpc_async(...).then(lambda result: some_other_potentially_async_fn(result))
fut.wait() # will be completed once rpc_async is completed and the above lambda has finished execution
```
The implementation could look something like:
```
def then(self, func):
    ret = Future()
    def _callback(result):
        func(result)
        ret.markCompleted()
    self.addCallback(_callback)
    return ret
```
## Motivation
The original motivation for this is fixing the RPC profiling - currently, the profiling is done through attached callbacks; however, this does not guarantee that the callbacks are run when the profiler exits. We already enforce that the future corresponding to the RPC must be awaited in order to be correctly profiled - so wrapping the callbacks in the `.then()` as above and then waiting on this future is a viable solution.
It may also be useful for async user functions in RPC (https://github.com/pytorch/pytorch/issues/36071), although I'll let @username_1 clarify whether it would be useful or not.
Note: `add_done_callback` is also being exposed to the user, see https://github.com/pytorch/pytorch/pull/37311. The user could of course implement the above with `add_done_callback`, but it's worth considering to support this natively.
Answers:
username_1: yes, I agree this will also be useful for async UDF.
Does this mean, we no longer need `add_done_callback(cb) -> None`, and will replace it with `then(cb) -> Future`?
username_2: Does it make sense to create both `add_done_callback(cb) -> None` and `then(cb) -> Future`?
(Given that the former is lighter weight?)
(Next, somebody will ask for `via(executor)` and `thenError()` :) )
username_1: self-assigned to reserve for new team member on-boarding
username_1: created a dedicated issue (#45552) for the followup `add_done_callback` API. closing this one
Status: Issue closed
|
Z3Prover/z3 | 947156219 | Title: cannot install z3-solver
Question:
username_0: (base) saras-MacBook-Pro-3:LfD_STL_single_goal saramohamadi$ pip3 install z3-solver
Collecting z3-solver
Downloading z3-solver-4.8.12.0.tar.gz (4.4 MB)
|████████████████████████████████| 4.4 MB 3.6 MB/s
Building wheels for collected packages: z3-solver
Building wheel for z3-solver (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: /Library/Frameworks/Python.framework/Versions/3.8/bin/python3.8 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/<KEY>pip-install-1s0f5rqm/z3-solver_f84511ef6b5347fa9de024d32e6241ff/setup.py'"'"'; __file__='"'"'/<KEY>pip-install-1s0f5rqm/z3-solver_f84511ef6b5347fa9de024d32e6241ff/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /<KEY>pip-wheel-unhwrutb
cwd: /<KEY>pip-install-1s0f5rqm/z3-solver_f84511ef6b5347fa9de024d32e6241ff/
Complete output (193 lines):
running bdist_wheel
running build
Configuring Z3
-- The CXX compiler identification is AppleClang 10.0.1.10010046
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Z3 version 4.8.12.0
-- Failed to find git directory.
CMake Warning at CMakeLists.txt:51 (message):
Disabling Z3_INCLUDE_GIT_DESCRIBE
Call Stack (most recent call first):
CMakeLists.txt:100 (disable_git_describe)
CMake Warning at CMakeLists.txt:55 (message):
Disabling Z3_INCLUDE_GIT_HASH
Call Stack (most recent call first):
CMakeLists.txt:101 (disable_git_hash)
-- CMake generator: Unix Makefiles
-- Build type: Release
-- Found PythonInterp: /Users/saramohamadi/.pyenv/shims/python
-- PYTHON_EXECUTABLE: /Users/saramohamadi/.pyenv/shims/python
-- Detected target architecture: x86_64
-- Platform: Darwin
-- Not using libgmp
-- Not using Z3_API_LOG_SYNC
-- Thread-safe build
-- Performing Test HAS_SSE2
-- Performing Test HAS_SSE2 - Success
-- Looking for C++ include pthread.h
-- Looking for C++ include pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Performing Test HAS__Wall
-- Performing Test HAS__Wall - Success
-- C++ compiler supports -Wall
-- Treating only serious compiler warnings as errors
-- Performing Test HAS__Werror_odr
-- Performing Test HAS__Werror_odr - Success
-- C++ compiler supports -Werror=odr
-- Performing Test HAS__Werror_delete_non_virtual_dtor
-- Performing Test HAS__Werror_delete_non_virtual_dtor - Success
-- C++ compiler supports -Werror=delete-non-virtual-dtor
-- Performing Test HAS__Werror_overloaded_virtual
[Truncated]
-- Adding component fpa_tactics
-- Adding component fd_solver
-- Adding component portfolio
-- Adding component opt
-- Adding rule to generate "opt_params.hpp"
-- Adding component api
-- Adding component api_dll
-- Emitting rules to build Z3 python bindings
-- Emitting rules to install Z3 python bindings
-- CMAKE_INSTALL_PYTHON_PKG_DIR not set. Trying to guess
pyenv: version `my-venv-topic-classification' is not installed (set by /Users/saramohamadi/.pyenv/version)
CMake Error at src/api/python/CMakeLists.txt:109 (message):
Failed to determine your Python package directory
-- Configuring incomplete, errors occurred!
See also "/<KEY>pip-install-1s0f5rqm/z3-solver_f84511ef6b5347fa9de024d32e6241ff/core/build/CMakeFiles/CMakeOutput.log".
error: Unable to configure Z3.
----------------------------------------
ERROR: Failed building wheel for z3-solver
Answers:
username_1: I am not sure. Haven't seen this kind of error before.
The error message suggests your python installation is unable to determine where the package directory resides in the current environment.
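One hint from the log is the line `pyenv: version 'my-venv-topic-classification' is not installed`; if that missing pyenv environment is the culprit, pointing pyenv at an interpreter that actually exists might help (the version numbers below are just examples):
```
pyenv versions              # see what is actually installed
pyenv install 3.8.12        # install a real interpreter if needed
pyenv global 3.8.12         # stop resolving to the missing virtualenv
pip3 install z3-solver
```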
Status: Issue closed
|
objectbox/objectbox-java | 278411055 | Title: Connection timed out exception during gradle sync.
Question:
username_0: Yes, both the link is accessible from my browser. Here is my gradle file-
Content of project level gradle-
```
buildscript {
ext.kotlin_version = '1.1.51'
ext.objectboxVersion = '1.3.2'
//ext.realm_version='4.2.0'
repositories {
jcenter()
google()
maven { url "http://objectbox.net/beta-repo/" }
}
dependencies {
classpath 'com.android.tools.build:gradle:3.0.1'
classpath 'com.google.gms:google-services:3.0.0'
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
classpath "io.objectbox:objectbox-gradle-plugin:$objectboxVersion"
}
}
allprojects {
repositories {
jcenter()
mavenCentral()
maven { url "https://jitpack.io" }
flatDir {
dirs 'libs'
}
google()
maven { url "http://objectbox.net/beta-repo/" }
}
}
ext {
// Sdk and tools
minSdkVersion = 16
targetSdkVersion = 26
compileSdkVersion = 26
buildToolVersion = '26.0.2'
// App dependencies
supportLibrary = '26.1.0'
retrofit = '2.3.0'
glide = '3.7.0'
photoview = '1.3.1'
stetho = '1.4.1'
junit = '4.12'
playService = '11.4.2'
picasso = '2.5.2'
multidex = '1.0.1'
facebookAndroid = '4.26.0'
constraintLayout = '1.0.2'
loggingInterceptor = '3.4.1'
stetho = '1.5.0'
leakcanary = '1.5.1'
rxandroid = '2.0.1'
dagger2 = '2.11'
guavaVersion = '18.0'
[Truncated]
compile "io.reactivex.rxjava2:rxandroid:$rxandroid"
compile "io.reactivex.rxjava2:rxjava:$rxandroid"
compile 'com.jakewharton.retrofit:retrofit2-rxjava2-adapter:1.0.0'
compile "com.google.dagger:dagger-android:$dagger2"
compile "com.google.dagger:dagger-android-support:$dagger2"
annotationProcessor "com.google.dagger:dagger-android-processor:$dagger2"
annotationProcessor "com.google.dagger:dagger-compiler:$dagger2"
compile "com.journeyapps:zxing-android-embedded:$rootProject.ext.zxingAndroidEmbedded"
compile "appice.io.android:sdk:$rootProject.ext.appIce"
compile "org.jetbrains.kotlin:kotlin-stdlib-jre7:$kotlin_version"
compile "org.jetbrains.anko:anko:$anko_version"
compile 'com.googlecode.libphonenumber:libphonenumber:7.2.2'
compile 'com.android.support.constraint:constraint-layout:1.0.2'
testCompile 'junit:junit:4.12'
}
apply plugin: 'com.google.gms.google-services'
```
Answers:
username_1: Please post the `repositories` part of your `build.gradle` or the whole `build.gradle` file.
username_2: Odd, I have another similar user report at https://github.com/objectbox/objectbox-examples/issues/31. |
Security-Onion-Solutions/securityonion | 860491822 | Title: ERROR: Unable Registry Docker
Question:
username_0: **System**: Centos 7 clean install and fully updated (VM)
**Error**: During install I receive the following error:
```
[ERROR ] Failed to pull ghcr.io/security-onion-solutions/registry:latest: Error 500: Get https://ghcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Comment: Failed to pull ghcr.io/security-onion-solutions/registry:latest: Error 500: Get https://ghcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
```
**Steps to Reproduce**:
1. Install Centos 7
2. yum update -y
3. git clone https://github.com/Security-Onion-Solutions/securityonion
4. bash so-setup-network
5. Select eval version
6. Select all defaults
Answers:
username_1: Is this still happening? Most likely GitHub's Container Registry was suffering from internal errors that have since been resolved.
username_0: Nope, no love. My co-worker was able to install just fine on his test VM so not sure what is going on. I've attached the sosetup.log file so maybe it will help you out. This is a completely clean install of CentOS 7 and the only thing I've done is a yum update before I followed the Sec Onion instructions (yum install git, git clone, ....)
[sosetup.log](https://github.com/Security-Onion-Solutions/securityonion/files/6339591/sosetup.log)
username_1: Are you and your co-worker on the same LAN? This sounds like a firewall, proxy, etc appliance that is getting in the way.
username_0: Today was doing it from my home, but did it a couple of days ago from the office (same as co-worker) with the same result.
Status: Issue closed
|
openthread/openthread | 708631725 | Title: Simulation 1.2 / packet-verification-1-1-on-1-2 (pull_request) keeps failing in many PRs.
Question:
username_0: **Describe the bug**
This item keeps failing:

Many PRs are affected now.
Answers:
username_1: it's a duplicate of #5572
Status: Issue closed
|
OptimalDesignLab/PumiInterface.jl | 110685791 | Title: Reverse Coloring Algorithm
Question:
username_0: The current coloring algorithm always assigns the current element to the *lowest* possible color, leading to a situation where the last element usually needs to create a new color because all its neighbors already use the existing colors. So the question becomes: is there some method or heuristic that would avoid this?
Possibly assigning elements to the largest existing color, rather than the smallest, would fix this?
northwest-knowledge-network/mdedit | 268449649 | Title: Character limits
Question:
username_0: I am entering a record into the metadata editor now, and I keep running into the character limit. For fields that have character limits, can we include those values somewhere in the text? This way users will be able to adjust their text accordingly without needing to guess. Thanks!
Answers:
username_1: I believe anything that isn't a multi-line text input box, like the title, has a character limit of 255. I don't remember if this limit was imposed specifically because the database string type could only be 255 characters long, or if that was an arbitrary limit, so I will investigate. I do remember a limit was decided on to save space in the database: if records can have unlimited-length inputs, then they take up a lot more space in the database than if they have a fixed size. But if users are running up against the limits then the limits need to be re-evaluated.
I will investigate, and make an update. For reference, how long is the text you are putting in? So I can get an idea on how long to make the new limits.
username_0: I think the limit should stay in place as is. I agree that we don't want too much text in the abstract. I am just trying to cut down this person's introduction and turn it into an abstract and it would be easier if I knew what the character limit was.
Thanks for letting me know.
Carrie
username_1: I think the "abstract" field has a limit of 3000 characters, and the research method is unlimited. But I can change the limits on the title, etc. to make sure users can input what they need to.
Also, it is possible to have a "remaining characters" counter next to the description of form fields. This would let the user know how many characters they can put in to each field. Do you think this would be useful?
username_0: I think it is fine as is, but the "remaining characters" counter would be wonderful. Thank you!
fullcalendar/fullcalendar | 137571576 | Title: eventDrop should have a counterpart
Question:
username_0: Event drop triggers only when there has been a change in the time of the event after repositioning.
DragEnd does not contain direct information about whether the event has been repositioned or the repositioning has been cancelled (i.e. the new time is the same as the old one).
What we are using in our modified script:
line 3837:
```js
// has been here in the original script
if (dropLocation) {
    view.reportEventDrop(event, dropLocation, this.largeUnit, el, ev);
}
// what we added
else {
    // For when the event has been dropped to the same position
    view.reportEventDropOnSame(event);
}
```
Status: Issue closed
Answers:
username_0: Why do you close issues without any comment?
username_1: https://raw.githubusercontent.com/fullcalendar/fullcalendar/master/.github/ISSUE_TEMPLATE.md |
quarkusio/quarkus | 617883499 | Title: quarkus.http.port resolves to 8080 when running test
Question:
username_0: I have noticed that the configuration value `quarkus.http.port` is injected as `8080` during a test run, but the webserver actually starts up on `8081`. This is preventing me from properly creating urls to test against.
Log output:
```
# port in URL gotten from quarkus.http.port
2020-05-13 22:47:41,345 INFO [Test worker] (com.gjs.taskTimekeeper.webServer.server.LifecycleBean.onStart(LifecycleBean.java:74) ) - Base URL: http://localhost:8080
# log output from quarkus
2020-05-13 22:47:41,512 INFO [Test worker] (io.quarkus.runtime.Timing.printStartupTime(Timing.java:85) ) - Quarkus 1.4.2.Final started in 6.747s. Listening on: http://0.0.0.0:8081
```
I would say that `quarkus.http.port` should be accurate to the actual port being listened on.
Answers:
username_1: `quarkus.http.test-port` is the property that gives the port that tests run on.
Furthermore, in tests you can use the `test.url` MicroProfile Config value to get the URL of the running application.
Does that solve your issue, or are you looking for something more elaborate?
username_0: Interesting. Good to know, just that I was expecting to have `quarkus.http.port` being the source of truth on what port the server was running on, test/dev/prod.
I see why it is this way, but I would argue it makes sense to keep things consistent. I have some instances where I am creating urls to pass back to the user, using `quarkus.http.port` for the port in the url. This makes it slightly harder to test, as I will potentially have to replace the port if I wanted to use the url as-is (if it was a link to a webpage, for example, just plugging it into selenium would not work).
I can see some ways around it, so not a real issue I guess.
Perhaps have a new config value like `quarkus.http.running-port` would allow for the centrallized port for whatever profile is currently running?
username_1: The problem I see with that is that there isn't a way to make `quarkus.http.running-port` read-only.
I am sure that if we do add it, someone will expect to be able to set the port just by changing it
username_0: I agree, but maybe that could be desirable? Like if one wanted to force the port value in any profile.
At any rate, ideally if the documentation mentions the relationship I think it would largely be fine.
Part of me thinks that this is a thing that one can easily overthink as well.
username_1: Would you like to add some documentation mentioning it?
username_2: The test-port property documented here: https://quarkus.io/guides/getting-started-testing#controlling-the-test-port
But I guess this is not sufficient?
username_0: I suppose it does, but it's really hard to know sometimes what configuration is available and how it relates to others. Of course that page does explain that particular config entry well, but I was looking for something else. Though it could just be a not knowing where to look/ lack of google fu on my end.
username_3: I'm facing a similar situation where I'd like to know the actual port being used by Quarkus, _before_ any request is made (so for example on `StartupEvent`). This becomes even more important when in tests (or prod) the configured port is `0` for randomization.
I couldn't find a way to access that port number at runtime. I've noticed that it's being set on `io.vertx.core.http.HttpServerOptions` but AFAICT this is not injectable, nor might it be desriable to make it injectable.
username_4: Dunno how we can solve this but this looks like a very reasonable request.
/cc @username_8
username_1: We can certainly make a read-only version of `HttpServerOptions` a bean that users can inject. WDYT?
username_0: I mean I like the idea, just something that gives us access to current running port (wasn't sure if the question was directed at me, @username_1 )
username_1: It wasn't directed to anyone specifically.
A CDI bean like the one I mentioned above shouldn't be hard to create and should solve the problem
username_5: This would indeed be handy in an integration test app which needs to know on which port it is running. The main problem is that the MP config property to check is different when running in JVM and native mode. For JVM it is `quarkus.http.test-port` and for native it is `quarkus.http.port`. To figure out the effective port one has to select one e.g. based on `"executable".equals(System.getProperty("org.graalvm.nativeimage.kind"))`:
```
@ConfigProperty(name = "quarkus.http.test-port")
int httpTestPort;
@ConfigProperty(name = "quarkus.http.port")
int httpPort;
private int getEffectivePort() {
final boolean isNativeMode = "executable".equals(System.getProperty("org.graalvm.nativeimage.kind"));
return isNativeMode ? httpPort : httpTestPort;
}
```
Another option is to pass the port from `RestAssured.port` to the app via `@QueryParam` or similar.
username_6: Hello, I wanted to open a new issue but I saw this one.
In my case this is annoying, because when quarkus.http.test-port is set to 0, the chosen port does not fall back to the quarkus.http.port var.
In my code, I use quarkus.http.port in some services, and it gives me 8080. Even @RestClient does not work in this case.
It's useful to use that in the case of parallel Maven builds.
For me, we should first only use %test.quarkus.http.port; if it's set to 0, the port is chosen by Quarkus and then it naturally falls back to quarkus.http.port.
Just my 2 cents.
username_7: any progress or any guide? same issue here, need the actual http port to do a "consul http health check", but do not know how to find the running http port.
username_7: @username_8
is it possible to get the running port? thanks
username_8: https://github.com/quarkusio/quarkus/blob/d3325b53252fa33e8ca7c1103a95fc9149d37f62/extensions/vertx-http/runtime/src/main/java/io/quarkus/vertx/http/runtime/VertxHttpRecorder.java#L928
This is set as a system property, so you should be able to read it from the property, either directly or using MP Config.
username_5: Is it possible to get the `launchMode` from application code?
I think most of the folks here are asking for a simple, direct way to obtain the effective port from application code (and perhaps also from a test). A single property (new or existing) that works across all modes would be nice. Or the read-only injectable `HttpServerOptions` mentioned by @username_1. Or we should stop relying on `quarkus.http.test-port` and instead recommend using `%test.quarkus.http.port` so that `quarkus.http.port` always bears the effective value, as suggested by @username_6. All of the listed options would work for me.
username_3: @username_8 just adding my input here. From my point of view, I'm trying to run the application on a random port, using `http.port=0` and let quarkus decide which port at runtime. But I still need to know the actual port that was selected. So accessing the system property is not enough in that case.
Status: Issue closed
|
mbrn/material-table | 390871744 | Title: Please add functionality to make multi sort possible
Question:
username_0: **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
Answers:
username_1: For us as well, this is super important.
I hope this gets implemented.
@username_2 Thanks all for the hard work, you are all great !
username_2: Hi @username_1 ,
I will apply many more features on table. But nowadays i am very busy and i can reply help-request issues in my limited time. Thanks for your patience
username_3: + 1 Multi-sort is also important for us. Great work.
username_4: @username_2 I needed this feature for a project and have things working great. Wanted to get your take as there are a few ways to go with this. Currently I added a new property called 'sortObject' to data manager which is passed onto table header through table as a prop. It is just a simple object { myFieldA: 'asc', myFieldB: 'desc', .... } . This really takes the place of orderBy and orderDirection, tracking all column sort state. Wasn't sure if you prefer I just replace orderBy and orderDirection with the new 'sortObject' prop as it would handle both single and multi-sort, but might be a breaking change, need to dig a bit further. Also, instead of a table option of 'multiSort' I've just used SHIFT key to track whether user wants current sort to be multi, if event.shiftKey is false it is single sort as usual, even if multi-sort had been active for numerous columns and user performs a new sort without holding SHIFT key then it reverts back to single sort. Any thoughts before I take a pull?
username_5: +1 for multi sort on shift click only, I would also enable multi sort by default |
hukkelas/DeepPrivacy | 577826230 | Title: "nvidia-docker" command is deprecated.
Question:
username_0: I tried to use it with docker, but I am stuck on the command with nvidia-docker: `nvidia-docker run --rm -it -v $PWD:/workspace -e CUDA_VISIBLE_DEVICES=0 deep_privacy python -m deep_privacy.train models/large/config.yml`
After installing nvidia-docker, I could run a command like `docker run --gpus all nvidia/cuda:10.0-base nvidia-smi` but not like `nvidia-docker run`.
I am not familiar with nvidia-docker; I'd really appreciate it if you could tell me how to run your program with docker.
[nvidia-docker: command not found · Issue #1028 · NVIDIA/nvidia-docker](https://github.com/NVIDIA/nvidia-docker/issues/1028)
Answers:
username_1: If you're not familiar with docker, I recommend not using it (unless required by your system administration etc).
I'm planning to make a easier setup tutorial in the future but in short:
1. Make sure you have Python 3.6 or higher installed.
2. Install Pytorch version 1.0 or higher (we've tested with 1.0, 1.1, 1.2 and 1.3 and it looks OK). To install pytorch, follow the tutorial here: https://pytorch.org/get-started/locally/
3. Install requirements:
```
pip install -r docker/requirements.txt
```
4. Install NVIDIA Apex: https://github.com/nvidia/apex. You can do this with a pytorch-only build:
```bash
pip install git+https://github.com/nvidia/apex
```
Can you check if this works instead?
Status: Issue closed
|
hyperf/hyperf | 1048358988 | Title: Upgrade the minimum php version to 8.0 for all components
Question:
username_0: Yes, it will.
Answers:
username_1: Will it be adapted for PHP 8.1?
username_0: Yes, it will.
username_1: Personally, I don't think 3.0 needs to be released in a hurry. I suggest that 3.0 also introduce a polished distributed transaction component and an IoT component, and only then release the new version. If time allows, introducing a gRPC service registration/discovery component would be even better; that would make it a complete PHP full-stack suite.
username_0: Yes, we won't rush to publish the stable release.
username_2: You should polish some of the components in the 2.2 version first, since that's the version many people are using, and then migrate to 3. Hyperf shouldn't release a new version just for the sake of releasing one; right now an LTS version should be the bigger priority.
username_0: @username_2 Which components in 2.2 aren't done well?
3.0 should be the first LTS.
username_3: Same question: which component in 2.2 isn't done well?
Also, one piece of information everyone should know: as of today, no PHP 7 release is under active official maintenance any more; the lowest version actively maintained by the PHP team is PHP 8.0.
http://php.net/supported-versions.php
username_2: 1. I phrased that poorly. What I meant is that components for currently popular technologies could be added, such as the gRPC registration/discovery component mentioned above. Why? PHP developers don't pay much attention to data types, but most systems involve more languages than just PHP, and data types matter a lot when services interact. Personally I think a gRPC registration/discovery component is really necessary, and gRPC is far more popular than JSON-RPC these days.
2. The performance gain of PHP 8 over PHP 7 is nowhere near the jump PHP 7 brought over PHP 5, so most companies won't rashly switch to PHP 8. Moreover, compared with the PHP 7 era the internet dividend is much smaller now and new projects are a smaller share of the work, so most developers are still on PHP 7.
3. My personal view: instead of releasing 3.0, it would be better to polish the current 2.x version into the real full-stack suite mentioned above, and only then move to 3.0, so the developers using it can feel at ease. For example, a month ago I had the team rebuild our old system on 2.2, with user-facing services on Hyperf and all the high-frequency parts in Golang, and I kept hoping for an official LTS on 2.x; now I keep wondering whether that decision was a mistake.
username_0: The gRPC problem is hard... it would eat up a huge amount of our time. I originally planned to do all of that in 3.0.
The 2.2 version is more about holding the current ground. Even if we do gRPC, it will have to wait until 3.0 is done; then we'd make an incubator package for 2.2 and merge it into 3.0 later.
username_1: A distributed transaction solution should also be included in the LTS version.
username_3: Even with Hyperf 3.0, 2.2 would still be in a security-fixes-only stage. If we can accept PHP 7.4 being in a security-fixes-only stage, why can't we accept the same for Hyperf? I don't think new components conflict with any particular version; new components are always produced through the Incubator path.
username_4: Right now annotations all have this format, which makes it hard for the IDE to follow:
```
public function __construct(...$value)
{
parent::__construct(...$value);
$this->value = $this->formatParams($value);
}
```
Since Hyperf 3.0 uses PHP 8 native attributes, could you consider writing annotations like this, so that `hyperf/ide-helper` no longer needs to be pulled in?
```
public function __construct(string $name, bool $required)
{
}
```
username_0: @username_4 Actually, I've considered that before too...
richardgirges/express-fileupload | 406719713 | Title: Path to temp files.
Question:
username_0: I've set tempFileDir : '/tmp/', so middleware should save files to the '/tmp/' folder.
But it saves them to the 'tmp' folder inside the middleware module's folder, meaning '/app/node-modules/express-fileupload/tmp/'.
Is that correct, or does it save files in both folders, or is there an error in the documentation?
Answers:
username_0: It looks like I got some old code from npm, because I see that I have
`const dir = __dirname + (options.tempFileDir || '/tmp/');`
instead of
`const dir = options.tempFileDir || process.cwd() + '/tmp/';`
which is what the git repository contains.
username_1: try './tmp/'
Status: Issue closed
username_0: Hi @username_1, this issue was fixed in the last alpha. |
pavel-demin/red-pitaya-notes | 178327355 | Title: CIC compiler Redpitaya
Question:
username_0: Dear Pavel,
I am trying to send the output of a CIC filter to the DAC you developed for the RedPitaya.
To try this I used an external signal generator producing a 1 kHz / 100 mV peak sine. The sinusoidal signal is fed to a CIC decimation filter through the ADC. The CIC decimator has the following parameters:
decimator, 3 stages, differential delay 1, 1 channel, fixed rate 100
input sample frequency : 125 MHz
clock frequency : 125 MHz
input data width : 14 bits
quantization : truncation
output data width : 14 bits
all other parameters are unchanged
This filter output is sent to a CIC interpolator filter before being transmitted to the DAC. This filter is configured with the following parameters:
interpolator, 3 stages, differential delay 1, 1 channel, fixed rate 100
input sample frequency : 1.25 MHz
clock frequency : 125 MHz
input data width : 14 bits
quantization : truncation
output data width : 14 bits
The gain due to the cascade of the two CIC filter is ((100)^3)².
The trouble is that the signal visualized on the oscilloscope is 20 mV, whereas I expected it to be 0.1((100)^3)².
Do you have an idea of where the problem might come from?
Thanks in advance for any help
Answers:
username_1: If I'm not mistaken, the normalized gain of the CIC filter can be calculated as ((R*M)^N)/(2^ceiling(N*log2(R*M))), where M is differential delay, N is number of stages, R is rate change factor.
So, for the filter that you describe, it should be 0.95.
100 mV * 0.95 * 0.95 = 90 mV
Looks like a factor 4 is still missing. Maybe the CIC compiler shifts its results by 1 bit.
Could you try to set output data width to 15 bits for both filters?
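For reference, a quick numeric check of that formula (a small sketch in Python, plugging in R=100, M=1, N=3 from your configuration):
```python
import math

R, M, N = 100, 1, 3  # rate change factor, differential delay, number of stages

gain = (R * M) ** N / 2 ** math.ceil(N * math.log2(R * M))
print(gain)               # ~0.9537 for one CIC filter
print(100 * gain * gain)  # two cascaded CIC filters on a 100 mV input -> ~91 mV
```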
username_0: I set the output data width to 15 bits for both filters and it works well.
I have another question. Because of the passband droop, and therefore the narrow usable passband, I used a single-rate FIR Compiler. The filter is synthesized in MATLAB with unity gain in the passband.
The filter coefficient was:-7.109910e-04, -1.112864e-04, 1.108516e-03, 3.816352e-03, 8.833174e-03, 1.671161e-02, 2.754876e-02, 4.087898e-02, 5.567736e-02, 7.048093e-02, 8.360946e-02, 9.344460e-02, 9.871252e-02, 9.871252e-02, 9.344460e-02, 8.360946e-02, 7.048093e-02, 5.567736e-02, 4.087898e-02, 2.754876e-02, 1.671161e-02, 8.833174e-03, 3.816352e-03, 1.108516e-03, -1.112864e-04, -7.109910e-04
and the IP FIR filter has the following parameters.
single rate
input sample frequency : 1.25 MHz
clock frequency : 125 MHz
input data width : 15 bits
output rounding mode : convergent rounding to even
output data width : 15 bits.
The coefficients are quantized only, in 32 bits
all other parameters are unchanged
When I visualize the output of the DAC, the amplitude is 58 mV; the FIR filter reduces the amplitude of the input signal, while the gain should be unity (so the output should be 100 mV).
Thank you for your help
username_1: Setting FIR's input data width to 14 bits should solve this problem.
The idea is to boost only the outputs, not the inputs. So, all the inputs should be 14-bit wide. And all the outputs should be 15-bit wide.
Moreover, I think that it's better to use more bits for intermediate results to decrease the effect of the rounding errors:
CIC1: input 14 bits, output 25 bits
subset1: input 25 bits, output 24 bits
FIR: input 24 bits, output 25 bits
subset2: input 25 bits, output 24 bits
CIC2: input 24 bits, output 14 bits
Here is a link to a subset configuration:
https://github.com/username_1/red-pitaya-notes/blob/master/projects/sdr_transceiver/rx.tcl#L203
username_0: Yes, it works with 1.9 Vpp.
The CIC decimator filter (CIC1) is connected to the FIR filter input. The FIR filter output is sent to the CIC interpolator filter (CIC2).
In order to decrease the effect of the rounding errors, this is my configuration:
CIC1: input 14 bits, output 29 bits
subset1: input 29 bits [31:0], output 24 bits [31:8]
FIR: input 24 bits, output 32 bits
subset2: input 32 bits [31:0], output 24 bits [31:8]
CIC2: input 24 bits, output 24 bits
subset2: input 24 bits [23:0], output 14 bits [23:8]
In the Subset configuration, the data is processed by byte.
With this configuration, the amplitude of the DAC output was 40 mV instead of 100 mV.
What's wrong with my configuration?
username_1: I don't understand why you use [23:8] or [31:8].
Could you try the configuration from my previous comment?
Here it's with bit numbers:
CIC1: input 14 bits, output 25 bits
subset1: input 25 bits, output 24 bits [23:0]
FIR: input 24 bits, output 25 bits
subset2: input 25 bits, output 24 bits [23:0]
CIC2: input 24 bits, output 14 bits
username_1: Yes, the input width should be set to 4 bytes and the output width should be set to 3 bytes.
username_0: I used [23:8] because I thought the data were in big-endian mode.
In your configuration it seems that the data are in little-endian mode.
I tested your configuration, but when the input width of the CIC2 filter is 24 bits, the output width can't be lower than 24 bits.
So I tried this configuration:
CIC1: input 14 bits, output 25 bits
subset1: input 25 bits [31:0], output 24 bits [23:0]
FIR: input 24 bits, output 25 bits
subset2: input 25 bits [31:0], output 24 bits [23:0]
CIC2: input 24 bits, output 24 bits
subset3: input 24 bits [23:0], output 16 bits [15:0]
When I visualize the DAC output, I don't get a sine wave.
username_1: Thanks for the test.
I think that the problem is with subset3. If output of CIC2 is 24-bit wide, then the output of subset3 should indeed include the most significant bits [23:8] as you previously proposed.
Could you try the following modification?
CIC2: input 24 bits, output 25 bits
subset3: input 25 bits [31:0], output 16 bits [23:8]
username_1: After some thought, I changed the subset3 configuration in my previous comment. The idea is that the most significant (MSB) DAC bit (bit number 13) should be the MSB-1 (bit number 25-1=24) of the CIC2 output.
username_0: In this case, do you think the configuration should be:
CIC1: input 14 bits, output 25 bits
subset1: input 25 bits [31:0], output 24 bits [24:1] // in order to take the 24 MSB of the CIC1 output
FIR: input 24 bits, output 25 bits
subset2: input 25 bits [31:0], output 24 bits [24:1] // in order to take the 24 MSB of the FIR output
CIC2: input 24 bits, output 25 bits
subset3: input 24 bits [31:0], output 16 bits [26:11]
username_1: As you previously observed, the CIC filter should be multiplied by 2 to obtain the same amplitude. This multiplication by 2 is done by outputting 25 bits and taking only 24 of the least significant bits.
So, I think that the configuration should look like the following:
CIC1: input 14 bits, output 25 bits
subset1: input 25 bits [31:0], output 24 bits [23:0] // CIC1 output multiplied by 2
FIR: input 24 bits, output 25 bits
subset2: input 25 bits [31:0], output 24 bits [23:0] // FIR output multiplied by 2
CIC2: input 24 bits, output 25 bits
subset3: input 24 bits [31:0], output 16 bits [26:11] // CIC2 output multiplied by 2 and converted to 14 bits
username_0: Your configuration works normally, but when the amplitude of the sine at the ADC input is 1 Vpp, the DAC output saturates.
Why doesn't it work with a full-scale amplitude of 2 Vpp?
Also, I don't understand your previous comment:
why should the CIC filter output be multiplied by 2 to obtain the same amplitude?
I agree that the multiplication by 2 is done by left-shifting by one bit (so 25 bits in our case),
but I would like to understand why we should take the 24 LSBs [23:0] and not the 24 MSBs [24:1].
username_1: I can propose two answers:
- I don't know. It's a question for the Xilinx developers.
- It's what your initial results showed. Without multiplying by 2 the outputs of the two CIC filter, the resulting amplitude was four times lower than expected.
username_1: Looks like there is an overflow somewhere.
I can propose to try one more configuration without intermediate shifts/multiplications:
CIC1: input 14 bits, output 24 bits
FIR: input 24 bits, output 24 bits
CIC2: input 24 bits, output 24 bits
subset: input 24 bits, output 16 bits [22:7] // CIC2 output multiplied by 8 and converted to 14 bits
username_0: I tried this configuration; the DAC output is 2.1 Vpp when the sine amplitude is 1 Vpp.
Why are bits [22:7] chosen for the output of the subset converter?
username_1: Look like we're almost there :-) So, if multiplying by 8 is too much, then let's try to multiply by 4:
subset: input 24 bits, output 16 bits [23:8] // CIC2 output multiplied by 4 and converted to 14 bits
Status: Issue closed
username_1: I'm closing this issue because I think that I've already answered all the questions about the CIC filters.
username_0: I tried your last configuration. It works well. Thanks for your help.
However, I have one last question. I see that the DAC output is not symmetric; there is an offset of 25 mV.
I tried to calibrate the Red Pitaya using the command `calib -w`, but it doesn't change anything in the DAC output.
Do you think it is possible to compensate for this offset using the Red Pitaya DAC code that you have developed,
or do you have another solution to eliminate this offset?
Thanks for your help.
username_0: Thanks for everything, I'm trying to do that. |
zufrieden/e100 | 30714836 | Title: German excel corrections
Question:
username_0: Texte 66-70
Christophe et <NAME> [still in french]
Please translate et into German: und
Texts 36-40
<NAME> [still in french]
Please add:
Texts 56-60
<NAME> [wrong description]
Please delete –in: Jugendleiter<issue_closed>
Status: Issue closed |
ECLK/Nomination | 401244923 | Title: post: /election/divisions
Question:
username_0: post: /elections/divisions
```
Required-parameters: {
  division_id: "32d250c8-b6b0-4aa6-9b14-4817dbb268d9",
  Division_common_name: "Province",
  Division_name: "western",
  Division_code: "01",
  no_of_candidate: "30"
}
```
Answers:
username_1: endpoint changed.
post: /election/divisions -> post: /modules/:module_id/divisions
Status: Issue closed
|
buefy/buefy | 328914897 | Title: Date and time pickers on safari
Question:
username_0: I have two date pickers for start and end date, as well as time pickers for start and end time. When I open one picker's dropdown and then open another picker on Safari, the first dropdown doesn't close.
**Buefy** version: [0.6.5]
**Vuejs** version: [2.5.13]
**OS/Browser**: macOS Sierra/Safari
### Steps to reproduce
1. Integrate 2 buefy date/time picker
2. Open 1 picker and open another
### Expected behavior
The most recently opened date/time picker should close when another one is opened.
### Actual behavior
It doesn't hide the previously opened picker.
Status: Issue closed
Answers:
username_1: Fix https://github.com/buefy/buefy/commit/38c501705fed4bed4de5bf844b0b45bc492ae1e3 |
pat310/quick-pivot | 278855110 | Title: Row and column total header
Question:
username_0: Should row and column totals both have blank headers, or text headers of 'Totals' similar to PivotTable.js? Currently row total header is blank and column total header has text of 'aggregations'. See screenshot below.
<img width="546" alt="screen shot 2017-12-03 at 9 45 47 pm" src="https://user-images.githubusercontent.com/8146241/33534161-bda73918-d873-11e7-8fe0-960dbcb2cf44.png">
Answers:
username_1: Closed in #72
Status: Issue closed
|
goharbor/harbor | 482629962 | Title: job retry is broken by the status checking code
Question:
username_0: If you are reporting a problem, please make sure the following information is provided:
**Expected behavior and actual behavior:**
A retried job should be correctly launched and executed, but instead it returns directly.
**Steps to reproduce the problem:**
Trigger a job which returns an error
**Versions:**
Please specify the versions of following systems.
- harbor version: [x.x.x]
- docker engine version: [y.y.y]
- docker-compose version: [z.z.z]
**Additional context:**
`redis.go`, lines 88-93
```
if job.RunningStatus.Compare(job.Status(tracker.Job().Info.Status)) <= 0 {
// Probably jobs has been stopped by directly mark status to stopped.
// Directly exit and no retry
markStopped = bp(true)
return nil
}
```<issue_closed>
Status: Issue closed |
prometheus/alertmanager | 731835261 | Title: Missing very first Firing Alert
Question:
username_0: **What did you do?**
installed prometheus , alertmanager, blackbox_exporter
**What did you expect to see?**
I expect to see a firing notification immediately (with respect to grouping etc.) when the probe fails, and a recurring email on repeat_interval.
**What did you see instead? Under which circumstances?**
The first FIRING alert is missing, and not all further FIRING alerts arrive. The log can show "Notify success", but no email is received. RESOLVED alerts are sent every time the probe comes back into a success state. I don't suspect the SMTP gateway; it's in the cloud and sends a much more intensive stream of emails from other systems without any loss.
**Environment**
* System information:
Linux 4.15.0-101-generic x86_64
* Alertmanager version:
```
alertmanager --version
alertmanager, version 0.21.0 (branch: HEAD, revision: 4c6c03ebfe21009c546e4d1e9b92c371d67c021d)
build user: root@dee35927357f
build date: 20200617-08:54:02
go version: go1.14.4
```
also checked on 0.18.0
* Prometheus version:
```
/bin/prometheus --version
prometheus, version 2.18.1 (branch: HEAD, revision: ecee9c8abfd118f139014cb1b174b08db3f342cf)
build user: root@2117a9e64a7e
build date: 20200507-16:51:47
go version: go1.14.2
```
* Alertmanager configuration file:
```
global:
resolve_timeout: "5m"
smtp_from: x@x
smtp_smarthost: host:465
smtp_hello: xxx.xxx.xxx
smtp_require_tls: false
smtp_auth_username: x@y
smtp_auth_identity: x@y
smtp_auth_password: <PASSWORD>
receivers:
- name: 'null'
- name: 'email-critical'
email_configs:
- to: '<EMAIL>, <EMAIL>'
send_resolved: true
tls_config:
insecure_skip_verify: true
[Truncated]
```
Oct 28 21:11:11 infra-p1 alertmanager[29892]: level=debug ts=2020-10-28T21:11:11.656Z caller=notify.go:685 component=dispatcher receiver=email-critical integration=email[0] msg="Notify success" attempts=1
Oct 28 21:12:51 infra-p1 alertmanager[29892]: level=debug ts=2020-10-28T21:12:51.632Z caller=notify.go:685 component=dispatcher receiver=email-critical integration=email[0] msg="Notify success" attempts=1
Oct 28 21:23:36 infra-p1 alertmanager[31336]: level=debug ts=2020-10-28T21:23:36.402Z caller=notify.go:685 component=dispatcher receiver=email-critical integration=email[0] msg="Notify success" attempts=1
Oct 28 21:25:16 infra-p1 alertmanager[31336]: level=debug ts=2020-10-28T21:25:16.666Z caller=notify.go:685 component=dispatcher receiver=email-critical integration=email[0] msg="Notify success" attempts=1
Oct 28 21:27:21 infra-p1 alertmanager[31336]: level=debug ts=2020-10-28T21:27:21.617Z caller=notify.go:685 component=dispatcher receiver=email-critical integration=email[0] msg="Notify success" attempts=1
Oct 28 21:32:41 infra-p1 alertmanager[31336]: level=debug ts=2020-10-28T21:32:41.514Z caller=notify.go:685 component=dispatcher receiver=email-critical integration=email[0] msg="Notify success" attempts=1
Oct 28 21:38:01 infra-p1 alertmanager[31336]: level=debug ts=2020-10-28T21:38:01.414Z caller=notify.go:685 component=dispatcher receiver=email-critical integration=email[0] msg="Notify success" attempts=1
Oct 28 21:43:21 infra-p1 alertmanager[31336]: level=debug ts=2020-10-28T21:43:21.841Z caller=notify.go:685 component=dispatcher receiver=email-critical integration=email[0] msg="Notify success" attempts=1
Oct 28 21:48:41 infra-p1 alertmanager[31336]: level=debug ts=2020-10-28T21:48:41.598Z caller=notify.go:685 component=dispatcher receiver=email-critical integration=email[0] msg="Notify success" attempts=1
Oct 28 21:54:01 infra-p1 alertmanager[31336]: level=debug ts=2020-10-28T21:54:01.379Z caller=notify.go:685 component=dispatcher receiver=email-critical integration=email[0] msg="Notify success" attempts=1
Oct 28 21:59:21 infra-p1 alertmanager[31336]: level=debug ts=2020-10-28T21:59:21.476Z caller=notify.go:685 component=dispatcher receiver=email-critical integration=email[0] msg="Notify success" attempts=1
Oct 28 22:04:41 infra-p1 alertmanager[31336]: level=debug ts=2020-10-28T22:04:41.476Z caller=notify.go:685 component=dispatcher receiver=email-critical integration=email[0] msg="Notify success" attempts=1
Oct 28 22:10:01 infra-p1 alertmanager[31336]: level=debug ts=2020-10-28T22:10:01.653Z caller=notify.go:685 component=dispatcher receiver=email-critical integration=email[0] msg="Notify success" attempts=1
```
but I've received:
21:11 (RESOLVED)
21:25 (RESOLVED)
21:43 (FIRING) - half an hour delay for the firing alert
Status: Issue closed
Answers:
username_0: time sync was not configured (ntpd) |
masesgroup/JCOReflector | 1025528954 | Title: Added method to register into JCOReflector to forward search path registration to JCOBridge
Question:
username_0: **Is your feature request related to a problem? Please describe.**
JCOReflector until now uses the underlying framework features to find assemblies. In more complex scenarios the assemblies to be used can be in a generic location not known to JCOBridge (the real engine behind JCOReflector).
**Describe the solution you'd like**
Add a method to the JCOReflector class to forward path registration, and also add a command-line switch to perform this operation.
**Describe alternatives you've considered**
N/A
**Additional context**
Linked to #52<issue_closed>
Status: Issue closed |
ma1co/Sony-PMCA-RE | 688137396 | Title: RE: Please Help: Sony Hx300 #198
Question:
username_0: @username_1 I am opening this new issue as the previous one is closed and I didn't know if you would see my comment there.
After you mentioned in #198 that the HX300 is supported, I tried installing the OpenTweak Memories Tweak.
It is still giving me the same error as above. Do I need a specific version of pmca where you updated the support? Do I need to install the firmware? I am not clear about the next steps. I just went ahead and downloaded the build from here and tried installing, but it still gives the error.
Using drivers Windows-MSC, Windows-MTP
Looking for Sony devices
Querying MTP device
Sony Corporation DSC-HX300 is a camera in MTP mode
Switching to app install mode
Traceback (most recent call last):
File "pmca-gui.py", line 76, in do
File "pmca\commands\usb.py", line 277, in installCommand
File "pmca\commands\usb.py", line 37, in switchToAppInstaller
File "pmca\usb\sony.py", line 306, in switchToAppInstaller
File "pmca\usb\sony.py", line 244, in sendCommand
File "pmca\usb\sony.py", line 98, in sendSonyExtCommand
File "pmca\usb_init.py", line 81, in _checkResponse
pmca.usb.InvalidCommandException: MTP error 0x2006
Status: Issue closed
Answers:
username_1: Duplicate |
scitorch/scitorch | 399284289 | Title: Refactor tests for conversion module
Question:
username_0: When first implementing the conversion module, I wrote tests to cover a lot of possibilities, and some if not many of them are duplicated/unnecessary.
E.g. the only thing that needs to be checked in test_to_kilobytes() is to test the conversion from bytes to kilobytes and the default values as everything else should be handled in to_bytes().<issue_closed>
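As a rough illustration of the structure I have in mind, a sketch of the slimmed-down test (the function bodies below are stand-ins, not the actual scitorch API):
```python
# Stand-in implementations, only to show the intended test structure.
def to_bytes(value, unit='B'):
    factors = {'B': 1, 'KB': 1e3, 'MB': 1e6}
    return value * factors[unit]

def to_kilobytes(value, unit='B'):
    # thin wrapper: only the final bytes -> kilobytes scaling is new behaviour
    return to_bytes(value, unit) / 1e3

def test_to_kilobytes():
    # check only what to_kilobytes() adds on top of to_bytes(): scaling and defaults
    assert to_kilobytes(2000) == 2.0           # default unit 'B'
    assert to_kilobytes(1, unit='MB') == 1e3   # conversion delegated to to_bytes()
```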
Status: Issue closed |
crtahlin/medplot | 53105100 | Title: Differences between the two graphs on the Clustering tab
Question:
username_0: On the "Clustering" tab, the first and second graphs currently differ somewhat in the clustering result they show. I checked the code; the first one uses:
plot(hclust(as.dist(1-cor(dataSubset, use="c", method="s"))), ann=FALSE)
and the second one:
pheatmap(cor(dataSubset, method="s", use="c"), display_numbers=TRUE)
Question: is it OK that one of them uses "1-cor" and the other "cor"? If not, which variant should both use?
Answers:
username_0: @llaarraa : Reminder - I need an answer on how to "fix" this, if it is needed at all. If not, then at least the description below probably needs to be corrected accordingly.
WarEmu/WarBugs | 386201649 | Title: [Quest] And Thus With Traitors (Saphery, destro) Quest end marker at wrong position
Question:
username_0: The end marker for the NPC to turn in the Quest is at the wrong position, it currently is at the position of the quest objective "Speak to Arkaneth Turncoat" (after he was killed ("spoken with")).
 |
dryuen/ist5313-f18-fp1 | 379004534 | Title: Module 1
Question:
username_0: **Module 1**
- Course objectives are laid out well for the learner. Recommendation is to possibly sequence the content. A learner can jump from one item to the next. Suggest numbering your sequencing of content (1.0 - Intro. To measurement, 1.1 - Intro. To knives, etc.). This guides the adult learner to move from one lesson to the next.
- Module 1 – Types of Measurements - Scale “place a space after comma” ingredient,especially
- Module Measurement Activity - after submitting your answers, if you are not correct and hit the link to try again it makes you go to the Pan Activity
- Having a difficult time with your activities functioning correctly; they either won't let you drag and drop or don't allow you to re-try
- Last Quiz will not let me submit the answers
- Last Quiz – Question # 2 – Gap between last two answer choices
- Knife Activity – Maybe put directions or explain that it is similar to flashcards like on Quizlet
Answers:
username_1: Thanks for the feedback!!
username_1: I've checked off the ones I have completed. Thanks for the feedback!!
As for the sequencing of the course, we will keep it the way we have it because we want the user to have the freedom to navigate in any order. I think the quizzes and certificates are the only .html pages that are required to be "gated".
Status: Issue closed
|
UniversalDependencies/docs | 155450950 | Title: Coarse <-> Fine Tags
Question:
username_0: It has been pointed out to me that the Italian and Spanish treebanks have a non-deterministic mapping of fine to coarse tags. In particular, the same 'fine' tag is mapped to different 'coarse' tags depending on the context. In both languages the fine POS-tag NO (ordinal number) is mapped both to PRON and ADJ coarse POS tags. I don't know whether we ever explicitly said that coarse and fine tags need to form a hierarchy, but I think it would be desirable if they did. Otherwise the relationship would be inverted and suggest that the coarse tags are actually finer.
The reason this came up is that SyntaxNet adds the coarse tags by mapping the fine tag deterministically (i.e. no more prediction and we hence wouldn't have a process of figuring out which coarse tag to use). I think this is better than having a separate prediction for the coarse tags.
What do you think about fixing this by adjusting the data and enforcing such a contract going forward?
Answers:
username_1: The same is true for the English treebank, by the way. For example, the fine tag VB is mapped to VERB and AUX coarse tags, but there are many other cases as well.
Because of this, SyntaxNet is unusable on the UD English treebank, until one manually fixes these inconsistencies.
username_2: We don't have anything like "coarse" and "fine" tags in UD. We have UPOS and XPOS. The latter is optional, taken from a language-specific tagset, if people want to keep it. So it can be really anything. I don't think we need any hierarchy here; that would kill the freedom people have in preserving their "national" tagsets.
If SyntaxNet needs to interpret these two columns differently, then I think people need to adjust their data (as they often need with other tools, too). The easiest thing would be to just copy UPOS to XPOS.
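For anyone who wants to do that mechanically, here is a small sketch (assuming standard 10-column CoNLL-U input, where UPOS is column 4 and XPOS is column 5):
```python
import sys

def copy_upos_to_xpos(in_path, out_path):
    with open(in_path, encoding='utf-8') as src, open(out_path, 'w', encoding='utf-8') as dst:
        for line in src:
            line = line.rstrip('\n')
            if line and not line.startswith('#'):
                cols = line.split('\t')
                if len(cols) == 10:
                    cols[4] = cols[3]  # overwrite XPOS with UPOS
                    line = '\t'.join(cols)
            dst.write(line + '\n')

if __name__ == '__main__':
    copy_upos_to_xpos(sys.argv[1], sys.argv[2])
```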
username_2: Related: https://github.com/tensorflow/models/issues/118
username_2: Perhaps we should put a warning about this somewhere on universaldependencies.org? And there was another format-related problem when people used CoNLL-U in programs which expected CoNLL-X, such as MaltParser; that could be described in the same section. |
tipsi/tipsi-stripe | 364113548 | Title: Google pay production mode?
Question:
username_0: ## Before I have submitted the issue
[X] I have read an [installation guide](https://tipsi.github.io/tipsi-stripe/docs/installation.html)
[X] I know that for an iOS I need to install pods because I've read the [installation guide](https://tipsi.github.io/tipsi-stripe/docs/installation.html#cocoapods-ios)
[X] I have read a [linking guide](https://tipsi.github.io/tipsi-stripe/docs/linking.html) and checked that everything is OK like in [manual linking guide](https://tipsi.github.io/tipsi-stripe/docs/linking.html#manual)
[X] I know that before using `tipsi-stripe` I need to set options for my app as described in [usage guide](https://tipsi.github.io/tipsi-stripe/docs/usage.html)
## The problem
Tried to enable production mode using live stripe key only to get a blank error on app load. The config/setup has worked just great up to now with androidpaytest mode set accordingly:
`androidPayMode: 'test'`
But setting the above to 'production' and changing the publishable key to a live alternative immediately causes an error page on the app loading with no message at all.
As there are no guides on using tipsi-stripe in production mode, i was wondering if you could provide some guidance and also update the docs with the same information as its a little misleading to say the androidPayMode just needs to be changed from test to production which in my case doesn't seem to be true. I can only assume theres android xml changes that also need to be put it place?
## Environment
* `tipsi-stripe` version:
* Last `tipsi-stripe` version where the issue was not reproduced (if applicable):
* iOS or Android:
* OS version:
* React-Native version:
* (Android only) `com.google.firebase:firebase-core` version:
* (Android only) `com.google.android.gms:play-services-base` version:
## Links to logs and sources
Create a [GIST](https://gist.github.com) which is a paste of your _full_ logs or sources, and link them here.
If you are reporting a bug, _always_ include build or error logs!
* [ios/Podfile](https://gist.github.com/link_to_podfile)
* [android/build.gradle](https://gist.github.com/link_to_podfile)
For `Android`, please provide the following sections from `android/app/build.gradle`:
* android.compileSdkVersion
* android.buildToolsVersion
* android.defaultConfig.minSdkVersion
* android.defaultConfig.targetSdkVersion
* android.defaultConfig.multiDexEnabled (if exists)
## Screenshots, GIFs (Must to have)
Just drag-and-drop them to this textarea
## Code To Reproduce Issue (Good To Have)
Please remember that with sample code it's easier to reproduce the bug and it's much faster to fix it.
Answers:
username_1: @username_0 Just go here https://developers.google.com/pay/api/android/guides/test-and-deploy/integration-checklist - at the end of the page you will find a link to a form; add your app and select tokenization method = stripe.
Status: Issue closed
|
ampproject/amphtml | 554659246 | Title: amp-playon NEW MEDIA component
Question:
username_0: **## Describe the new feature or change to an existing feature you'd like to see**
We are a Spanish video company who provide video technology to the biggers newspapers, networks and sites in Spain and LATAM.
We have our own HTML5 and JS videoplayer, IAB & IMA full compliant, and we need a way to help our customers to implement it in their AMP sites, like they can do with the amp-youtube or amp-brightcove components.
Now the player can be added to a site using an <iframe> and <script> tags who calls a function, .js file and a few parameters (video_id, affiliate_id, player_width and player_height):
<iframe id="[id_video]_[random]" width="[width]" height="[height]" ></iframe><script type="text/javascript" src="//domain/file.min.js"></script><script type="application/javascript">(function(){IFRIENDLY_DATA.init({"idFrm":"[id_video]_[random]","url":"video/[id_user]/[id_video]/[width]/[height]","domain":"//domain"});})()</script>
**## Describe alternatives you've considered**
There´s no alternative as our customers want to use our tech and monetize videos using it.
**## Additional context**
Attached 2 screenshoots about the player working.
<img width="746" alt="player2" src="https://user-images.githubusercontent.com/60256308/73062773-70b63b80-3e9d-11ea-8958-84f46b78b951.png">
<img width="687" alt="player1" src="https://user-images.githubusercontent.com/60256308/73062777-714ed200-3e9d-11ea-9ebb-81a58b2f9cf2.png">
Answers:
username_1: @username_0
Thank you for your I2I.
Have you looked into using [`amp-video-iframe`](https://amp.dev/documentation/components/amp-video-iframe/#for-third-party-video-vendors)?
It will give you the harnesses you need. I believe you only need need to create a hosted document that can dynamically insert videos from URL using the code snippet you included.
username_0: Hello @username_1
Yes, we tried but it´s not possible: we have problems with demand platforms (ads).
We need a component like brightcove, jw-player or dailymotion use (as we do quite the same, but free for our publishers).
Thank you.
username_1: This requires more details:
- What's the HTML API for the proposed component?
- What does this `//domain/file.min.js` script do? AMP does not allow embedding third-party Javascript, this would require an intermediate 3p frame to embed it.
- Is this is an intent for you/your team to implement this, or a request for others to do so?
If you wish to implement this, please read through the [vendor-specific player spec.](https://github.com/ampproject/amphtml/blob/master/spec/amp-3p-video.md) for more information. |
silverstripe/cwp-watea-theme | 462457481 | Title: There is no permanently visible label for the search field.
Question:
username_0: **Medium priority:**
There is no permanently visible label for the search field. The placeholder text should not be used as a replacement for a label.

When a person enters information into an input, its placeholder content will disappear. The only way to restore it is to remove the information entered. This creates a challenging experience for users with cognitive impairment where guiding language is removed as soon as the person attempting to fill out the input interacts with it.
**Solution:**Provide a permanently visible "Search" label.
cc @silverstripeux
Answers:
username_0: We've been looking at some text fields here that could work design wise https://material-ui.com/components/text-fields/. When clicking into the input the placeholder text becomes the label. We'll look at some alternatives or iterating on this example to work in Starter + Wātea.
username_0: Feedback we received: "I wouldn't consider this a failure: the magnifying glass icon is arguably the permanently visual label. That, combined with the component's conventional location go some way to establishing it as a search field."
I still think the search component itself could be updated so leaving open as an enhancement rather than a bug.
username_1: I think the way Google Material forms work is a little bit weird for a search box. Google themselves don't use Material forms for their search boxes. For accessibility, the search box should include a "Search" label for screen readers; however, this does not need to be visible. |
gar-syn/congo-lab | 954522708 | Title: [FEATURE] Bronkhorst EL-PRESS
Question:
username_0: **Describe the solution you'd like**
Add EL-PRESS
**Additional context**
[917022--Manual general instructions digital laboratory-style and IN-FLOW.pdf](https://github.com/username_0/congo-lab/files/6890808/917022--Manual.general.instructions.digital.laboratory-style.and.IN-FLOW.pdf)
[917023-Manual-operation-instructions-digital-instruments.pdf](https://github.com/username_0/congo-lab/files/6890809/917023-Manual-operation-instructions-digital-instruments.pdf)
[917027--Manual RS232 interface.pdf](https://github.com/username_0/congo-lab/files/6890810/917027--Manual.RS232.interface.pdf)<issue_closed>
Status: Issue closed |
mengxiong10/vue2-datepicker | 724932856 | Title: How to have input show date range in format MM/DD/YY
Question:
username_0: I am using Luxon DateTime to get dates and manipulate those dates as necessary in my application.
Using the datepicker, I set range=true to have a date range picker. I am setting the range in my data as such:
```
data() {
return {
date_range : [ Local.minus( { days: 10 } ).toFormat( 'MM/dd/yyyy' ), Local.toFormat( 'MM/dd/yyyy' ) ],
}
}
```
The date range picker works as expected. On change of the range it shows the input value as a string 'MM/DD/YYYY ~ MM/DD/YYYY', but then it changes to 'MM/DD/YYYY'. Basically it will only show this.date_range[0].
How can I get the input string to show 'MM/DD/YYYY ~ MM/DD/YYYY'?<issue_closed>
Status: Issue closed |
opnsense/core | 497889188 | Title: Incorrect startup sequence of plugins
Question:
username_0: **Describe the bug**
We're using OPNSense OPNsense 19.7.4_1 (and previous versions) on ten firewall systems since the beginning of 2019 with following plugins:
- os-frr
- os-haproxy
The RIP implementation from the os-frr plugin is used to distribute routes in our infrastructure and HAP from the os-haproxy plugin is used as load balancer for our various services.
Since we've been using OPNsense we've had the problem that during the startup sequence HAP starts before RIP. This means HAP cannot find its backend systems and we have to wait about 30 minutes until all connection attempts to them have timed out. We can't even abort the initialization of HAP with (ctrl)+(c), because then RIP will not be started either.
We would really appreciate it if you could adjust the startup sequence appropriately.
**To Reproduce**
Steps to reproduce the behavior:
1. Configure and activate RIP
2. Configure HAP with a backend service only available after RIP distributed the corresponding route
3. Reboot OPNSense
**Expected behavior**
We're expecting that RIP and other dynamic routing protocols will be initialized before other plugins like HAP will be started.
**Environment**
Software version used and hardware type if relevant.
e.g.:
OPNsense 19.7.4_1-amd64
FreeBSD 11.2-RELEASE-p14-HBSD
OpenSSL 1.0.2s 28 May 2019
os-acme-client (installed) = 1.25
os-dyndns (installed) = 1.17
os-frr (installed) = 1.11_3
os-haproxy (installed) = 2.18
os-vmware (installed) = 1.5
Answers:
username_1: From what I can tell the ports are set up as follows during startup:
FRR: # REQUIRE: netif routing
HAProxy: # REQUIRE: DAEMON LOGIN
LOGIN and DAEMON require NETWORKING which netif and routing are a part of... which means HAProxy starts after FRR, which means this is a different type of race condition.
You can likely confirm this by:
# sh -c "rcorder -s nostart -s firsttime /etc/rc.d/* /usr/local/etc/rc.d/* 2> /dev/null | grep -e ffr -e haproxy"
It should print the correct order.
Now, *if* one of the services fails to start on their boot spot that is another matter to look into. I know for FRR we try to restart later just to be sure.
What would be needed to debug this is the full console output of the boot sequence.
Cheers,
Franco
username_2: This part sounds interesting to me. What makes you so sure that the 30 minute timeout is due to HAProxy waiting for backend systems? Usually HAProxy's startup is not delayed by backend, servers, etc.
username_0: Root file system: /dev/gpt/rootfs
`
username_2: Two things I've spotted:
```
/usr/local/etc/rc.d/haproxy: WARNING: failed precmd routine for haproxy
```
As stated previously, HAProxy immediately fails to start when hostnames are unresolvable (instead of sitting there and waiting for something).
```
postfix/postfix-script: starting the Postfix mail system
/usr/local/etc/rc.d/haproxy: WARNING: failed precmd routine for haproxy
Invoking start script 'frr'
```
Indeed, it looks like they are not started in the order that is shown by `rcorder`.
It looks like the postfix and HAProxy plugins use the expected startup order.
But frr is started by a syshook, which means that `rcorder` is ignored:
https://github.com/opnsense/core/blob/69139fcbb28d5080f1f7d32ad3c306cd7bd24310/src/etc/rc.syshook#L28-L42
Just a guess, I don't know anything about the frr plugin :)
username_1: But that means frr failed to start normally also as 50-frr is only a fix for when it doesn't start ;)
We could start haproxy later, e.g. 60-haproxy... I'd like to keep default plugin priority where it is and move things back so core stuff can use 20-49 and plugins can space out the other range.
Status: Issue closed
|
rubygems/rubygems.org | 66248842 | Title: owners API endpoint not returning owners (sometimes)
Question:
username_0:
```
[{},{"email":"<EMAIL>"},{"email":"<EMAIL>"}]%
```
Answers:
username_1: Good catch. This is most likely because of my change on https://github.com/rubygems/rubygems.org/commit/309908d17f472797d832e4e3a490981cfde6e5a0 .
We should still hide the email if the user opted in to hide it, but I think we should add the handle in the API, so the hash doesn't look empty like this.
I will fix this. thanks
Status: Issue closed
|
kairoaraujo/PowerAdm | 62517487 | Title: Can´t find ticket
Question:
username_0: After creating a ticket and saving it without executing, poweradm can't find the ticket again to execute:
Please choose an option: 2
[LPAR creation]
Select the Change/Ticket to execute:
Traceback (most recent call last):
File "/home/poweradm/PowerAdm/poweradm.py", line 33, in <module>
main_poweradm()
File "/home/poweradm/PowerAdm-0.8.4-beta/poweradm/poweradm.py", line 62, in main_poweradm
exec_findlpar.selectChange()
File "/home/poweradm/PowerAdm-0.8.4-beta/poweradm/findchange.py", line 47, in selectChange
listChanges = fnmatch.filter(os.listdir("poweradm/changes/"), "*.sh")
OSError: [Errno 2] No such file or directory: 'poweradm/changes/'
Status: Issue closed
Answers:
username_1: Release 0.8.5-beta is OK for this bug.
Status: Issue closed
|
creativecommons/creativecommons.org | 109215102 | Title: Restore search to website/blog
Question:
username_0: I don't think I'm imagining it - there doesn't seem to be a way to search the website/blog any more.
It would be good to restore it - very useful tool for users and staff.
Answers:
username_1: We made a decision to remove it as the vast majority of searches were for the commons, not the site.
For a site search, your favorite search engine does a much better job than we can hope to without a considerable effort.
Status: Issue closed
|
Azure/iotc-device-bridge | 707224202 | Title: Delay in matching the telemetry with the device template
Question:
username_0: Hi, I am seeing an issue where the telemetry data is not immediately modeled against the device template. As seen in the attached screenshot, "data" was unmodeled for the first few data points.

I used Postman with this payload
{
"device": {
"deviceId": "iotcbridgetest"
},
"measurements": {
"data": 22.0
},
"timestamp": "2020-09-22T01:11:43.43Z"
}
Answers:
username_1: After a device is created, the model information takes a few minutes to update. During that time, data points will show up as unmodeled.
username_0: Thanks @username_1. So is it only a matter of time, or do some telemetry messages need to be received before the modelling is done? In order not to miss any data points, should we create the device first and wait some time before sending telemetry?
username_1: The delay is in relation to the time the device makes the first call to the provisioning service (DPS). This bridge does this call right before sending the data, to make the process transparent.
To ensure that the very first points will be modeled, you can send an initial message to the bridge for that device with an empty measurements section (`"measurements": {}`). This will generate a call to the provisioning service. After a few minutes you should be able to send additional data and they should show up as modeled.
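For example, such an initial registration-only message could look like this (reusing the deviceId from the payload above and leaving the measurements empty):
```json
{
  "device": {
    "deviceId": "iotcbridgetest"
  },
  "measurements": {}
}
```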
Status: Issue closed
username_0: Thanks @username_1 for the clarification |
Facepunch/garrysmod-issues | 50286568 | Title: Clientside models revert upon client "refresh"
Question:
username_0: Let's say that you create a clientside model with the model being "A". Later you change the model to "B". If the player goes inside the world (with noclip for example) and then comes back out, the clientside model will be drawing "A" even though when you call GetModel() it returns "B".<issue_closed>
Status: Issue closed |
department-of-veterans-affairs/va.gov-team | 961929317 | Title: 508-defect-2 [COLOR]: Links must not rely on color
Question:
username_0: # [508-defect-2](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/accessibility/guidance/defect-severity-rubric.md#508-defect-2)
<!-- It's okay to delete the instructions above, but leave the link to the 508 defect severity level for your issue. -->
## Feedback framework
- **❗️ Must** for if the feedback must be applied
- **⚠️ Should** if the feedback is best practice
- **✔️ Consider** for suggestions/enhancements
## Definition of done
1. Review and acknowledge feedback.
1. Fix and/or document decisions made.
1. Accessibility specialist will close ticket after reviewing documented decisions / validating fix.
## Point of Contact
<!-- If this issue is being opened by a VFS team member, please add a point of contact. Usually this is the same person who enters the issue ticket. -->
**VFS Point of Contact:** _Josh_
## Details
Links must not rely on color. Underlines are provided only on hover, which is not available to mobile users.
## Acceptance Criteria
- [ ] Links do not rely on color to convey that they can be interacted with
## Proposed Solution (if known)
[As discussed in ticket #3556, these heading links should be underlined](https://github.com/department-of-veterans-affairs/va.gov-team/issues/3556).
[Design system ticket #497 has been logged for this](https://github.com/department-of-veterans-affairs/vets-design-system-documentation/issues/497)
## WCAG or Vendor Guidance (optional)
* [WCAG 1.4.1: Use of Color](https://www.w3.org/TR/UNDERSTANDING-WCAG20/visual-audio-contrast-without-color.html)
## Screenshots or Trace Logs
<img width="410" alt="Screen Shot 2021-08-05 at 11 11 24 AM" src="https://user-images.githubusercontent.com/14154792/128374529-e900bce3-9e2c-4e61-8874-122a13db7d31.png">
<img width="411" alt="Screen Shot 2021-08-05 at 11 11 30 AM" src="https://user-images.githubusercontent.com/14154792/128374536-6b6de340-5779-4045-8f60-aa23b189cf10.png"> |
webdriverio/webdriverio | 338447962 | Title: Cannot ignore certificate errors in firefox profile service
Question:
username_0: ## The problem
Hi,
I tried to ignore the Firefox certificate error 'Your connection is not secure', which usually occurs when a certificate is not trusted, by using wdio-firefox-profile-service from wdio.conf.js:
services: ['firefox-profile'],
firefoxProfile: {
assumeUntrustedCertIssuer: true,
canAssumeUntrustedCertIssuer: true
},
But WebdriverIO fails to get past these errors. Have the key values for ignoring certificates changed?
## Environment
* WebdriverIO version: ^2.0.2
* Node.js version: v8.11.2
* if wdio testrunner, running synchronous or asynchronous tests: running synchronous
* Additional wdio packages used (if applicable):
wdio-firefox-profile-service: ^0.1.2
Answers:
username_1: This issue was moved to webdriverio/wdio-firefox-profile-service#17
Status: Issue closed
username_0: Mr. Bromann,
I found a fix for this issue, but it was not via **wdio-firefox-profile-service**; it was via the **wdio.conf.js capabilities**, by setting the option **_acceptInsecureCerts_** to **_true_**:
```js
capabilities: [
  {
    // maxInstances can get overwritten per capability. So if you have an in-house Selenium
    // grid with only 5 firefox instances available you can make sure that not more than
    // 5 instances get started at a time.
    maxInstances: 1,
    //
    browserName: 'firefox',
    acceptInsecureCerts: true,
  },
]
```
username_2: @username_0 thanks this worked for me! |
RocketChat/Rocket.Chat.Apps-engine | 735195428 | Title: Error occurs when trying to update message using IMessageBuilder.setData() method
Question:
username_0: When trying to update an existing message something like this:
```
const msgBuilder = (await this.modify.getUpdater().message(msgId, sender)).setData(newMessage);
await this.modify.getUpdater().finish(msgBuilder);
```
I'm getting an error saying
```
Error: Invalid message, can't update a message without an id.
at ModifyUpdater._finishMessage (/home/murtaza/github_repo/RocketChatRepos/Rocket.Chat/node_modules/@rocket.chat/apps-engine/server/accessors/ModifyUpdater.js:55:19)
at ModifyUpdater.finish (/home/murtaza/github_repo/RocketChatRepos/Rocket.Chat/node_modules/@rocket.chat/apps-engine/server/accessors/ModifyUpdater.js:45:29)
at PostMessageSentHandler.run (evalmachine.<anonymous>:87:48)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
at D360Integration.executePostMessageSent (evalmachine.<anonymous>:22:9)
```
From what I understand, the [IMessageBuilder](https://github.com/RocketChat/Rocket.Chat.Apps-engine/blob/ab63c372a828470bdb8b6d736664300ad877d052/src/server/accessors/MessageBuilder.ts) class is deleting the `id` from the message object passed [here](https://github.com/RocketChat/Rocket.Chat.Apps-engine/blob/ab63c372a828470bdb8b6d736664300ad877d052/src/server/accessors/MessageBuilder.ts#L18)(when we call [IMessageBuilder.setData()](https://github.com/RocketChat/Rocket.Chat.Apps-engine/blob/ab63c372a828470bdb8b6d736664300ad877d052/src/server/accessors/MessageBuilder.ts#L17) method).
This behaviour is valid if we are creating a message, because the message id is generated on the server; however, in the case of an update we don't need to delete it.
I am not sure how to fix this, any suggestions?
Answers:
username_1: Hi, @username_0!
Thanks for pointing out and describing this issue in detail. In order to fix it, we have just added the **`setUpdateData` method** to the `MessageBuilder`, which allows you to do the same as the `setData` method, but keeps the message's ID so that you can use the `ModifyUpdater` object to update the message later.
Thus, you will be able to fix this error by calling the `setUpdateData` method (which also requires you to specify which user updated the message) instead of the `setData` method.
I suggest you keep track of **pull request #364**.
username_0: Great Thanks @username_1 |
ably/demo-mobile-phonegap-cordova | 98401284 | Title: Vendorise dependencies
Question:
username_0: Given this app will run in Phonegap, should we not vendorise the assets such as JQuery / Underscore etc.
Answers:
username_0: @username_1 mentioning you here to ensure you receive a notification
username_1: Agreed. In the future, when the app is more or less finished, I'll move the libraries into a ```libs``` folder and close the issue.
Status: Issue closed
username_1: OK, done in 03ccb712551f299e76d50b6f8eb89dc2273ab923 . |
croton-on-hudson/bicycle-pedestrian-committee | 403616279 | Title: Keep track of known hazardous intersections
Question:
username_0: ## Description
Notes and updates on hazardous intersections for pedestrians, bicyclists, and/or cars.
## Updates
* 27 January 2018 - <NAME> sent an initial list of known hazardous intersections that make up the initial list on this page.
* 26 January 2019 - <NAME> forwarded a note from a resident about a hazardous intersection at Truesdale and Franklin to the committee. |
SzFMV2018-Osz/AutomatedCar-A | 364408218 | Title: Creating a translation matrix for the renderer
Question:
username_0: This would have been the matrix serving as the basis for camera handling, but since my teammate had already started on the camera by then, this issue was dropped, as the other issue covers it.
Status: Issue closed
Answers:
username_1: - What is a translation matrix?
- What will it be created from?
- What will the output be at the class level?
Status: Issue closed
username_0: I just forgot to close the rejected issue, sorry.
ExRam/ExRam.Gremlinq | 397023276 | Title: Provider bindings for other cloud graph databases.
Question:
username_0: With the provider binding for CosmosDb serving as a template, more bindings for more cloud graph databases would be needed (eg. Amazon Neptune). The binding for generic WebSockets might already be sufficient in some cases, but we expect every provider to require some workarounds.<issue_closed>
Status: Issue closed |
zhanghang1989/PyTorch-Encoding | 391671589 | Title: inconsistent split of pcontext with the benchmark?
Question:
username_0: Hi,
I have run the script/prepare_pcontext.py and get the train.pth and val.pth, while the train.pth only include 4996 imgs and val.pth include 5104 img and there are 4188 imgs for test, but the pascal context only have 10103 imgs and 4998 for training 5105 for val which also mentioned in your paper, and i use the single scale of encnet on the 5104 val imgs get miou 49.5. Is the performance reasonable and about the split could you give some suggestions? Thank you.
Answers:
username_1: 1. The mIoU of 49.5 is pretty close to the original paper.
2. About the training and test splits: they were slightly changed by the dataset organizers after the EncNet submission. The splits reported in the EncNet paper were the ones used in the CVPR 2017 workshop.
cilium/cilium | 1050103854 | Title: datapath: Redirect from bpf_overlay to egress gw SNAT netdev
Question:
username_0: Currently, when the egress GW node receives a packet from a remote pod which needs to be SNAT-ed to the egress GW IP, the packet is first handled by bpf_overlay@cilium_vxlan. The prog passes the packet to stock, and eventually it should make to bpf_host@eth0 which does the SNAT.
However, if there is an iptables SNAT rule in the host netns, it might get into the way of egress GW. This is the case for AWS EKS iptables rules:
```
# iptables-save | grep AWS-SNAT-CHAIN
:AWS-SNAT-CHAIN-0 - [0:0]
:AWS-SNAT-CHAIN-1 - [0:0]
-A POSTROUTING -m comment --comment "AWS SNAT CHAIN" -j AWS-SNAT-CHAIN-0
-A AWS-SNAT-CHAIN-0 ! -d 192.168.0.0/16 -m comment --comment "AWS SNAT CHAIN" -j AWS-SNAT-CHAIN-1
-A AWS-SNAT-CHAIN-1 ! -o vlan+ -m comment --comment "AWS, SNAT" -m addrtype ! --dst-type LOCAL -j SNAT --to-source 192.168.188.111 --random-fully
```
The SNAT rule was SNAT-ing the packet, which made it bypass the egress GW policy.
To prevent this from happening, we should do the `ctx_redirect()` from bpf_overlay.
hbz/lobid-vocabs | 483946641 | Title: Move Regierungsbezirke, Kreise, Gemeinden etc. under 05
Question:
username_0: From @username_1 in https://github.com/hbz/lobid-vocabs/issues/91#issuecomment-516734934:
As discussed, we will have to change the notations from their current values as used in hbz01:
05 Westfalen -> 04 Westfalen
91 Euregio -> 91 question (We will have to ask editors about the new notation)
Answers:
username_0: Deployed to test: https://test.nwbib.de/spatial
username_0: Redeployed as discussed offline: https://test.nwbib.de/spatial#N05
username_1: This looks great. We should deploy this now.
Status: Issue closed
|
electron/electron | 214007060 | Title: Why npm run clean removes node_modules and libchromiumcontent
Question:
username_0: * Electron version: 1.6.3
* Operating system: Windows 7
### Expected behavior
`npm run clean` should only delete the out directory, as stated in the docs https://electron.atom.io/docs/development/build-instructions-windows/#cleaning .
### Actual behavior
It deleted node_modules (no real problem here because npm packages are cached on my system) and `vendor/brightray/vendor/download/libchromiumcontent` &
`vendor/brightray/vendor/libchromiumcontent/src` (around 2 GB).
I should have checked `script/clean.py` before running, but I relied on the documentation.
Is it the right behavior to delete what was downloaded by the bootstrap command?
Answers:
username_1: Yeah, `npm run clean` is a pretty comprehensive reset that will require a full re-bootstrap afterwards.
We could have two separate clean scripts, one to do a full clean when you want a full reset (often needed after a chrome upgrade or `node_modules` corruption) and a clean that just cleans the build folders so a `npm run build` will do a full recompile.
username_0: @username_1 I made a pull request #8955 to add a new command, `npm run clean-build`; it will only remove the `out` and `dist` directories.
Status: Issue closed
|
132nd-vWing/ATRM_Brief | 560341158 | Title: Add reference to RAMROD in Authentication in Standing SPINS
Question:
username_0: RAMROD is currently in the same link as AET100 (it is on the same document), but there isn't a reference to it so it is hard to find. Can RAMROD also be referenced along with AET100 in the Authentication section so that those who don't know that they are combined can still find it.
Answers:
username_1: Fixed.
Closing issue
Status: Issue closed
|
eaplatanios/tensorflow_scala | 655957502 | Title: Load trained models
Question:
username_0: Is it possible to load into tensorflow_scala models generated in Python "main" TensorFlow? I can't find how to do it anywhere.
Thanks
Answers:
username_1: It seems like the saved_model class and related utilities are available only in Python and Java. Maybe this is written as a utility in those languages. I couldn't find a reference to this in the C++ API. Considering that this library is a wrapper around C++, it is somewhat unlikely to have this functionality. It is probably still possible to write such a utility in this library as well, maintaining the same general class names etc.
Here are 2 implementations in Scala and Java through the Java library:
- https://towardsdatascience.com/serving-tensorflow-model-in-scala-6caeadbb2d55
- https://stackoverflow.com/questions/61923351/how-to-invoke-model-from-tensorflow-java/
Waiting for @eaplatanios to respond if possible. |
mvextensions/mvbasic | 703035423 | Title: [BUG] Auto-Complete list of subroutines keeps growing, and across tabs
Question:
username_0: Each time the auto-complete list is displayed, the number of entries for every subroutine name in the program grows by one entry per subroutine.
For example, if the program has 2 subroutines **_SUB.01_** and **_SUB.02_**, the first time the auto-complete list is displayed you see:
SUB.01
SUB.02
The second time it shows:
SUB.01
SUB.01
SUB.02
SUB.02
The third time it shows:
SUB.01
SUB.01
SUB.01
SUB.02
SUB.02
SUB.02
And so on, and so on, for each time it is displayed.
<img width="287" alt="Screen Shot 2020-09-16 at 15 41 49" src="https://user-images.githubusercontent.com/1646753/93384698-2ec67500-f833-11ea-9168-627df2fd20d0.png">
The second problem is that the growing list is carried across separate documents (tabs) within the same VS Code window. i.e. All document tabs within a window, with the language mode set to "MultiValue Basic", will show the growing list of subroutines from other tabs.
So, you're editing one program, but seeing the subroutines (repeated) from a different program that's on a separate tab. Note that this does not happen across separate VS Code Windows, just the tabs within one window.
**O/S**
macOS 10.14.6 (Mojave)
**MV Extension**
2.0.11
**VS Code**
Version: 1.49.0
Commit: <PASSWORD>
Date: 2020-09-10T17:39:53.251Z
Electron: 9.2.1
Chrome: 83.0.4103.122
Node.js: 12.14.1
V8: 8.3.110.13-electron.0
OS: Darwin x64 18.7.0
Status: Issue closed
Answers:
username_1: Fixed in `develop` and will be included in release 2.1.3. Thanks @username_0 and @tcharts-boop! 🥳 |
google/yapf | 455547881 | Title: Recursive mode doesn't work on network shares
Question:
username_0: I need to run yapf on some code stored on a network drive. I have an SMB drive mounted and standard yapf commands without the `--recursive` flag work great. However using `--recursive` makes things hang indefinitely:
```
$ pipenv run yapf --recursive .
^CTraceback (most recent call last):
File "/Users/skainswo/.local/share/virtualenvs/research-OGGq2tNy/bin/yapf", line 10, in <module>
sys.exit(run_main())
File "/Users/skainswo/.local/share/virtualenvs/research-OGGq2tNy/lib/python3.7/site-packages/yapf/__init__.py", line 335, in run_main
sys.exit(main(sys.argv))
File "/Users/skainswo/.local/share/virtualenvs/research-OGGq2tNy/lib/python3.7/site-packages/yapf/__init__.py", line 207, in main
exclude_patterns_from_ignore_file)
File "/Users/skainswo/.local/share/virtualenvs/research-OGGq2tNy/lib/python3.7/site-packages/yapf/yapflib/file_resources.py", line 110, in GetCommandLineFiles
return _FindPythonFiles(command_line_file_list, recursive, exclude)
File "/Users/skainswo/.local/share/virtualenvs/research-OGGq2tNy/lib/python3.7/site-packages/yapf/yapflib/file_resources.py", line 172, in _FindPythonFiles
if IsPythonFile(filepath):
File "/Users/skainswo/.local/share/virtualenvs/research-OGGq2tNy/lib/python3.7/site-packages/yapf/yapflib/file_resources.py", line 198, in IsPythonFile
encoding = tokenize.detect_encoding(fd.readline)[0]
File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib2to3/pgen2/tokenize.py", line 290, in detect_encoding
first = read_or_stop()
File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib2to3/pgen2/tokenize.py", line 264, in read_or_stop
return readline()
KeyboardInterrupt
```
The exact same setup works without a hitch on the local filesystem (same files, same yapf version). I'm running on macOS 10.14.4 with yapf 0.27.0.
Answers:
username_1: I'm not familiar with SMB drives. It looks like it's hanging in `lib2to3` when calling `readline` on a filehandle from the `open` command. If you change the `open` in `file_resources.py:197` to something like `py3compat.open_with_encoding()` does it work?
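For reference, here is a rough sketch of the kind of change being suggested, i.e. detecting the encoding and then reading with it explicitly. This is only an illustration of the idea; the function below is an assumption, not yapf's actual `file_resources.py` or `py3compat` code.
```python
# Illustrative sketch (Python 3): check whether a file is readable Python
# source using an explicit encoding, instead of the platform default open().
import io
import tokenize


def is_python_file_sketch(filename):
    """Return True if the file can be opened and decoded as Python source."""
    try:
        # Detect the declared/implicit source encoding from the first lines.
        with io.open(filename, mode='rb') as fd:
            encoding = tokenize.detect_encoding(fd.readline)[0]
        # Re-open with that encoding; 'replace' avoids UnicodeDecodeError
        # on stray undecodable bytes.
        with io.open(filename, mode='r', encoding=encoding,
                     errors='replace') as fd:
            fd.readline()
        return True
    except (OSError, IOError, SyntaxError, UnicodeDecodeError):
        return False
```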
Status: Issue closed
username_0: I believe it may in fact be working but just slow as molasses. I just tried waiting much, much longer and I'm now seeing a new error:
```
$ python3 -m yapf --exclude "._*" --recursive --diff /Users/skainswo/nu/skainswo/research/
Traceback (most recent call last):
File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/Users/skainswo/dev/yapf/yapf/__main__.py", line 18, in <module>
yapf.run_main()
File "/Users/skainswo/dev/yapf/yapf/__init__.py", line 335, in run_main
sys.exit(main(sys.argv))
File "/Users/skainswo/dev/yapf/yapf/__init__.py", line 220, in main
verbose=args.verbose)
File "/Users/skainswo/dev/yapf/yapf/__init__.py", line 270, in FormatFiles
in_place, print_diff, verify, verbose)
File "/Users/skainswo/dev/yapf/yapf/__init__.py", line 296, in _FormatFile
logger=logging.warning)
File "/Users/skainswo/dev/yapf/yapf/yapflib/yapf_api.py", line 84, in FormatFile
original_source, newline, encoding = ReadFile(filename, logger)
File "/Users/skainswo/dev/yapf/yapf/yapflib/yapf_api.py", line 196, in ReadFile
lines = fd.readlines()
File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/codecs.py", line 709, in readlines
return self.reader.readlines(sizehint)
File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/codecs.py", line 618, in readlines
data = self.read()
File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/codecs.py", line 504, in read
newchars, decodedbytes = self.decode(data, self.errors)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb0 in position 37: invalid start byte
``` |
FSGpilot/frontend-styleguide | 317192778 | Title: First issue
Question:
username_0: <!-- Please feel free to remove whatever sections/lines in this aren’t relevant.
Use the title line as the title of your pull request, then delete these lines.
## Title line template: [Title]: Brief description
UI components: For issues that impact the look, feel, or functionality of the Standards themselves, please open an issue on the web-design-standards repo (https://github.com/18F/web-design-standards/issues/new).
-->
## Description
Include a high-level description of the issue. What did you expect to happen? What happened instead? What would you like to see changed?
Include any benefits, challenges, or considerations. This can be short and sweet.
## Steps to reproduce the issue
1. Step one
2. Step two
3. Step three
4. And so on
## Additional information [optional]
* Relevant research and support documents
* Screen shot images
* Notes
* And so on<issue_closed>
Status: Issue closed |
glasklart/hd | 169487053 | Title: FeedNews: AI curated social news for productivity
Question:
username_0: **App Name:** FeedNews: AI curated social news for productivity
**Bundle ID:** com.opera.iNews
**iTunes ID:** <a target="_blank" href="http://getart.username_3.at?id=1128377569">1128377569</a>
**iTunes URL:** <a target="_blank" href="https://itunes.apple.com/us/app/feednews-ai-curated-social/id1128377569?mt=8&uo=4">https://itunes.apple.com/us/app/feednews-ai-curated-social/id1128377569?mt=8&uo=4</a>
**App Version:** 1.0.2
**Seller:** Opera Software ASA
**Developer:** <a target="_blank" href=https://itunes.apple.com/us/developer/opera-software-asa/id363729563?uo=4>© Opera Software ASA</a>
**Supported Devices:** iPad2Wifi, iPad23G, iPhone4S, iPadThirdGen, iPadThirdGen4G, iPhone5, iPodTouchFifthGen, iPadFourthGen, iPadFourthGen4G, iPadMini, iPadMini4G, iPhone5c, iPhone5s, iPhone6, iPhone6Plus, iPodTouchSixthGen
**Original Artwork:**
<img src="http://is1.mzstatic.com/image/thumb/Purple60/v4/55/51/fa/5551faf0-54fe-8e4b-8d10-3cb8013bf278/source/1024x1024bb.png" width="150" height="150" />
**Accepted Artwork:**
\#\#\# THIS IS FOR GLASKLART MAINTAINERS DO NOT MODIFY THIS LINE OR WRITE BELOW IT. CONTRIBUTIONS AND COMMENTS SHOULD BE IN A SEPARATE COMMENT. \#\#\#
Answers:
username_1: I don't know about this one..

https://cloud.githubusercontent.com/assets/10730122/17466374/4250d962-5d0f-11e6-9670-33d21487cdf8.png
--- ---
Source:
https://cloud.githubusercontent.com/assets/10730122/17466375/4250d796-5d0f-11e6-946b-69376b06ad14.png
username_2: @username_1 can you invert it?
username_3: What about doing just the ***F*** ?

https://cloud.githubusercontent.com/assets/2068130/17545588/3439703e-5ede-11e6-8027-e2df9fad7853.png
--- ---
Source:
https://cloud.githubusercontent.com/assets/2068130/17545598/432898ae-5ede-11e6-8d67-f6d1f51b0c2c.png
username_1: I like just the ***F***.
Status: Issue closed
|
torchbox/wagtail-storages | 558205800 | Title: Potentially confusing name?
Question:
username_0: Raised during team meeting – as I understand this is named as an analogy to django-storages, which supports multiple storage backends (popular cloud providers), while this project currently only supports S3.
A way to address this would be to simply rename the project. But if the general goal is to support more storage backends if possible, the existing name could work, just adding documentation to the README to explain the current support?
Something along the lines of,
```markdown
## Supported storage providers
We currently only support AWS S3, as it has granular access control capabilities with ACLs. For other cloud providers,
- Azure has <limitations>, but they might be overcome.
- <other provider> has <more limitations>, and would not be currently usable
If you’re interested in contributing support for any of those providers, please open an issue to discuss implementation plans
```
---
If the confusion is a big issue it might also be worth updating the package’s description metadata to be suffixed with "– currently S3 only", or name drop S3 elsewhere in it. |
jaraco/keyring | 58744057 | Title: DBusException: Cannot create an item in a locked collection
Question:
username_0: #!python
File "/usr/lib64/python2.7/site-packages/keyring/core.py", line 42, in set_password
_keyring_backend.set_password(service_name, username, password)
File "/usr/lib64/python2.7/site-packages/keyring/backend.py", line 233, in set_password
True)
File "/usr/lib64/python2.7/site-packages/dbus/proxies.py", line 70, in __call__
return self._proxy_method(*args, **keywords)
File "/usr/lib64/python2.7/site-packages/dbus/proxies.py", line 145, in __call__
**keywords)
File "/usr/lib64/python2.7/site-packages/dbus/connection.py", line 651, in call_blocking
message, timeout)
dbus.exceptions.DBusException: org.freedesktop.Secret.Error.IsLocked: Cannot create an item in a locked collection
----------------------------------------
- Bitbucket: https://bitbucket.org/kang/python-keyring-lib/issue/69
- Originally reported by: Anonymous
- Originally created at: 2012-08-03T00:46:39.141
Status: Issue closed
Answers:
username_0: I apologize, but I can't do much with just this error message. Can you clarify with some context and detail:
1. What were you doing when the error occurred? What steps can someone else take to recreate the error?
2. What did you expect to happen?
3. What was your operating system and Python version?
4. What other detail can you provide that might hint at the cause of the problem?
I'm going to close this ticket as wontfix, only because there's little that can be done without more information, but don't let that discourage you. Just provide more information and re-open the ticket.
----------------------------------------
Original comment by: <NAME>bs
username_0: Hi,
In Gnome 3.6, when the key ring is locked:
* `get_password()` always returns `None`, even if the key ring *has* a matching password,
* `set_password()` raises the exception shown above.
If the key ring is then unlocked through another application (eg. Empathy), everything starts working as expected.
Expected result: when the key ring is locked, the `keyring` module should have Gnome show the usual password prompt to unlock it.
----------------------------------------
Original comment by: <NAME>
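For illustration only, a minimal sketch of a client-side workaround: explicitly ask the Secret Service to unlock the default collection before storing a secret. This assumes the third-party `secretstorage` package (a recent version with a synchronous `unlock()`); it is not part of keyring itself.
```python
# Hypothetical workaround sketch: unlock the default Secret Service
# collection (triggering the usual GNOME unlock prompt) before using keyring.
import secretstorage
import keyring

connection = secretstorage.dbus_init()
collection = secretstorage.get_default_collection(connection)
if collection.is_locked():
    # Prompts the user to unlock the keyring via the desktop dialog.
    collection.unlock()

keyring.set_password("my-service", "my-user", "s3cret")
```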
username_0: Same here (openSUSE, keyring-1.2.2).
----------------------------------------
Original comment by: <NAME>ke
username_0: This issue is fixed in my pull request #25.
----------------------------------------
Original comment by: <NAME> |
hotosm/HDM-CartoCSS | 63247694 | Title: Render island names
Question:
username_0: I think I never see island names render at any zoom level in Vanuatu, which is not good.
Answers:
username_1: yep, it's on a TODO list; we should get around to it one of these days, already reported at #124
username_0: Sorry for not looking, I just wanted to get it in before I forgot about it in the middle of merging some data. One of these days I'll have some time to see if I can help on this. I keep pointing people to the project for Outreachy because it seems like an enjoyable and important project.
Status: Issue closed
|
andreypopp/webpack-package-loaders-plugin | 143048677 | Title: Handle loader query in JSON object format
Question:
username_0: It would be nice if this plugin supported the object-style way of configuring loader queries. For example:
```json
{
...,
"webpack": {
"loaders": [{
"loader": "babel-loader",
"test": "**/*.js",
"query": {
"plugins": [
"transform-object-rest-spread"
],
"presets": [
"es2015",
"react"
]
}
}]
}
}
```
If this is something you'd like to support, I could take a stab at the implementation.
Answers:
username_1: @username_0 reasonable, I'd accept a pull request. |
Marfusios/websocket-client | 617589624 | Title: Improper conditional subscription examples
Question:
username_0: In the main [ReadMe](https://github.com/username_1/websocket-client/blob/master/README.md) there is the following example
```
client
.MessageReceived
.Where(msg => msg.StartsWith("{"))
.Subscribe(obj => { code1 });
```
In the above, msg is not a string but a [ResponseMessage](https://github.com/username_1/websocket-client/blob/master/src/Websocket.Client/ResponseMessage.cs), so it does not have a StartsWith method.
Should be `msg.Text.StartsWith`.
Maybe add a comment saying it needs a null check, or check MessageType to make sure it is text. I haven't seen what happens when a single websocket sends both text and binary, so I can't comment too much on this.
Answers:
username_1: Thanks @username_0, good catch.
Readme was not updated somewhere in the past when streamed message type changed.
Fixed.
username_0: I'm also looking more into MSDN's [How to: Implement an Observer](https://docs.microsoft.com/en-us/dotnet/standard/events/how-to-implement-an-observer); if I get something that looks clean I'll throw a new example at you. Copy-pasteable advanced examples are always helpful for usability.
Status: Issue closed
|
goblint/analyzer | 933897881 | Title: inf-recursion now runs ~6x longer before stack overflow
Question:
username_0: See https://github.com/goblint/analyzer/commit/af93e8ef032129d763b55ba5f98e988763adea1f.
```diff
- // ~15s (8MB default stack size)
+ // 1m24s (8MB default stack size) |rho|=36951 |called|=12320 |contexts|=6161
- // 1m07s (16MB, ulimit -Ss 16384)
+ // 8m48s (16MB, ulimit -Ss 16384) |rho|=73959 |called|=24656 |contexts|=12329
```
As I see it this could mean that either
1. it's now slower, so it takes longer to reach the stack overflow, or
2. it now uses less stack space.
I guess 2. is unlikely. If so, did you notice a similar regression somewhere else?
I added the numbers for what the solver covered above. We could check against the commit where I added them initially.
Answers:
username_1: So I guess this regression must be somewhere between da6b1649111a6b9c93a75783cb07d9cd16f205de and af93e8ef032129d763b55ba5f98e988763adea1f?
username_1: From 25f8eb4eb39438e3bee23618d8a9724d1a320069 to bce46d085af6dccdc4d20eaff42b464642038865 the runtime until StackOverflow increases from `23s` to `37s` on my machine, the difference is (#225).
However, the brunt of the change is this:
From 2a9f5c00843f6498a31e45d6501caf402d1398da to 8a74107384c466ddbbf8dffa18256365a77f9e44 the runtime until StackOverflow increases from `40s` to `02m20s` on my machine, the difference is (#227), **so it seems like the derived implementations are worse than our own?**
(The number of evaluations stays comparable 36787 before, 36808 after.)
username_0: It's probably worth fixing this. I just went through the changes in #227 and most of the generated implementations should be the same performance-wise as the manual ones.
It's likely just a few implementations that were replaced where there was some relevant optimization.
Some candidates:
1. I assume the generated code for tuples (`Lift2`, `Lift3`, etc.) should be the same (*lazy* left to right)?
2. changes in `basetype.ml`
3. https://github.com/goblint/analyzer/pull/227/files#diff-07d12ebad40bbdbec08af13595a6f444b4f696ec1d9304694ee6d56d5b61f389L326-L351
4. https://github.com/goblint/analyzer/pull/227/files#diff-1d69bca13b9106d6d763169d16f563b0678aaa5a24bc1c045ce441d969703287L57-L66
5. https://github.com/goblint/analyzer/pull/227/files#diff-1d69bca13b9106d6d763169d16f563b0678aaa5a24bc1c045ce441d969703287L127-L139
username_2: According to https://github.com/ocaml-ppx/ppx_deriving#plugins-eq-and-ord:
* "eq and ord are short-circuiting", so they should be lazy for tuples;
* "For variants, ord uses the definition order", so it should be essentially the same as the manual implementations were.
So I don't immediately know what the difference would be in these cases.
username_0: Concerning `| MyCFG.FunctionEntry f, MyCFG.FunctionEntry g -> compare f.vid g.vid`:
`compare f.vid g.vid` should become `CilType.Fundec.compare f g` -> `CilType.Varinfo.compare f.svar g.svar` -> `compare f.svar.vid g.svar.vid`? So `FunctionEntry`'s arg changed from `varinfo` to `fundec`?
Might be that it's those extra calls if they're not inlined.
username_2: I just tried it on my machine:
* https://github.com/goblint/analyzer/commit/da6b1649111a6b9c93a75783cb07d9cd16f205de (times first added) takes 27.3s
* https://github.com/goblint/analyzer/commit/af93e8ef032129d763b55ba5f98e988763adea1f (times updated to slower) takes 1min 56.9s
* https://github.com/goblint/analyzer/commit/d882d5d9bfe543c8394771257ae69734da9105b7 (current master) takes 7.2s
Not sure what to make of this. The stats for the last one are as expected:
```
runtime: 00:00:07.097
vars: 37803, evals: 37801
|rho|=37803
|called|=12604
|stable|=37803
|infl|=25199
|wpoint|=0
Found 6303 contexts for 2 functions. Top 5 functions:
6301 contexts for entry state of f on 48-inf-recursion.c:11
2 contexts for entry state of main on 48-inf-recursion.c:18
Memory statistics: total=7350.91MB, max=54.45MB, minor=7345.71MB, major=56.48MB, promoted=51.28MB
minor collections=3508 major collections=12 compactions=0
```
username_2: I bisected my last difference and that improvement came from https://github.com/goblint/analyzer/commit/cd33c35f1b4856869f21ebea00df26deaf904ef7. I suppose on current master it actually will widen something with the new `side_widen` stuff? Probably not related to the original slowdown then, although this test might deserve some `PARAM` to keep the behavior consistent.
username_0: What's the time with `--sets exp.solver.td3.side_widen cycle`? That was the behavior before https://github.com/goblint/analyzer/commit/cd33c35f1b4856869f21ebea00df26deaf904ef7.
The new default `sides` should create fewer widening points - I'll have to think about why it's faster here.
username_2: I dug deep at the slow point https://github.com/goblint/analyzer/commit/af93e8ef032129d763b55ba5f98e988763adea1f using `perf` and found where 90% of the time is spent:
```
- camlConstraints__tf_proc_7561
- 92,75% camlBatList__map_546
- camlConstraints__one_function_7648
- 91,40% camlStdlib__list__iter_258
- camlTd3__side_3664
- 91,24% camlStdlib__hashtbl__fold_502
camlStdlib__hashtbl__do_bucket_507
- camlBatInnerPervasives__neg_185
- 79,42% camlStdlib__hashtbl__mem_in_bucket_726
+ 35,39% camlConstraints__fun_17163
+ 33,95% camlAnalyses__fun_5385
1,49% caml_curry2_1
0,87% camlPrintable__equal_869
- 11,80% camlStdlib__hashtbl__mem_722
- 10,77% camlStdlib__hashtbl__key_index_636
- 7,46% camlAnalyses__hash_1050
3,24% camlPrintable__hash_836
3,23% camlConstraints__hash_8347
```
It's quite difficult to follow this because the names are mangled and certain levels have been inlined I guess, but here's what I make of it.
In `Constraints.FromSpec.tf_proc` via `one_function` via `tf_normal_call` (inlined?) it it reachces this `List.iter` and side effect, which goes into `Td3.side`:
https://github.com/goblint/analyzer/blob/af93e8ef032129d763b55ba5f98e988763adea1f/src/framework/constraints.ml#L665
Then it spends 91% of the total runtime on some `Hashtbl.fold`, which I think is from here inside `Td3.side`:
https://github.com/goblint/analyzer/blob/af93e8ef032129d763b55ba5f98e988763adea1f/src/solvers/td3.ml#L195-L196
The fold is inside `exists_key` and calls via `neg` the `Hashtbl.mem` passed as argument above:
https://github.com/goblint/analyzer/blob/af93e8ef032129d763b55ba5f98e988763adea1f/src/solvers/td3.ml#L55
There the standard implementation is the following (https://github.com/ocaml/ocaml/blob/a7f13f9acbcf2caa65aa14070cad196792a22253/stdlib/hashtbl.ml#L448-L454):
```ocaml
let mem h key =
let rec mem_in_bucket = function
| Empty ->
false
| Cons{key=k; next} ->
H.equal k key || mem_in_bucket next in
mem_in_bucket h.data.(key_index h key)
```
`key_index` does the `hash` calculation, which takes 10% time in total and all the expensive stuff (80% time) must be the `equal` calls.
Due to the name mangling, I don't know what `camlConstraints__fun_17163` and `camlAnalyses__fun_5385` are (since they don't have an actual function name I'm guessing some lambdas, possibly those derived for eq). I collapsed that in the `perf` output because they don't contain any insightful subcalls, just some of OCaml's own runtime alloc and GC functions.
So it looks like the derived `equals` for constraint system variables is somehow unusually expensive.
This point in `Td3.side` also explains why it doesn't happen on master: we now take a much less expensive side_widen choice which doesn't involve folding over the entire hashtable and `mem`-ing everything in another hashtable.
username_2: I tried changing the two places I just found back to use the old manually written `equal` functions and the runtime went from 1min 56.9s to 1min 10s. So there's some weird overhead for sure, but that's not the whole story. I'm guessing that same overhead might be in every derived tuple and variant `equal` then...
username_2: The behavior before that commit was called `cycle`, but it also changed the meaning of `cycle`. The corresponding case would be `unstable_called`, which gave me 2min 12.6s. My guess is that that increase (compared to the slowest one I gave above) is due to #251 which changes int evaluation to be via queries. And this program is basically just int evaluation, so that's not too surprising.
The new `cycle` terminates in 7.3s for me, so it's definitely that crazy `exists_key (neg (HM.mem stable)) called`. Since it's no longer the default and the new side_widen choices are more precise anyway, I think we don't have to worry about this degenerate case any more.
Of course it highlights overhead in the derived implementations, but it's just such a weird program with the weird side_widen that it spends 70% of its runtime on `equal` calls alone. Undoing #227 on master probably wouldn't give a measurable improvement with `sides`.
username_0: Why is the generated code more expensive?
Maybe it makes sense to patch how it's generated?
Does it try physical equality first?
username_2: No idea, I didn't actually try looking at the code it generates. I think it's generating this
```ocaml
let equal = fun (x1, x2) (y1, y2) -> B1.equal x1 y1 && B2.equal x2 y2
```
instead of
```ocaml
let equal (x1, x2) (y1, y2) = B1.equal x1 y1 && B2.equal x2 y2
```
which seems like it should compile to the same code in the OCaml compiler, but maybe it doesn't?
Other than that, I don't know what the difference could be.
username_2: There could be some OCaml compilation difference because I now remember that the derived version showed some `caml_curry2_1` calls in prof, but the manually written version didn't if I remember correctly.
username_0: Ok, strange. Maybe add a `bench/deriving.ml` to compare tuples/variants.
Related: https://github.com/goblint/analyzer/blob/master/bench/hashcons/equality_bench.ml.
Maybe it's worth always generating `... let equal a b = a == b || equal a b`?
username_2: @username_0 I suppose we can close this issue now? Maybe you want to confirm that `--sets exp.solver.td3.side_widen unstable_called` now has the same performance as the equivalent `cycle` value used to.
Status: Issue closed
|
rosenpin/always-on-amoled | 482771601 | Title: Build fails with Could not resolve project :DiscreetAppRate
Question:
username_0: Hello.
Thanks for making this app. I read your statement about stopping to release now code here and I see that you want to make some money with the app now. No problem with that.
For that I guess you still would not be willing to push new code to this repository, right? So I tried to build at least the last version you published here ( in the hope that issue #2405 might be fixed) but I fail to build it.
Could you please take a look at the build error and tell me how to fix it? Would be much appreciated.
Build error:
```12:08:41.637 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] FAILURE: Build failed with an exception.
12:08:41.637 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter]
12:08:41.637 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] * What went wrong:
12:08:41.637 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] A problem occurred configuring project ':app'.
12:08:41.637 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] > Could not resolve all dependencies for configuration ':app:_debugApkCopy'.
12:08:41.637 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] > Could not resolve project :DiscreetAppRate.
12:08:41.637 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] Required by:
12:08:41.637 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] project :app
12:08:41.638 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] > Project :app declares a dependency from configuration '_debugApkCopy' to configuration 'default' which is not declared in the descriptor for project :DiscreetAppRate.
12:08:41.638 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] > Could not resolve project :IssueReporter.
12:08:41.638 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] Required by:
12:08:41.638 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] project :app
12:08:41.638 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] > Project :app declares a dependency from configuration '_debugApkCopy' to configuration 'default' which is not declared in the descriptor for project :IssueReporter.
12:08:41.638 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter]
12:08:41.638 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] * Try:
12:08:41.638 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] Run with --stacktrace option to get the stack trace. Run with --scan to get full insights.
12:08:41.638 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter]
12:08:41.638 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] * Get more help at https://help.gradle.org
12:08:41.638 [ERROR] [org.gradle.internal.buildevents.BuildResultLogger]
12:08:41.638 [ERROR] [org.gradle.internal.buildevents.BuildResultLogger] BUILD FAILED in 1s
12:08:41.638 [DEBUG] [org.gradle.internal.operations.DefaultBuildOperationExecutor] Completing Build operation 'Run build'
12:08:41.639 [DEBUG] [org.gradle.internal.operations.DefaultBuildOperationExecutor] Build operation 'Run build' completed
```
Answers:
username_1: Sorry, the code in this repo will no longer be updated since it was repeatedly cloned and reuploaded to the play store.
This is unfortunate but you could consider this app closed source, BTW, the code in this repo is very old and probably won't even work for new Android versions, so I wouldn't recommend using it, if you are interested though, I published a simple demo always on
Status: Issue closed
|
pytorch/vision | 770982742 | Title: Cannot Build With FFmpeg Support
Question:
username_0: ## Cannot Build With FFmpeg Support
Hi.
While trying to build `torchvision` from source, I've seen this output:
```
+ python3 setup.py build
Building wheel torchvision-0.8.2
PNG found: True
libpng version: 1.6.37
Building torchvision with PNG image support
libpng include path: /usr/include/libpng16
Running build on conda-build: False
Running build on conda: False
JPEG found: True
Building torchvision with JPEG image support
FFmpeg found: False
running build
running build_py
creating build
(omitted)
```
It showed **`FFmpeg found: False`**. I tried `apt install ffmpeg` and built again, but it still showed FFmpeg as not found.
Then I tried:
```shell
apt update
apt install ffmpeg \
libavformat-dev libavcodec-dev libavdevice-dev \
libavutil-dev libswscale-dev libavresample-dev libavfilter-dev
# deps of python package av
pip install ffmpeg av
```
But it showed `FFmpeg found: False` once again.
I could not find any instructions in [README](../blob/master/README.rst) about installing `ffmpeg` dependencies for building `torchvision` yet, so how could I do that, or where could I find it?
Thanks.
Answers:
username_1: Hi @username_0, could you please check if the `ffmpeg` command is available?
username_0: Hi, I just checked and it is available.
**Install deps**
```
root@9e6bf2c705fc:~# uname -a
Linux 9e6bf2c705fc 5.9.14-sunxi64 #20.11.3 SMP Fri Dec 11 20:34:34 CET 2020 aarch64 aarch64 aarch64 GNU/Linux
root@9e6bf2c705fc:~# apt update
root@9e6bf2c705fc:~# apt install ffmpeg libavformat-dev libavcodec-dev libavdevice-dev libavutil-dev libswscale-dev libavresample-dev libavfilter-dev python3-pkgconfig
root@9e6bf2c705fc:~# pip install ffmpeg av
root@9e6bf2c705fc:~# pip install torch -f https://torch.maku.ml/whl/stable.html
```
**Check ffmpeg**
```
root@9e6bf2c705fc:~# apt list --installed | grep ffmpeg
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
ffmpeg/focal-updates,focal-security,now 7:4.2.4-1ubuntu0.1 arm64 [installed]
root@9e6bf2c705fc:~# ffmpeg -version
ffmpeg version 4.2.4-1ubuntu0.1 Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 9 (Ubuntu 9.3.0-10ubuntu2)
configuration: --prefix=/usr --extra-version=1ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/aarch64-linux-gnu --incdir=/usr/include/aarch64-linux-gnu --arch=arm64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
libpostproc 55. 5.100 / 55. 5.100
```
**Build**
```
root@9e6bf2c705fc:~# git clone https://github.com/pytorch/vision
root@9e6bf2c705fc:~# cd vision
root@9e6bf2c705fc:~/vision# VER="0.8.2"
root@9e6bf2c705fc:~/vision# export BUILD_VERSION="$VER"
root@9e6bf2c705fc:~/vision# git checkout "v$VER"
root@9e6bf2c705fc:~/vision# export MAX_JOBS=1
root@9e6bf2c705fc:~/vision# python3 setup.py build
Building wheel torchvision-0.8.2
PNG found: False
Running build on conda-build: False
Running build on conda: False
JPEG found: False
FFmpeg found: False
running build
running build_py
creating build
creating build/lib.linux-aarch64-3.8
creating build/lib.linux-aarch64-3.8/torchvision
copying torchvision/extension.py -> build/lib.linux-aarch64-3.8/torchvision
...
```
---
I don't know if the architecture `aarch64` or the virtualization env `docker` has any effect on this...
username_1: It seems that setup.py is unable to find ffmpeg in the PATH; is it possible that the PATH is being modified in some way?
username_0: Probably not.
I ran **`echo $PATH`** right after the docker container was initialized and again after installing the dependencies, and the output was **`/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin`** both times.
`which ffmpeg` outputs `/usr/bin/ffmpeg`.
---
I also tried to install `ffmpeg` and other dependencies, like `jpeg` and `libpng`, using [miniforge](https://github.com/conda-forge/miniforge), which might be more recommended for building.
In this case, `which ffmpeg` shows `/root/miniforge3/bin/ffmpeg`, but `setup.py` still could not detect it, while `conda` is detected:
```
(base) root@8546b919cb25:~/pytorch/vision# python setup.py build
Building wheel torchvision-0.8.2
PNG found: True
libpng version: 1.6.37
Building torchvision with PNG image support
libpng include path: /root/miniforge3/include/libpng16
Running build on conda-build: False
Running build on conda: True
JPEG found: True
Building torchvision with JPEG image support
FFmpeg found: False
running build
running build_py
creating build
creating build/lib.linux-aarch64-3.8
creating build/lib.linux-aarch64-3.8/torchvision
```
username_1: Could you please check in a Python interpreter the output of `distutils.spawn.find_executable('ffmpeg')`?
username_0: Hi, here's the output:
```
root@01bda82eae37:~# python3 -c "import distutils.spawn; print(distutils.spawn.find_executable('ffmpeg'))"
None
# install deps
root@01bda82eae37:~# ffmpeg -version
ffmpeg version 4.2.4-1ubuntu0.1 Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 9 (Ubuntu 9.3.0-10ubuntu2)
configuration: --prefix=/usr --extra-version=1ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/aarch64-linux-gnu --incdir=/usr/include/aarch64-linux-gnu --arch=arm64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
libpostproc 55. 5.100 / 55. 5.100
root@01bda82eae37:~/vision# which ffmpeg
/usr/bin/ffmpeg
root@01bda82eae37:~# python3 -c "import distutils.spawn; print(distutils.spawn.find_executable('ffmpeg'))"
/usr/bin/ffmpeg
# clone torchvision, set environment and build
root@01bda82eae37:~/vision# python3 setup.py build
Building wheel torchvision-0.8.2
PNG found: True
libpng version: 1.6.37
Building torchvision with PNG image support
libpng include path: /usr/include/libpng16
Running build on conda-build: False
Running build on conda: False
JPEG found: True
Building torchvision with JPEG image support
FFmpeg found: False
running build
running build_py
creating build
creating build/lib.linux-aarch64-3.8
creating build/lib.linux-aarch64-3.8/torchvision
copying torchvision/extension.py -> build/lib.linux-aarch64-3.8/torchvision
...
```
It looks like `distutils` can also find `ffmpeg`.
---
I just ran the exact same commands on an `x86_64` docker container. The results are the same.
[Truncated]
root@d9916b29461b:~/vision# python3 setup.py build
Building wheel torchvision-0.8.2
PNG found: True
libpng version: 1.6.37
Building torchvision with PNG image support
libpng include path: /usr/include/libpng16
Running build on conda-build: False
Running build on conda: False
JPEG found: True
Building torchvision with JPEG image support
FFmpeg found: False
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.8
creating build/lib.linux-x86_64-3.8/torchvision
copying torchvision/utils.py -> build/lib.linux-x86_64-3.8/torchvision
...
```
username_1: It seems that `distutils.spawn.find_executable` is not finding ffmpeg
username_1: Could you please check if `shutil.which('ffmpeg')` works?
username_0: It doesn't work:
```
root@01bda82eae37:~# python3 -c "import shutil;shutil.which('ffmpeg')"
root@01bda82eae37:~#
```
username_2: The latest commit on 0.8.2 in setup.py looks to have disabled ffmpeg?
```diff
- has_ffmpeg = ffmpeg_exe is not None
+ # Disable ffmpeg by default
+ no_ffmpeg = os.environ.get("NO_FFMPEG", True)
+ has_ffmpeg = ffmpeg_exe is not None and not no_ffmpeg
```
If there is some strange way to set no_ffmpeg to False, I'm missing what that is...
username_3: I ran into this today too. It should either
- be documented that the user should set `NO_FFMPEG=`
- or the absence of `NO_FFMPEG` should be taken to mean that ffmpeg should be used if found, which makes a lot more sense to me:
```diff
ffmpeg_exe = distutils.spawn.find_executable('ffmpeg')
- # Disable ffmpeg by default
- no_ffmpeg = os.environ.get("NO_FFMPEG", True)
+ no_ffmpeg = os.environ.get("NO_FFMPEG", False)
has_ffmpeg = ffmpeg_exe is not None and not no_ffmpeg
print("FFmpeg found: {}".format(has_ffmpeg))
```
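As a side note on the snippet above: `os.environ.get` returns a string whenever the variable is set, so even `NO_FFMPEG=0` would still count as truthy. A stricter toggle could parse the value explicitly; the sketch below is only illustrative, not the actual setup.py.
```python
# Illustrative sketch of an explicit boolean environment toggle.
import os
import distutils.spawn


def env_flag(name, default=False):
    value = os.environ.get(name)
    if value is None:
        return default
    # Only these strings count as "true"; "0", "false", "" disable the flag.
    return value.strip().lower() in ("1", "true", "yes", "on")


ffmpeg_exe = distutils.spawn.find_executable("ffmpeg")
has_ffmpeg = ffmpeg_exe is not None and not env_flag("NO_FFMPEG")
print("FFmpeg found: {}".format(has_ffmpeg))
```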
username_4: Yes, this is fixed on master. If everyone is OK with that, I'll close this issue for now and document the installation process better.
Status: Issue closed
username_0: Just to mention: `apt install ffmpeg` is not enough for building either PyTorch or Torchvision with FFmpeg support; it will fail with `Headers missing` errors.
You may want to manually install (`make`) [from source](http://ffmpeg.org/releases). Check [this post](https://stackoverflow.com/a/47024694/10714490) for instructions.
username_5: In the Readme it says that ffmpeg can be installed with conda. If I do so and then install torchvision from source with `python setup.py install`, I still get:
FFmpeg found: False |
lgarron/first-world | 1128355760 | Title: Converting an Apple Music song to a non-DRM iTunes purchase is slow and tricky
Question:
username_0: 1. Make sure to be on macOS (desktop).
2. Remove the local download of the song.
3. Make sure `Music.app` is not set to high-quality (e.g. Dolby Atmos) downloads.
3. Right-click and "Show in iTunes Store".
4. Buy and download.
Even with all these steps, the iTunes store will sometimes have pathological edge cases, like trying to show you a different song in a different album by the same artist. In that case, I've sometimes had to resort to finding the iTunes result on the web in order to open it in `Music.app`. |
stryker-mutator/stryker-net | 617531583 | Title: Cannot install without Dotnet Core 3.1 installed with latest version(0.18.0)
Question:
username_0: **Describe the bug**
I have docker images I am creating for Stryker runs on different versions of dot net Core(2.1, 2.2, 3.1). Since version 0.18.0 of Stryker.NET was released I am unable to install stryker on any image that doesn't contain Dotnet core 3.1.
**Logs**
Step 14/19 : RUN dotnet tool install -g dotnet-stryker
---> Running in b8063bd4273a
error NU1202: Package dotnet-stryker 0.18.0 is not compatible with netcoreapp2.2 (.NETCoreApp,Version=v2.2) / any. Package dotnet-stryker 0.18.0 supports: netcoreapp3.1 (.NETCoreApp,Version=v3.1) / any
The tool package could not be restored.
**Expected behavior**
I am able to install Stryker without Dotnet core 3.1 installed
**Desktop (please complete the following information):**
- OS: Windows
- Type of project: Dotnet Core
- Framework Version Core 2.1, 2.2
- Stryker Version: beta 0.18.0
Answers:
username_1: .NET Core 3.1 is the latest LTS. We have decided to upgrade to it with Stryker 0.18.0. Therefore the .NET Core 3.1 runtime is required.
username_0: I can indeed still test the projects but I'd have to install 3.1 on the other images now. I'll likely just move forward with one image at this point since I no longer can create self-contained versions for each Dotnet core version. I was essentially just trying to keep the docker images smaller and lean. Thank you for the clarification!
Status: Issue closed
username_1: Awesome! Are these public docker images? :)
username_0: They are not, they are internal. I am working on injecting Stryker into our repos PR pipeline. Really been enjoying the features you guys have built into it! Especially the git diff config option.
username_1: Cool! The git diff option should be getting a lot more awesome in the next two months! We have an internship student working on making sure that you can use the git diff option but also have a full html report available after the testrun. |
UniversalDependencies/docs | 208199721 | Title: Contributing to UDv2.0 language documentation
Question:
username_0: What is the proper way to contribute to language documentation for the upcoming version?
Should I wait for the new release? Or I should I continue to change existing documentation and after release the message "This page still pertains to UD version 1." will be removed automatically? What about pages for new or renamed features?
Answers:
username_1: We are working on this and will provide more detailed guidelines as soon as we can (but producing the data has priority right now). In the meantime, you can start changing the existing documentation. All the static pages can be edited as usual, and you can also create new pages for new or renamed features. But the overview tables will have to be generated by the documentation team later.
username_2: Since I've been updating the Chinese pages to v2 just recently: according to @username_3 you can change the yellow v1 warning banner to the green v2 banner by adding `udver: '2'` in the header at the top of the page. If I understood correctly, and to clarify, the v1 documentation is currently frozen and cannot be modified; the pages you see now on the UD main site (that have yellow banners) are technically placeholder copies waiting to be updated to v2, whereas the [archived v1 documentation](http://universaldependencies.org/docsv1/index.html) has red banners instead.
username_3: @username_2 is right. The only issue here is that people writing v2 documentation would benefit if we managed to automatically modify the tables of labels, generate or rename templates for features etc. But as Joakim said, we are unfortunately unable to attend to this right now.
username_0: It is good to know that I will not break the version 1 documentation by contributing now. Thanks.
I got the understanding I needed, feel free to close this issue or leave it open until till the documentation force has time for updating documentation or whatever is appropriate.
username_1: The v1 documentation has been archived here: http://universaldependencies.org/docsv1/index.html
Status: Issue closed
|
ant-design/ant-design | 474376573 | Title: When using browserHistory, the server can handle the URL and return index.html, but it redirects to the 404 page
Question:
username_0: - [ ] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate.
### Reproduction link
[https://github.com/username_0/ant-design-pro-egg](https://github.com/username_0/ant-design-pro-egg)
@chenshuai2144 @username_1
### Steps to reproduce
1. Build the front-end pages
```
yarn build
```
2. Copy the built dist folder into the egg-demo project's /app directory and rename dist to public
3. Add the static asset configuration to egg-demo's config.default.js file
```
config.static = {
maxAge: 31536000,
prefix: '/',
};
```
4. Start egg-demo
```
yarn dev
```
5. Visit the page localhost:7001/index.html; it returns a 404
<img width="969" alt="屏幕快照 2019-07-30 上午11 45 58" src="https://user-images.githubusercontent.com/24558814/62100392-270cdf80-b2c4-11e9-8a36-c8cfcc906585.png">
Clicking "back Home" navigates to the home page localhost:7002, which displays correctly
<img width="954" alt="屏幕快照 2019-07-30 下午12 09 12" src="https://user-images.githubusercontent.com/24558814/62100385-2116fe80-b2c4-11e9-9b89-f8cee91a9a14.png">
After refreshing the page, it is broken again
<img width="899" alt="屏幕快照 2019-07-30 下午12 16 10" src="https://user-images.githubusercontent.com/24558814/62100371-12304c00-b2c4-11e9-8eb6-12baf217c3a5.png">
I read the explanation in the official [ant-design-pro](https://pro.ant.design/docs/deploy-cn) docs, but I still don't understand how to handle this:

My demo repositories:
[https://github.com/username_0/ant-design-pro-demo](https://github.com/username_0/ant-design-pro-demo)
[https://github.com/username_0/ant-design-pro-egg](https://github.com/username_0/ant-design-pro-egg)
### What is expected?
Visiting /index.html displays correctly, and refreshing other page routes works
### What is actually happening?
1. 404
2. The page routes are treated as egg API routes
| Environment | Info |
|---|---|
| antd | 3.20.7 |
| React | react |
| System | mac 10.14.5 |
| Browser | chrome |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
Status: Issue closed
Answers:
username_1: This is not an antd issue; this tracker is only for component library problems.
username_0: @username_1 Doesn't this have anything to do with the build/packaging?
3Hren/msgpack-rust | 1014142561 | Title: Error "invalid type: byte array, expected a sequence" when it works with serde_json
Question:
username_0: I posted a minimal repro here: https://github.com/jonasbb/serde_with/issues/372#issuecomment-932804303
Answers:
username_0: it works with bincode also:
```
// bincode
let encoded: Vec<u8> = bincode::serialize(&a).unwrap();
println!("bincode encoded: {:?}", encoded);
let decoded: A = bincode::deserialize(&encoded[..]).unwrap();
println!("bincode decoded: {:?}", decoded);
println!();
```
username_1: Given [this comment](https://github.com/jonasbb/serde_with/issues/372#issuecomment-93281141) I'm not sure if there's anything this crate needs to change?
username_0: Looks like it! Sorry about that ^^
Status: Issue closed
|
fourtwothree/daily-code | 312383757 | Title: Relation queries must select the id (the relation key) for the relationship to be loaded; otherwise it comes back empty
Question:
username_0: ```
$qb = self::selectRaw
('
businesses.id as businesses_id,
businesses.name as businesses_name,
businesses.full_name,
businesses.designation,
businesses.type as business_type,
businesses.audit_status,
business_contacts.name as business_contacts_name,
business_contacts.type as business_contacts_type,
business_contacts.phone
')
->Search($conditions)
->leftJoin('customers', 'businesses.customer_id', '=', 'customers.id')
->leftJoin('business_contacts', 'businesses.customer_id', '=', 'business_contacts.customer_id');
``` |
flutter/flutter | 723452430 | Title: Remove or update devicelab tests that listen to adb stream
Question:
username_0: To simplify the devicelab code, we need to reduce the amount of one-off and special handling that is in the runners, and move it to either a driver test script or the flutter tool itself when appropriate. To that end, I'd like to remove the special handlers for `adb logcat`.
There are several tests that use this pattern:
* All memory info benchmarks
* image list duration tests
These could be rewritten as driver tests in the applications under test. For memory info, the flutter_tool exposes a VM service protocol method (https://github.com/flutter/flutter/blob/master/packages/flutter_tools/test/integration.shard/vmservice_integration_test.dart#L56 ) which returns the data in a structured format.
The image list duration tests could use events or custom requests in the flutter_driver. |
django/djangoproject.com | 177791856 | Title: Models' default value is not apply in a database.
Question:
username_0: name = models.CharField('Name', max_length=255, default='Audi')
max_speed = models.IntegerField('Max Speed', default=220)
Answers:
username_1: This is the ticket tracker for the djangoproject.com website. Please see https://code.djangoproject.com/wiki/TicketClosingReasons/UseSupportChannels for ways to ask usage questions.
Status: Issue closed
|
segmentio/kafka-go | 668677758 | Title: pause / resume on the topic partition
Question:
username_0: **Describe the bug**
I'm working on an implementation of the DLQ pattern via multiple retry queues.
Please advise how I can call ```pause / resume on the topic partition```.
**Kafka Version**
What version(s) of Kafka are you testing against?
I'm using kafka client 0.3.4
Answers:
username_1: Hello @username_0!
To pause consuming from a partition, you can simply stop reading messages. Kafka does not have a concept of pausing or resuming in its protocol; the responsibility is given to clients to decide what to read and when.
spring-projects/spring-boot | 168538015 | Title: Release 1.4.0. Sample spring-boot-sample-actuator-log4j2 works but not as expected
Question:
username_0: I am running your sample spring-boot/spring-boot-samples/**spring-boot-sample-actuator-log4j2**/,
**release 1.4.0**. The sample is git-cloned from your repository. So, i hope it exactly as it should be.
But i am getting always
**ERROR** StatusLogger Log4j2 could not find a logging implementation. Please add log4j-core to the classpath. Using SimpleLogger to log to the console...
I have checked that all required dependencies are there.
Could you please check or explain ....
And a related question.
Release 1.3.7.RELEASE had the artifact **org.springframework.boot:spring-boot-starter-log4j**
I cannot find a similar one in the RELEASE.1.4.0. Are you not supporting **Log4j12** in the release 1.4.0?
Thank you so much
<NAME>anger
Answers:
username_1: As for `SampleActuatorLog4J2Application`, it appears to be working both on our CI system and locally within my IDE. Can you explain the exact steps to reproduce the problem?
Status: Issue closed
username_0: Sorry for the long delay with this answer.
Even more, I beg your pardon for the disturbance: the problem was on my local host.
I had a very dirty local Maven repository. I have cleaned it up, and now I have no problems any more.
Please close the issue, and again, sorry for the disturbance
vuejs/vetur | 279396985 | Title: Formatting conflict with ES-Lint
Question:
username_0: - [ ] I have searched through existing issues
- [ ] I have read through [docs](https://vuejs.github.io/vetur)
- [ ] I have read [FAQ](https://github.com/vuejs/vetur/blob/master/docs/FAQ.md)
## Info
- Platform: Win
- Vetur version: 0.11.3
- VS Code version: 1.18.1
## Problem
<!-- Include error message from Panel -> Output -> Vue Language Server -->
<!-- With screenshot / gif if possible -->

## Reproducible Case
<!--
just try a sample from Element-UI 2
-->
Status: Issue closed
Answers:
username_1: Issues without repro cases will be closed. Thanks. |
lwolf83/Project2-Banque | 563995273 | Title: Extact method to get password
Question:
username_0: https://github.com/username_0/Project2-Banque/blob/f641b495b241ee1e66d104b49626c8c09c51cb0d/ConsoleApp/Program.cs#L149-L161
Move this part into a new function.
This function should only handle the keyboard entry of the password.
Answers:
username_1: fixed issue#40
Status: Issue closed
|
MiKTeX/miktex-packaging | 383969511 | Title: Is there a bug with the `lipsum` package in the last update of MikTeX?
Question:
username_0: I have the full version of MikTeX updated today. When testing the code on page 396 of the following tcolorbox manual 4.14, it is requested to install the lipsum package which is normally included in MikTeX. I installed it, but the compilation still fails.
https://tex.stackexchange.com/q/461526/138900
Answers:
username_1: Duplicate of #67
Status: Issue closed
|
hrsh7th/vim-vsnip | 666397761 | Title: Is it possible to use # in "prefix"?
Question:
username_0: I would like `#env` to expand to `#!/usr/bin/env ...`.
I haven't found a way to escape the `#` yet.
Is this possible?
Somewhat related to #36
Answers:
username_1: Thank you for reporting this.
It seems to be a bug in vsnip.
I will fix it.
username_1: I sent the PR to fix the problem.
Could you test it?
username_0: Yes, it works fine for me 👍
Status: Issue closed
|
godotengine/godot | 136883241 | Title: Cannot scale VehicleBody
Question:
username_0: **Operating system or device:**
Linux 4.4.1 64 bit
**Issue description** (what happened, and what was expected):
Created VehicleBody object in editor and applied scale. Scale and position are ignored during run time.
Expected behavior:
Scale and position are maintained during run time.
**Steps to reproduce:**
Create a VehicleBody object and apply scale via the editor.
**Link to minimal example project** (optional but very welcome):
http://git.bragafvl.com/username_0/race-game/tree/4315c2c5ec6ae45450c8ff8dcddaeda2afc4ef8b
Answers:
username_1: @username_0 ~ BTW, [What is the proper letter case for spelling Godot?](http://godotengine.org/qa/172/what-is-the-proper-letter-case-for-spelling-godot) (on Godot QA)
username_2: Simple body scaling is not supported yet, but as far as I know there are plans to support it in the future.
username_2: related issue: https://github.com/godotengine/godot/issues/1505
username_2: Related issue: https://github.com/godotengine/godot/issues/1505
From this one it seems that body scaling will be possible only for uniform shapes
username_3: Duplicate of #1505 which was fixed in 3.0.
Status: Issue closed
|
SonarSonic/Calculator | 209234810 | Title: Simple Greenhouse still not working. Wont plant and wont demolish correctly
Question:
username_0: Please fill out the details below before contributing
* Forge Version: 2221
* Calculator Version: 3.1.9
* SonarCore Version: 3.2.5
* Multiplayer or Singleplayer: SIngleplayer
* Crash Report Link: No Crash
* Affected Features: Simple Greenhouse Planting and Demolish
* Description: Simple Greenhouse still not working. Won't plant and won't demolish correctly. Will not plant ANY seed from any mod. Demolish still leaves all the blocks on the ground, but they cannot be picked up. The greenhouse builds and functions fine otherwise. I have deleted all of the config files in an attempt to reset everything and still no joy.
Answers:
username_1: Could you please check the logs (SSP: `log/fml-client-latest.log`, SMP: `logs/fml-server-latest.log` on the server) for any exceptions related to SonarCore or Calculator? Thanks.
username_0: This is all I can come up with...all other logs have no errors I can find.
This was a fresh world with no mods other than Calculator and Sonar
Same issues/problems were present in this world with no other mods...
[13:50:45] [Client thread/ERROR]: +=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=
[13:50:45] [Client thread/ERROR]: The following texture errors were found.
[13:50:45] [Client thread/ERROR]: ==================================================
[13:50:45] [Client thread/ERROR]: DOMAIN calculator
[13:50:45] [Client thread/ERROR]: --------------------------------------------------
[13:50:45] [Client thread/ERROR]: domain calculator is missing 1 texture
[13:50:45] [Client thread/ERROR]: domain calculator has 2 locations:
[13:50:45] [Client thread/ERROR]: mod calculator resources at C:\Users\XXX\Documents\Curse\Minecraft\Instances\TEST\mods\Calculator-1.10.2-3.2.0.jar
[13:50:45] [Client thread/ERROR]: mod sonarcore resources at C:\Users\XXX\Documents\Curse\Minecraft\Instances\TEST\mods\SonarCore-1.10.2-3.2.6.jar
[13:50:45] [Client thread/ERROR]: -------------------------
[13:50:45] [Client thread/ERROR]: The missing resources for domain calculator are:
[13:50:45] [Client thread/ERROR]: textures/blocks/reinforcedstone.png
[13:50:45] [Client thread/ERROR]: -------------------------
[13:50:45] [Client thread/ERROR]: No other errors exist for domain calculator
[13:50:45] [Client thread/ERROR]: ==================================================
[13:50:45] [Client thread/ERROR]: +=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=
[13:50:59] [Server thread/INFO]: Starting integrated minecraft server version 1.10.2
username_2: Issue appears to also occur with the Advanced Greenhouse.
It appears to build / demolish fine, but it will not replant...
Found on versions:
Calculator-1.10.2-3.2.0
SonarCore-1.10.2-3.2.6
I found no errors for Calculator in my log.
username_3: As I really need to get this fixed, I investigated and found the following:
https://github.com/SonarSonic/Calculator/blob/1.10.2/src/main/java/sonar/calculator/mod/common/tileentity/TileEntityGreenhouse.java#L224
You changed plantCrops into 2 separate subfunctions, but the pos variable isn't the one from L196 which it is supposed to be. You need to add pos as a parameter to those 2 new functions.
username_0: me too... my entire favorite world I play is based around the greenhouse. |
NeverSinkDev/NeverSink-Filter | 860586409 | Title: T2 Rare overrides Custom Show/Hide Rules
Question:
username_0: T2 Rare overrides custom rules defined in Custom Show Hide Rules



When I remove `Siege Axe` from the T2 Rares list, it correctly takes my custom rule.
Answers:
username_1: Your custom rule affects T3 and lower items; Siege Axe is T2. Does changing your custom rule to ALL RARES make a difference?
username_0: @username_1 that was it, thanks!
Status: Issue closed
|
SharePoint/sp-dev-docs | 547620484 | Title: The HeaderlessSearchResults ClientSidePageLayoutType fails and can't be used
Question:
username_0: #### Category
- [ ] Question
- [ ] Typo
- [x] Bug
- [ ] Additional article idea
#### Expected or Desired Behavior
Using this tutorial : https://docs.microsoft.com/en-us/sharepoint/dev/spfx/building-search-extensions, I should be able to assign a page as my search center and change its type to see the headerless search results page layout type.
#### Observed Behavior
I can use PnP PowerShell to change the LayoutType using `Set-PnPClientSidePage -Identity Results.aspx -LayoutType HeaderlessSearchResults` and it goes smoothly. As soon as I access my page though, I get a server error displayes in the SharePoint Error page :

#### Steps to Reproduce
1. Create a new page in your Site Pages library
2. Make it the Search Center URL
3. Assign the HeaderlessSearchResults layout type using `Set-PnPClientSidePage -Identity Results.aspx -LayoutType HeaderlessSearchResults`
4. Navigate to this page
Answers:
username_1: Hey Sebastien, can you send me the Tenant URL? I'm trying to look up the error, but I need to start with the tenant.
username_0: Tenant : username_0365
CorrelationId : 31da299f-807d-a000-a089-ea1fb055a42a
username_2: I have the same issue, with the same error message.
username_1: OK, tracked down the issue. A flight has not finished rolling to 100% of first release / targeted release. We're pushing that along now. I'll post back here when it hits 100% of TR.
username_0: It's resolved on Targeted Release for me as well. I agree with you @username_2, this should be seamless and we should not see the flickering effect (or, as @username_1 likes to say, the rabbit on caffeine). This is still beta / early code, let's hope it gets better for GA!
username_3: I had a different issue, also related to HeaderlessSearchResults, and this page helped, so I'm posting it here in case it is useful. @username_1's suggestion above to look for the error code helped, but I had to use Fiddler to find the relevant .js file. In the end, my issue was that I hadn't yet set the Search Results Page for the site collection. I've blogged more details, in case they are of use to anyone, at https://hilton.giesenow.com/2020-04-29-headerlesssearchresults-cant-show-this-page-with-its-current-layout |
spiral-project/ihatemoney | 298104545 | Title: Import date as well as bill date
Question:
username_0: Hi,
There is currently a single date field on bills, reflecting the date of the bill.
It is usually helpful to have the date of addition as well. This is especially useful when adding bills from long ago: the bill date can reflect the real date of the expense while still leaving a way to sort by, and see, the latest additions.
What do you think about adding a new date field on the bill to store the date of the last update? I can write a PR for this if you are interested :)
Thanks,
Answers:
username_1: Hi @username_0, if you have a use case where this new date field would be useful, go ahead!
username_0: Hi,
I made a basic PR to propose it and discuss it a bit. My typical use case is that I'd like the date of the bill to be the real date, that is the date I paid on (for easy matching with my bank account for instance).
We sometimes tend to enter expenses a long time after we actually paid them (typically adding a batch of utility bills at the same time, covering the past months), and then it is super difficult for the others to see that I indeed added something. Being able to filter on `imported_date` would put these bills at the top, while keeping the correct date for the expense.
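To make it concrete, here is a rough, hypothetical sketch of what the extra column could look like (plain SQLAlchemy for illustration only; the project's actual models and field names may differ):

```python
from datetime import datetime

from sqlalchemy import Column, Date, DateTime, Integer, create_engine
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Bill(Base):
    __tablename__ = "bill"

    id = Column(Integer, primary_key=True)
    # Date the expense actually happened; shown and edited by users,
    # e.g. to match a bank statement.
    date = Column(Date)
    # Date the bill was entered into the project; set once at insert time
    # and never edited, so recent additions can be listed even for old expenses.
    creation_date = Column(DateTime, default=datetime.utcnow)


# Minimal usage: create the schema in an in-memory SQLite database.
engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
```

The key point is that the new column is filled automatically and never touched afterwards, while the existing bill date stays fully user-controlled.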
Status: Issue closed
|
aspnet/PlatformAbstractions | 148123385 | Title: Switch to using System.Runtime.InteropServices.RuntimeInformation where appropriate
Question:
username_0: (this can be done in RTM)
/cc @username_3
Answers:
username_1: @username_4 can you figure out what we need to do here?
username_2: We can't do anything until we get new BCL packages.
username_1: Ok, hopefully today.
username_3: This should be doable now.
username_3: @username_4 unblocked?
username_4: Nope, we need to get the RC3 System.* packages
username_5: This is the cause of https://github.com/aspnet/EntityFramework/issues/5483
username_4: OSVersion isn't supported yet https://github.com/dotnet/corefx/issues/4741
RuntimeType does not exist in RuntimeInformation; we use it to test for CLR, CoreCLR, or Mono.
username_6: @username_4 where do we use those two properties?
username_4: Testing, EntityFramework, MusicStore, Performance, Hosting
username_4: Should I leave the spots that use OsVersion and RuntimeType and remove the references to the other types on the RuntimeEnvironment type?
username_7: :shipit:
username_4: Done, will make an issue in Common for removing the RuntimeEnvironment.Sources package once RuntimeInformation has more features.
Status: Issue closed
|
timesler/facenet-pytorch | 502583542 | Title: MTCNN batching bug
Question:
username_0: When inputting a batch of 1280x720 images to MTCNN, the PIL images have a width of 1280 and a height of 720. However, after converting the images to tensors in the detect_face function on line 16, wo is 720 and ho is 1280, i.e. the two are flipped.
You then proceed to concatenate the batch along the shorter side, in my case (batch of 16) into a [1, 3, 11520, 1280] tensor. In my view that is an arbitrary decision, and it does not really matter which side is treated as height or width.
However, when splitting the detected boxes back into the batch on lines 118-125, you choose the correct axis but the incorrect dimension: you pick max(wo, ho) where it in fact needs to be min(wo, ho), since you previously chose to concatenate along the shorter side.
I don't really have a problem with the mixed H/W naming, as I know it is hard to stay consistent while working with PyTorch, cv2, and PIL, but it did make the bug harder to find.
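To illustrate the point, here is a rough, hypothetical sketch of the splitting step (this is not the library's actual code; the helper and its signature are made up, only the min-vs-max choice is what matters):

```python
import numpy as np


def split_boxes_per_image(all_boxes, batch_size, wo, ho):
    """Re-assign boxes detected on the stacked batch back to their source images.

    The images were concatenated along their *shorter* side, so that side is the
    offset between consecutive images -- hence min(wo, ho), not max(wo, ho).
    """
    stride = min(wo, ho)
    per_image = [[] for _ in range(batch_size)]
    for box in all_boxes:                                  # box = [x1, y1, x2, y2, score]
        idx = min(int(box[1] // stride), batch_size - 1)   # which image the box came from
        shifted = list(box)
        shifted[1] -= idx * stride                         # move the y-coordinates back into
        shifted[3] -= idx * stride                         # the coordinate frame of image idx
        per_image[idx].append(shifted)
    return [np.array(b) if b else np.zeros((0, 5)) for b in per_image]
```

With my 1280x720 images the stride is 720, which lines up with the [1, 3, 11520, 1280] stacked tensor above; using max(wo, ho) = 1280 instead assigns boxes to the wrong images.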
Answers:
username_1: Thanks for reporting this @username_0, I will investigate and push a fix soon.
username_1: Fixed by #32
Status: Issue closed
|
MicrosoftDocs/WSL | 326012631 | Title: Incorrect 'Publish Date'
Question:
username_0: As of 5/24/2018, the publish date of this article is '08/22/2018'. This means either this article is from the future, or the code that writes the published date flubbed up during a leap second :)
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: f2668f02-b96f-7f16-6fc5-950a1e5f86b9
* Version Independent ID: 75f3a2fb-99bd-0161-28ba-5c86e917f4d5
* Content: [Install the Linux Subsystem on Windows 10](https://docs.microsoft.com/en-us/windows/wsl/install-win10#feedback)
* Content Source: [WSL/install-win10.md](https://github.com/MicrosoftDocs/WSL/blob/live/WSL/install-win10.md)
* Service: **windows-subsystem-for-linux**
* Product: **windows-subsystem-for-linux**
* GitHub Login: @scooley
* Microsoft Alias: **scooley**
Status: Issue closed |
PaddlePaddle/Paddle | 618793572 | Title: AssertionError: Not compiled with CUDA
Question:
username_0: - Version / environment info:
1) PaddlePaddle version: PaddlePaddle 1.7.1, PaddleHub 1.6.2
4) System environment: Windows 10, Python 3.7
- Training info:
1) Single machine, single card
```
Traceback (most recent call last):
  File "F:/学习资料/计算机/大二/下/人工智能/Text_Analysis/loadModule.py", line 70, in <module>
    metrics_choices=["f1"])
  File "E:\software\program\Python37\lib\site-packages\paddlehub\finetune\task\classifier_task.py", line 179, in __init__
    metrics_choices=metrics_choices)
  File "E:\software\program\Python37\lib\site-packages\paddlehub\finetune\task\classifier_task.py", line 49, in __init__
    metrics_choices=metrics_choices)
  File "E:\software\program\Python37\lib\site-packages\paddlehub\finetune\task\base_task.py", line 311, in __init__
    self.place = self.places[0]
  File "E:\software\program\Python37\lib\site-packages\paddlehub\finetune\task\base_task.py", line 450, in places
    _places = fluid.framework.cuda_places()
  File "E:\software\program\Python37\lib\site-packages\paddle\fluid\framework.py", line 314, in cuda_places
    "Not compiled with CUDA"
AssertionError: Not compiled with CUDA
```
Answers:
username_1: This error occurs because the installed paddle package was not built with CUDA support.
You can refer to
https://www.paddlepaddle.org.cn/install/quick
and choose to install a version with GPU support.
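Alternatively, you can keep the CPU-only build and avoid requesting CUDA places at all. A minimal sketch of guarding against this at the top of a fine-tune script (the RunConfig values besides `use_cuda` are just illustrative placeholders):

```python
import paddle.fluid as fluid
import paddlehub as hub

# The CPU-only wheel reports False here; only request GPU places when the
# installed paddle build was actually compiled with CUDA.
use_cuda = fluid.is_compiled_with_cuda()

config = hub.RunConfig(
    use_cuda=use_cuda,          # with False, PaddleHub sticks to CPU places
    num_epoch=1,                # illustrative values, adjust to your task
    batch_size=32,
    checkpoint_dir="hub_finetune_ckpt",
)
```

Passing this config to the classifier task should prevent the `cuda_places()` assertion seen in the traceback above when running on a machine without a GPU-enabled paddle build.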