repo_name | issue_id | text |
---|---|---|
peak/s5cmd | 1177780908 | Title: Installation on OSX using Homebrew
Question:
username_0: Hi there! I'm trying to install the s5cmd tool using Homebrew, but it doesn't work.
```
➜ ~ brew tap peak/s5cmd https://github.com/peak/s5cmd
==> Tapping peak/s5cmd
Cloning into '/usr/local/Homebrew/Library/Taps/peak/homebrew-s5cmd'...
remote: Enumerating objects: 10403, done.
remote: Counting objects: 100% (1528/1528), done.
remote: Compressing objects: 100% (779/779), done.
remote: Total 10403 (delta 959), reused 1120 (delta 723), pack-reused 8875
Receiving objects: 100% (10403/10403), 21.66 MiB | 2.03 MiB/s, done.
Resolving deltas: 100% (4816/4816), done.
Error: Invalid formula: /usr/local/Homebrew/Library/Taps/peak/homebrew-s5cmd/Formula/s5cmd.rb
s5cmd: Calling bottle :unneeded is disabled! There is no replacement.
Please report this issue to the peak/s5cmd tap (not Homebrew/brew or Homebrew/core):
/usr/local/Homebrew/Library/Taps/peak/homebrew-s5cmd/Formula/s5cmd.rb:9
Error: Cannot tap peak/s5cmd: invalid syntax in tap!
```
Is there some solution for homebrew installation?
Status: Issue closed
Answers:
username_1: Duplicate of #374 |
Vendic/EAVCleaner | 579191487 | Title: There are no commands defined in the "eav:media" namespace
Question:
username_0: Hi there,
When I try to run php bin/magento eav:media:remove-unused
I get the error There are no commands defined in the "eav:media" namespace.
Could you help? I know the script works as I have run it in the past but I needed to restore to a previous backup and now I can't get it to run.
Best regards,
Dave
Answers:
username_1: @username_0 i've tested this on a fresh install of Magento 2.3.4 and all seems fine. Maybe you didn't install the module correctly? Is the module enabled (`module:status`)? Did you try running `setup:upgrade` and `setup:di:compile`?
username_0: Thanks! After running those commands and messing around a little it did eventually work on my test server, although I'm not sure what was causing it or what fixed it. I'll try again in the next few days on the main instance and let you know if I get stuck again.
Thanks again for your help. Much appreciated, and really useful script!
username_1: Okay great!
Status: Issue closed
|
bestguy/sveltestrap | 577321530 | Title: No data binding for component <Input type="select">
Question:
username_0: Hi,
Data binding does not work with component **Input type="select"**.
Here is an example showing the issue:
https://svelte.dev/repl/f7bf7a00940e428c85767bf6f0fc78d4?version=3.19.2
```
<script>
import { Input } from 'sveltestrap';
let selected = 'second';
</script>
<h1>SvelteStrap "select" data binding issue</h1>
<h2><strong>Bound variable value:</strong> {selected}</h2>
<h3>SvelteStrap select (<Input type="select">)</h3>
<p>Not working: "Select" does not update the bound variable value and is not correctly initialized</p>
<Input type="select" id="select_component" name="select_component" bind:value={selected}>
<option value="first">First item</option>
<option value="second">Item that should be selected as initial value</option>
<option value="last">Last item</option>
</Input>
<h3>Regular select (<select>)</h3>
<p>Working: "Select" correctly updates the bound variable value and is correctly initialized</p>
<select id="select_html" name="select_html" bind:value={selected}>
<option value="first">First item</option>
<option value="second">Item that should be selected as initial value</option>
<option value="last">Last item</option>
</select>
```
Looking at the SvelteStrap code I noticed that `bind:value` is missing in the implementation of the component.
https://github.com/username_1/sveltestrap/blob/5dff47559e4101d1e38302f01b4253651196cd22/src/Input.svelte#L326
Thank you for sharing SvelteStrap ;)
Regards
Answers:
username_1: Hi @username_0 ,
This should be corrected with #133 , released in [email protected]
Can you please confirm and update this issue?
username_0: Hi @username_1 ,
The data binding works fine now.
Thank you for very fast support ;)
Best regards
PS: Got this warning in console `<Input> was created without expected prop 'readonly'`. Is `readonly` a mandatory attribute?
username_1: Weird, not sure! Thanks for the heads up, will look
username_2: There is one more thing: if the value is `undefined` initially, the `Input` component (of type `select`) is not picking up the first value. I can confirm this works as expected if the HTML `select` element is used instead.
Status: Issue closed
username_1: Looks to be working correctly in v5: https://svelte.dev/repl/5bcf210db26e42fda6faa0b31beab47a?version=3.38.3
Please comment if still seeing issues. |
RT-Thread/rt-thread | 482120104 | Title: SMP
Question:
username_0: With SMP enabled in the K210 BSP, using the AT ESP8266 with UART interrupt reception and polled transmission, the WiFi module often ends up stuck waiting for data; analysis showed that the ">" reply was not being received correctly. After turning SMP off, reception works normally. So I suspect that SMP scheduling is interfering with UART interrupt reception?
Answers:
username_1: Do you have any more logs?
username_0: Which logs do you need? I added some while analyzing this issue. When communication goes wrong, entering the AT client and typing AT commands gets no response from the ESP8266 module; it has to be power-cycled. I tried changing the baud rate and found that 115200 is somewhat more stable than 921600.
username_1: 921600 is a baud rate high enough that it requires a more careful driver design and implementation in itself. When using 115200, is it basically problem-free?
username_0: 115200 is basically fine. For higher baud rates, what aspects should that more careful driver design take into account?
username_2: Could you share a simplified project so we can reproduce it on our side and quickly pinpoint the problem?
Status: Issue closed
|
neighbourhoodie/voice-of-interconnect | 221072762 | Title: Fix issues with deploy-to-bluemix button
Question:
username_0: Follow up for https://github.com/neighbourhoodie/voice-of-interconnect/pull/84#issuecomment-287493941
* Build stage is working 🎉
* Region is specified in the `pipeline.yml` file, need to look and see if I can use an environment variable for this
* It's still installing npm dependencies redundantly in Bluemix (even though I delete the `.cfignore` file as part of the build) ☹️
* Still need to make `dbUrl` work
* Still need to add more memory in `manifest.yml`
* Still need to add the `FFMPEG_PATH` environment variable to `manifest.yml`
* Still need to add the `NPM_CONFIG_PRODUCTION` environment variable to `manifest.yml` |
zam264/CommonSquirrel | 825266276 | Title: Achievement unlock requirement validation issue
Question:
username_0: I unlocked the 100ft achievement with only 84.8ft traveled. There could be a validation issue here, or it could be adding my new high score of 58 to 84.8 to get > 100. Not sure, haven't looked at the code yet.
<issue_closed>
Status: Issue closed |
apache/pulsar-helm-chart | 1180921333 | Title: installation problem with kubernetes 1.22.5 and cert-manager 1.5
Question:
username_0: **installation problem with kubernetes 1.22.5 and cert-manager 1.5**
```
Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: error validating "": error validating data: [ValidationError(Certificate.spec): unknown field "keyAlgorithm" in io.cert-manager.v1.Certificate.
spec, ValidationError(Certificate.spec): unknown field "keyEncoding" in io.cert-manager.v1.Certificate.spec, ValidationError(Certificate.spec): unknown field "keySize" in io.cert-manager.v1.Certificate.spec, ValidationError(Certi
ficate.spec): unknown field "organization" in io.cert-manager.v1.Certificate.spec]
```
**To Reproduce**
Steps to reproduce the behavior:
1. create a cluster with kubernetes version higher than 1.21
2. install cert manager
3. enabled tls section in values
```yaml
## TLS
## templates/tls-certs.yaml
##
## The chart is using cert-manager for provisioning TLS certs for
## brokers and proxies.
tls:
enabled: true
ca_suffix: ca-tls
# common settings for generating certs
common:
# 90d
duration: 2160h
# 15d
renewBefore: 360h
organization:
- pulsar
keySize: 4096
keyAlgorithm: rsa
keyEncoding: pkcs8
```
4. install pulsar
**to fix**
Modify the template `tls-certs-internal.yaml`:
change all `organization` sections
```yaml
organization:
{{ toYaml .Values.tls.common.organization | indent 2 }}
```
to
```yaml
subject:
organizations: {{ .Values.tls.common.organization }}
```
change all keySize, keyAlgorithm and keyEncoding
```yaml
keySize: {{ .Values.tls.common.keySize }}
keyAlgorithm: {{ .Values.tls.common.keyAlgorithm }}
keyEncoding: {{ .Values.tls.common.keyEncoding }}
```
to
```yaml
[Truncated]
usages:
- server auth
- client auth
secretName: "{{ .Release.Name }}-{{ .Values.tls.zookeeper.cert_name }}"
duration: "{{ .Values.tls.common.duration }}"
renewBefore: "{{ .Values.tls.common.renewBefore }}"
subject:
organizations: {{ .Values.tls.common.organization }}
# The use of the common name field has been deprecated since 2000 and is
# discouraged from being used.
commonName: "{{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }}"
isCA: false
privateKey:
size: {{ .Values.tls.common.keySize }}
algorithm: {{ .Values.tls.common.keyAlgorithm }}
encoding: {{ .Values.tls.common.keyEncoding }}
usages:
- server auth
- client auth
``` |
dolittle/Bifrost | 18375130 | Title: Provide a set of reusable Mocks for Bifrost JavaScript artifacts such as commands, queries and other things
Question:
username_0: Should basically be shells supporting the ability to check whether or not a call is getting done.
Configuration of expectations and then the ability to verify the expectations.
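A minimal sketch of that expectation/verify pattern (Python purely for illustration; the actual mocks would be JavaScript shells for Bifrost commands and queries, and all names here are stand-ins):
```python
class MockArtifact:
    def __init__(self):
        self._expected = set()
        self._calls = []

    def expect(self, name):
        self._expected.add(name)     # configure an expectation

    def record(self, name):
        self._calls.append(name)     # the shell notes each call made against it

    def verify(self):
        missing = self._expected - set(self._calls)
        assert not missing, "expected calls never made: %s" % sorted(missing)

mock = MockArtifact()
mock.expect("execute")
mock.record("execute")
mock.verify()  # passes; raises if "execute" had never been called
```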
Thanks to @jarlef for the idea and input into it!<issue_closed>
Status: Issue closed |
luciencd/spoiless | 178423756 | Title: User Authentication.
Question:
username_0: Due to my complete inability to write secure websites, I'd rather have someone else or something else take care of my Authentication.
Heard Oauth is good and i can assume google authentication would be useful, but main thing is this:
I don't know anything about it.
When you sign into my application using google, does it create a new account on my database? Where does the password go?
What gives me permission to access a user's data in my database if I don't have their password to compare it to?
Would I pass the username and password into a REST API which then sends back True or False depending on whether you entered it correctly or not?
If that's the case, won't I have the actual password in order to send it (at least the front end would)? I guess you would need to have a JavaScript function send it to the backend via an AJAX request.
Someone help. |
wangqiangneu/MT-PaperReading | 523274833 | Title: 19-EMNLP-Exploiting Monolingual Data at Scale for Neural Machine Translation
Question:
username_0: ## 简介
利用monolingual data做数据增强。不仅仅用target monolingual,也证明了source monolingual有帮助。
* 方法
- 类似BT,对source mono和target mono分别用已有的系统(文章里是`WMT`,是双语数据的一小部分)翻译,得到(x, BT_x), (BT_y, y)
- 对(x, BT_x) + (BT_y, y)的`源语端`加noise, 再+已有的全部双语(x*, y*),一起训练一个模型M
- 在M的基础上finetune,finetune的数据集包括全部双语数据(x*, y*),以及**不加noise**的(x, BT_x)和(BT_y, y),但是注意:这块BT是用`WMTPC`的双语数据又训练了一个模型做的BT,**不是最一开始用`WMT`的系统**。另外,finetune的BT的数据量小一些,文章里是noise中数据量的1/3。这块的理解是,finetune的数据跟之前M训练的数据差异大一点效果更好
* 有意思的点
- 单纯灌source monolingual不好使,加target monolingual数据量足够大后也开始掉(e.g. 比bitext比例大很多),但是用noise training的方式,source mono + target mono都能随着数据量的增加稳定提高。我能理解对source加noise是让encoder提取特征的时候更robust
- finetune那块比较tricky,但是仔细想想似乎也有道理
- single model就已经比WMT 18 ensemble的结果好了,还是比较狠的,量大出奇迹
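To make the recipe above concrete, here is a rough sketch of the data assembly (all names are placeholders and `add_noise` is a stub; this is an illustration, not the paper's code):
```python
def add_noise(sentence):
    # stand-in for the paper's noising (token drops, swaps, ...)
    return sentence

bitext = [("x*", "y*")]    # existing bilingual pairs
fwd = [("x", "BT_x")]      # source mono paired with its translation
bwd = [("BT_y", "y")]      # back-translated source paired with target mono

# data for training M: noise the source side of both synthetic sets
train_M = bitext \
    + [(add_noise(x), bt_x) for x, bt_x in fwd] \
    + [(add_noise(bt_y), y) for bt_y, y in bwd]

# finetuning data: bitext plus *clean* synthetic pairs (regenerated with
# the WMTPC-trained system, and only about 1/3 as much)
finetune = bitext + fwd + bwd
```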
## Paper info
* Author: MSRA
* [Paper](https://www.aclweb.org/anthology/D19-1430.pdf)
## Takeaways
* One feeling I have is that the benefit of source mono can be understood like this: the targets it provides are relatively easier to learn (compared with the targets in target mono or in the bitext), because, noise aside, using source mono is essentially distillation. From that angle, does learning distillation together with the real data distribution help to some degree? (learning the real distribution directly is not easy, even for transformer-big)
* Predictably, competing on data volume will be standard practice at the next WMT.
* Although I have a feeling the paper's original motivation was probably just the WMT evaluation: building on the BT-at-scale paper by continuing to pile up source-side monolingual data... |
pmem/rpma | 783128073 | Title: Let the server access connection private data before the connection from a client is established
Question:
username_0: Currently, the application running the server side (ep) does not have access to the connection's private data until the connection is established.
As a result, the application has no way to tune the connection environment (memory regions, etc.) based on the peer application's data.
rpma_conn_private_data is already a part of rpma_con_req, but no interface makes it available to the application.
The proposal of the solution has been drafted in #646.
The final solution should cover UT and an example.
Answers:
username_1: Ref: #911
Status: Issue closed
|
bitfinexcom/bitfinex-api-node | 215312569 | Title: RangeError: out of range index
Question:
username_0: I'm getting this error but I can't understand the problem, do you have any hint?
```
RangeError: out of range index
at RangeError (native)
at fastCopy (/home/xxx/node_modules/bitfinex-api-node/node_modules/ws/lib/Receiver.js:386:24)
at Receiver.add (/home/xxx/node_modules/bitfinex-api-node/node_modules/ws/lib/Receiver.js:86:3)
at TLSSocket.realHandler (/home/xxx/node_modules/bitfinex-api-node/node_modules/ws/lib/WebSocket.js:800:20)
at emitOne (events.js:96:13)
at TLSSocket.emit (events.js:188:7)
at readableAddChunk (_stream_readable.js:176:18)
at TLSSocket.Readable.push (_stream_readable.js:134:10)
at TLSWrap.onread (net.js:543:20)
/home/xxx/node_modules/bitfinex-api-node/node_modules/ws/lib/Receiver.js:386
default: srcBuffer.copy(dstBuffer, dstOffset, 0, length); break;
```
Answers:
username_1: Hi @username_0, thanks for reporting!
I was not able to reproduce the issue.
Can you provide a code example how the error happens?
username_0: I'm closing this because my app doesn't suffer from it anymore, but I don't remember which module it was that I updated to solve the issue. (Or maybe I updated Node.js? Sorry, I don't remember.)
Status: Issue closed
username_2: I'm getting the same error. I'll see if I can get you a reproducible test case.
username_1: @username_2 thank you.
can you report which channels you subscribe? do you use the v2 API?
username_0: My code was something like this:
```
var bitfinexWebsocket = new bitfinexApiNode.WS(),
Ticker = mongoose.model('Ticker');
bitfinexWebsocket.on('open', function(err) {
bitfinexWebsocket.subscribeTicker();
});
bitfinexWebsocket.on('ticker', function(pair, data) {
try {
var now = moment.utc().format(),
ticker = {
code: 'BTC-USD',
price: parseFloat(data.lastPrice).toFixed(2),
high: parseFloat(data.high).toFixed(2),
low: parseFloat(data.low).toFixed(2),
date: now
};
console.log(chalk.green('Got ticker data at ' + now));
console.log(chalk.white(JSON.stringify(data)));
Ticker.findOneAndUpdate(ticker, ticker, {
upsert: true,
new: true
}, function(err, newTicker) {
if (err) return console.log(chalk.red(err));
});
} catch (e) {
if (e) console.log(chalk.red(e));
failure++;
}
});
bitfinexWebsocket.on('error', function(err) {
console.log(chalk.red('BITFINEX ERR', err));
return;
});
```
username_1: After some investigation i found out that this is related to the websocket library we use. It happens in relation to certain node releases. See https://github.com/websockets/ws/issues/778
we will publish a new version soon, in the meantime you can try our master branch directly:
```
npm install https://github.com/bitfinexcom/bitfinex-api-node
```
Thanks for your patience!
username_2: @username_1 Thanks! And I was using v2.
Status: Issue closed
|
dotnet/orleans | 142682006 | Title: a possible bug in AzureGatewayListProvider.cs
Question:
username_0: I was just browsing the code and saw [this](https://github.com/dotnet/orleans/blob/master/src/OrleansAzureUtils/Storage/AzureGatewayListProvider.cs#L28). it doesn't lock anything, since we're returning a task. It should either be an async compatible lock or removed altogether.
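For illustration, here is a rough Python analogue of the pattern being described (the real code is C#; the names below are stand-ins): the lock guards only the creation of the asynchronous work, not the work itself.
```python
import threading

_lock = threading.Lock()

def _fetch_gateways():
    return ["gw1", "gw2"]  # stand-in for the real gateway lookup

def get_gateways_async():
    with _lock:
        worker = threading.Thread(target=_fetch_gateways)
        worker.start()   # the lock only covers starting the work...
        return worker    # ...which keeps running after the lock is released
```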
Answers:
username_1: Indeed looks like a harmless bug - the lock doesn't protect anything.
username_2: Fixed via #1602.
Status: Issue closed
|
kalexmills/github-vet-tests-dec2020 | 758091066 | Title: kubevirt/vmctl: vendor/k8s.io/kubernetes/test/integration/scheduler/util.go; 5 LoC
Question:
username_0: [Click here to see the code in its original context.](https://github.com/kubevirt/vmctl/blob/8a4579a335b1e8c08cf00fd1596d817985bb33ad/vendor/k8s.io/kubernetes/test/integration/scheduler/util.go#L396-L400)
<details>
<summary>Click here to show the 5 line(s) of Go which triggered the analyzer.</summary>
```go
for _, taint := range taints {
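// analyzer note: &taint below takes the address of the loop variable,
// which Go reuses on every iteration; this is only safe while the
// pointer does not outlive the current iteration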
if !taintutils.TaintExists(node.Spec.Taints, &taint) {
return false, nil
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 8a4579a335b1e8c08cf00fd1596d817985bb33ad<issue_closed>
Status: Issue closed |
treeform/fidget | 761137246 | Title: fidget causes `nim-lang/Nim` packages CI to fail
Question:
username_0: See https://github.com/nim-lang/Nim/commit/73299b048f57838d83a3a8159909386a0905e553 and [the log](https://github.com/nim-lang/Nim/runs/1529569736#step:11:531):
```
Compiling /home/runner/work/Nim/Nim/pkgstemp/fidget/tests/test (from package fidget) using c backend
/home/runner/.nimble/pkgs/typography-0.6.0/typography/font.nim(25, 22) Error: undeclared identifier: 'Segment'
```
It looks like this is due to a recent change in https://github.com/username_1/bumpy/ or https://github.com/username_1/pixie/.
I'll ping @username_4 here, since they aren't watching this repo and it looks like they might also know how to fix it.
Answers:
username_1: I think I fixed this. CI is green now. Please confirm in the Nim repo.
username_2: see https://github.com/nim-lang/Nim/pull/16319
username_3: before closing this, would be worth investigating why CI didn't catch this
username_4: @username_3 CI doesn't run across repos. The issue was typography needed a new release to depend on updated repos.
Status: Issue closed
|
seqan/seqan | 219234210 | Title: YARA Mapper to output CIGAR strings for secondary alignments
Question:
username_0: I am using SeqAn 2.3.2 with the YARA mapper version 0.9.10 [d80e19a]. I use the YARA mapper to find all occurrences of a query sequence in a library of around 6 million other sequences. Unfortunately all secondary alignments are lacking the CIGAR string, which would be very useful to have for all alignments, not just the primary one. Is that possible or could it be added?
Answers:
username_1: @esiragusa Can you answer this?
username_2: One particular situation where this creates a problem is when you use samtools to processes the output of yara. Samtools raises a warning that says `mapped sequence without CIGAR` and sets the `read unmapped` flag. So the secondary alignments records are not preserved after any kind of processing using samtools.
username_2: This is not easily possible with the current design of Yara. Alignments, which are necessary for computing CIGAR strings, are computed for primary matches only. It was decided against it mainly because:
- secondary alignments are rarely interesting,
- computing alignments for all possible mapping locations is costly, and
- CIGARs do not contain enough information to discriminate among alignments. Instead, one could use the number of mismatches provided by the `NM:i` tag.
Nevertheless, it could be implemented as an optional feature. We welcome pull requests for that.
username_2: @username_1 @h-2 I will backlog the issue now.
username_3: I have submitted PR #2309, which fixes this issue.
username_3: This is fixed now and can be closed
Status: Issue closed
|
killbill/killbill | 103362011 | Title: JAXRS Metrics are incorrect sometimes
Question:
username_0: We rely on the `@Timed` annotation to automatically gather metrics from requests.
In most cases, this is very convenient and works well, but there are some situations where this breaks:
1. Multi-purpose API calls like [createComboPayment](https://github.com/killbill/killbill/blob/master/jaxrs/src/main/java/org/killbill/billing/jaxrs/resources/PaymentResource.java#L616). This call allows doing AUTH, PURCHASE and CREDIT, and all these operations would end up within the same counter
2. Multiple paths for the same call. An example is `voidPayment ` and `voidPaymentByExternalKey `. Although the implementation is slightly different, the functionality is the same, and it *probably* makes sense to combine those.<issue_closed>
Status: Issue closed |
intel/compute-runtime | 365074966 | Title: Load gmmlib using soname
Question:
username_0: @username_1, @drprajap: FYI
Please, be aware that gmmlib got a fix for intel/gmmlib#28 which changes gmmlib install targets:
1. libigdgmm.so.<major>.<minor>.<patch>
2. libigdgmm.so.<major> -> libigdgmm.so.<major>.<minor>.<patch>
3. libigdgmm.so -> libigdgmm.so.<major>
libigdgmm.so is still present, but it is a symbolic linker name which belongs to the developer packages. Please consider switching to loading gmmlib by its soname (libigdgmm.so.<major>).
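As a cross-language illustration of the difference (Python here purely for brevity; the major version below is a placeholder), loading by soname keeps working even when only the runtime package is installed:
```python
import ctypes

# the linker name (libigdgmm.so) only exists with the developer package;
# the soname below survives on runtime-only installs
gmm = ctypes.CDLL("libigdgmm.so.9")
```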
Status: Issue closed
Answers:
username_1: Neo is using the symbol GMM_UMD_DLL defined in the file [GmmLibDllName.h](https://github.com/intel/gmmlib/blob/master/Source/GmmLib/inc/External/Common/GmmLibDllName.h). It is up to GmmLib to define the proper name of the shared library.
alibaba/canal | 213250707 | Title: Occasional batchId conflicts when running the client
Question:
username_0: batchId:2176966 is not the firstly:2176965
at com.alibaba.otter.canal.client.impl.SimpleCanalConnector.receiveMessages(SimpleCanalConnector.java:302) at com.alibaba.otter.canal.client.impl.SimpleCanalConnector.getWithoutAck(SimpleCanalConnector.java:279)
at com.alibaba.otter.canal.client.impl.SimpleCanalConnector.getWithoutAck(SimpleCanalConnector.java:252)
at com.yagou.yggx.provider.datasync.start.AbstractCanalStart.process(AbstractCanalStart.java:115)
Here is the code:
```java
int batchSize = 5 * 1024;
while (running) {
    try {
        MDC.put("destination", destination);
        connector.connect();
        connector.subscribe();
        while (running) {
            // fetch the specified number of entries
            Message message = connector.getWithoutAck(batchSize);
            long batchId = message.getId();
            int size = message.getEntries().size();
            List<Entry> entrys = message.getEntries();
            if (batchId == -1 || size == 0) {
                // try { Thread.sleep(1000);
                // } catch (InterruptedException e) { }
            } else {
                DataSourceListenerInformation.comeOutEntry(entrys);
            }
            connector.ack(batchId); // confirm consumption
        }
    } catch (Exception e) {
        // (exception handling and loop teardown were truncated in the original snippet)
    }
}
```
Status: Issue closed
Answers:
username_1: I ran into the same problem. How did you end up solving it?
username_0: One destination should map to one consumer. When multiple consumers consume the same destination at the same time, you get errors like the batchId not existing or not being the current position |
chargebee/chargebee-checkout-samples | 491993444 | Title: how to apply media query styling to input components
Question:
username_0: hi! i've checked the documentation regarding the question, but was unable to find any clue on how to achieve this. I basically want to apply some styles to the input element or the `.CardInput` class inside the iframe.
My end goal is to make the font smaller on mobile, and bigger on desktop. Is this possible w/ the react implementation?
Answers:
username_1: Hi Shierro,
It is possible to dynamically change the font size in the React implementation, although media queries are not supported.
Refer to this [example](https://github.com/chargebee/chargebee-checkout-samples/blob/master/components/react-app/src/components/example1/Example1.js): here you can modify the fontSize in the state to achieve the corresponding result.
Let me know if this solves your query.
Thanks.
- Dinesh
username_0: hi Dinesh,
thanks for taking time to reply.
yup, that works for me. is it possible to support media queries in the near future?
username_1: Support for media queries is already in our backlog. But there is no fixed ETA as of now. :)
username_1: Closing this issue. Please raise a new issue if there are any problems.
Thanks.
Status: Issue closed
|
ActivisionGameScience/assertpy | 351115543 | Title: regular expression flags on matches function
Question:
username_0: It would be very beneficial to be able to give regular expression flags like MULTILINE as input to the 'matches' function.
Answers:
username_1: @username_0 you can already specify regex flags using "inline flags" in your pattern...
For example:
```py
import re
s = """bar
foo
baz"""
# matches
a = re.search(r'^foo$', s, re.MULTILINE)
# matches, using inline flag (?m)
b = re.search(r'(?m)^foo$', s)
# no match (aka returns None)
c = re.search(r'^foo$', s)
```
Status: Issue closed
|
docksal/docksal | 250463422 | Title: docksal/magento example is broken
Question:
username_0: https://github.com/docksal/magento
```
$ fin init
Step 1 Recreating services...
Removing containers...
Stopping magento_web_1 ... done
Stopping magento_cli_1 ... done
Stopping magento_db_1 ... done
Removing magento_web_1 ... done
Removing magento_cli_1 ... done
Removing magento_db_1 ... done
Removing network magento_default
Removing volume magento_project_root
Volume docksal_ssh_agent is external, skipping
Starting services...
Creating network "magento_default" with the default driver
Creating volume "magento_project_root" with local driver
Creating magento_db_1 ...
Creating magento_db_1
Creating magento_cli_1 ...
Creating magento_cli_1 ... done
Creating magento_db_1 ... done
Creating magento_web_1 ... done
Waiting for magento_cli_1 to become ready...
Connected vhost-proxy to "magento_default" network.
Waiting 10s for MySQL to initialize...
Step 2 Installing site...
[InvalidArgumentException]
There are no commands defined in the "setup" namespace.
```
Note: https://github.com/docksal/magento-demo installs just fine. |
ArctosDB/arctos | 298725564 | Title: Wildcard search on part barcode to find records that are not NULL for part barcode
Question:
username_0: It would be nice to be able to do a wildcard type search that returns all records that have any non-NULL value in their parts barcodes field(s). We have thousands of records without barcodes (from a big data import) and we've been slowly finding these, editing them, adding barcodes etc. It's hard to keep track of this progress, and manage these records because there's no easy way to summarize # of records with or without barcodes.
Status: Issue closed
Answers:
username_1: added NULL option to part barcode |
realbigplugins/theme2016 | 191311423 | Title: Undefined offset error showing up on frontend
Question:
username_0: https://realbigplugins.com/docs/learndash-gradebook/#learndash-gradebook
Answers:
username_1: Given the section the error is stemming from, I'm thinking it is related to the capabilities check for showing the edit links.
```
// edit_post breaks down to edit_posts, edit_published_posts, or
// edit_others_posts
case 'edit_post':
case 'edit_page':
$post = get_post( $args[0] );
if ( ! $post ) {
$caps[] = 'do_not_allow';
break;
}
if ( 'revision' == $post->post_type ) {
$post = get_post( $post->post_parent );
if ( ! $post ) {
$caps[] = 'do_not_allow';
break;
}
}
``` |
belgattitude/soluble-japha | 192015623 | Title: Add getClassName() in Adapter
Question:
username_0: Expose the convenience method `getClassName()` in Adapter
```php
$javaString = $this->adapter->java('java.lang.String', 'Hello World');
$className = $this->adapter->getClassName($javaString);
// echo "java.lang.String"
```<issue_closed>
Status: Issue closed |
galaxyproject/training-material | 362249294 | Title: GTN - Instructor community - first meeting
Question:
username_0: Hi all,
We would like to start a group of instructors that would be interested into supporting other instructors.
One of the goal of such group could to be to discuss and collect best pedagogical and technical recommendations for Galaxy training workshops and start a handbook for training with Galaxy.
Another goal could be to start regular discussion meetings in which one or two experienced instructors could help and give advices to less experienced instructors (a bit on the same idea as the Instructor Discussion Sessions of the Carpentries).
If you are interested to participate to such group, feel free to join one of 2 planned meetings (to fit different time zones and constraints, but similar discussions):
- September 27, 3pm (UTC+2)
- [Timing for your time zone](http://arewemeetingyet.com/Berlin/2018-09-27/15:00/GTN%20-%20Instructor%20community%20meeting)
- [Hangout](https://hangouts.google.com/hangouts/_/calendar/Z2FsYXh5dW5pZnJlaWJ1cmdAZ21haWwuY29t.6cd3d5k6fihmid85fd5tejv98u?authuser=1)
- [Google event](https://calendar.google.com/event?action=TEMPLATE&tmeid=NmNkM2Q1azZmaWhtaWQ4NWZkNXRlanY5OHUgZ2FsYXh5dW5pZnJlaWJ1cmdAbQ&tmsrc=<EMAIL>)
- October 5, 11am (UTC+2)
- [Timing for your time zone](http://arewemeetingyet.com/Berlin/2018-10-05/11:00/GTN%20-%20Instructor%20community%20meeting)
- [Hangout](https://hangouts.google.com/hangouts/_/calendar/Z2FsYXh5dW5pZnJlaWJ1cmdAZ21haWwuY29t.4omjbjbgtcbnbf57n0bi1ptbj2?authuser=1)
- [Google Event](https://calendar.google.com/event?action=TEMPLATE&tmeid=NG9tamJqYmd0Y2JuYmY1N24wYmkxcHRiajIgZ2FsYXh5dW5pZnJlaWJ1cmdAbQ&tmsrc=<EMAIL>)
We will also take notes during the meeting and send the notes afterwards.
Answers:
username_0: The notes about the two meetings are available there: https://docs.google.com/document/d/14psj691fAlwpkFKKSzWRhkSTZdaiLYDEgxOD9jBpIgU/edit?usp=sharing
To summarize:
- Creation of a new topic dedicated to the instructors with
- Handbooks for instructors and workshop organizers, inspired by and extended from [the Carpentries handbook](https://docs.carpentries.org/topic_folders/hosts_instructors/index.html), with pedagogical/technical recommendations/checklist, for different targets (instructors, workshop organizers, etc)
- Develop the community of instructors
- Organization of regular online discussion meetings, where instructors share experiences from teaching and obtain information while preparing to teach, on the model of [the Carpentries Instructor Discussion Sessions](https://docs.carpentries.org/topic_folders/mentoring/discussion_session.html)
- One per month
- Different host (member of the community who facilitates a discussion session) every month to limit overload
- Not mandatory to join, office hours for experienced trainers where new trainers can ask questions
- Instructor meetings during the quarterly CoFest (not only about the training material)
- Elixir for staff exchange so less experienced instructors can join more experienced instructors and learn from them in a live setting
- Move the directory of instructors to the training website
- Create a `INSTRUCTORS.yaml` file for people who want to register
- Add a page (similar to Hall of Fame) for instructors by individual and/or organization and map?
- Display training events from the hub on the training page
- Add more notes in the tutorials: tips in the hands-ons and the instructor notes in slides
- Add post-mortems/blog post framework for after training feedbacks
Status: Issue closed
|
emberjs/ember.js | 55898710 | Title: Documentation for Ember CLI, HtmlBars and the Ember.Handlebars.Compile function
Question:
username_0: I found out that with HTMLBars and Ember CLI, using the `Ember.Handlebars.Compile` function does not work out of the box.
I got a view where I have the following code:
```
template: Ember.Handlebars.compile('<h2>{{day}}</h2>')
```
This does not work out of the box with HtmlBars. So maybe this behaviour should be documented elsewhere?
Maybe @rwjblue know where this documentation belongs?
Answers:
username_1: we don't include the template compiler in the browser by default; it is primarily meant as a build-tool step, mainly because it is slow
Status: Issue closed
username_0: @username_1 Thank you!
I think if the compatibility is not that of Ember itself, I will rework my views to use normal templates via `templateName` in my views. |
signintech/gopdf | 151095488 | Title: An issue with precisely aligning characters over a baseline
Question:
username_0: Hi, username_1,
I've encountered a problem when aligning characters of different sizes/typefaces over a same baseline. Characters do not position *accurately*.
Is there a way to precisely put arbitrary characters(of arbitrary sizes/typefaces) over a same baseline?
Maybe I'm doing something wrong? Is `TypoAscender()` the right metric in this case?
```go
import (
"github.com/username_2/gopdf"
"github.com/username_2/gopdf/fontmaker/core"
)
func getSpecTypoAsc(fontPath string, fontSize float64) float64 {
var parser core.TTFParser
parser.Parse(fontPath)
typoAsc := float64(parser.TypoAscender()) * 1000.00 / float64(parser.UnitsPerEm())
return typoAsc * fontSize / 1000.0
}
func main() {
arialPath := "./ttf/arial.ttf"
dejavuPath := "./ttf/DejaVuSerif.ttf"
pdf := gopdf.GoPdf{}
pdf.Start(gopdf.Config{Unit: "pt", PageSize: gopdf.Rect{W: 595.28, H: 841.89}})
pdf.AddPage()
pdf.AddTTFFont("arial", arialPath)
pdf.AddTTFFont("deja", dejavuPath)
pdf.Curr.X = 0
pdf.Curr.Y = 0
pdf.SetFont("arial", "", 60)
pdf.Cell(nil, "h")
pdf.SetFont("arial", "", 30)
pdf.Curr.Y = getSpecTypoAsc(arialPath, 60) - getSpecTypoAsc(arialPath, 30)
pdf.Cell(nil, "h")
pdf.SetFont("deja", "", 20)
pdf.Curr.Y = getSpecTypoAsc(arialPath, 60) - getSpecTypoAsc(dejavuPath, 20)
pdf.Cell(nil, "h")
pdf.WritePdf("baseline.pdf")
}
```

Answers:
username_1: Hi username_0, Thanks for your issue report. I will check it out.
Anyway, please replace ```pdf.Curr.X``` and ```pdf.Curr.Y``` with ```pdf.SetX()``` and ```pdf.SetY()``` accordingly, because pdf.Curr should be a private variable; I was new to Golang at the time gopdf was launched, so I forgot to correct it (I plan to clean up some legacy code soon).
However, this is not related to this issue.
username_0: Ok, I've replaced it in example to not confuse someone else.
username_0: I found a detailed article about vertical font metrics maybe it can help:
https://www.glyphsapp.com/tutorials/vertical-metrics
username_1: Hi username_0, I've fixed the character height problems. It should be correct now.
Also, I've add ```pdf.Text(txt string)``` function that renders text using Y as baseline.
```go
pdf.SetX(10)
pdf.SetY(50)
pdf.SetFont("arial", "", 60)
pdf.Text("h")
pdf.SetFont("arial", "", 30)
pdf.Text("h")
pdf.SetFont("deja", "", 20)
```
username_0: Thanks, it would significantly simplify working with text. I'll test it more closely this week. Does master now include `pdf.Text()`, kerning support, and `MeasureTextWidth()` with kerning taken into account?
username_1: Yes, it's all fixed (and merged into master).
Status: Issue closed
|
kubernetes/test-infra | 421165363 | Title: Move Prow to a repo under kubernetes-sigs
Question:
username_0: We're getting close to being ready for the move. Items we know we need to accomplish is here:
- [ ] enable seamless config movement for prow.k8s.io https://github.com/kubernetes/test-infra/pull/11781
- [ ] move remaining config files to `config/`
- [ ] ensure all references to original config locations are updated
- [ ] remove previous config entries for updating config in current location for prow.k8s.io
- [ ] move `prow/cluster` to `config/cluster`
- [ ] ensure all auto-bump etc automation around `prow/cluster` is updated
- [ ] submit KEP to create repo
/cc @username_2 @username_4 @cjwagner @username_3 @krzyzacy
Answers:
username_0: /assign
username_1: /remove-lifecycle stale
username_2: Need to see what PRs modify config.yaml and plugin.yaml and decide if we want those to go through rebase fun
username_2: /milestone v1.16
/assign
username_2: /assign @cjwagner
username_2: Rescoping based on what we thought was feasible for this quarter, moving to its own repo can be done as a followup
username_3: you actually just have to file an issue in k/org after getting consensus from the SIG on wanting it, no KEP necessary. the process is pretty lightweight. moving code is a little more involved to do cleanly.
username_4: Moving prow/cluster files into config/prow or somewhat like that sgtm
username_3: we can actually do this with much less than a KEP, SIGs can sponsor a repo without going full KEP 🙃
username_3: I don't think there's any config lingering under prow/ now. some scripts / makefile stuff.
username_5: Yeah, the config is pretty much moved out of the prow directory. What would be needed to move prow into its own repo is:
* The repo itself
* Exporting the code to the new repo
* Setting up all prow-related jobs onto the new repo
* Announcing this in case other projects reference prow code (kubetest2?)
* Updating all references outside of /prow to prow
The point that is presumably most of the work is setting up the jobs on the new repo. It would be great to make this happen though, to better separate concerns and make certain tasks like updating dependencies easier :upside_down_face:
username_2: /remove-lifecycle stale
/lifecycle frozen
/priority important-longterm
/area config
/area jobs |
doitsujin/dxvk | 983775809 | Title: Grand Theft Auto IV, missions won't load since 1.9
Question:
username_0: I get infinite loads when starting a mission. This only occurs on 1.7.3+ versions. 1.7.3 is fine. I also noticed that the game's VRAM usage jumps down to 750 mb (the value when game starts).
Didn't manage to run apitrace, yet I tried.
### Software information
GTA IV 1.0.7.0
### System information
- GPU: GTX 780 Ti
- Driver: 471.11
- Windows 10 21H1
- DXVK version: 1.9.1, 1.9, 1.8.1, 1.8
### Log files
`8268 11:24:49.208 Starting DxWrapper v1.0.6387.21
8268 11:24:49.209 To be filled by O.E.M. To be filled by O.E.M. To be filled by O.E.M. (Desktop)
8268 11:24:49.209 Intel X79 To be filled by O.E.M. (Desktop)
8268 11:24:49.209 NVIDIA GeForce GTX 780 Ti
8268 11:24:49.209 Windows 10 Enterprise 64-bit (10.0.19041)
8268 11:24:49.209 "GTAIV.exe" (PID:11084)
8268 11:24:49.210 Steam game detected!
8268 11:24:49.210 Disabling High DPI Scaling...
8268 11:24:49.210 Loaded library: user32.dll
8268 11:24:49.210 Loaded library: shcore.dll
8268 11:24:49.210 Loading 'ddraw.dll'...
8268 11:24:49.211 Hooking ddraw.dll APIs...
8268 11:24:49.211 Enabling DDrawCompat
8268 11:24:49.211 Process path: E:\Games\Grand Theft Auto IV\GTAIV.exe
8268 11:24:49.211 Loading DDrawCompat from E:\Games\Grand Theft Auto IV\dxwrapper.asi
8268 11:24:49.211 Loaded library: uxtheme.dll
8268 11:24:49.212 DDrawCompat v0.2.1 loaded successfully
8268 11:24:49.212 Enabling d3d9 wrapper
8268 11:24:49.212 Loading 'd3d9.dll'...
8268 11:24:49.212 Hooking d3d9.dll APIs...
8268 11:24:49.212 DxWrapper loaded!
8268 11:24:53.507 d9_Direct3DCreate9
8268 11:24:53.507 Redirecting 'Direct3DCreate9' ...
8268 11:24:53.552 Creating interface m_IDirect3D9Ex::m_IDirect3D9Ex(06EC0BC8)
8268 11:24:53.571 UpdatePresentParameter Setting WndProc: WND(000B07FA,grcWindow,{-8,-20,1928,1099})
8268 11:24:53.601 m_IDirect3D9Ex::CreateDevice Failed to enable AntiAliasing!
8268 11:24:53.601 UpdatePresentParameter Setting WndProc: WND(000B07FA,grcWindow,{-8,-20,1928,1099})
8268 11:24:53.605 UpdatePresentParameter Setting WndProc: WND(000B07FA,grcWindow,{-8,-20,1928,1099})
8268 11:24:53.818 Setting MultiSample 8 Quality 3
8268 11:24:53.818 Creating interface m_IDirect3DDevice9Ex::InitDirect3DDevice(06EF55D8)
8268 11:24:53.818 Creating interface m_IDirect3DSwapChain9::m_IDirect3DSwapChain9(06D08788)
8268 11:24:53.818 Creating interface m_IDirect3DVertexDeclaration9::m_IDirect3DVertexDeclaration9(06D08938)
8268 11:24:53.818 Creating interface m_IDirect3DVertexDeclaration9::m_IDirect3DVertexDeclaration9(06D08E60)
8268 11:24:53.819 Creating interface m_IDirect3DVertexShader9::m_IDirect3DVertexShader9(007272E0)
8268 11:24:53.819 Creating interface m_IDirect3DVertexShader9::m_IDirect3DVertexShader9(00727400)
8268 11:24:53.819 Creating interface m_IDirect3DVertexShader9::m_IDirect3DVertexShader9(007272C8)
8268 11:24:53.820 Creating interface m_IDirect3DPixelShader9::m_IDirect3DPixelShader9(0FA92E70)
8268 11:24:53.820 Creating interface m_IDirect3DPixelShader9::m_IDirect3DPixelShader9(0FA92CD8)
8268 11:24:53.820 Creating interface m_IDirect3DPixelShader9::m_IDirect3DPixelShader9(0FA92F48)
8268 11:24:56.015 Redirecting 'Direct3DCreate9' ...
8268 11:24:56.023 Creating interface m_IDirect3D9Ex::m_IDirect3D9Ex(22087CD0)
8268 11:24:56.108 m_IDirect3D9Ex::~m_IDirect3D9Ex(22087CD0) deleting interface!
8268 11:24:56.109 d9_Direct3DCreate9Ex
8268 11:24:56.109 Redirecting 'Direct3DCreate9Ex' ...
8268 11:24:56.116 Creating interface m_IDirect3D9Ex::m_IDirect3D9Ex(0FB07308)
[Truncated]
8268 11:24:58.294 Creating interface m_IDirect3DVertexBuffer9::m_IDirect3DVertexBuffer9(220D1668)
8268 11:24:58.296 Creating interface m_IDirect3DTexture9::m_IDirect3DTexture9(220D1698)
8268 11:24:58.296 Creating interface m_IDirect3DTexture9::m_IDirect3DTexture9(220D17D0)
8268 11:24:58.296 Creating interface m_IDirect3DTexture9::m_IDirect3DTexture9(220D1518)
8268 11:24:58.339 Creating interface m_IDirect3DSurface9::m_IDirect3DSurface9(0FAC1F10)
8268 11:24:58.778 Creating interface m_IDirect3DVertexBuffer9::m_IDirect3DVertexBuffer9(220D1560)
8268 11:24:59.244 Creating interface m_IDirect3DVertexDeclaration9::m_IDirect3DVertexDeclaration9(22760C70)
8268 11:24:59.766 Creating interface m_IDirect3DSurface9::m_IDirect3DSurface9(24DA9868)
8268 11:24:59.766 Creating interface m_IDirect3DSurface9::m_IDirect3DSurface9(24DA99F8)
8268 11:24:59.894 Creating interface m_IDirect3DIndexBuffer9::m_IDirect3DIndexBuffer9(2536DDE8)
8268 11:24:59.894 Creating interface m_IDirect3DIndexBuffer9::m_IDirect3DIndexBuffer9(2536DEC0)
8268 11:24:59.894 Creating interface m_IDirect3DIndexBuffer9::m_IDirect3DIndexBuffer9(2536DD40)
8268 11:26:22.608 Quiting DxWrapper
8268 11:26:22.608 Detaching DDrawCompat
8268 11:26:22.620 DDrawCompat detached successfully
8268 11:26:22.620 Unloading libraries...
8268 11:26:22.620 Reseting screen resolution
8268 11:26:22.957 Reseting font smoothing
8268 11:26:22.967 DxWrapper terminated!
`
Answers:
username_1: That's a known issue which happens if you alt tab at any point while playing the game. We have no idea how to fix it.
Also, that's not the DXVK log and you didn't add an apitrace either.
username_1: Duplicate of #2119
Status: Issue closed
username_0: oh, I messed up :D
Would be glad if it's of any use:
info: Game: GTAIV.exe
info: DXVK: v1.9.1
info: Found built-in config:
info: Effective configuration:
info: d3d9.customVendorId = 1002
info: dxgi.emulateUMA = True
info: Built-in extension providers:
info: Win32 WSI
info: OpenVR
info: OpenXR
info: OpenVR: could not open registry key, status 2
warn: OpenVR: Failed to locate module
info: Enabled instance extensions:
info: VK_KHR_get_surface_capabilities2
info: VK_KHR_surface
info: VK_KHR_win32_surface
warn: D3D9: VK_FORMAT_D16_UNORM_S8_UINT -> VK_FORMAT_D24_UNORM_S8_UINT
info: NVIDIA GeForce GTX 780 Ti:
info: Driver: 471.11.0
info: Vulkan: 1.2.175
info: Memory Heap[0]:
info: Size: 3029 MiB
info: Flags: 0x1
info: Memory Type[7]: Property Flags = 0x1
info: Memory Type[8]: Property Flags = 0x1
info: Memory Heap[1]:
info: Size: 6103 MiB
info: Flags: 0x0
info: Memory Type[0]: Property Flags = 0x0
info: Memory Type[1]: Property Flags = 0x0
info: Memory Type[2]: Property Flags = 0x0
info: Memory Type[3]: Property Flags = 0x0
info: Memory Type[4]: Property Flags = 0x0
info: Memory Type[5]: Property Flags = 0x0
info: Memory Type[6]: Property Flags = 0x0
info: Memory Type[9]: Property Flags = 0x6
info: Memory Type[10]: Property Flags = 0xe
warn: D3D9: VK_FORMAT_D16_UNORM_S8_UINT -> VK_FORMAT_D24_UNORM_S8_UINT
info: NVIDIA GeForce GTX 780 Ti:
info: Driver: 471.11.0
info: Vulkan: 1.2.175
info: Memory Heap[0]:
info: Size: 3029 MiB
info: Flags: 0x1
info: Memory Type[7]: Property Flags = 0x1
info: Memory Type[8]: Property Flags = 0x1
info: Memory Heap[1]:
info: Size: 6103 MiB
info: Flags: 0x0
info: Memory Type[0]: Property Flags = 0x0
info: Memory Type[1]: Property Flags = 0x0
info: Memory Type[2]: Property Flags = 0x0
info: Memory Type[3]: Property Flags = 0x0
info: Memory Type[4]: Property Flags = 0x0
info: Memory Type[5]: Property Flags = 0x0
info: Memory Type[6]: Property Flags = 0x0
[Truncated]
info: Buffer size: 160x28
info: Image count: 3
info: Exclusive FS: 0
info: D3D9DeviceEx::ResetSwapChain:
info: Requested Presentation Parameters
info: - Width: 1920
info: - Height: 1080
info: - Format: D3D9Format::A8R8G8B8
info: - Auto Depth Stencil: false
info: ^ Format: D3D9Format::D24S8
info: - Windowed: false
info: Setting display mode: 1920x1080@60
info: Presenter: Actual swap chain properties:
info: Format: VK_FORMAT_B8G8R8A8_UNORM
info: Present mode: VK_PRESENT_MODE_FIFO_KHR
info: Buffer size: 1920x1080
info: Image count: 3
info: Exclusive FS: 0
warn: D3D9DeviceEx::SetRenderState: Unhandled render state D3DRS_ADAPTIVETESS_Z
warn: D3D9DeviceEx::SetRenderState: Unhandled render state D3DRS_ADAPTIVETESS_W
username_1: Not useful unfortunately. |
zhyzhyzhy/zhyzhyzhy.github.io | 694230085 | Title: LinkedList source code analysis | LoveZhy
Question:
username_0: https://blog.lovezhy.cc/2017/12/10/LinkedList%E6%BA%90%E7%A0%81%E5%88%86%E6%9E%90/
Preface: `public class LinkedList<E> extends AbstractSequentialList<E> implements List<E>, Deque<E>, Cloneable, java.io.Serializable`. Compared with ArrayList, LinkedList additionally has AbstractSequentia |
postcss/autoprefixer | 200087562 | Title: Confusion about the right order in task runners etc
Question:
username_0: Hey there, thanks fort the awesome work.
So, since i just wanted to use gulp and the advantages of stylelint,
i decided to integrate postcss into my tasks, since we are still using sass and regarding this [article](http://julian.io/some-things-you-may-think-about-postcss-and-you-might-be-wrong/) :
i decided to keep sass in the flow as well...
now i am just a bit confused as i usually would put autoprefixer as a plugin into postcss,
but since postcss needs to apply before sass ( need the linting before the concatenation of the files,
i am afrusername_1d that sass will eliminate some autoprefixes, because of the use of mixins and functions
so i thought it has to be after sass then, what eliminates the postcss plugin logic...kinda confusing, i know..i hope its the right place to ask it :-) thats my style task from gulp:
```
// Compile sass into CSS & auto-inject into browsers
gulp.task('scss', () => {
const processors = [
stylelint(pkg.stylelint),
// Pretty reporting config
reporter({
clearMessages: true,
throwError: true,
}),
//autoprefixer({ browsers: ['last 2 version'] }), --disabled because of sass
// cssnano(), -- same here
];
gulp.src(pkg.paths.src.scss + pkg.vars.scssPattern)
.pipe($.plumber({ errorHandler: onError }))
.pipe($.sourcemaps.init())
.pipe($.postcss(processors, { syntax: scss }))
.pipe($.sass())
.pipe($.sourcemaps.write())
.pipe($.autoprefixer({
browsers: ['last 2 versions'],
cascade: false,
}))
.pipe(cleanCSS({debug: true}))
.pipe($.rename({ suffix: '.min' }))
.pipe(gulp.dest(pkg.paths.src.assets))
.pipe($.notify({
title: 'Gulp',
subtitle: 'Success!',
message: 'Scss task completed!',
sound: 'Pop',
}))
.pipe($.browserSync.stream());
});
```
Status: Issue closed
Answers:
username_1: I recommend running PostCSS twice — before and after Sass. That is a good reason to use PostCSS for preprocessing as well, to avoid these PostCSS→Sass, Sass→PostCSS transformations :D.
username_1: Technically, Autoprefixer could work on Sass sources, but it is not right way to use it. Result could be wrong in some cases.
username_0: so i would just include it like that e.g:
```
gulp.task('scss', () => {
const preProcessors = [
stylelint(pkg.stylelint),
reporter({
clearMessages: true,
throwError: true,
}),
];
const postProcessors = [
autoprefixer({ browsers: ['last 2 version'] }),
];
gulp.src(pkg.paths.src.scss + pkg.vars.scssPattern)
.pipe($.plumber({ errorHandler: onError }))
.pipe($.sourcemaps.init())
.pipe($.postcss(preProcessors, { syntax: scss }))
.pipe($.sass())
.pipe($.postcss(postProcessors))
.pipe($.sourcemaps.write())
.pipe(cleanCSS({ debug: true }))
.pipe($.rename({ suffix: '.min' }))
.pipe(gulp.dest(pkg.paths.src.assets))
.pipe($.notify({
title: 'Gulp',
subtitle: 'Success!',
message: 'Scss task completed!',
sound: 'Pop',
}))
.pipe($.browserSync.stream());
});
```
thanks a lot for ur help |
wso2/product-is | 434269912 | Title: In SCIM2 Single Attribute filtering "attribute:emails" with "operator:EW" not working when the domain is specified
Question:
username_0: The curl command [1] should filter users from all the user stores whose email address ends with "com", and it returns all the users that match the filter.
The curl command [2] or [3] should filter users from the specific domain whose email address ends with "com", but it throws an error [4] when executed.
[1] - curl -v -k --user admin:admin 'https://localhost:9443/scim2/Users?filter=emails.home+ew+test1/com'
[2] - curl -v -k --user admin:admin 'https://localhost:9443/scim2/Users?filter=emails.home+ew+test1/com'
[3]- curl -v -k --user admin:admin 'https://localhost:9443/scim2/Users?filter=emails.home+ew+com&domain=primary'
[4] - Error while filtering the users for filter with attribute name: urn:ietf:params:scim:schemas:core:2.0:User:emails.home , filter operation: ew and attribute value: test1/com.
Answers:
username_0: Attaching the domain name in front of the attribute value when the operator is EW is wrong, since results are matched from the end to the beginning of the attribute value.
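A tiny illustration of why that fails (hypothetical values; Python used only to show the string matching):
```python
email = "user@example.com"

print(email.endswith("com"))          # True  -> the EW filter should match
print(email.endswith("PRIMARY/com"))  # False -> a domain-prefixed value can never match
```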
username_1: Resolving this, as this is not a valid supported scenario per the comment above; the recommended practice for filtering users/groups for a domain needs to be documented and reported
Status: Issue closed
|
jlippold/tweakCompatible | 538135864 | Title: `CrackTool4 (iOS 12)` not working on iOS 12.4
Question:
username_0: ```
{
"packageId": "com.julioverne.cracktool4",
"action": "notworking",
"userInfo": {
"arch32": false,
"packageId": "com.julioverne.cracktool4",
"deviceId": "iPhone8,4",
"url": "http://cydia.saurik.com/package/com.julioverne.cracktool4/",
"iOSVersion": "12.4",
"packageVersionIndexed": false,
"packageName": "CrackTool4 (iOS 12)",
"category": "Utilities",
"repository": "julioverne",
"name": "CrackTool4 (iOS 12)",
"installed": "",
"packageIndexed": true,
"packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
"id": "com.julioverne.cracktool4",
"commercial": false,
"packageInstalled": false,
"tweakCompatVersion": "0.1.5",
"shortDescription": "Crack Tweaks on one click",
"latest": "4.0~beta9c",
"author": "julioverne",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "not working",
"notes": "Only worth use to crack Activator & Filza. Paid tweak worked just VoiceChanger. Other than that did nothing."
}
``` |
christianmalek/vuex-rest-api | 235021871 | Title: Issue with using vuex-rest-api as store module
Question:
username_0: See https://github.com/username_0/vuex-rest-api/issues/12
@username_1 Could you please provide a code snippet? I tried to use it as a module and it works without problems.
Answers:
username_1: @username_0 The issue has been resolved. I had a coding issue on my end. Thank you for your help.
username_0: Awesome!
Status: Issue closed
|
Ryujinx/Ryujinx-Games-List | 877596374 | Title: Lost Lands 3: The Golden Curse - 0100156014C6A000
Question:
username_0: ## Lost Lands 3: The Golden Curse
#### Game Update Version : 1.0.0
#### Current on `master` : 1.0.6864
Game loads and plays at a high FPS, but with minor graphics issues. These range from minor annoyances to potentially interfering with gameplay.
#### Hardware Specs :
##### CPU: Ryzen 7 5800X
##### GPU: NVIDIA GTX 1080
##### RAM: 32GB
#### Screenshots :





#### Log file :
[LL3TGC.log](https://github.com/Ryujinx/Ryujinx-Games-List/files/6435606/LL3TGC.log) |
soraxas/echo360 | 651500303 | Title: Error on invalid TLS certificates
Question:
username_0: The tool can't connect to a webpage with an invalid TLS certificate. I got a quick solution running for the chromium webdriver from [this post](https://stackoverflow.com/questions/24507078/how-to-deal-with-certificates-using-selenium).
Maybe this could be introduced for every driver?
Answers:
username_1: In general, you wouldn't want to connect to a webpage with an invalid TLS certificate (this is universal with almost all networking tools, though they often have a special flag that forces them to skip verification).
Are there any legit or general reason that you would want an invalid TLS certificate? (as they are necessary in modern days for a reason)
username_0: I completely agree with you there. In my case, lecture videos (for a security lecture, ironically) were only accessible through an echo360 page that lacked a proper certificate.
While it should be the standard for every webpage to have a proper certificate, the sad reality is that some still do not. And in my case I had to access the content, with or without the tool, around the invalid certificate anyway.
username_1: @username_0 Thanks for your message. Yeah, I understand your situation and I agree that it's hard to have a way around it.
I am guessing we can have a flag that skips TLS certificate checking when supplied by adding the option to the selenium driver. If you have it working would you want to make a simple PR that add such an option to the `argparse` and modify selenium accordingly? As I neither have such a site to test on nor am I inclined to work on it blindly. But I am happy to review and accept a working PR if you sned in one.
Status: Issue closed
username_0: Thank you! It might take a while until I find time to make a pull request but I will keep it in mind! |
lmaurits/BEASTling | 130027530 | Title: IDs are not unique across models
Question:
username_0: Defining two different covarion models in my configuration file, I get
```
Error 104 parsing the xml input file
IDs should be unique. Duplicate id 'covarion_alpha.s' found
Error detected about here:
<beast>
<state id='state'>
<parameter id='covarion_alpha.s' name='stateNode'>
```
IDs should always also contain the model name, I guess.
Answers:
username_1: Thanks for catching this. This kind of problem has happened before. I plan to eventually write a nice class called NameManager or something, where you can request an id for something (e.g. NameManager.getNewID("tree") for a <tree> ID) and it will give you back a guaranteed unique ID, by internally keeping track of all previously issued IDs. This will also allow a perfectly consistent naming scheme for all kinds of IDs - it's a bit of a mess at the moment.
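A minimal sketch of that idea (the interface is assumed; the eventual class may well differ): request an ID, and the manager appends a counter whenever a name has already been handed out.
```python
class NameManager:
    def __init__(self):
        self._issued = set()

    def get_new_id(self, base):
        candidate, n = base, 1
        while candidate in self._issued:
            n += 1
            candidate = "%s.%d" % (base, n)
        self._issued.add(candidate)
        return candidate

manager = NameManager()
manager.get_new_id("tree")   # -> "tree"
manager.get_new_id("tree")   # -> "tree.2", guaranteed unique
```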
However, because I want to get 1.0.0 out the door soon, for now I've just put the model name in front of the current IDs, with a colon separator, which should fix this specific issue. Can you try your configuration file again to make sure this works?
username_0: Instead of only allocating the provision of IDs, you could have a class that assembles the DAG of references and writes it out as xml in a (later possibly optimized) manner. This would also be able to check that all referenced IDs exist and enforce some consistency in how dependencies are specified, as opposed to the 145 different ways beast2 permits it.
username_1: To my surprise, I was bitten by this again yesterday when I edited tests/configs/covarion.conf to use multiple models! Apparently I didn't do a thorough enough job last time. However, it now works, as guaranteed by the passing tests, so I'm closing this.
Status: Issue closed
|
calvium/react-native-redux-connectivity | 428243295 | Title: Update AsyncStorage Access, Retrieve from @react-native-community/async-storage
Question:
username_0: Hi,
I currently have a local branch with `AsyncStorage` access updated per RN deprecating it from the core library.
Can you please grant me permission to push my local branch to update this dependency or let me know what can be done to fix it?
Thanks,
-Mazen
Answers:
username_1: Hey @username_0, are you able to open a pull request?
username_0: @username_1 no. i'm not able to push my local branch up to open a PR. do you have permissions to help?
username_2: Hi @username_0 you should be able to fork this repository, push your changes to the fork and open a pull request from there against our repo. This gist have an explanation of the process: https://gist.github.com/Chaser324/ce0505fbed06b947d962
username_0: @username_2 thank you, i will follow those steps and create a PR
Status: Issue closed
username_0: closing this issue as the [PR](https://github.com/calvium/react-native-redux-connectivity/pull/4) is now open |
erlang/otp | 1023234222 | Title: gen_udp:connect/3
Question:
username_0: **Describe the bug**
gen_udp:connect/3 is not documented but useful. After commit b784dd18ba8011bac9cdb89effea4af338bf3419 when calling gen_udp:connect/3 on a socket, it stops receiving udp packets. I'm not familiar with the code but I think it's somewhere inside https://github.com/erlang/otp/blob/master/erts/preloaded/src/prim_inet.erl#L372
Please close this bug if this is an intended behavior.
**To Reproduce**
1. Create a udp socket with gen_udp:open/2
2. Create another udp socket with gen_udp:open/2
3. Send a packet from the first socket to the second to verify that both work.
4. Call gen_udp:connect(Socket2, Addr1, Port1) where Addr1 and Port1 are the address and port of the first socket.
5. Send a packet from the first socket to the second and verify that it is not received.
**Expected behavior**
A socket should receive packets after calling gen_udp:connect/3.
**Affected versions**
OTP 24.1.2
**Additional context**
Answers:
username_0: I can't reproduce it with otp 24.1.2 running on ubuntu 20.04.
Still trying.
```erlang
{ok,S1}=gen_udp:open(10000, [{ip, {127,0,0,1}}]).
{ok,S2}=gen_udp:open(20000, [{ip, {127,0,0,1}}]).
gen_udp:send(S1,{127,0,0,1},20000,<<"test 1">>).
flush().
gen_udp:connect(S2, {127,0,0,1}, 10000).
gen_udp:send(S1,{127,0,0,1},20000,<<"test 2">>).
flush().
```
username_1: I am for documenting `gen_udp:connect/3`!
username_0: @username_1 I understand not wanting to work around a Linux kernel bug.
I need the source port to be constant because I advertise it to the client as part of the DTLS handshake. Instead of using port 0 for open, I could choose a random port in the ephemeral port range and retry on failure, but that can get inefficient when a large number of ports are already used. Is there another option?
username_1: Then I guess we will have another look at this...
username_1: @username_0: I have created a fix that it would be very nice if you could evaluate, in PR #5343
username_0: Thank you for the PR.
Does the socket backend considered stable enough to be used in production?
I've verified that the local port doesn't change after the first call to gen_udp:connect/3 with both backends:
```erlang
{ok, S} = gen_udp:open(0).
{ok, Port1} = inet:port(S).
gen_udp:connect(S, {127,0,0,1}, 10000).
{ok, Port2} = inet:port(S).
Port1 =:= Port2.
```
```erlang
{ok, S} = gen_udp:open(0, [{inet_backend, socket}]).
{ok, Port1} = inet:port(S).
gen_udp:connect(S, {127,0,0,1}, 10000).
{ok, Port2} = inet:port(S).
Port1 =:= Port2.
```
username_1: The `socket` backend for `gen_udp` is the newest one (the other one is the backend for `gen_tcp`).
I do not know how much field testing people have done. We run them in our daily builds and some test cases
simply do not work because they assume, e.g., that a socket is a port, so we may have missed failing test cases that indicate "real" problems.
Sorry that I do not have a clearer answer for you...
Status: Issue closed
|
yjs/y-websocket | 717777250 | Title: initUser: Cannot read property 'NaN' of undefined
Question:
username_0: **Checklist**
* [x] Are you reporting a bug? Use github issues for bug reports and feature requests. For general questions, please use https://discuss.yjs.dev/
* [x] Try to report your issue in the correct repository. Yjs consists of many modules. When in doubt, report it to https://github.com/yjs/yjs/issues/
**Describe the bug**
Exception `TypeError: Cannot read property 'NaN' of undefined` in the JS console (see screenshots).
**To Reproduce**
After including `setUserMapping` on the client, like so:
```javascript
const userData = new Y.PermanentUserData(ydoc);
userData.setUserMapping(ydoc, ydoc.clientID, user.id);
provider = new WebsocketProvider(
"ws://localhost:1234",
roomname,
ydoc,
{
resyncInterval: 10 * 1000,
}
);
```
I am regularly receiving this error in the console – I believe it happens when rejoining a previously joined websocket. I can't find documentation for `PermanentUserData`, so I could be using it wrong somehow? If you need additional context let me know what I can grab; it's happening enough that I can reproduce on demand.
**Screenshots**


**Environment Information**
- Browser: Chrome latest
- version: [email protected], [email protected]
Answers:
username_1: Hi @username_0
`userData.setUserMapping(ydoc, ydoc.clientID, user.id);` expects a mapped string value. Is `user.id` a string?
username_1: Oh.. I see that decoder.pos is `NaN`. I'm not sure how this can happen..
I can't reproduce the problem at the moment because the demo application is working for me.
Are you maybe connecting to a public room?
username_1: I was able to reproduce the issue. I'm working on a fix.
Status: Issue closed
username_1: The problem resulted from some refactoring I did some time ago. The DS-decoder expected a lib0/Decoder, not a Uint8Array. Not sure why typescript didn't catch it.. Thanks for noticing!
I fixed the issue in [email protected]. |
ionic-team/ionic-framework | 1010797338 | Title: bug: Ion-datetime completely broken inside ion-modal (Angular)
Question:
username_0: ### Prequisites
- [X] I have read the [Contributing Guidelines](https://github.com/ionic-team/ionic-framework/blob/main/.github/CONTRIBUTING.md#creating-an-issue).
- [X] I agree to follow the [Code of Conduct](https://ionicframework.com/code-of-conduct).
- [X] I have searched for [existing issues](https://github.com/ionic-team/ionic-framework/issues) that already report this problem, without success.
### Ionic Framework Version
- [ ] v4.x
- [ ] v5.x
- [X] v6.x
### Current Behavior
The datetime component works really well when I use it in the page directly. On the contrary, when I try to open it in an ion-modal the component breaks. The datetime seems to display correctly in the modal, and when I select a date in the same month as the current one it works correctly. However, as soon as I want to change the month using the top buttons or scrolling, the component does not update the month shown in the upper part of the date-picker (in addition, it only lets you advance or go back one month with respect to the current one, both through the upper buttons and through scrolling). Also, once you start tapping on any date the component only advances to the next month.
See the video for a better understanding of the problem (sorry for the low quality). In the first seconds, when dates in the current month were selected, it worked; however, as soon as you want to change the month, strange behaviors appear:
https://user-images.githubusercontent.com/61509169/135252509-261918c7-d710-4c11-844d-65aa9b760632.mp4
I tried with diffent widths and heigths but I can't resolve the issue.
### Expected Behavior
The datepicker should work the same way inside an ion-modal / popover as it does outside of them.
### Steps to Reproduce
- Create a project with the beta 6 of Ionic v6
- Create a container that opens the modal
```
<div (click)="openDateModal()" class="container"> Open Date Modal </div>
```
- Create the corresponding function to open it
```
dateModal: HTMLIonModalElement;
async openDateModal() {
this.dateModal = await this.modalCtrl.create({
component: DatePickerPage,
cssClass: 'modal-with-transparency',
});
this.dateModal.onDidDismiss().then((data) => {
this.selectedTime = data.data;
});
return this.dateModal.present();
}
```
- Create a new page with the following code, that will contain the datepicker:
__html__
```
<div id="background" (click)="dismiss($event)">
<ion-datetime id="picker" [(ngModel)]="date">
<ion-buttons slot="buttons">
<ion-button (click)="confirm()">Done!</ion-button>
</ion-buttons>
</ion-datetime>
[Truncated]
Capacitor CLI : 3.2.3
@capacitor/android : 3.2.3
@capacitor/core : 3.2.3
@capacitor/ios : not installed
Utility:
cordova-res : 0.15.3
native-run : 1.4.1
System:
NodeJS : v14.15.4 (C:\Program Files\nodejs\node.exe)
npm : 6.14.10
OS : Windows 10
### Additional Information
_No response_
Answers:
username_0: Ok, it seems to be a problem related to issue #23985. Adjusting the size of the datepicker to a height of 100% makes it work correctly. Anyway, it should still work in the same way for smaller sizes like the datepicker in the video.
username_1: Thanks for the issue. Are you able to reproduce this in an Ionic starter app and provide a link to the repo? I am not able to reproduce this on my end. Your video does not appear zoomed in, so I am not sure it is exactly the same issue as #23985.
username_0: I created a new repo with the issue. Here is the link:
https://github.com/username_0/datepicker-bug
The problem appears on some devices like the Pixel 2XL or the iPhone 5/SE in the devtools (also when you create the native app via Android Studio or Xcode). On some other devices like the iPhone X the problem is not visible. Therefore, it seems that it depends on the width and height of the component, and if you tweak these parameters you can eventually make it work, but it is too hit-and-miss and I have not yet managed to make it work for all devices.
username_0: I have made a new video with the problem just in case. At the beginning of the video I ran the application from the repository that I passed previously on the Pixel, and the component fails. However, when reloading the page, if we put the dimensions of another phone like the iPhone X, it does work
https://user-images.githubusercontent.com/61509169/135326598-e13c9fc3-a1f2-4cb3-8b9c-d405a8daa649.mp4
username_1: Thanks! I can reproduce this behavior.
username_2: Hello :wave: following up on this issue.
The problem here is that when unspecified the `ion-datetime` will auto size to its container. For different devices, this will be a different width.
For "Pixel 5" emulation, the `ion-datetime` width is set to `334.05px`.
The odd fractional value is affecting the intersection observer's ability to handle thresholds correctly. We can replicate the issue with all usages of `ion-datetime` in a modal if we set the width explicitly to `334.05px`.
If we set the value to an integer whole number, i.e. `334px` the problem goes away.
Obviously this isn't ideal for auto-sizing layouts, where you aren't specifying the end width of the datetime. I have an adjustment to the intersection observer usage that can account for this internally.
Status: Issue closed
|
parroty/excoveralls | 166363433 | Title: macros are ignored
Question:
username_0: Does it supposed to be this way?
`mix.exs`:
```elixir
defp deps do
[
...
{:excoveralls, "~> 0.5", only: :test}
]
end
```
Result:
```
$ mix coveralls.html
.............
Finished in 0.1 seconds (0.1s on load, 0.02s on tests)
13 tests, 0 failures
Randomized with seed 989620
----------------
COV FILE LINES RELEVANT MISSED
0.0% lib/ecto_autoslug_field.ex 2 0 0
100.0% lib/ecto_autoslug_field/slug.ex 35 2 0
0.0% lib/ecto_autoslug_field/slug_generator.e 48 0 0
0.0% lib/ecto_autoslug_field/type.ex 14 0 0
[TOTAL] 100.0%
----------------
Generating report...
```
<img width="702" alt="2016-07-19 18 33 45" src="https://cloud.githubusercontent.com/assets/4660275/16956097/be09b75c-4ddf-11e6-86dd-0edbcd55e860.png">
Answers:
username_1: Thanks for the report. It's a kind of limitation: coverage is based on Erlang's cover module, and Elixir's macro precompiling is not reflected well (I haven't been able to find a good workaround).
username_2: Is this related to coverage sometimes missing on all or part of the `with` macro in Elixir and the `schema` macro in Ecto?
username_3: @username_2 yes, it appears so since elixir implements those using macros.
username_4: Have the changes in Elixir 1.5 helped with this or are we still unable to see coverage on macros?
username_5: it seems macros still have test coverage = 0
username_6: Hey, any idea if this is fixable? @username_1 |
aburrell/aacgmv2 | 894372527 | Title: More use examples
Question:
username_0: **Is your feature request related to a problem? Please describe.**
Some people have trouble figuring out how to use the functions for their own purposes
**Describe the solution you'd like**
More use examples in the documentation
**Describe alternatives you've considered**
Answering questions as they come up
**Reminders**
This is a volunteer-driven project. Code contributions are welcome, as is help
testing new code.
Answers:
username_1: Sorry, excuse me. When using the AACGM module for altitude correction of orbits, the input is the satellite's position (geographic latitude, geographic longitude and altitude) and the output is (magnetic latitude, magnetic longitude, magnetic local time). I want to know whether the corresponding output altitude is by default the surface height or something else?
username_0: It depends on the flags you're using. As stated in the docs, the returned height is either geocentric radial distance or height above the surface of the Earth (https://aacgmv2.readthedocs.io/en/latest/reference/aacgmv2.html). Can you give me a code example? For best formatting, be sure to flank the commands between three accent marks ('`').
username_1: For example: `aacgmv2.wrapper.get_aacgm_coord_arr(glat, glon, height, dtime, method='TRACE')`. In this case the four input values glat, glon, height and dtime are the satellite's current latitude, longitude, altitude and time. The output is `mlat`, `mlon`, `mlt`; there is no height information in the result, but there is height information in the input data. I want to know which plane the output is projected onto; is it the Earth's surface by default?
username_0: It's at the height you input (`height`). That's why no height information is output, it is used in the calculation and not altered.
username_1: If `height` refers to the output altitude, how can I determine the current position of the satellite only by (`glat`, `glon`) when I input it? At the same geographic longitude and latitude but at different altitudes, the corresponding geomagnetic field lines are also different. How can we correct for that?
So what I don't understand is: if the `height` here refers to the given output height, is the input position information, relying only on latitude and longitude, enough to track along the geomagnetic field line?
username_1: Isn't the function of AACGM to trace a satellite position down to another low-altitude plane along the geomagnetic field line?
username_0: No, the function of AACGMV2 to convert between magnetic and geodetic or geographic coordinates using Version 2 of the Altitude-Adjusted Corrected Geomagnetic Coordinates, as stated in the documentation: https://aacgmv2.readthedocs.io/en/latest/readme.html
username_1: So is the function of AACGM just to convert the geographic coordinate information of the satellite's current position into AACGM coordinate information, and the satellite's coordinates cannot be traced along the geomagnetic field line down to a plane at low altitude?
username_0: @username_1 I don't really understand what you're asking. All AACGMV2 does is convert between geographic/geodetic and magnetic coordinates. If you know where your satellite is, then you can get its location in magnetic coordinates. If you don't know where it is (in latitude, longitude, and altitude), then AACGMV2 isn't going to help.
username_1: So after inputting the current geographic location (latitude, longitude and altitude) of the satellite into the program, the output actually only converts the geographic coordinates into geomagnetic coordinates.
Is my current understanding correct?
I originally thought that the AACGM module could project the 840 km satellite position along the magnetic field line to other heights, such as 110 km.
username_0: That may be possible, but is not a use case I have encountered. It would require a careful reading of the papers to make sure you're going about it correctly. If possible, it would be a multi-step process along the lines of:
1) Find geomagnetic coordinates at current location
2) Extract geomagnetic elements that remain the same for tracing
3) Use those in the call at the desired altitude. |
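A rough Python sketch of those three steps follows; it assumes the package's `get_aacgm_coord` and `convert_latlon` calls, and the sample coordinates and the 110 km target altitude are made-up illustration values, so treat it as an outline to verify against the AACGM-v2 papers rather than a vetted method:
```python
# Sketch only: project a satellite position to AACGM-v2 coordinates and
# then ask where those magnetic coordinates sit at a lower altitude.
# All numeric inputs here are illustrative placeholders.
import datetime as dt
import aacgmv2

dtime = dt.datetime(2021, 5, 18, 12, 0, 0)
glat, glon, height_km = 45.0, 8.0, 840.0  # example satellite position

# Step 1: geomagnetic coordinates at the current location
mlat, mlon, mlt = aacgmv2.get_aacgm_coord(glat, glon, height_km, dtime,
                                          method="TRACE")

# Steps 2-3: keep mlat/mlon fixed and convert back to geographic at the
# desired altitude ("A2G" = AACGM-to-geographic, combined with tracing)
out_lat, out_lon, out_r = aacgmv2.convert_latlon(mlat, mlon, 110.0, dtime,
                                                 method_code="A2G|TRACE")
print(out_lat, out_lon)
```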
npms-io/npms-analyzer | 259960890 | Title: reduce the weight of issue-related factors (open issues, issues distribution)
Question:
username_0: It sounds like a great idea to look at GitHub issues, but in practice many projects use the issue tracker to also keep track of long-term projects and discussions. It's preferable to have one open issue that people can +1 rather than have people flood an issue tracker with new issues repeating the same request. Should the weight of the issue tracker be reduced?
Status: Issue closed
Answers:
username_1: Oops didn't mean to close it.
username_0: heh. Question still stands: given that projects handle issues differently, does it make sense to give such a large weight to Github issue statistics?
username_1: I think that the issues metric needs to be improved. The most important factor for that metric should be the time it takes for a contributor to respond to an issue, but doing that would increase exponentially the number of calls to the GitHub API and that's a no-no :/
For now, maybe we can decrease it's weight but I need to so some extensive testings to find the right balance.
username_2: I think that issues created by collaborators should also have no or less impact on the score, because some projects are using issues as task-lists, and it seems to me that issue tracking usage is better than direct pushes to master, at least in some cases.
username_3: Another angle for evaluating issues is the labels. The maintainer can change the existing labels or add new ones, but my impression is that many use the default ones (or at least in addition to any custom ones). And issues labeled with _enhancement_ are not necessarily a bad sign, more a sign that there are plans for that project.
Awareness of NPMS's logic of penalizing open issues may incentivize the maintainer to quickly close these issues with a comment instead of letting them stay open and more accessible.
Shopify/secret-sender | 858338037 | Title: New Homebrew Warnings
Question:
username_0: I noticed on a `brew upgrade` today some new warnings pertaining to this package that I have not seen before:
```Warning: Calling `cellar` in a bottle block is deprecated! Use `brew style --fix` on the formula to update the style or use `sha256` with a `cellar:` argument instead.
Please report this issue to the shopify/shopify tap (not Homebrew/brew or Homebrew/core), or even better, submit a PR to fix it:
/usr/local/Homebrew/Library/Taps/shopify/homebrew-shopify/secret-sender.rb:13
Warning: Calling `sha256 "digest" => :tag` in a bottle block is deprecated! Use `brew style --fix` on the formula to update the style or use `sha256 tag: "digest"` instead.
Please report this issue to the shopify/shopify tap (not Homebrew/brew or Homebrew/core), or even better, submit a PR to fix it:
/usr/local/Homebrew/Library/Taps/shopify/homebrew-shopify/secret-sender.rb:15
Warning: Calling `cellar` in a bottle block is deprecated! Use `brew style --fix` on the formula to update the style or use `sha256` with a `cellar:` argument instead.
Please report this issue to the shopify/shopify tap (not Homebrew/brew or Homebrew/core), or even better, submit a PR to fix it:
/usr/local/Homebrew/Library/Taps/shopify/homebrew-shopify/secret-sender.rb:19
Warning: Calling `sha256 "digest" => :tag` in a bottle block is deprecated! Use `brew style --fix` on the formula to update the style or use `sha256 tag: "digest"` instead.
Please report this issue to the shopify/shopify tap (not Homebrew/brew or Homebrew/core), or even better, submit a PR to fix it:
/usr/local/Homebrew/Library/Taps/shopify/homebrew-shopify/secret-sender.rb:22
```
It looks like there is some TLC needed for this package to keep up with the latest Homebrew changes and deprecations.
ioos/bio_data_guide | 931758973 | Title: Expand ERDDAP documentation to provide MBON best practices in configuration
Question:
username_0: We should be documenting some of the nuances when serving DwC data via ERDDAP.
For example, if the date is only captured to the year, the ERDDAP configuration for that time variable should only treat it as a string, not a time in seconds since 1970.
I know there are other items, similar to above, which should be documented. I'll look at some of my notes and compile them here.
Here is the current section that should be updated: https://ioos.github.io/bio_data_guide/intro.html#erddap
Answers:
username_0: HakaiInstitute has a start to this https://github.com/HakaiInstitute/erddap-basic. But, I'd really like to get to the nuts-and-bolts of configuring a DwC dataset in ERDDAP.
Some questions:
* What `dataTypes` do we set for which 'columns' in DwC? Are there any specifics we should be identifying?
* How should the `datasetID` be configured?
* Identification of primary latitude, longitude, depth (in meters), and time variables.
* What about populating the metadata for each column? How do we recommend doing that?
* What about the global metadata? Can we map to EML?
username_1: GCOOS has done this work already so we should get input from <NAME>, but I don't see her here on GitHub. @SarahRDBingo might be interested in this as she is working to stand up an ERDDAP for PacIOOS Biological Data.
username_0: @fgayanilo might be good to ping from GCOOS as well. |
microsoft/onnxruntime | 1105503523 | Title: Can't load a model: Can't create InferenceSession
Question:
username_0: I have converted the Keras model to onnx using onnxmltools on python. the summary of the model is as below:
Model: "sequential"
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 32) 480
dense_1 (Dense) (None, 64) 2112
dense_2 (Dense) (None, 128) 8320
dense_3 (Dense) (None, 64) 8256
dropout (Dropout) (None, 64) 0
dense_4 (Dense) (None, 5) 325
=================================================================
Total params: 19,493
Trainable params: 19,493
Non-trainable params: 0
The converted model is working fine when used on python. In order to use the onnx model in react native, I converted the onnx model to ort format using the script:
**python -m onnxruntime.tools.convert_onnx_models_to_ort model.onnx**
The converted ORT model works perfectly fine on python but when I use this model in react native it throws exceptions while loading the model
the exceptions is:
**[Error: Can't load a model: Can't create InferenceSession]**
Answers:
username_1: ONNX Runtime react native packages supports a limited set of operators in [here](https://onnxruntime.ai/docs/reference/operators/mobile_package_op_type_support_1.8.html). Can you check if all kernels in your model are supported and opset version is either 12 or 13?
If your model is not using opset 12/13, you can either export an ONNX model with opset 12/13 or you can convert onnx version using [this script](https://github.com/onnx/onnx/blob/main/docs/PythonAPIOverview.md#converting-version-of-an-onnx-model-within-default-domain-aionnx). Then, convert to ort.
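Roughly, the opset conversion with the onnx Python API looks like this (a sketch; the file names are placeholders):
```python
# Sketch: bump an ONNX model's default-domain opset to 13 using the
# version converter linked above; input/output paths are placeholders.
import onnx
from onnx import version_converter

model = onnx.load("model.onnx")
converted = version_converter.convert_version(model, 13)
onnx.save(converted, "model_opset13.onnx")
```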
username_0: Hi,
Yes, I have checked; the required operators of my model are all listed in the supported operator set.
DmytroKondrashov/Learning-project-Kinozal | 700499980 | Title: Fixing the mistakes
Question:
username_0: 1)
invalid code

2)
the toggle switches don't work

Answers:
username_1: Fixed: it passes validation and the panels switch.
Page:
https://dmytrokondrashov.github.io/Learning-project-Kinozal/page.html
Repository: https://github.com/username_1/Learning-project-Kinozal
username_0: 3)
an element is displayed incorrectly

username_1: Fixed the responsive-layout issues
Page: https://dmytrokondrashov.github.io/Learning-project-Kinozal/
Repository: https://github.com/username_1/Learning-project-Kinozal
Status: Issue closed
username_0: The work is accepted
Sun, 13 Sep 2020 at 20:52 <NAME> <<EMAIL>> wrote:
> Fixed the responsive-layout issues
> Page: https://dmytrokondrashov.github.io/Learning-project-Kinozal/
> Repository: https://github.com/username_1/Learning-project-Kinozal
> |
Qluxzz/avanza | 804858039 | Title: 401 Client Error
Question:
username_0: Nice work!
I have some trouble though. First time I make a request I get what I want, but for ~10 seconds after the first request I get this error message, indicating I'm not authorized. Then after about 10 or 20 seconds it all works perfectly again.
Did I miss something?
```
Traceback (most recent call last):
File "***", line 6, in <module>
avanza = Avanza({
File "/usr/local/lib/python3.9/site-packages/avanza/avanza.py", line 23, in __init__
response_body, credentials = self.__authenticate(credentials)
File "/usr/local/lib/python3.9/site-packages/avanza/avanza.py", line 65, in __authenticate
return self.__validate_2fa(credentials)
File "/usr/local/lib/python3.9/site-packages/avanza/avanza.py", line 82, in __validate_2fa
response.raise_for_status()
File "/usr/local/lib/python3.9/site-packages/requests/models.py", line 943, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://www.avanza.se/_api/authentication/sessions/totp
```
Answers:
username_1: Receiving 401 also when trying to log in.
401 Client Error: Unauthorized for url: https://www.avanza.se/_api/authentication/sessions/totp
username_2: Does your username and password work with Avanzas website?
The "engångskod" (totp) can be generated in the same way as in the README.md
```python
import hashlib
import pyotp
totp = pyotp.TOTP('MY_TOTP_SECRET', digest=hashlib.sha1)
print(totp.now())
```


username_3: Indeed, I had to use the code above to create a TOTP code that worked.
username_0: Yes, I've created the TOTP and it worked just fine on the first request. If I try to make a second request after about 5 seconds, I get the error above. Then if I wait 20-30 seconds and try again, it works just fine again.
username_2: Avanza has a limit on how often you can try to log in
So if your program looks like this for example:
```python
from avanza import Avanza
avanza = Avanza({
'username': 'MY_USERNAME',
'password': '<PASSWORD>',
'totpSecret': 'MY_TOTP_SECRET'
})
overview = avanza.get_overview()
```
And you run it twice, you will get the error.
But if you only login once and reuse the same Avanza object, you can make how many requests you want
username_1: Doesn't work for me. I'm using a security key, but when I check the README.md and generate the "engångskod", it doesn't match the one that Google Authenticator gives me.
I'm trying the code that gives an overview of accounts. Still 401... it was only called once, so I shouldn't be locked out by multiple calls in a short interval.
username_0: Ah, that must be it, thanks @username_2
Status: Issue closed
username_2: @username_1 totp is based on the system time, so any disparity there would result in an invalid totp. Have you checked that the system time on your computer doesn't differ from the one on your telephone?
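For anyone wanting to check that drift programmatically, something along these lines works (a sketch; it assumes the third-party ntplib package, which is unrelated to this library):
```python
# Sketch: measure how far the local clock is from an NTP reference.
# A drift of more than a few seconds can make TOTP codes invalid.
import ntplib

client = ntplib.NTPClient()
response = client.request("pool.ntp.org", version=3)
print(f"Local clock offset: {response.offset:+.2f} seconds")
```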
username_1: Thanks, it was a time difference on my computer... works now |
mono/xwt | 3946541 | Title: [Wishlist] Notebook tabs with widgets as labels
Question:
username_0: One example of this feature would be tabs with icons and close buttons.
I believe this should be trivial to add to GTK+ and WPF. Not so sure about Mac.
Answers:
username_1: The Icons part is implemented in #340, but no close buttons. Custom tab widgets are not so simple on Mac. For the icons I had to adjust the label sizing and to draw the icon on top of it. |
paulmillr/chokidar | 262798985 | Title: Can chokidar handle diffing of files?
Question:
username_0: Looked around docs and searched issues to see if there was any inquiries for getting the `diff` changes of a watched file but couldn't find anything
I want to write a logging agent to send diff'd updates of a log file to a logging server so that the two copies can sync, logging server gets updates in real-time, and we don't bottleneck on filesize when things get large. With `fs.watchFile`, I can grab the previous and current file descriptor and compare bytes.
Answers:
username_1: You're comparing the `stat`? Or the actual contents of the file?
Chokidar doesn't read the contents of files in order to watch them - that's what would be needed in order to do before/after diffs. It is possible, but should be handled outside of chokidar.
username_0: @username_1: ah, I've only done some research, but it's the `fs.Stats` class I will be working with so will not be comparing contents.
username_1: Chokidar can provide you with the `fs.Stats` with each event. It already does so when the data is already available via normal processing, or you can force it to happen with every event with the `alwaysStat: true` option.
Combined with the default `ignoreInitial: false`, which will send you `add` events for all paths being watched upon instantiation along with file stats, you can hold those stat objects and diff them against new ones received with subsequent events.
Status: Issue closed
username_0: Yep, I had to track the stats myself in a normalized dictionary. Was hoping for some convenience like what we get with `watch`, but it's working now.
username_2: @username_0 can you elaborate more on how you managed to get the diff?
username_0: Not near my code so can't share it.
But a library like `watch` implicitly holds the previous `fs.stat` data of the file. You compare the previous data with the new data to find the delta and parse the delta content into a buffer.
`chokidar` does not keep the results of the previous `fs.stat` state, so you have to do it yourself using whatever software pattern you like best.
username_2: @username_0, thanks! |
iotaledger/iota.js | 802983798 | Title: Possibility to use bytes to set index
Question:
username_0: ## Description
As far as I can see, in the high level APIs I can only set an index for an indexation payload from a string. I need a way to hand over plain bytes.
## Motivation
I want to replace mam.c in my application. Therefore I need to publish a message to an index that equals a public key derived from a signature mechanism.
## Requirements
Write a list of what you want this feature to do.
1. Allow me to set an index from plain bytes, not only from a string
## Open questions (optional)
I believe this will be used a lot. Indezes dont neccessarily have to be readable.
## Are you planning to do it yourself in a pull request?
No
Answers:
username_1: The latest high level operations operate using bytes as hex instead of UTF8
Status: Issue closed
|
gbif/pipelines | 465226172 | Title: Review interpretation/indexing YARN settings
Question:
username_0: Probably the YARN settings are not the best, it would be nice to review and improve them.
Based on:
1) Interpretation and indexing settings
2) We can run many different interpretations and indexing jobs with various dataset sizes
3) Settings for mirroring and ES only cluster
4) Other cases?
Status: Issue closed |
NationalSecurityAgency/ghidra | 798794769 | Title: Build Error
Question:
username_0: **Describe the bug**
I ran the ghidraRun.bat after building and it says JDK could not be found.
I do have it, though. v9.2.2 works fine!
**To Reproduce**
1. Build. ("gradle eclipse")
2. run ghidraRun.bat
**Screenshots**
This means it built correctly, right?

**Environment (please complete the following information):**
- OS: Windows
- Java Version: 11
- Ghidra Version: debugger
Answers:
username_1: When you do a `gradle buildGhidra`, it produces a zip file in the `build/dist` directory. This file, when unzipped, is your new runnable Ghidra build. When you unzip it, the unzipped directory should have a `ghidraRun.bat` file at the top level. That's the one you want to run.
I recommend you move the zip file to some other directory outside of the ghidra git repo so you don't confuse all the different `ghidraRun.bat` files that may be laying around.
You basically never want to run the `ghidraRun.bat` file that's in the `RuntimeScripts` directory. That one will only run if you have the Eclipse environment setup.
username_0: Oh? So is `gradle` kinda like `make`? I put the name of the configuration or whatever after
`gradle`?
username_0: I get this error now:

username_0: Oh, I am using the debugger branch. Sorry.
username_0: Does that image show what the problem is? It is trying to find jdk in the jre directory. Why?
username_0: PLEASE HELP?
Status: Issue closed
|
basarat/typescript-book | 912405871 | Title: Build fails at pdf file creation (calibre problem)
Question:
username_0: ```
Creating PDF Output...
67% Running PDF Output plugin
68% Parsed all content for markup transformation
70% Completed markup transformation
QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-runner'
WebEngineContext used before QtWebEngine::initialize() or OpenGL context creation failed.
Traceback (most recent call last):
File "/usr/bin/ebook-convert", line 20, in <module>
sys.exit(main())
File "/usr/lib/calibre/calibre/ebooks/conversion/cli.py", line 401, in main
plumber.run()
File "/usr/lib/calibre/calibre/ebooks/conversion/plumber.py", line 1274, in run
self.output_plugin.convert(self.oeb, self.output, self.input_plugin,
File "/usr/lib/calibre/calibre/ebooks/conversion/plugins/pdf_output.py", line 188, in convert
self.convert_text(oeb_book)
File "/usr/lib/calibre/calibre/ebooks/conversion/plugins/pdf_output.py", line 253, in convert_text
convert(
File "/usr/lib/calibre/calibre/ebooks/pdf/html_writer.py", line 1195, in convert
manager = RenderManager(opts, log, container.root)
File "/usr/lib/calibre/calibre/ebooks/pdf/html_writer.py", line 279, in __init__
ans.setUrlRequestInterceptor(self.interceptor)
AttributeError: 'QWebEngineProfile' object has no attribute 'setUrlRequestInterceptor'
``` |
inkle/inky | 704337514 | Title: Feature: auto white-space menu action
Question:
username_0: The idea being you can select a block of ink code and hit the buton, and it'll auto-format it to tab/indent the weave blocks correctly. To make it easier to spot errors.
Answers:
username_0: Additionally, if we have something like that, a smart indent / outdent feature would be amazing for refactoring weave. So you select a block, and hit outdent, and all the * *'s become *'s, etc. (And if there are any *'s the action fails silently, maybe, so you don't "flatten" weaves accidentally.) This'd be useful for moving sub-weave blocks out into their own knot when they get big, as you could cut-paste-outdent. |
patternfly/patternfly-org | 720634817 | Title: Mobile collapsible sidebar behavior/design
Question:
username_0: It seems like on mobile, where the expanded sidebar overlays the content area, if the sidebar is open and a user clicks on a link in the sidebar, the sidebar should automatically close. Otherwise a user has to click outside of the sidebar somewhere to close it. The only time I can see that a user might want it to stay open is if they click on an item by accident and want to leave it open so they can click on the correct link. I imagine 80% of users on mobile open the sidebar to go somewhere, then want the sidebar to close so they can see the page content.
Also there is no separator between the sidebar and main page content on org as the sidebar shadow is disabled. Seems like we should retain the shadow on mobile.
What it looks like now
<img width="848" alt="Screen Shot 2020-10-13 at 1 13 27 PM" src="https://user-images.githubusercontent.com/35148959/95899272-ee84e480-0d55-11eb-890c-e255065477e7.png">
What it looks like with the shadow
<img width="892" alt="Screen Shot 2020-10-13 at 1 13 33 PM" src="https://user-images.githubusercontent.com/35148959/95899283-f3e22f00-0d55-11eb-8eaf-07c97d8ca3cf.png">
Status: Issue closed |
cms-sw/cmssw | 889874427 | Title: [Geometry][Detector Description] Remove CLHEP and long double
Question:
username_0: Remember to remove CLHEP dependency from Geometry subsystem especially from the DD Algorithms.
Open issues with CLHEP mentioned:
https://github.com/cms-sw/cmssw/search?q=CLHEP&state=open&type=issues
CMSSW code using CLHEP:
https://github.com/cms-sw/cmssw/search?l=C%2B%2B&q=CLHEP&type=code
@username_1 - FYI
Answers:
username_1: most of the "new" long double comes from the use of user-defined literals
their effect can be seen in this simple code
https://godbolt.org/z/956fn9aWh
not sure if intended/desirable
username_1: the "issue" is solved by changing the return type of the `operator""` to double
```constexpr double operator"" _pi(long double x) ```
https://godbolt.org/z/KKPYc935n
username_0: @username_1 - would you submit a PR?
username_1: the code in question is
https://cmssdt.cern.ch/lxr/source/DataFormats/Math/interface/angle_units.h
not the inconsistency with
https://cmssdt.cern.ch/lxr/source/DataFormats/Math/interface/CMSUnits.h#0025
username_1: Sorry @username_0 ,
I am neither the author nor the maintainer of these files and even less of their clients.
I think that those responsible for such code and its performance shall take action
username_0: Sure, thanks for tracing the issue.
username_1: Btw: have not found the CLHEP usage in Detector Description.
The only "serious" one is for Phase0 forward pixels
username_0: and this link is only one example
username_0: @username_1 - do you know per chance who uses/needs Basic3DVectorLD?
username_0: possibly partially fixed by https://github.com/cms-sw/cmssw/pull/33705
username_2: assign geometry
username_3: @username_0 , detector description creates Geant4 geometry and materials, so it depends on Geant4, Geant4 depends on CLHEP. So, we should not plan remove CLHEP dependence, all what can be done is to revise utilities in CMSSW to make them consistent.
I would suspect that switch from 'long long double' to 'long double' will break regression for the PR, which will do this, but also assume, that this may bring more stability in general. One cannot exclude, that non-reproducibility issue we observed in DD4Hep WFs are connected with this problem.
username_0: let's hope it does
username_3: @username_0 , if a sub-library does not depend on Geant4 there is likely no need to depend on CLHEP. I agree, that we should avoid unnecessary units conversions and trying to have as less units conversions as possible.
username_0: closed in favour of https://github.com/cms-sw/cmssw/issues/34663
Status: Issue closed
|
NumEconCopenhagen/projects-2020-group-999 | 616052185 | Title: Model project feedback
Question:
username_0: I couldn't run the code all in one, as there is an indentation error in the optimal choice function and an error from lacking arguments for "sol_0", "u" and more. These problems meant that I couldn't run your code. The briefness of this feedback reflects that.
**1. The best part of the project was:** I liked how you visualised your steps with graphs.
**2. The hardest part of the project to understand was:** Like I mentioned first, I had trouble understanding a lot of your code because it wouldn't run.
**3. This part of the project could be better documented:** I really think you documented your functions neatly. I do not think this is an issue for you.
**4. An idea for an improvement/clarification could be:** Please make sure the code runs correctly.
**5. An idea for an extension could be:**
ankurjain20111989/vmcs | 256433910 | Title: Enhancement
Question:
username_0: Denomination Quantity and Amount that totals to Cash Held and Coins Expected to Collected to be displayed
Answers:
username_0: As a Project Manager, a Change Request Form is submitted to me by the user representative. The major enhancements made were: enable an accounting check-and-balance that ensures that what's reported as collected by the machine is in fact what has been collected by the Maintainer. Each time "Press to Collect Cash" is pressed, a record that contains the amount collected and the logged-in Maintainer's id is captured and recorded in a central database. The system is required to provide an at-a-glance display of the cash amount available, by denomination, for collection by the Maintainer. This will enable the Maintainer to readily tally the total cash collected and the total number of coins by denomination displayed in "Show Total Cash Held" and "Press to Collect Cash" when pressed, thus leaving no room for dispute over system errors.
username_0: As a **project manager** I oversee the changes in the change request log form and submit them to the **Technical Lead** to go through the enhancement changes.
username_0: As a Technical Lead I assign these changes to the respective developer. The developer then makes the changes and assigns them to Quality Assurance for testing.
Status: Issue closed
username_0: As a Quality Assurance guy, I verify the enhancement mentioned above and send the results to the Project Manager for approval. As a Project Manager, the changes have been made and the change request is approved. After approving the changes, the request is sent to the Release Manager for the release.
irbv-collections/MT-controlled-vocabulary | 86289313 | Title: New taxon
Question:
username_0: [Originally posted on GoogleCode (id 825) on 2014-01-15 by francois.lambert.3]
Scientific name: <NAME>
ID of the specimen(s): Workbench 4605
Your name: <NAME>
Answers:
username_1: [Originally posted on GoogleCode on 2014-01-15 18:10Z]
Note que <NAME>ana n'est pas présent au Labrador. Il s'agit sans doute d'une autre espèce comme B. minor ou B. pumila
Luc
Status: Issue closed
|
atlassian-labs/compiled | 725068056 | Title: Set up Renovate
Question:
username_0: Configure Renovate to run against this repo. Ideally it should raise + merge automatically if all tests pass.
Answers:
username_1: Hi @username_0. I recently did a dive into setting up renovate, so I'm happy to help out.
I think the app needs to be added at the github org level though, so not sure who would have access to do that.
https://docs.renovatebot.com/install-github-app/
Once the app is given access to this repo, it will raise a PR with some initial configuration.
For the behaviour you want, I think the config would look something like this.
```json
{
"extends": ["group:monorepos", "group:recommended", "workarounds:all"],
"lockFileMaintenance": {
"enabled": true
},
"packageRules": [
{
"managers": ["npm"],
"updateTypes": ["minor", "patch"],
"automerge": true
},
{
"managers": ["nvm"],
"automerge": true
},
{
"updateTypes": ["lockFileMaintenance"],
"automerge": true
}
],
"postUpdateOptions": ["yarnDedupeHighest"],
"rangeStrategy": "auto"
}
```
This will raise package updates that are often released together (e.g. react, react-dom, react-test-renderer, react-is, etc.) as a single pull request.
I also enabled lockfile maintenance which will try to regen the lockfile once a week, to update transitive dependencies.
It's also configured to dedupe dependencies after an update.
These changes will be automerged if status checks pass.
`separateMajorMinor` is true by default, so major updates will be raised in a separate PR and wont be automerged.
I think for github actions, you've tagged them with major version only in your workflow files, so I think they will already use new minor and patch versions, and renovate should only raise a PR when a new major version is released.
username_0: Thanks! I've requested the Renovate app.
Awesome insight, thanks! We'll see what happens when it's approved and we can get something working.
username_0: Cheers I've set it up here, let's see how it goes https://github.com/atlassian-labs/compiled/pull/428
Status: Issue closed
username_1: I noticed that the pull requests aren't automerging. It looks like its because the repo requires pull request reviews.
It looks like there is a bot that can be added to automatically approve renovate pull requests.
https://docs.renovatebot.com/automerge-configuration/#required-pull-request-reviews
username_0: thanks! I've set that up but it hasn't approved the already existing ones. maybe for new PRs?
username_2: It only approves new PRs
username_0: Thanks @username_2!
Now I need to figure out why some major bumps haven't happened. Would have thought tsnode would be bumped.
username_2: I suggest first enabling dependencyDashboard to get some visibility into planned PRs to confirm. Next, check the logs via app.renovatebot.com - find the "packages with updates" log message and then locate the dependency name within.
username_0: That's very helpful, thanks |
kubeflow/website | 421215421 | Title: NVIDIA TensorRT Inference Server: libcaffe2.so Error
Question:
username_0: I see the following error when NVIDIA container is started following the instruction in the article:
"E trtserver: error while loading shared libraries: libcaffe2.so: cannot open shared object file: No such file or directory"
I am using nvcr.io/nvidia/inferenceserver:18.08.1-py3
Answers:
username_1: @username_0 Can you explain what you are doing? E.g. if you are deploying some server can you provide the YAML spec for your resource?
This seems like it would be an issue with your docker image since it's missing a linked file.
username_1: My suggestion for debugging would be to use kubectl to start an interactive shell
```
kubectl -it run ...
```
Then use a tool like `ldd` to look at linking of whatever binary you are running.
username_2: Thanks username_1
I'm going to close this issue as answered. If there's a need for a doc update, please raise an issue specifying the doc update required.
/close |
elastic/apm-agent-ruby | 402669151 | Title: Data from the created additional streams are not recorded.
Question:
username_0: When executed, the method creates multiple threads to speed up its work.
```
Rails.application.executor.wrap do
threads << Thread.new do
Rails.application.executor.wrap do
Job1
Job2
end
end
threads << Thread.new do
Rails.application.executor.wrap do
Job3
end
end
end
```
If you record data like this
```
ElasticAPM.start_span('all')
Rails.application.executor.wrap do
threads << Thread.new do
Rails.application.executor.wrap do
Job1
Job2
end
end
threads << Thread.new do
Rails.application.executor.wrap do
Job3
end
end
end
ElasticAPM.end_span
```
it turns out this only keeps general data about the overall execution time.
But if you wrap it like this
```
Rails.application.executor.wrap do
threads << Thread.new do
Rails.application.executor.wrap do
ElasticAPM.start_span('Job1')
Job1
ElasticAPM.end_span
ElasticAPM.start_span('Job2')
Job2
ElasticAPM.end_span
end
end
threads << Thread.new do
Rails.application.executor.wrap do
ElasticAPM.start_span('Job3')
Job3
ElasticAPM.end_span
end
end
end
```
there is no data at all. I really want to somehow collect data on the individual threads separately, preferably with details of what is happening inside.
Maybe I'm doing something wrong?
Answers:
username_1: Hi @username_0! I have no experience with Rails' `executor` so I'll have to do some investigation before I can give you a good answer.
Please update this thread if you do any investigation yourself in the meantime.
username_1: APM's `current_transaction` and `current_span` are thread locals that are automatically inherited, so we'll probably have to do some fidgeting to arrange the parent-child relationships manually when making our own, additional threads. |
yosupo06/library-checker-problems | 507426200 | Title: [Problem proposal] Maximum Flow
Question:
username_0: Problem ID: maximum_flow
Problem name: Maximum Flow
# Problem summary
Given a directed graph with N vertices and edge capacities, and two vertices S and T, compute the maximum flow from S to T.
## Input
```
N M S T
u_0 v_0 c_0
u_1 v_1 c_1
:
u_{M - 1} v_{M - 1} c_{M - 1}
```
## 出力
```
TODO
```
## Constraints
- N <= 300
- M <= N(N-1)
- C <= 1e9
## Discussion
- Constraints
- Output format for reconstructing the flow
- Whether the graph must be simple, etc.
Answers:
username_0: https://en.wikipedia.org/wiki/Maximum_flow_problem
Dynamic-tree approaches don't seem likely to get much faster in practice, so the intended complexity is O(V^3) (or O(V^2 sqrt E)), and V = 300 or so seems like a round choice, no?
username_0: For reconstruction, outputting the flow on each edge should be fine.
The graph does not need to be simple.
Constraints: V <= 300; I'll write a Dinic's solution and reconsider if it dies.
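For illustration, a reference solution along those lines could look roughly like the following Python sketch of Dinic's algorithm (a generic textbook implementation, not the repository's actual validator or reference code):
```python
from collections import deque

class Dinic:
    """Textbook Dinic's max flow, O(V^2 E); fine for V <= 300."""

    def __init__(self, n):
        self.n = n
        self.g = [[] for _ in range(n)]  # per edge: [to, cap, rev_index]

    def add_edge(self, u, v, cap):
        self.g[u].append([v, cap, len(self.g[v])])
        self.g[v].append([u, 0, len(self.g[u]) - 1])  # residual edge

    def _bfs(self, s, t):
        # Build the level graph; returns True if t is still reachable.
        self.level = [-1] * self.n
        self.level[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v, cap, _ in self.g[u]:
                if cap > 0 and self.level[v] < 0:
                    self.level[v] = self.level[u] + 1
                    q.append(v)
        return self.level[t] >= 0

    def _dfs(self, u, t, f):
        # Send one augmenting path along the level graph.
        if u == t:
            return f
        while self.it[u] < len(self.g[u]):
            e = self.g[u][self.it[u]]
            v, cap, rev = e
            if cap > 0 and self.level[v] == self.level[u] + 1:
                d = self._dfs(v, t, min(f, cap))
                if d > 0:
                    e[1] -= d
                    self.g[v][rev][1] += d
                    return d
            self.it[u] += 1
        return 0

    def max_flow(self, s, t):
        flow = 0
        while self._bfs(s, t):
            self.it = [0] * self.n
            f = self._dfs(s, t, float("inf"))
            while f:
                flow += f
                f = self._dfs(s, t, float("inf"))
        return flow
```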
username_1: There is an O(nm + n^2 log U) algorithm (the excess scaling algorithm) that uses neither dynamic trees nor advanced data structures, so I think we could go with stricter constraints.
username_1: Incidentally, we could also add lower bounds and demand/supply vertices, but whether we should is debatable.
username_0: Lower bounds are used fairly often, so it seems fine to have them.
Demands/supplies feel borderline to me.
username_1: That is the de facto standard, but if we support lower bounds, supporting demand/supply vertices can be done trivially anyway... well, I also feel it would be fine without them.
username_2: Demand/supply vertices are used fairly often, so having them seems fine to me.
But either way is fine.
From the viewpoint of keeping things clean and consistent with min-cost flow, there seem to be the following two options (I'm not sure how well they combine):
- Give the demands/supplies as a vector b
- Keep it an s-t flow, but output not only the flow on each edge but also a cut (the set of vertices reachable from s in the residual graph) (overkill?) (it would serve as a check of optimality)
username_1: Even when combined, optimality can be certified as-is by a flow/cut pair: take an s-t cut (S, T) with no residual edges from S to T, i.e. the (S, T) edges are used at their upper bounds and the (T, S) edges at their lower bounds, together with a flow satisfying the flow conservation conditions. Something like: cut capacity - b(S) equals the maximum flow value.
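To make the reduction concrete, here is a rough feasibility sketch in Python, reusing a max-flow routine like the Dinic class above; the edge format (u, v, low, cap) and the supply vector b are illustrative conventions, not a proposed I/O format:
```python
def feasible_b_flow(n, edges, b, make_solver):
    """Check feasibility of a flow with lower bounds and supplies.

    edges: list of (u, v, low, cap); b[v] > 0 means supply, < 0 demand
    (sum(b) is assumed to be 0). Standard reduction: keep cap - low on
    each original edge and route the mandatory `low` units, plus the
    supplies, through an auxiliary source S and sink T.
    """
    excess = list(b)
    mf = make_solver(n + 2)          # e.g. make_solver = lambda k: Dinic(k)
    S, T = n, n + 1
    for u, v, low, cap in edges:
        mf.add_edge(u, v, cap - low)
        excess[u] -= low
        excess[v] += low
    need = 0
    for v in range(n):
        if excess[v] > 0:
            mf.add_edge(S, v, excess[v])
            need += excess[v]
        elif excess[v] < 0:
            mf.add_edge(v, T, -excess[v])
    # Feasible iff all mandatory flow can be routed, i.e. S saturates.
    return mf.max_flow(S, T) == need
```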
username_1: FYI: https://gist.github.com/username_1/47b1d99c372daffb6891662db1a2b686
username_3: Max Flow is needed; I support this.
facebookresearch/TensorComprehensions | 327082370 | Title: Identify reductions by statement name rather than by tensor name
Question:
username_0: In particular, this should allow for reductions with multiple updates.
Answers:
username_0: It may not be limited to reductions, generally LHS tensor names may be reused in different statements so we should not use them as primary keys.
Status: Issue closed
|
huawei-noah/trustworthyAI | 1173513029 | Title: [Feature request]: Adding GES algorithm to the package
Question:
username_0: Hi,
I think it would be great to add GES algorithm implementation to `gcastle`. It would make broad comparisons between algorithms easier.
There is an existing Python implementation of GES by <NAME>: https://github.com/juangamella/ges
Maybe it could be integrated into `gcastle`. What are your thoughts?
If you think it's a good idea, I am happy to help with integration.
BTW., I'll be speaking about `gcastle` in my upcoming conference talk next week: https://ghostday.pl/#agenda
Answers:
username_1: Hello,
Yes, the GES algorithm is something that has been on the todo list for some time. In truth, we already have a colleague that recently started working on an implementation to add to gcastle. It should be released in the near future although I'm not sure regarding the exact timeframe.
Also, thanks for contributing to the exposure of the package, that's very exciting to hear. :)
username_0: My pleasure. It's a great, very useful and much needed package.
Another event where I'll talk about `gcastle` is PyData Hamburg: https://t.co/YgcD6Ka9lR |
OHDSI/ETL-Synthea | 1054244424 | Title: Update Eunomia support
Question:
username_0: Functions to support Eunomia (synthea cdm --> sqlite) were not updated when support for CDM v5.4 was implemented.
Answers:
username_0: @yradsmikham
Hi Yvonne, looks like the functions providing Eunomia support weren't updated when ETL-Synthea was last updated to support cdm v5.4. The functions are createPrunedTables(), getEventConceptId(), pruneCDM(), backupCDM(), and restoreCDMTables(). However, these were all created specifically for Eunomia. Is that what you're working on?
As for CreateCDMIndexAndConstraintScripts(), I should've dropped that function since all DDL comes from the package CommonDataModel now. The CommonDataModel package has functions writePrimaryKeys(), writeForeignKeys(), and writeIndex(). Can you leverage those to recreate your schema constraints or would you like to continue using CreateCDMIndexAndConstraintScripts()? If the latter, I'll simply update it to make the necessary calls to the three aforementioned functions from CommonDataModel. |
spritewidget/spritewidget | 326732050 | Title: Use Radians angles instead of Degrees
Question:
username_0: I understand that a casual developer will understand better degrees angles, but any experienced dev should use the internal units the FPU understand. Doing the conversion all the time is not a good idea, and it comes with rounded approximations.
So it would be best to have all internal values stored as radians, expose getters and setters for this unit and keep the current get/set degrees angles for backward compatibility. |
lanayotech/vagrant-manager | 58683770 | Title: High power consumption under OSX
Question:
username_0: OSX activity monitor reports high average power consumption for vagrant manager. It's only rivaled by notorious apps such as Safari and Skype:

Can this be caused by the virtual machine running in the background? Or is vagrant manager itself actually causing the power drain?
Answers:
username_1: 
Based on the process details, it appears Vagrant Manager itself is not responsible for the high power usage, it is mostly the aggregate consumption of active virtual machines.
username_1: closing alongside #67
Status: Issue closed
username_0: When I check my top I usually see VBoxHeadless at position 1 or 2. @username_1 your observation is spot on! |
Atlantiss/NetherwingBugtracker | 374777124 | Title: [Profession][Misc?] First Aid cancels out the player's drinking
Question:
username_0: [//]: # (Enclose links to things related to the bug using http://wowhead.com or any other TBC database.)
[//]: # (You can use screenshot ingame to visual the issue.)
[//]: # (Write your tickets according to the format:)
[//]: # ([Quest][Azuremyst Isle] Red Snapper - Very Tasty!)
[//]: # ([NPC] Magistrix Erona)
[//]: # ([Spell][Mage] Fireball)
[//]: # ([Npc][Drop] Ghostclaw Lynx)
[//]: # ([Web] Armory doesnt work)
**Description**:
Casting first aid [in this case runecloth] cancels the player's drinking.
**Current behaviour**:
Casting first aid [in this case runecloth] cancels the player's drinking.
**Expected behaviour**:
It shouldn't cancel his/her drinking.
**Server Revision**:
2214
Status: Issue closed
Answers:
username_1: #130 |
dotnet/dotnet.github.io | 144105509 | Title: Weak signature in apt repository
Question:
username_0: Hi,
The dotnet repository is signed weakly, according to newer (>= 1.2.7)
apt versions. It complains:
W: http://apt-mo.trafficmanager.net/repos/dotnet/dists/trusty/InRelease: Signature by key 52E16F86FEE04B979B07E28DB02C46DF417A0893 uses weak digest algorithm (SHA1)
It seems your repositories are done with aptly. Unfortunately, aptly
does not appear to offer a configuration option, so you'll need to use
a build that includes the following commit:
https://github.com/smira/aptly/commit/1069458aee25b406c8d5b8c29d37a01b6786d2ef
Hopefully this can be fixed, so that users do not get a scary warning
on apt-get update.
Answers:
username_1: Thanks for reporting the issue!
We are aware of the problem. We will probably wait for the aptly official build to include the commit to update our aptly version.
username_1: We have updated our aptly version. This issue should have been fixed now.
username_0: Indeed it is fixed. Thanks!
username_2: Thanks @username_1 !
@blackdwarf who has the power to close issues in this repo? 😄
username_0: I can because I opened it :)
Status: Issue closed
|
kino-ngoo/Instapaper_reading | 867004484 | Title: [Beginner's corner] When is it better to cancel a card?
Question:
username_0: <b>[新手專區] 什麼時候關卡比較好?</b><br>
開卡通常伴隨著許多的開卡獎勵優惠,但凡事有開始就有結束,什麼時候是適合關卡的時機呢 ? 老狐狸會在這篇知識分享文裡簡單解說關卡的好處壞處,與什麼時機關卡對使用者來說比較適合。 老狐狸對於信用卡關卡有一個非常簡單的決定方式…<br>
<br>
April 25, 2021 at 08:53PM<br>
via Instapaper https://ift.tt/2LLaFO4 |
flutter/flutter | 545672909 | Title: Mising description in "Write Your First Flutter App"
Question:
username_0: In part "4. Use an external package", the const identifiers in Title and Body magically vanish with no explanations. One cannot add "child: Text(wordPair.asPascalCase)" into the body without removing the const before Body, as this throws errors. There should be a comment in the code to tell the user to remove the const before Body.
Answers:
username_1: Hi @username_0
If I understood correctly, you mentioned this paragraph.

It is clear from code which lines should be added, which should be removed.
I close the issue
if you disagree please write in the comments
and I will reopen it.
Thank you
Status: Issue closed
username_0: Sorry, I should have been clearer: I'm not talking about the code in the repo, but the Codelab description. Here, the app is created using const identifiers:
https://codelabs.developers.google.com/codelabs/first-flutter-app-pt1/#2
One step later, the consts vanish without a comment, causing problems when they are not removed:
https://codelabs.developers.google.com/codelabs/first-flutter-app-pt1/#3
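For illustration, the change in question boils down to something like this; a sketch from memory of the codelab step, so identifiers may differ slightly, but the point is that `const` must be dropped once the child depends on a runtime value:

```dart
import 'package:english_words/english_words.dart';
import 'package:flutter/material.dart';

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    final wordPair = WordPair.random();
    return MaterialApp(
      title: 'Welcome to Flutter',
      home: Scaffold(
        appBar: AppBar(title: Text('Welcome to Flutter')),
        // Step 2 had: body: const Center(child: Text('Hello World')).
        // The `const` must be removed here, because
        // wordPair.asPascalCase is computed at runtime and cannot
        // live inside a const subtree.
        body: Center(
          child: Text(wordPair.asPascalCase),
        ),
      ),
    );
  }
}
```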
username_1: In part "4. Use an external package", the const identifiers in Title and Body magically vanish with no explanations. One cannot add "child: Text(wordPair.asPascalCase)" into the body without removing the const before Body, as this throws errors. There should be a comment in the code to tell the user to remove the const before Body.
username_1: Ah, ok, my screenshot was from [this documentation](https://flutter.dev/docs/get-started/codelab).
I was looking there (and the examples there were without `const` in the places mentioned).
But now I see what you mean.
Thank you
username_1: Closing as duplicate of https://github.com/flutter/flutter/issues/48404
The current one is older, but less descriptive
Status: Issue closed
|
gotev/android-upload-service | 833828488 | Title: setMaxRetries is not work
Question:
username_0: ```
MultipartUploadRequest(context, serverUrl = URL+"upload-image")
.setMethod("POST")
.addFileToUpload(
filePath = ImagePath,
parameterName = "Image"
)
.setMaxRetries(50)
.addParameter("incident", incident.IncidentNumber!!)
.startUpload()
```
On the server side I throw an exception. I expected the upload service to make 50 retry attempts, but that did not happen. Why?
Answers:
username_1: The retry mechanism triggers when there's a communication failure to/from the server due to connectivity problems (e.g. timeout, broken pipe, server unreachable, no network connection on the device).
If your server sends a 4xx or 5xx response, the retry mechanism will not be triggered, as the communication happened correctly, but something on your server was broken and you will receive the error response in onError.
Status: Issue closed
|
glasklart/hd | 109701973 | Title: VHS Camcorder
Question:
username_0: **App Name:** VHS Camcorder
**Bundle ID:** com.rarevision.vhs-camcorder
**iTunes ID:** <a target="_blank" href="http://getart.username_1.at?id=679454835">679454835</a>
**iTunes URL:** <a target="_blank" href="https://itunes.apple.com/us/app/vhs-camcorder/id679454835?mt=8&uo=4">https://itunes.apple.com/us/app/vhs-camcorder/id679454835?mt=8&uo=4</a>
**App Version:** 1.1.2
**Seller:** Rarevision LLC
**Developer:** <a target="_blank" href=https://itunes.apple.com/us/developer/rarevision/id464591028?uo=4>© Rarevision</a>
**Supported Devices:** iPad2Wifi, iPad23G, iPhone4S, iPadThirdGen, iPadThirdGen4G, iPhone5, iPodTouchFifthGen, iPadFourthGen, iPadFourthGen4G, iPadMini, iPadMini4G, iPhone5c, iPhone5s, iPhone6, iPhone6Plus, iPodTouchSixthGen
**Original Artwork:**
<img src="http://is4.mzstatic.com/image/thumb/Purple69/v4/df/d7/61/dfd761b7-b690-dfb1-4364-bed8727c5399/mzl.mbzheffd.png/0x0ss-85.jpg" width="150" height="150" />
**Accepted Artwork:**
\#\#\# THIS IS FOR GLASKLART MAINTAINERS DO NOT MODIFY THIS LINE OR WRITE BELOW IT. CONTRIBUTIONS AND COMMENTS SHOULD BE IN A SEPARATE COMMENT. \#\#\#
Answers:
username_1: 
https://cloud.githubusercontent.com/assets/2068130/12078503/f9c59db8-b214-11e5-85f4-989b7f88b286.png
--- ---
Source:
https://cloud.githubusercontent.com/assets/2068130/12078504/077ead32-b215-11e5-8f18-1f7159798624.png
Status: Issue closed
|
pyepics/pyepics | 416835757 | Title: No monitor for large PVs after clear_auto_monitor()
Question:
username_0: After creating a ```pv = epics.PV()``` with ```auto_monitor=True``` I get the value updates as expected (the value callback gets called). If I need to stop the callbacks (monitor) I call ```clear_auto_monitor()``` and later ```reconnect()``` to get the updates going again. All good so far.
The above procedure fails for me when large arrays (~500k points; ~1MB) are at stake.
The last step - reconnect() - does not seem to re-enable the value change updates to call my callback anymore. No other warnings or errors were noticed so far..
Any ideas why this manifests itself for larger arrays only, while it works for scalars and smaller arrays (i.e. ~900 points)?
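For reference, a minimal sketch of the procedure described above (the PV name is a placeholder; substitute a large waveform record to reproduce):

```python
import time
import epics

def on_value(pvname=None, value=None, **kw):
    print(pvname, "update, npts =", getattr(value, "size", 1))

pv = epics.PV("SIM:BigWaveform", auto_monitor=True, callback=on_value)

time.sleep(5)            # callbacks arrive while monitored
pv.clear_auto_monitor()  # stop the updates
time.sleep(5)            # quiet, as expected
pv.reconnect()           # scalars/small arrays resume here;
time.sleep(5)            # ~500k-point arrays never do (the bug)
```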
Answers:
username_1: @dchabot I agree with all of that -- add `auto_monitor` to both methods (defaults of `None`), and use that option with `reconnect()` as the right way to change the `auto_monitor` status of a PV.
username_0: @dchabot and @username_1 thank you for looking into this. Adding the ```auto_monitor``` argument seems flexible enough, if you ask me.
username_0: I made a crude attempt at the above fix on the pyepics code I have deployed from pip.
It seems to work as expected; my change in ```__on_connect()``` and ```reconnect()``` just comments out lines that change ```self.auto_monitor```, nothing else.
username_1: @username_0 OK, glad to hear that works. Would you be willing to make a Pull Request for that change?
username_0: Sure. Let me try to clean it up a bit.
Status: Issue closed
|
canada-ca/ore-ero | 286760365 | Title: NoSQL databases as part of development tool
Question:
username_0: Removing NoSQL databases altogether may limit development tool options at the GC.
NoSQL products like MongoDB are useful during agile sprints where lots of changes occur. RDBMSs are very static in nature and don't allow the flexibility to change, which can stall innovation. What's been suggested in the industry is that using NoSQL during agile sprints lets you focus on the application side and figure out your data model based on your sprints. Once you have a stable product, you're in a better position to design your SQL schema.
Answers:
username_1: @username_0 the Assessments were move under the [Open Source Advisory Board](https://github.com/canada-ca/OS-Advisory_Conseil-SO) repo. I will close the issue here but feel free to start a conversation on the other repo.
This repo will be used to host the Github pages for the Open Resource Exchange
Status: Issue closed
|
binzabinza/rs-disc | 540133585 | Title: Add database constraints
Question:
username_0: Make sure that:
- duplicate threads records are not created (unique url)
- duplicate raw_posts records are not created (unique thread_id, page_num, post_num)
- duplicate price_reports records are not created (unique thread_id, page_num, post_num)
- duplicate items records are not created (unique full_name??)
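For illustration, the constraints above could be expressed like this in SQL (constraint names are hypothetical; columns follow the list):

```sql
ALTER TABLE threads       ADD CONSTRAINT uq_threads_url   UNIQUE (url);
ALTER TABLE raw_posts     ADD CONSTRAINT uq_raw_posts     UNIQUE (thread_id, page_num, post_num);
ALTER TABLE price_reports ADD CONSTRAINT uq_price_reports UNIQUE (thread_id, page_num, post_num);
ALTER TABLE items         ADD CONSTRAINT uq_items_name    UNIQUE (full_name);
```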
Status: Issue closed |
emberjs/ember.js | 591078209 | Title: Ability to configure "usePods" for everything apart from co-located components
Question:
username_0: Hi folks 👋
There was a bit of a [discussion about this in the #topic-octane-migration](https://discordapp.com/channels/480462759797063690/608346628163633192/694189776437510145) channel yesterday and I wanted to open up an issue to describe the request and try to figure out a way forward.
Essentially with Octane we have co-located components (which are amazing 😍) but we still need to use the "classic" style for controllers, routes, templates etc.
A lot of people have had success using Pods while we have been waiting for Module Unification and other forms of co-location, and most of the projects that I work on have `"usePods": true` configured in their `.ember-cli` files because of this.
The issue that I have is that if you define `"usePods": true` in your `.ember-cli` config then generating a new component will use the pod structure instead of the new co-location structure 😢
What I would ideally like would be for some way to say "use pods but not for components". While discussing this in #topic-octane-migration on Discord some other people mentioned that it might be good to be able to define what you would like to use pods for, that way you could configure the app to use pods for Routes and Controllers, but tell it to leave Models in a non-pod structure.
I'm not sure what the best way to define this would be, i don't have any **good** proposals but my naive idea would be to add something like:
```json
{
"disableAnalytics": false,
"usePodsFor": [ "controllers", "routes" ]
}
```
Any thoughts?
Answers:
username_1: Seems pretty unlikely that we will change this in the short term. I think that you should probably just pass the flag to disable your global `usePods` when you generate a component.
username_0: @username_1 I'm happy to do the work if that's the blocker 😂 but if you mean that we probably don't **want** to do this then that's of course a different thing
I understand that the pods structure is "lightly deprecated" or at least discouraged at this point so I understand if we don't want to add more complexity to this situation 👍
username_2: We have exactly the same use case - for us, the classic layout for controllers/routes/route templates doesn't work at all, and we're quite happy with using pods for them, and co-location for components. I would also appreciate a way to set this more permanently, as it happens rather frequently that I do e.g.:
```bash
ember g controller my-controller
# Oops, this is classic...
ember d controller my-controller
ember g controller my-controller --pods
```
I've heard a few times that the recommended solution is kind of "do not use pods", but IMHO pods are vastly easier for us to work with than the classic layout, so that's not really a solution for us. |
aspnetboilerplate/aspnetboilerplate | 509572557 | Title: Independent, modular development with user interface and storage
Question:
username_0: I am using abp mvc5.x 4.9
I wanted to develop a separate plug-in with a user interface and storage, and then load the module (and its contained submodules) through the Plugins directory.
I have seen the [PluginDemo](https://github.com/aspnetboilerplate/aspnetboilerplate-samples/tree/master/PlugInDemo), but it is just a simple plug-in; there is no storage or user interface.
I have also seen [MultipleDbContextDemo](https://github.com/aspnetboilerplate/aspnetboilerplate-samples/tree/master/MultipleDbContextDemo): when I inject IRepository<TEntity, long> into an application service, how does it determine which DbContext to use?
Is my idea reasonable and is there a complete example?
Answers:
username_1: The ABP framework will automatically register the entity and the corresponding DbContext to the DI container:
username_0: Thank you very much.
Status: Issue closed
|
snyk/snyk-sbt-plugin | 490988527 | Title: The current README doesn't describe how to use it
Question:
username_0: How should I use this plugin to monitor my build.sbt file?
From my perspective, the current README doesn't describe how to use it. 🤔
Answers:
username_1: Hi @username_0,
this is the plugin used by https://github.com/snyk/snyk, which is the tool used for monitoring your project :-) There is no need (and also no way) to use this plugin directly.
username_0: In this case, readme should inform that this plugin shouldn't be used directly 😉
username_2: hey @username_0 , added a section in the README informing that this plugin shouldn't be used directly. If you think it's fine, you can close this ticket
weirongxu/coc-explorer | 517148864 | Title: [Feature Request] Open in split
Question:
username_0: Would be nice if there was a way to open in a normal "horizontal" split.
Answers:
username_1: I also have this need! Like the action defx#do_action('open', 'botright split') in defx.nvim.
username_2: I tested `defx#do_action('botright split')`, but it doesn't seem to work very well.
defx

I feel that split under the buffer is better, like this

Do you think so? @username_1 @username_0
username_0: I agree that the second picture is the better result.
username_3: Hi, maybe you're already implementing this, but this is what the current split looks like:

It's splitting the explorer pane 😅
username_2: Yes, I need to reimplement this, so I didn't close this issue yet.
Status: Issue closed
|
dmlc/treelite | 1170982703 | Title: Add support to release Linux aarch64 wheels
Question:
username_0: I’m trying to build Linux aarch64 wheels, but while building the `treelite-build.cpu` image, the following error arises:
```
runtime: failed to create new OS thread (have 2 already; errno=22)
fatal error: newosproc
```
The error seems to be due to qemu. Since Azure Pipelines uses qemu to build aarch64 wheels, we can use another CI, such as CircleCI, which has aarch64 build agents, to build the aarch64 wheels.
@username_1, please let me know your interest in releasing Linux aarch64 wheels. I can help with this.
Answers:
username_1: You can install Treelite using Conda: https://anaconda.org/conda-forge/treelite
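For reference, installation from that channel is the standard conda-forge one-liner:

```
conda install -c conda-forge treelite
```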
username_1: I don't think I'd have bandwidth to make aarch64 wheels for PyPI. Conda-forge supports aarch64 out of the box, so that will be our primary way for supporting aarch64
Status: Issue closed
|
vlio20/angular-datepicker | 261834425 | Title: Possible to create a Year Picker?
Question:
username_0: Thanks for a great picker component! I've been looking for one with different levels, and your Month Picker is perfect. However, I'd also need a Year Picker. Is that easy to extend?
Answers:
username_1: nope, you can use: `showMultipleYearsNavigation` & `multipleYearsNavigateBy`
Status: Issue closed
username_0: But, the point is to be able to display a picker which shows, say, a 3x4 matrix of years:
` < >`
`2017 2018 2019 2020`
`2021 2022 2023 2024`
`2025 2026 2027 2028`
Setting the options you're suggesting, we're still seeing a _month_ picker. I need a component where the user can click and select 2019, for example. Not some month in 2019, just the year.
username_1: @username_0, I understand your requirement, but I don't currently want to change the implementation.
Do you want to PR? if so, we can add an option to choose between the 2 implementations. |
dart-lang/sdk | 247152098 | Title: HTTP request/response clean-up on timeout
Question:
username_0: This might simply be a documentation issue. It is not clear what one needs to do (if anything) when a `dart:io` HTTP client request/response times out. For example, should some kind of `close()` method be called? On which objects? Can `HttpClient` be reused? Should the underlying socket connection be closed?
Sample code:
```dart
final client = new HttpClient();
HttpClientRequest request;
HttpClientResponse response;
try {
  request = await client.getUrl(uri).timeout(httpTimeout);
  response = await request.close().timeout(httpTimeout);
  // addAll: each stream event is a List<int> chunk, not a single byte.
  var data = await response
      .fold(<int>[], (data, bytes) => data..addAll(bytes))
      .timeout(httpTimeout);
} on TimeoutException {
  // What should be done with client, request and response objects?
}
``` |
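For what it's worth, a sketch of one cleanup approach, assuming the goal is simply to abort the in-flight request: `dart:io` documents `HttpClient.close(force: true)` as immediately closing any active connections, after which the client must not be reused.

```dart
import 'dart:async';
import 'dart:io';

Future<List<int>> fetchBytes(Uri uri, Duration httpTimeout) async {
  final client = new HttpClient();
  try {
    final request = await client.getUrl(uri).timeout(httpTimeout);
    final response = await request.close().timeout(httpTimeout);
    return await response
        .fold(<int>[], (data, bytes) => data..addAll(bytes))
        .timeout(httpTimeout);
  } on TimeoutException {
    // Force-close the client: this tears down the underlying sockets,
    // so the dangling request/response are abandoned along with them.
    // A force-closed client should not be reused; create a new one.
    client.close(force: true);
    rethrow;
  }
}
```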
Holzhaus/mixxx-gh-issue-migration | 873343259 | Title: Add to Auto DJ queue (replace) needs confirm dialog
Question:
username_0: The "Add to Auto DJ queue (replace)" function is dangerous because it there is no undo and the function appears adjacent to non-destructive operations like "Add to Auto DJ queue (bottom)".
A slip of the mouse could easily clobber the current Auto DJ queue...
There should be at least a confirm dialog and/or an undo action. |
usgs/groundmotion-processing | 793565951 | Title: Add Kleckner et al. clipping detection method
Question:
username_0: This should implement all of the individual clipping methods considered, as well as the ANN model. We should re-use (and potentially refactor) the generic aspects of the ANN code in the nn_quality_assurance module. Unit tests should reproduce the results reported in the paper.
Answers:
username_0: This was completed with recent merges.
Status: Issue closed
|
visit-dav/visit | 414865218 | Title: Using VISITPLUGINDIR w/ 2.6.x
Question:
username_0: <NAME> reports that he used to use this env var to point to DB plugins, but with new release of VisIt on LC, it doesn't seem to pick the plugins up any more.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1407
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Normal
Subject: Using VISITPLUGINDIR w/ 2.6.x
Assigned to: <NAME>
Category: -
Target version: 2.6.3
Author: <NAME>
Start: 04/03/2013
Due date:
% Done: 100%
Estimated time:
Created: 04/03/2013 06:36 pm
Updated: 05/01/2013 07:02 pm
Likelihood: 3 - Occasional
Severity: 2 - Minor Irritation
Found in version: 2.6.1
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
<NAME> reports that he used to use this env var to point to DB plugins, but with new release of VisIt on LC, it doesn't seem to pick the plugins up any more.
Comments:
I changed internallauncher to fix a glitch that prevented VISITPLUGINDIR from being used.
Status: Issue closed |
Atomox/mta-delays | 310072234 | Title: MTAD-052 - Reroute Lists All Stations, not parsed.
Question:
username_0: - TRACK MAINTENANCE [R] Some Bay Ridge-bound trains skip [Qs273-G21], [Mn613-R11] and [Mn8-R13] Rush Hour, 6 AM to 10 AM, Mon to Fri, Mar 19 - 23 Mar 26 - 30 Please allow additional travel time. After [Qs272-G20], Queens, some [R] trains run via the [F] stopping at [Qs221-B04], [Mn222-B06] and [Mn223-B08], resuming regular service at [Mn9-R14]. |
eclipsesource/papyrus-seqd | 318618525 | Title: [Properties View Edition:002]: Message properties
Question:
username_0: It SHALL be possible to edit the following attributes of a message in the Property View UML section:
- Name
- Label (Internationalization)
- Signature
- Arguments
It SHALL be possible to see (read-only):
- Message Sort
- MessageEnd Send
- MessageEnd Received
Status: Issue closed |
nginx/unit | 777338968 | Title: Import error when using pypy-django-psycopg2
Question:
username_0: Using
Ubuntu 18.04LTS Server - ARM64
unit (1.21.0-1~bionic).
Python3.6 Language pack
Virtual environment of Pypy3.6 (7.3.2) - python version 3.6.9
Django 3.1
psycopg2 2.8.6
PostgreSQL 12
Unit is able to launch and run the Django project perfectly when using the pypy virtual environment with sqlite3 as the DB. When using the psycopg2 connector for PostgreSQL with Django, it is unable to launch. When reconfiguring, it says:
```
{
    "error": "Failed to apply new configuration."
}
```
The error log shows:
```
2021/01/01 19:01:57 [alert] 4339#4339 Python failed to import module "new.wsgi"
Traceback (most recent call last):
  File "/home/ubuntu/django-env/lib/python3.6/site-packages/django/db/backends/postgresql/base.py", line 25, in <mo$
    import psycopg2 as Database
  File "/home/ubuntu/django-env/lib/python3.6/site-packages/psycopg2/__init__.py", line 51, in <module>
    from psycopg2._psycopg import (  # noqa
ModuleNotFoundError: No module named 'psycopg2._psycopg'
```
The file '_psycopg.pypy36-pp73-linux-gnu.so' exists in the library directory, and it works, since running 'python manage.py runserver' completes without any issues.
Answers:
username_1: Hello,
Could you please share complete example of your app and config.
username_0: unit_config.json
```json
{
    "listeners": {
        "127.0.0.1:8000": {
            "pass": "applications/project"
        }
    },
    "applications": {
        "project": {
            "processes": {
                "max": 2,
                "spare": 1
            },
            "threads": 3,
            "type": "python 3.6",
            "path": "/home/ubuntu/websites/example.com/",
            "home": "/home/ubuntu/django-env/",
            "module": "se.asgi",
            "protocol": "asgi",
            "environment": {
                "DJANGO_SETTINGS_MODULE": "se.settings"
            }
        }
    }
}
```
I have tried with wsgi, and with the Python 3.7 language pack plus pypy3.7 as well; it gives the same error. "home" points to the pypy3.6 virtualenv named django-env. It has Django and psycopg2 installed using pip. I also tried psycopg2-binary; it didn't work either.
project settings.py - database section
```python
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'project-db',
        'USER': 'project-user',
        'PASSWORD': '<PASSWORD>',
        'HOST': '/var/run/postgresql/',
        'PORT': '5432',
        'client_encoding': 'UTF8',
    }
}
```
I hope these configs suffice to diagnose the problem.
username_1: Is the `psycopg2` connector installed in the same virtual environment (`/home/ubuntu/django-env/`)?
username_0: Yes, all libraries were installed in the 'django-env' virtualenv. The Django test server was able to run everything perfectly when django-env was activated (in use).
username_1: The thing is, the django test server and the application run in different environments. To double-check, please issue the following command and share the output: `find /home/ubuntu/django-env -name psycopg2`.
username_0: output is
/home/ubuntu/django-env/site-packages/psycopg2
I also created a symbolic link of the folder 'site-packages' from /home/ubuntu/django-env to /home/ubuntu/django-env/lib/python3.6/. This was done because Unit searches for pip-installed libraries in lib/python3.x/site-packages/ and lib64/python3.x/site-packages/.
username_1: I would suggest you create the virtual environment with the standard Python 3.6 tools (`python3.6 -m venv django-venv`) because the Unit module is linked against libpython and will use the Python interpreter, not PyPy.
username_0: Unit's Python module works fine with pypy; I tried running it with sqlite3 instead of Postgres and it worked just fine. I think it's just one file in the psycopg2 lib that's creating problems: a .so file.
username_1: PyPy is binary-incompatible with Python. I've created venvs for both Python and PyPy and compared the imports of the _psycopg*.so files; they are completely different.
Again, instead of creating symlinks you need to properly initialise virtual environment for Python:
```
python3.6 -m venv /home/ubuntu/django-env
source /home/ubuntu/django-env/bin/activate
pip3.6 install django psycopg2
```
Otherwise you may face any issue. To check your venv run django server using python3.6 `python3.6 manage.py runserver`.
username_0: Thanks for the clarification.
Status: Issue closed
|
CartoDB/cartodb | 166617227 | Title: Torque doesn't work with shared datasets
Question:
username_0: STR
Disable builder in order to try with the Editor (I haven't been able to test this in the Builder because of a bug but I think that the problem is exactly the same).
1. Go to Dashboard -> Datasets -> Shared with you.
2. Select a private one and click "Create map".
3. Open the map in an incognito window. It works (this helps to see that problem is at torque).
4. Set Torque style in the wizard. There's a GET to the tiler that fails with this error:
```
{type: "unknown", message: "TorqueRenderer: permission denied for relation twitter_t3chfest_reduced",…}
```
That request is done to `https://TABLE_OWNER_USERNAME.carto.com/api/v1/map?config=...`, which I think is wrong, as it should be done to the subdomain of the owner of the visualization.
5. If you open the embed the request to the tiler fails with `"Template 'tpl_9b364c98_4e90_11e6_937a_0e3ebc282e83' of user 'TABLE_OWNER_USERNAME' not found"`, because of the same reason.
6. If you reload the editor the Torque animation works, as the request is now done with the right user subdomain (the owner of the visualization). Nevertheless the embed seems broken again.
It looks like Torque is creating the url with the username that it finds at `layer.options['user_name']`. That's wrong, as that user is not the visualization owner but the `layer.options['table_name']` owner.
This has appeared while testing the fix for #8974.
cc @xavijam @alonsogarciapablo @fdansv @username_1
Answers:
username_1: could you share the viz.json?
username_1: I did the same with the builder and the viz.json is not being generated correctly and indeed we are using `layer.options.user_name` instead of `datasource.username` to generate the url
username_0: 1. Vizjson [v2](https://team.carto.com/u/username_0/api/v2/viz/e3778c76-4e98-11e6-a95f-0ecd1babdde5/viz.json) and [v3](https://team.carto.com/u/username_0/api/v3/viz/e3778c76-4e98-11e6-a95f-0ecd1babdde5/viz.json). What's wrong with them?
2. Using `datasource.username` would fix it with v3, but would keep v2 broken. Is that good enough now that we're moving forward?
username_1: yep, v3 would be fixed using datasource.username; for v2 I think we might have a problem, because if we change user_name to the real one it could break other things
username_1: I checked the v3 viz.json with cartodb.js v4 and it works as expected (so does the embed https://team.carto.com/u/username_0/builder/e3778c76-4e98-11e6-a95f-0ecd1babdde5/embed)
For v2 it needs to be a fix done server side
username_0: @username_1 the embed doesn't work for me, it's "frozen" at the beginning. In spite of being a 200, the named map jsonp response contains an error: `"Template 'tpl_e3778c76_4e98_11e6_a95f_0ecd1babdde5' of user 'xavijam' not found"`.
username_1: - that jsonp call is fine, it's a legacy call we should remove ,the good one is the named map one
- timeslider not working is another issue @fdansv
If you check the torque tiles they are fine
username_0: @username_1 I've tried adding the `datasource` at vizjson v2 and Torque won't use it. What's the server side fix that you suggested?
username_1: v2 does not use datasource, that's a new thing in v3.
In v2 we'd need to fix layer `user_name` so it's the table owner and not the logged user
username_0: No, the current problem is that layer `user_name` _is_ the table owner (it's related to `table_name` option). And the opposite, making `user_name` the visualization owner, won't work because it's currently needed to compose the query.
username_1: so it's an issue we can't fix just tweaking user_name. The only way I know is replacing backend side the {{user}} in maps_api_template
username_1: the code: https://github.com/CartoDB/torque/blob/master/lib/torque/provider/windshaft.js#L361
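To illustrate the distinction being discussed (a sketch, not the actual torque code; field names follow the vizjson):

```js
// Resolve the Maps API user for torque tile URLs. The v3 vizjson
// carries a datasource whose user_name is the visualization owner;
// layer.options.user_name is the table owner, which yields the wrong
// subdomain for maps built on shared datasets.
function torqueUser(vizjson, layer) {
  if (vizjson.datasource && vizjson.datasource.user_name) {
    return vizjson.datasource.user_name; // visualization owner (v3)
  }
  return layer.options.user_name; // table owner (v2 fallback, buggy here)
}
```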
username_0: Ok, the last tests of #8991 integrating all fixes revealed a last issue in staging: Torque still generates the wrong CDN request. [Example](https://username_0-ded-01.carto-staging.com/u/username_0-ded-01-1/viz/a667cde8-533c-11e6-a354-040120e0c101/map). It requests [a url based on the table owner](https://cdb-staging-1.global.ssl.fastly.net/username_0-ded-01-admin/api/v1/map/username_0-ded-01-1@03b6ae62@975860cc57a80183e1c8e4924a51839a:1469014073378/2/4/7/5.json.torque) instead of [the visualization owner](https://cdb-staging-1.global.ssl.fastly.net/username_0-ded-01-1/api/v1/map/username_0-ded-01-1@03b6ae62@975860cc57a80183e1c8e4924a51839a:1469014073378/2/4/7/5.json.torque).
It's probably because of [these lines](https://github.com/CartoDB/torque/blob/31a81e760b249e76b159c9966c219ae570f9acb3/lib/torque/provider/windshaft.js#L364-L389). The [backend patch that works without CDN](https://github.com/CartoDB/cartodb/pull/9112/commits/03d55781a20ca21a8feabbc8e494191d2df30a59#diff-5b802705d1b0026242a3fa27f9659720R144) can't be applied here.
Status: Issue closed
|
wolfogre/blog-gitment | 380982284 | Title: I'm changing jobs
Question:
username_0: https://blog.username_0.com/posts/from-eastmoney-to-qiniu/
Answers:
username_1: Suddenly noticed a stranger following me; turns out it's 建新 (Jianxin). We've never met, but you seem full of energy. Welcome to Qiniu~
username_1: Correction: it's 建鑫, buddy
username_2: Why did you leave so soon? Did you go to Toutiao? Was it about salary?
username_0: @username_2 Qiniu treated me well; it was just some staffing adjustments, and we parted on good terms.
username_3: I found your blog through 《从 Gogs vs Gitea 看中外文化差异》 ("Chinese and Western Cultural Differences as Seen through Gogs vs Gitea"). What a coincidence; I've been at Eastmoney for exactly a year and a half.
bpowers/ortools | 589091072 | Title: How to minimize cost per km along with total distance?
Question:
username_0: I have the cost per km for each vehicle. My objective is to minimize the total distance along with the cost by choosing optimal vehicles. I am using Google OR-Tools, where the objective function is to minimize the total distance. Is there any way to incorporate the cost-per-km information and minimize it along with the total distance using Google OR-Tools in Python?
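One way to express this with the OR-Tools routing API is a per-vehicle arc cost evaluator; a sketch, assuming a distance matrix in km and a `cost_per_km` list (function and variable names are illustrative, and costs are scaled to integers because the solver requires integer arc costs):

```python
from ortools.constraint_solver import pywrapcp

def build_model(distance_matrix, cost_per_km, depot=0):
    """distance_matrix[i][j] in km; cost_per_km[v] = per-km cost of vehicle v."""
    num_vehicles = len(cost_per_km)
    manager = pywrapcp.RoutingIndexManager(len(distance_matrix), num_vehicles, depot)
    routing = pywrapcp.RoutingModel(manager)

    for v in range(num_vehicles):
        def cost(from_index, to_index, v=v):  # bind v per vehicle
            i = manager.IndexToNode(from_index)
            j = manager.IndexToNode(to_index)
            # Scale km * cost to an integer (here: hundredths of a unit).
            return int(round(distance_matrix[i][j] * cost_per_km[v] * 100))
        cb = routing.RegisterTransitCallback(cost)
        routing.SetArcCostEvaluatorOfVehicle(cb, v)

    return manager, routing
```

Minimizing the resulting objective then trades distance against vehicle choice, since each arc is priced by the vehicle that traverses it.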
google/flutter-desktop-embedding | 602693299 | Title: any plans for supporting simple audio play plugin for windows/linux? (mac os works)
Question:
username_0: Simple audio play plugin request.
Android/iOS/macOS work fine, but I cannot find any way to play sound on Windows or Linux.
Status: Issue closed
Answers:
username_1: No; see [the readme](https://github.com/google/flutter-desktop-embedding/tree/master/plugins#desktop-plugins) for the explanation of the types of plugins that would (temporarily) be developed in this repository. An audio player plugin isn't part of either category. |
aws-samples/aws-workshop-for-kubernetes | 284289325 | Title: Failed with Vault secret pod.
Question:
username_0: - I have tried this example with minikube and also with the cluster created using the kubeadm but it gives same error.
When i deploy a Pod using secrets from Vault. It shows error.
```
$ kubectl get pods --show-all
NAME READY STATUS RESTARTS AGE
vault-kubernetes 0/1 Error 0 9s
```
The logs look like.
```
$ kubectl logs vault-kubernetes
serviceToken: <KEY>
Dec 23, 2017 6:14:26 AM okhttp3.internal.platform.Platform log
INFO: --> POST http://172.16.17.32:8200/v1/auth/kubernetes/login
Dec 23, 2017 6:14:26 AM okhttp3.internal.platform.Platform log
INFO: Content-Type: application/json; charset=utf-8
Dec 23, 2017 6:14:26 AM okhttp3.internal.platform.Platform log
INFO: Content-Length: 895
Dec 23, 2017 6:14:26 AM okhttp3.internal.platform.Platform log
INFO:
Dec 23, 2017 6:14:26 AM okhttp3.internal.platform.Platform log
INFO: {
"role": "demo",
"jwt": "<KEY>"
}
Dec 23, 2017 6:14:26 AM okhttp3.internal.platform.Platform log
INFO: --> END POST (895-byte body)
Dec 23, 2017 6:14:26 AM okhttp3.internal.platform.Platform log
INFO: <-- 500 Internal Server Error http://172.16.17.32:8200/v1/auth/kubernetes/login (111ms)
Dec 23, 2017 6:14:26 AM okhttp3.internal.platform.Platform log
INFO: Cache-Control: no-store
Dec 23, 2017 6:14:26 AM okhttp3.internal.platform.Platform log
INFO: Content-Type: application/json
Dec 23, 2017 6:14:26 AM okhttp3.internal.platform.Platform log
INFO: Date: Sat, 23 Dec 2017 06:14:26 GMT
Dec 23, 2017 6:14:26 AM okhttp3.internal.platform.Platform log
INFO: Content-Length: 401
Dec 23, 2017 6:14:26 AM okhttp3.internal.platform.Platform log
INFO:
Dec 23, 2017 6:14:26 AM okhttp3.internal.platform.Platform log
INFO: {"errors":["{\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"tokenreviews.authentication.k8s.io is forbidden: User \\\"system:serviceaccount:default:vault-auth\\\" cannot create tokenreviews.authentication.k8s.io at the cluster scope\",\"reason\":\"Forbidden\",\"details\":{\"group\":\"authentication.k8s.io\",\"kind\":\"tokenreviews\"},\"code\":403}"]}
Dec 23, 2017 6:14:26 AM okhttp3.internal.platform.Platform log
INFO: <-- END HTTP (401-byte body)
Exception in thread "main" org.json.JSONException: JSONObject["auth"] not found.
at org.json.JSONObject.get(JSONObject.java:520)
at org.json.JSONObject.getJSONObject(JSONObject.java:732)
at org.examples.java.App.getClientToken(App.java:85)
at org.examples.java.App.main(App.java:26)
```
Answers:
username_0: I had to update some RBAC rules, and then it worked.
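For future reference, the RBAC update that typically resolves this particular error grants the token-review permission to the service account named in the log; a sketch (names and namespace are taken from the error message, adjust as needed):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vault-auth-tokenreview-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: vault-auth
  namespace: default
```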
Status: Issue closed
|
NavyBye/ft_transcendence | 907217655 | Title: A few game issues
Question:
username_0: 
1. /api/users/:user_id/game doesn't work. (When not in a game it just returns an empty JSON, but while in a game the error above appears.)
2. The status is not updated after a game ends.
@username_1 @username_1 @username_1
Answers:
username_1: Item 1 was merged in #206; the rest will be handled in #245.
Status: Issue closed
|
spring-projects/spring-boot | 848693876 | Title: Spring Boot 2.4.4 maven test incompatible
Question:
username_0: Same `pom.xml`, just upgrade Spring Boot parent from 2.3 to 2.4 but:
Spring Boot 2.4.4: `mvn clean test` doesn't run any test (build success, Tests run: 0)
Spring Boot 2.3.8.RELEASE: `mvn clean test` run all tests
Maven: 3.6.3
JUnit: 4.13.2
Mockito: 3.3.3
Able to run all test cases normally within IntelliJ.
Status: Issue closed
Answers:
username_1: This is covered in the release notes: https://github.com/spring-projects/spring-boot/wiki/Spring-Boot-2.4-Release-Notes#junit-5s-vintage-engine-removed-from-spring-boot-starter-test. |
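Concretely, 2.4 removed JUnit 5's vintage engine from `spring-boot-starter-test`, so JUnit 4 tests are silently skipped until it is added back; per the release notes, something like:

```xml
<dependency>
    <groupId>org.junit.vintage</groupId>
    <artifactId>junit-vintage-engine</artifactId>
    <scope>test</scope>
    <exclusions>
        <exclusion>
            <groupId>org.hamcrest</groupId>
            <artifactId>hamcrest-core</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```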