CrunchyData/postgres-operator | 642844957 | Title: Backup/Restore on Openshift 4
Question:
username_0: We're using the Crunchy operator 4.3.2 on OpenShift 4.4. Trying a backup/restore to a local disk, we run into a permission issue (ERROR: [088]: unable to set ownership ...).
Steps to reproduce the behavior:
1. Configure PrimaryStorage as nfsstorage in pgo.yaml
....
PrimaryStorage: nfsstorage
WALStorage:
BackupStorage: nfsstorage
ReplicaStorage: nfsstorage
BackrestStorage: nfsstorage
Storage:
nfsstorage:
AccessMode: ReadWriteMany
Size: 1G
StorageType: create
StorageClass: ontap-gold
SupplementalGroups: 65534
....
2. Create configMap pgo-config
$ oc -n pgo create configmap pgo-config --from-file=./conf/postgres-operator/
3. Create a postgres-ha-cluster
$ pgo -n pgo create cluster bkcluster
4. Backup
$ pgo -n pgo backup bkcluster
5. Restore (fails)
$ pgo -n pgo restore bkcluster --backup-opts="--log-level-console=debug"
....
$ oc logs restore-bkcluster-ohsm-7vjjz
+ trap trap_sigterm SIGINT SIGTERM
+ CONFIG=/sshd
+ mkdir //.ssh/
mkdir: cannot create directory '//.ssh/': File exists
+ cp /sshd/config //.ssh/
+ cp /sshd/id_ed25519 /tmp
+ chmod 400 /tmp/id_ed25519 //.ssh/config
+ mkdir /pgdata/bkcluster-ikra
+ /usr/sbin/sshd -D -f /sshd/sshd_config
sleep 5 secs to let sshd come up before running pgbackrest command
+ echo 'sleep 5 secs to let sshd come up before running pgbackrest command'
+ sleep 5
+ '[' '' = '' ']'
+ echo 'PITR_TARGET is empty'
PITR_TARGET is empty
+ pgbackrest restore --log-level-console=debug
2020-06-22 06:49:16.936 P00 INFO: restore command begin 2.25: --log-level-console=debug --log-path=/tmp --pg1-path=/pgdata/bkcluster-ikra --repo1-host=bkcluster-backrest-shared-repo --repo1-path=/backrestrepo/bkcluster-backrest-shared-repo --repo1-type=posix --stanza=db
....
2020-06-22 06:49:30.737 P00 DEBUG: command/restore/restore::restoreRecoveryWriteAutoConf: (pgVersion: 120000, restoreLabel: {"2020-06-22 06:49:30"})
2020-06-22 06:49:30.737 P00 DEBUG: storage/storage::storageNewRead: (this: {type: posix, path: {"/pgdata/bkcluster-ikra"}, write: false}, fileExp: {"postgresql.auto.conf"}, param.ignoreMissing: true, param.compressible: false, param.limit: null)
2020-06-22 06:49:30.738 P00 DEBUG: storage/storage::storageNewRead: => {type: posix, name: {"/pgdata/bkcluster-ikra/postgresql.auto.conf"}, ignoreMissing: true}
2020-06-22 06:49:30.738 P00 DEBUG: storage/storage::storageGet: (file: {type: posix, name: {"/pgdata/bkcluster-ikra/postgresql.auto.conf"}, ignoreMissing: true}, param.exactSize: 0)
2020-06-22 06:49:30.739 P00 DEBUG: storage/storage::storageGet: => {used: 88, size: 88, limit: <off>}
2020-06-22 06:49:30.740 P00 DEBUG: command/restore/restore::restoreRecoveryConf: (pgVersion: 120000, restoreLabel: {"2020-06-22 06:49:30"})
2020-06-22 06:49:30.740 P00 DEBUG: command/restore/restore::restoreRecoveryOption: (pgVersion: 120000)
2020-06-22 06:49:30.740 P00 DEBUG: command/restore/restore::restoreRecoveryOption: => {KeyValue}
2020-06-22 06:49:30.740 P00 DEBUG: command/restore/restore::restoreRecoveryConf: => {"# Recovery settings generated by pgBackRest restore on 2020-06-22 06:49:30
[Truncated]
2020-06-22 06:49:31.044 P00 DEBUG: common/exit::exitSafe: => 88
2020-06-22 06:49:31.044 P00 DEBUG: main::main: => 88
The restore job then stopped and the postgres cluster stayed down.
**Expected behavior**
Restore without permission issues :-)
**Please tell us about your environment:**
Crunchy Operator 4.3.2 (installed via the GUI from Operator Hub)
Openshift 4.4
RHEL 8
NFS
CCPImageTag: ubi7-12.3-4.3.2
**Additional context**
The problem could be related to 2 pgbackrest issues:
[https://github.com/pgbackrest/pgbackrest/issues/727]
[https://github.com/pgbackrest/pgbackrest/issues/624]
Answers:
username_1: In OpenShift, it may be preferable to set `DisableFSGroup` to `true` based on your overall environment settings.
username_0: hi @username_1
I set the property DisableFSGroup: true in the pgo.yaml:
...
Cluster:
CCPImagePrefix: registry.connect.redhat.com/crunchydata
Metrics: false
Badger: false
CCPImageTag: ubi7-12.3-4.3.2
Port: 5432
PGBadgerPort: 10000
ExporterPort: 9187
User: "pgadmin"
Database: "postgres"
PasswordAgeDays: 0
PasswordLength: 24
Replicas: 0
ArchiveMode: false
ServiceType: ClusterIP
Backrest: true
BackrestPort: 2022
BackrestS3Bucket: "bucket01"
BackrestS3Endpoint: "s3.vkbads.de:8082"
BackrestS3Region:
DisableAutofail: false
PodAntiAffinity: preferred
PodAntiAffinityPgBackRest: ""
PodAntiAffinityPgBouncer: ""
SyncReplication: false
DefaultInstanceMemory: "128Mi"
DefaultBackrestMemory:
DefaultPgBouncerMemory:
**DisableFSGroup: true**
PrimaryStorage: nfsstorage
WALStorage:
BackupStorage: nfsstorage
ReplicaStorage: nfsstorage
BackrestStorage: nfsstorage
Storage:
nfsstorage:
AccessMode: ReadWriteMany
Size: 1G
StorageType: create
StorageClass: ontap-gold
SupplementalGroups: 65534
Pgo:
Audit: false
PGOImagePrefix: registry.connect.redhat.com/crunchydata
PGOImageTag: ubi7-12.3-4.3.2
username_1: Did the initial backup successfully complete?
username_0: Yes, the initial (full) backup completed successfully, and also a second one (incremental).
$ pgo show backup bkcluster -n pgo
cluster: bkcluster
storage type: local
stanza: db
status: ok
cipher: none
db (current)
wal archive min/max (12-1)
full backup: 20200616-141127F
timestamp start/stop: 2020-06-16 16:11:27 +0200 CEST / 2020-06-16 16:11:38 +0200 CEST
wal start/stop: 000000010000000000000002 / 000000010000000000000002
database size: 23.4MiB, backup size: 23.4MiB
repository size: 2.8MiB, repository backup size: 2.8MiB
backup reference list:
incr backup: 20200616-141127F_20200617-071953I
timestamp start/stop: 2020-06-17 09:19:53 +0200 CEST / 2020-06-17 09:19:55 +0200 CEST
wal start/stop: 000000010000000000000007 / 000000010000000000000007
database size: 23.4MiB, backup size: 120.3KiB
repository size: 2.8MiB, repository backup size: 13.5KiB
backup reference list: 20200616-141127F
username_1: Can you provide an output of the restore job yaml?
```shell
oc -n pgo get jobs restore-bkcluster-ohsm-7vjjz -o yaml
```
username_0: Unfortunately I deleted that postgres cluster. But I reproduced this with a new one - the error was the same:
$ oc get job restore-bkcluster-ohsm -o yaml
apiVersion: batch/v1
kind: Job
metadata:
creationTimestamp: "2020-06-22T06:48:26Z"
labels:
backrest-restore-to-pvc: bkcluster-ikra
pg-cluster: bkcluster
pgo-backrest-restore: "true"
vendor: crunchydata
workflowid: aba71686-648e-4561-a68d-3f70ebad3407
name: restore-bkcluster-ohsm
namespace: pgo
resourceVersion: "198624624"
selfLink: /apis/batch/v1/namespaces/pgo/jobs/restore-bkcluster-ohsm
uid: e5f2db04-f7fc-4efc-adb8-d9055bf8fa2b
spec:
backoffLimit: 6
completions: 1
parallelism: 1
selector:
matchLabels:
controller-uid: e5f2db04-f7fc-4efc-adb8-d9055bf8fa2b
template:
metadata:
creationTimestamp: null
labels:
backrest-restore-to-pvc: bkcluster-ikra
controller-uid: e5f2db04-f7fc-4efc-adb8-d9055bf8fa2b
job-name: restore-bkcluster-ohsm
pg-cluster: bkcluster
pgo-backrest-restore: "true"
service-name: bkcluster
vendor: crunchydata
name: restore-bkcluster-ohsm
spec:
affinity: {}
containers:
- env:
- name: COMMAND_OPTS
value: --log-level-console=debug
- name: PITR_TARGET
- name: PGBACKREST_STANZA
value: db
- name: PGBACKREST_DB_PATH
value: /pgdata/bkcluster-ikra
- name: PGBACKREST_REPO1_PATH
value: /backrestrepo/bkcluster-backrest-shared-repo
- name: PGBACKREST_REPO1_HOST
value: bkcluster-backrest-shared-repo
- name: PGBACKREST_REPO_TYPE
value: posix
- name: PGBACKREST_LOG_PATH
value: /tmp
- name: NAMESPACE
valueFrom:
fieldRef:
[Truncated]
serviceAccountName: pgo-backrest
terminationGracePeriodSeconds: 30
volumes:
- name: pgdata
persistentVolumeClaim:
claimName: bkcluster-ikra
- name: sshd
secret:
defaultMode: 511
secretName: bkcluster-backrest-repo-config
status:
conditions:
- lastProbeTime: "2020-06-22T06:55:47Z"
lastTransitionTime: "2020-06-22T06:55:47Z"
message: Job has reached the specified backoff limit
reason: BackoffLimitExceeded
status: "True"
type: Failed
failed: 5
startTime: "2020-06-22T06:48:26Z"
username_1: I'm not seeing anything out of the ordinary. I would advise checking your [Security Context Constraints](https://docs.openshift.com/container-platform/4.4/authentication/managing-security-context-constraints.html) and what ownership group they are assigning to the Pod. Operator 4.3.2 has been enabled to use restricted mode out of the box, which tends to be the default in OpenShift environments.
There is a bit of context about this in #1535, but I think that would be a place to start investigating in your environment.
username_2: Did you figure this out?
I am encountering the same issue in Openshift 3.11, also with NFS....
username_0: We figured it out: we used a wrong root squashing setting on the NetApp, which caused the pgBackRest backup to create all files with gid 99 (nobody);
as a result the restore tried a chown. After changing the root_squash group to root the error disappeared.
Status: Issue closed
username_2: @username_0 Thx for the update! |
PixelogicDev/HypeVision | 339811832 | Title: Create Python test to change grayscale of image
Question:
username_0: Figure out how to change image gray scale (white background, dark text)
Answers:
username_1:
```python
from PIL import Image

originalImage = Image.open("spongebob.jpg")  # load the original image
originalImage.save("s1.jpg")
newImage = originalImage.convert(mode="L")  # converts to grayscale
newImage.save("s2.jpg")  # want alpha? use mode "LA" instead of "L"
```
username_1: ^^ converts an image to gray scale using the Pillow library (was this what you needed?) |
naotsugukaneko/share_seller_app | 959859655 | Title: Implement the navbar
Question:
username_0: - Use Bootstrap to style the navbar
- Edit the already-created partial template `app/views/layouts/_header.html.erb`
- Remove the previously created "sign-up / logout, etc." links
- Enable the navbar links
- Links when logged in: "Post", "Edit account", "Log out"
- Links when logged out: "Log in", "Sign up", "Guest login"
- The guest-login link is a separate task
- Pin the navbar to the top using the fixed-top class
- Also adjust the styles so that content is not hidden behind the navbar<issue_closed>
Status: Issue closed |
atlassian/react-beautiful-dnd | 613984734 | Title: Support fixed position items
Question:
username_0: Please add possibility to have elements that will always stay at position X in given list.
No matter what user drags and drops these items are always keeping their positions.
example list before manipulation:
1. item 1
2. item 2
3. FIXED ITEM
4. item 4
action: move item 4 to position 0
result:
1. Item 4
2. Item 1
3. FIXED ITEM
4. item 2
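The rule in the example can be sketched as a small reorder helper (Python used purely as pseudocode here; `move_with_fixed` and its index translation are hypothetical, not part of react-beautiful-dnd's API):

```python
def move_with_fixed(items, fixed, src, dst):
    """Move items[src] to position dst, keeping the indices in `fixed` pinned."""
    # pull the movable items out, preserving their relative order
    movable = [x for i, x in enumerate(items) if i not in fixed]

    # translate a full-list index into a position within the movable sequence
    def to_movable(i):
        return i - sum(1 for f in fixed if f < i)

    item = movable.pop(to_movable(src))
    movable.insert(to_movable(dst), item)

    # rebuild the list: pinned items go back into their original slots
    rest = iter(movable)
    return [items[i] if i in fixed else next(rest) for i in range(len(items))]
```

With the list above, `move_with_fixed(["item 1", "item 2", "FIXED ITEM", "item 4"], {2}, 3, 0)` yields `["item 4", "item 1", "FIXED ITEM", "item 2"]`, matching the expected result.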
Answers:
username_1: I would definitely love to see this.
username_2: This is a wonderful idea. I would use this for sure. A great use case for an inline element that functions like a button, for example an "add object" button. In that case it would remain fixed as the last element. |
mockk/mockk | 1028349996 | Title: JvmHinter does not work for generic types on OpenJ9-JVM
Question:
username_0: - [x] I am running the latest version
- [x] I checked the documentation and found no answer
- [x] I checked to make sure that this issue has not already been filed
### Expected Behavior
Mocks of generic return types share the same behavior across JVM implementations
### Current Behavior
OpenJ9 causes MockK to throw a ClassCastException when mocking a function with a generic return type.
### Failure Information (for bugs)
The Auto-Hinter seems not to work in certain cases on the OpenJ9 JVM implementation - on HotSpot it works fine.
#### Steps to Reproduce
I already have an open issue here: https://github.com/mockk/mockk/issues/717, so I extended the repo with the code to reproduce (it can be found at https://github.com/Staffbase/mockK-bug in the [`GenericHinterTest`](https://github.com/Staffbase/mockK-bug/blob/main/src/test/java/com/staffbase/bugs/GenericHinterTest.kt)) - it will fail on OpenJ9.
#### Context
Please provide any relevant information about your setup. This is important in case the issue is not reproducible except for under certain conditions.
* MockK version: 1.12.0
* OS: Dockers `ubuntu-latest`, Arch Linux x86_64 (Kernel: 5.14.8-arch1-1), macOS
* Kotlin version: 1.5.31
* JDK version: openjdk 16.0.1 - OpenJ9 0.26.0 (adoptopenjdk/openjdk16-openj9:alpine-slim)
* JUnit version: 5.8.1
* Type of test: Unit-Test
#### Failure Logs
```
Oct 17, 2021 4:46:17 PM io.mockk.impl.log.JULLogger warn
WARNING: Non instrumentable classes(skipped): class java.lang.Object
```
#### Stack trace
```
io.mockk.MockKException: Class cast exception happened.
Probably type information was erased.
In this case use `hint` before call to specify exact return type of a method.
at app//io.mockk.impl.InternalPlatform.prettifyRecordingException(InternalPlatform.kt:73)
at app//io.mockk.impl.eval.RecordedBlockEvaluator.record(RecordedBlockEvaluator.kt:66)
at app//io.mockk.impl.eval.EveryBlockEvaluator.every(EveryBlockEvaluator.kt:30)
at app//io.mockk.MockKDsl.internalEvery(API.kt:92)
at app//io.mockk.MockKKt.every(MockK.kt:98)
at app//com.staffbase.bugs.GenericHinterTest.auto hints do not work(GenericHinterTest.kt:19)
at [email protected]/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at [email protected]/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:78)
at [email protected]/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at [email protected]/java.lang.reflect.Method.invoke(Method.java:567)
at app//org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
at app//org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
at app//org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
at app//org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
[Truncated]
@Test
fun `class typing the provider works as well`() {
val expected = "Hello, World!"
class StringProvider : Provider<String> {
override fun get() = "something"
}
val mockedProvider = mockk<StringProvider>()
every {
mockedProvider.get()
} returns expected
Assertions.assertEquals(expected, mockedProvider.get())
}
}
// -----------------------[ YOUR CODE ENDS HERE ] -----------------------
``` |
nextflow-io/nextflow | 319885702 | Title: K8s executor does not honour env variables defined in the NF config file
Status: Issue closed
Question:
username_0: Env variables defined in the nextflow config files are not injected into the pod/job execution environment.
Answers:
username_0: Rolling back this change because adding the task environment can break the pod environment (for example when a custom `PATH` variable is specified).
The task environment is correctly managed by the task bash wrapper script already.
Status: Issue closed
|
i-Gnomo/es6-demos | 276889829 | Title: Video cannot scale responsively when embedding Tencent Video in an iframe on mobile
Question:
username_0: `<iframe src="https://v.qq.com/iframe/preview.html?vid=y0174f57qev"></iframe>`样式异常
`<iframe src="http://v.qq.com/iframe/player.html?vid=y0174f57qev"></iframe>`正常显示
第一种引入视频路径方式,会使video标签播放视频时按钮样式异常,但因为引入iframe跨域,无法修改control buttons的样式,改为第二种引入路径就可正常显示 |
webmin/webmin | 493409636 | Title: Guru help needed: "disk quota exceeded". System files to delete on linux?
Question:
username_0: Trying to access Webmin (Virtualmin) through HTTPS, which runs on a server with a 100GB disk quota, running Debian 8 "Jessie".
Browser is showing this error:
"Error
Failed to write to /tmp/.webmin/.theme_YjZjZmJlZDAzMWZi_webmin_goto_root when closing : Disk quota exceeded"
Is there a safe, smart, quick and easy way of deleting a few GB of system temp files, ancient log files, basically do a "disk cleanup", without destroying webmin's environment to the point webmin would break, and without touching user files in `/home`, so that webmin can run, and allow webmin to update itself, and linux enough disk space to update its packages....?!
Answers:
username_1: Well, anything under `/tmp` is fair game for starters..
username_2: I would also check `/var/log` as there might be lots of data, especially in case of log overflows.
username_0: After doing `rm -f /var/log/*`, there are errors while doing `apt upgrade`:
```
apache2_reload: (2)No such file or directory: AH02291: Cannot access directory '/var/log/virtualmin/' for error log of vhost defined at /etc/apache2/sites-enabled/mydomain1.com.conf:1
apache2_reload: (2)No such file or directory: AH02291: Cannot access directory '/var/log/virtualmin/' for error log of vhost defined at /etc/apache2/sites-enabled/mydomain2.com.conf:48
apache2_reload: (2)No such file or directory: AH02291: Cannot access directory '/var/log/virtualmin/' for error log of vhost defined at /etc/apache2/sites-enabled/mydomain3.com.conf:1
apache2_reload: (2)No such file or directory: AH02291: Cannot access directory '/var/log/virtualmin/' for error log of vhost defined at /etc/apache2/sites-enabled/mydomain4.com.conf:48
apache2_reload: (2)No such file or directory: AH02291: Cannot access directory '/var/log/virtualmin/' for error log of vhost defined at /etc/apache2/sites-enabled/mydomain5.com.conf:1
```
username_1: Yeah I wouldn't recommend deleting all of /var/log - but if you do, you'd need to re-create `/var/log/virtualmin` and restart Apache.
username_0: Had to run the following two commands, because neither `apache2` (as configured by `virtualmin`) nor `nginx` is smart enough to dynamically re-create log directories that were deleted and/or don't exist yet.
```
mkdir -p /var/log/virtualmin
mkdir -p /var/log/nginx
service nginx restart
service apache2 restart
``` |
sigalor/whatsapp-web-reveng | 486940978 | Title: WhatsApp updated?
Question:
username_0: Hello
Today we are getting the below error when trying to send a message
VM31:440 Uncaught ReferenceError: _module is not defined
at init (<anonymous>:440:491)
at <anonymous>:440:858
Answers:
username_1: Hi,
Here too.
Some news?
username_1: This is still working:
```
function _requireById(id) {
return webpackJsonp([], null, [id]);
}
var Store = {};
Store = _requireById("bhggeigghg").default;
``` |
fossasia/pslab-android | 389361082 | Title: Unexpected Crash
Question:
username_0: **Actual Behaviour**
In multimeter, an unexpected crash occurs on pressing delete after pausing the recording.
**Expected Behaviour**
The app should work smoothly.
**Steps to reproduce it**
Go to multimeter, click on record, then pause, and then delete.
**Screenshots of the issue**

**Would you like to work on the issue?**
No, just a review.
Answers:
username_1: @username_2 @username_4 Can I solve this in a similar way by using the PSLabSensor abstract class?
username_2: `PSLabSensor` class is only for sensors. Other instruments need to be handled individually.
username_1: I would like to work on this then
username_3: @username_2 @username_1 I would like to take up the issue if it's okay
username_1: @username_3 I want to learn how the multimeter works. I would like to help you with this if that's ok.
username_1: Does this issue require the PSLab device to solve? @username_3
username_3: No, I just added the condition that the recording will take place only when the PSLab is connected, which makes sense.
Status: Issue closed
|
volcas/ark-studio | 556836867 | Title: Project Feedback
Question:
username_0: Hits:
- The page has all the required elements
- CSS formatting rules are followed properly
Misses:
- HTML formatting rules are not followed properly
- The page doesn't follow good UI/UX structure
Final score (out of 5): 2.625 |
wso2/product-apim | 822919907 | Title: Issue when enabling email as username for tenants
Question:
username_0: ### Description:
The following issues occurred when enabling email as username for tenants.
1. If we follow the steps mentioned in the 4.0.0 documentation (which points to the IS documentation - [1]) and enable email as username, the following error gets printed in the logs:
```
ERROR - DataEndpointConnectionWorker Error while trying to connect to the endpoint. Cannot borrow client for ssl://172.17.0.1:9712.
org.wso2.carbon.databridge.agent.exception.DataEndpointLoginException: Cannot borrow client for ssl://172.17.0.1:9712.
at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.connect(DataEndpointConnectionWorker.java:145) ~[org.wso2.carbon.databridge.agent_5.2.34.jar:?]
at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.run(DataEndpointConnectionWorker.java:59) [org.wso2.carbon.databridge.agent_5.2.34.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_171]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_171]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_171]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_171]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
Caused by: org.wso2.carbon.databridge.agent.exception.DataEndpointLoginException: Error while trying to login to data receiver :/172.17.0.1:9712
at org.wso2.carbon.databridge.agent.endpoint.binary.BinaryDataEndpoint.login(BinaryDataEndpoint.java:50) ~[org.wso2.carbon.databridge.agent_5.2.34.jar:?]
at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.connect(DataEndpointConnectionWorker.java:139) ~[org.wso2.carbon.databridge.agent_5.2.34.jar:?]
... 6 more
Caused by: org.wso2.carbon.databridge.commons.exception.AuthenticationException: java.lang.NullPointerException
at sun.reflect.GeneratedConstructorAccessor391.newInstance(Unknown Source) ~[?:?]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_171]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_171]
at org.wso2.carbon.databridge.agent.endpoint.binary.BinaryEventSender.processResponse(BinaryEventSender.java:163) ~[org.wso2.carbon.databridge.agent_5.2.34.jar:?]
at org.wso2.carbon.databridge.agent.endpoint.binary.BinaryDataEndpoint.login(BinaryDataEndpoint.java:44) ~[org.wso2.carbon.databridge.agent_5.2.34.jar:?]
at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.connect(DataEndpointConnectionWorker.java:139) ~[org.wso2.carbon.databridge.agent_5.2.34.jar:?]
... 6 more
```
And the API invocation in tenants is failing with the following error trace:
```
INFO - LogMediator To: local://axis2services/pizzashack/1.0.0/menu, MessageID: urn:uuid:7680c12c-f797-4779-90d5-1a2b12fdb222, Direction: request
```
2. As a workaround, when trying the suggestion provided in [2] (configured the super admin username as `<EMAIL>@carbon.super`), the API invocation was successful, but the following error is thrown when loading the tenant:
```
ERROR - ReservedUserCreationObserver Error occurred while getting the realm configuration, User store properties might not be returned
org.wso2.carbon.user.core.UserStoreException: 31301 - Username apim_reserved_user is not valid. User name must be a non null string with following format, ^[a-zA-Z0-9.-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,4}
at org.wso2.carbon.user.core.common.AbstractUserStoreManager.callSecure(AbstractUserStoreManager.java:210) ~[org.wso2.carbon.user.core_4.6.1.jar:?]
at org.wso2.carbon.user.core.common.AbstractUserStoreManager.addUser(AbstractUserStoreManager.java:4663) ~[org.wso2.carbon.user.core_4.6.1.jar:?]
at org.wso2.is.key.manager.core.observers.ReservedUserCreationObserver.createReservedUser(ReservedUserCreationObserver.java:69) [wso2is.key.manager.core_1.2.5.jar:?]
at org.wso2.is.key.manager.core.observers.ReservedUserCreationObserver.createdConfigurationContext(ReservedUserCreationObserver.java:47) [wso2is.key.manager.core_1.2.5.jar:?]
at org.wso2.carbon.core.multitenancy.utils.TenantAxisUtils.createTenantConfigurationContext(TenantAxisUtils.java:356) [org.wso2.carbon.core_4.6.1.jar:?]
at org.wso2.carbon.core.multitenancy.utils.TenantAxisUtils.getTenantConfigurationContext(TenantAxisUtils.java:148) [org.wso2.carbon.core_4.6.1.jar:?]
at org.wso2.carbon.core.multitenancy.utils.TenantAxisUtils.getTenantAxisConfiguration(TenantAxisUtils.java:104) [org.wso2.carbon.core_4.6.1.jar:?]
at org.wso2.is.key.manager.core.tokenmgt.util.TokenMgtUtil.loadTenantConfigBlockingMode(TokenMgtUtil.java:350) [wso2is.key.manager.core_1.2.5.jar:?]
at org.wso2.is.key.manager.core.tokenmgt.issuers.AbstractScopesIssuer.getAppScopes(AbstractScopesIssuer.java:136) [wso2is.key.manager.core_1.2.5.jar:?]
at org.wso2.is.key.manager.core.tokenmgt.issuers.RoleBasedScopesIssuer.getScopes(RoleBasedScopesIssuer.java:299) [wso2is.key.manager.core_1.2.5.jar:?]
at org.wso2.is.key.manager.core.tokenmgt.issuers.RoleBasedScopesIssuer.validateScope(RoleBasedScopesIssuer.java:120) [wso2is.key.manager.core_1.2.5.jar:?]
at org.wso2.carbon.identity.oauth2.authz.handlers.AbstractResponseTypeHandler.validateScope(AbstractResponseTypeHandler.java:113) [org.wso2.carbon.identity.oauth_6.4.111.jar:?]
at org.wso2.carbon.identity.oauth2.authz.AuthorizationHandlerManager.validateScope(AuthorizationHandlerManager.java:198) [org.wso2.carbon.identity.oauth_6.4.111.jar:?]
at org.wso2.carbon.identity.oauth2.authz.AuthorizationHandlerManager.validateAuthzRequest(AuthorizationHandlerManager.java:157) [org.wso2.carbon.identity.oauth_6.4.111.jar:?]
at org.wso2.carbon.identity.oauth2.authz.AuthorizationHandlerManager.handleAuthorization(AuthorizationHandlerManager.java:92) [org.wso2.carbon.identity.oauth_6.4.111.jar:?]
at org.wso2.carbon.identity.oauth2.OAuth2Service.authorize(OAuth2Service.java:105) [org.wso2.carbon.identity.oauth_6.4.111.jar:?]
at org.wso2.carbon.identity.oauth.endpoint.authz.OAuth2AuthzEndpoint.authorize(OAuth2AuthzEndpoint.java:2288) [classes/:?]
at org.wso2.carbon.identity.oauth.endpoint.authz.OAuth2AuthzEndpoint.handleUserConsent(OAuth2AuthzEndpoint.java:964) [classes/:?]
at org.wso2.carbon.identity.oauth.endpoint.authz.OAuth2AuthzEndpoint.handleConsent(OAuth2AuthzEndpoint.java:1855) [classes/:?]
at org.wso2.carbon.identity.oauth.endpoint.authz.OAuth2AuthzEndpoint.doUserAuthorization(OAuth2AuthzEndpoint.java:1829) [classes/:?]
[Truncated]
Caused by: java.security.PrivilegedActionException
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_171]
at org.wso2.carbon.user.core.common.AbstractUserStoreManager.callSecure(AbstractUserStoreManager.java:196) ~[org.wso2.carbon.user.core_4.6.1.jar:?]
... 83 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_171]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_171]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_171]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_171]
at org.wso2.carbon.user.core.common.AbstractUserStoreManager$2.run(AbstractUserStoreManager.java:199) ~[org.wso2.carbon.user.core_4.6.1.jar:?]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_171]
at org.wso2.carbon.user.core.common.AbstractUserStoreManager.callSecure(AbstractUserStoreManager.java:196) ~[org.wso2.carbon.user.core_4.6.1.jar:?]
... 83 more
Caused by: org.wso2.carbon.user.core.UserStoreException: 31301 - Username apim_reserved_user is not valid. User name must be a non null string with following format, ^[a-zA-Z0-9.-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,4}
at org.wso2.carbon.user.core.common.AbstractUserStoreManager.addUser(AbstractUserStoreManager.java:4851) ~[org.wso2.carbon.user.core_4.6.1.jar:?]
at sun.reflect.NativeMethodAccessorI
```
[1] https://is.docs.wso2.com/en/5.10.0/learn/using-email-address-as-the-username/
[2] https://github.com/wso2/product-apim/issues/2618
Answers:
username_1: @Arshardh
username_2: Didn't face the above issues when following the https://apim.docs.wso2.com/en/latest/install-and-setup/setup/security/logins-and-passwords/maintaining-logins-and-passwords/#setup-an-e-mail-login doc;
the API could be invoked with a tenant user without the above error as well.
Tested in carbon-apimgt 9.0.79
Didn't try to login or do anything with apim_reserved_user, since it might be a user for internal product usage.
<img width="1653" alt="Screenshot 2021-03-25 at 14 21 05" src="https://user-images.githubusercontent.com/23296184/112445838-91b23f00-8d75-11eb-9ad1-689d533d4c85.png">
username_2: The above error can be reproduced if we add the following config and create a new tenant and log in:
[user_store]
type = "database_unique_id"
username_java_regex = '^[a-zA-Z0-9.-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,4}'
username_java_script_regex = '^[a-zA-Z0-9.-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,4}$'
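A quick way to see why `apim_reserved_user` trips that regex (hypothetical snippet; the sample usernames are made up, and only the pattern comes from the config above):

```python
import re

# username_java_regex from the config above
pattern = re.compile(r'^[a-zA-Z0-9.-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,4}')

# an email-style username passes validation
print(bool(pattern.match("admin@tenant.com")))    # True
# the internal reserved user has no "@" (and contains an underscore),
# so the user store rejects it with error 31301
print(bool(pattern.match("apim_reserved_user")))  # False
```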
Status: Issue closed
|
bkabrda/dfe | 249291359 | Title: dfe list-configs traceback when defaults are not defined.
Question:
username_0: My configurations.yml:
```
version: '1'
configurations:
- name: php
tag: "fedora/php"
vars:
base_img_reg: registry.fedoraproject.org/f26-modular
base_img_name: boltron
base_img_tag: latest
installer: dnf
options: --nodocs -y
```
Error:
```
Traceback (most recent call last):
File "/usr/bin/dfe", line 11, in <module>
load_entry_point('dfe==0.1.0', 'console_scripts', 'dfe')()
File "/usr/lib/python3.6/site-packages/dfe-0.1.0-py3.6.egg/dfe/bin.py", line 21, in main
File "/usr/lib/python3.6/site-packages/dfe-0.1.0-py3.6.egg/dfe/configurations.py", line 54, in from_file
KeyError: 'defaults'
```<issue_closed>
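The traceback points at `from_file` indexing the parsed YAML for a `defaults` key that the configurations.yml above never defines; a guarded lookup avoids the crash (sketch with hypothetical names, not dfe's actual code):

```python
# parsed form of the configurations.yml above (no "defaults" key)
config = {
    "version": "1",
    "configurations": [{"name": "php", "tag": "fedora/php"}],
}

# config["defaults"] raises KeyError: 'defaults' when the key is absent;
# .get() with a fallback tolerates a missing defaults section
defaults = config.get("defaults", {})
print(defaults)  # {}
```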
Status: Issue closed |
ClickHouse/ClickHouse | 789848386 | Title: ReplicatedMergeTree: Clean inactive parts code is fragile
Question:
username_0: See `clearOldPartsAndRemoveFromZK`:
https://github.com/ClickHouse/ClickHouse/blob/636049f4657f0943875199cb983eee6d168aaaf8/src/Storages/StorageReplicatedMergeTree.cpp#L5301-L5323
It collects the list of all inactive parts and try to remove all of them at once - first from zookeeper, then from filesystem.
Some scenarios (the most typical example - mutations on tables with a lot of parts) can create an enormous number of inactive parts, which are removed slowly. And while they are being removed, a lot of bad things can happen:
1) hard restart - if data was already removed from zookeeper but not from the filesystem:
- it can prevent clickhouse from restarting (the local set of parts is too different from zookeeper)
- even if clickhouse starts, all those parts will be moved to detached as 'unexpected' and may take a lot of disk space.
2) new mutations creating even more inactive parts - may end up with situations when millions of inactive parts have been collected, making the system unusable.
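One way to bound the damage from both failure modes is to delete in fixed-size bulks, as the proposal below suggests; a minimal sketch of the slicing (illustrative Python, not ClickHouse code; the 1000 default mirrors the suggested setting):

```python
def in_bulks(parts, bulk_size=1000):
    """Yield successive fixed-size slices of the inactive-parts list."""
    for i in range(0, len(parts), bulk_size):
        yield parts[i:i + bulk_size]

# per bulk: remove from zookeeper first, then from the filesystem, so a
# hard restart can leave at most bulk_size parts in an inconsistent state
parts = [f"part_{n}" for n in range(2500)]
print([len(b) for b in in_bulks(parts)])  # [1000, 1000, 500]
```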
Proposal:
1) change clearOldPartsAndRemoveFromZK to process old parts in bulks (the bulk size can be configured via merge tree settings; I guess a nice default is about 1000 parts), and maybe introduce an option to process every individual bulk in parallel by several threads simultaneously.
2) Introduce a 'too many inactive parts' exception (configurable via merge_tree_settings, similar to parts_to_throw_insert or max_parts_in_total) to prevent collecting too many inactive parts (I think around 50K inactive parts in the system is something to worry about). |
marmelab/react-admin | 853328084 | Title: Warning: validateDOMNesting(...): <div> cannot appear as a descendant of <p>.
Question:
username_0: I got the error "Warning: validateDOMNesting(...): <div> cannot appear as a descendant of <p>."
when I used the "SimpleList" component;
I supposed i got this error because i passed prop "secondaryText"
which pass to the prop "secondary" of <ListItemText/> there we have the condition:
secondary={ hasSecondaryText ? <Placeholder /> : undefined}
<ListItemText/> it's node Element <p>
component <Placeholder /> return to us </div> node element
Example
```
<SimpleList
    primaryText={record => record.email}
    secondaryText={record => record.roleId}
    tertiaryText={record => (record.isActive ? 'Active' : 'Inactive')}
    linkType="edit"
/>
```
Environment
React-admin version: ^3.12.2
React version: ^17.0.1
Browser: Google Chrome
Status: Issue closed
Answers:
username_1: So make sure your Placeholder component does not return a `div`
username_2: I could reproduce the bug in the simple example by making the dataProvider slower. The problem comes from our `<SimpleListLoading>` component.
|
aspnetboilerplate/aspnetboilerplate | 258664684 | Title: About import {**} from @shared/* in angular template
Question:
username_0: Hi, thanks for this great project and for your work.
I encountered a strange phenomenon in the angular project.
In the angular project, `app-component-base.ts`:
```
import { AppSessionService } from '@shared/session/app-session.service';
```
and `tsconfig.json`:
```
"paths": {
"@abp/*": [ "../node_modules/abp-ng2-module/src/*" ],
"@app/*": [ "./app/*" ],
"@shared/*": [ "./shared/*" ],
"@node_modules/*": [ "../node_modules/*" ]
}
```
it is ok.
but in my project (my project is automatically generated from the command line `ng new myproject`)
I also followed the above code, but vscode shows an error:
```
[ts] Cannot find module '@shared/common/session/app-session.service'.
```
Is any of my configuration wrong?
This may not be a problem with abp, but this problem has bothered me for two days, please help me, thank you!
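For reference, TypeScript only applies `paths` mappings relative to an explicit `baseUrl`, so when copying these mappings into a fresh `ng new` project, the `compilerOptions` need both keys, e.g. (a sketch; the exact `baseUrl` value depends on where your `tsconfig.json` lives):
```
"compilerOptions": {
  "baseUrl": "src",
  "paths": {
    "@shared/*": [ "./shared/*" ]
  }
}
```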
Answers:
username_1: Hi @username_0,
Are you getting the error at compile time (when you run the `ng serve` command)?
I am testing and VS Code is showing the same error, but the project runs with no error.
This is the test project (it is created by running `ng new <project_name>`)

username_0: @username_1
It compiles fine.
It's only during development that the red underline shows, and vscode gives no smart tips (no IntelliSense).
username_2: @username_0 as far as I remember this is related to angular-cli and they have fixed it. Can you try to upgrade your project's angular-cli version ?
username_3: @username_0 Any update?
Status: Issue closed
|
robolectric/robolectric | 77219097 | Title: Script to run tests under Wine
Question:
username_0: In the wake of #1817 (which is in turn the most recent in a series of bugs I have fixed that are regressions on Windows) @erd asked if there was a way we could check for such regressions in the future short of getting Windows CI support on Travis. I floated the idea of using Wine (https://www.winehq.org) to do this. I did a bit more research into Wine and Travis and I reckon this might work, so I'm creating this issue to track the idea. I would envisage implementing this in two stages:
* A script (either a shell script or a Maven plugin) which runs the tests using the Windows JDK(s) under Wine.
* Some extra bits in Travis to install Wine, then install the Windows JDK under Wine, and finally to run the above script.
The first would give the Unix developers a way to ensure their changes have not introduced any regressions on Windows (whereas at the moment they only find out after their changes have been merged and a Windows user tries them out). The second would allow the CI to detect issues without requiring a full Windows instance.
If done right there could be a fair bit of reuse between Unix and Wine (eg, you should be able to share the Maven repository and maybe the Maven installation itself).
Status: Issue closed
Answers:
username_0: I have managed to write some simple scripts that allow you to get Wine set up and running Java. These scripts are available at https://github.com/username_0/winejava-test. I have managed to test the full Robolectric test suite using winejava on Fedora 20 using wine 1.7.something. So this is in a state now where people might be able to find it useful for testing under Windows on their own machines. I would be happy if people could try it out and let me know how it works for them.
Note that JDK 8 doesn't seem to install cleanly under wine so this is still a Java 7 thing for the moment.
Unfortunately I have run into some complications when trying to integrate this into the Robolectric build. The problem:
- Travis CI VMs use Ubuntu 12.
- Default version of Wine for Ubuntu 12 is wine1.4, and the JVM seems to die when run under this version of Wine.
- The latest version of Wine seems to fix the dying problem, but installing it requires moving away from the container-based workers so that sudo can be used. Unfortunately the Robolectric build/test seems to get abruptly terminated during the final test phase when run on the standard (non-container) workers.
I think the solution will be to ask the Travis team to whitelist the ubuntu-wine PPA so that the latest version of Wine can be installed on the container-based workers.
username_0: I have submitted a couple of whitelist requests to the Travis team to ask them to include the ubuntu-wine ppa so that the latest versions of Wine can be installed in the container workers. Hopefully they will get to this soon and I can integrate the Wine tests with the Travis build.
Status: Issue closed
username_1: Closing this as it hasn't been updated in a while. If its still an issue with Robolectric 4.0 please reopen with a reproducible test case and we'll prioritize. |
idies/pyJHTDB | 504211082 | Title: following #26
Question:
username_0: works perfectly, thanks. How can I replace the calls to .libJHTDB with zeep calls for Lagrangian tracking and snapshot downloads? Do you have more examples of using zeep?
thanks @username_1
Answers:
username_1: I see. Seems this is a bit complex. I have no idea how to fix it at the moment, but you could probably try the `zeep` python package.
```
import zeep

client = zeep.Client('http://turbulence.pha.jhu.edu/service/turbulence.asmx?WSDL')
ArrayOfPoint3 = client.get_type('ns0:ArrayOfPoint3')
SpatialInterpolation = client.get_type('ns0:SpatialInterpolation')
TemporalInterpolation = client.get_type('ns0:TemporalInterpolation')
Point3 = client.get_type('ns0:Point3')
temp1 = Point3(x=0.1, y=0.1, z=0.1)
temp2 = Point3(x=1.1, y=1.1, z=1.1)
temp3 = Point3(x=2.1, y=2.1, z=2.1)
point = ArrayOfPoint3([temp1, temp2, temp3])
print(point)
print(client.service.GetVelocity("jhu.edu.pha.turbulence.testing-200711", "isotropic1024coarse", 0.6,
                                 SpatialInterpolation("None"), TemporalInterpolation("None"), point, ""))
```
output:
```
{
'Point3': [
{
'x': 0.1,
'y': 0.1,
'z': 0.1
},
{
'x': 1.1,
'y': 1.1,
'z': 1.1
},
{
'x': 2.1,
'y': 2.1,
'z': 2.1
}
]
}
[{
'x': -0.103202209,
'y': -1.22528827,
'z': -0.03917983
}, {
'x': -0.321914315,
'y': -0.499108434,
'z': -1.07386518
}, {
'x': -0.12327221,
'y': 1.19931781,
'z': -0.03163197
}]
```
I will also try to write a package based on this, but it takes time.
username_1: This should work. `Point` and `SpatialInterpolation` are defined in the same way as above.
```
StartTime=0.1
EndTime=0.2
dt=0.02
print(client.service.GetPosition("uk.ac.manchester.zhao.wu-ea658424","isotropic1024coarse", StartTime, EndTime, dt, SpatialInterpolation("None"), point, ""))
```
Status: Issue closed
username_1: It seems it's very slow to generate a lot of Point3 structures: it takes me 1 minute to generate 1 million Point3 objects. Do you have the same problem?
username_0: @username_1 not tested yet
username_1: see `examples\Use_JHTDB_in_windows.ipynb`
We provide some examples.
Thanks for sharing the information |
Serena-Wang/WordAnalysis | 348979390 | Title: Project Feedback!
Question:
username_0: Hey! Just wanted to say thanks for giving this project a go, and to give you a heads up that reviewing the projects are taking a bit longer than I expected, so I'm not going to be able to do an in-depth review with comments until late next week. Feel free to ping me on Slack if you want a quicker review or have any other concerns.
Answers:
username_0: Great job on the project and on finishing the course! I took a look at the code for this project and here are some small feedback points :
1. Adding a ReadMe
Would be nice to have a simple ReadMe for the project. This makes it easier for recruiters and potential interviewers to quickly look at your projects and be able to know what you've worked on. This is a great place to summarize 1) The problem you were trying to solve and 2) The technical challenges you faced
2. Not a big concern, but sometimes when interviewers look at what you have on GitHub , they'll notice how clean your code is. One easy thing to do is to auto-format code in IntelliJ https://stackoverflow.com/questions/17879475/how-enable-auto-format-code-for-intellij-idea This will remove all the extra white spaces and properly indent lines of code for free!
Great job overall. One thing that's always helpful is going back and looking at your old code to give yourself a 'code review'. I find that this often helps me learn from my mistakes and also helps me improve by seeing things that I wasn't able to when I had spent too long looking at one piece of code. |
irmen/pyminiaudio | 662370148 | Title: Error: AttributeError: cdata 'drmp3_config *' has no field 'outputChannels'
Question:
username_0: Calling `miniaudio.mp3_read_f32` results in error:
```
Traceback (most recent call last):
File "/workspaces/ml/test6.py", line 4, in <module>
decoded = miniaudio.mp3_read_f32(f.read())
File "/home/dev/.local/lib/python3.8/site-packages/miniaudio.py", line 567, in mp3_read_f32
config.outputChannels = want_nchannels
AttributeError: cdata 'drmp3_config *' has no field 'outputChannels'
```
Example Code:
```
import miniaudio
with open('common_voice_en_20603299.mp3', 'rb') as f:
decoded = miniaudio.mp3_read_f32(f.read(), want_nchannels=1, want_sample_rate=16000)
```
Example File: [common_voice_en_20603299.zip](https://github.com/username_1/pyminiaudio/files/4950456/common_voice_en_20603299.zip)
Environment:
OS: Ubuntu 20.04
Lib Version:
```
Collecting miniaudio
Downloading miniaudio-1.31.tar.gz (492 kB)
|████████████████████████████████| 492 kB 3.0 MB/s
Requirement already satisfied: cffi>=1.3.0 in /usr/local/lib/python3.8/dist-packages (from miniaudio) (1.14.0)
Requirement already satisfied: pycparser in /usr/local/lib/python3.8/dist-packages (from cffi>=1.3.0->miniaudio) (2.20)
Building wheels for collected packages: miniaudio
Building wheel for miniaudio (setup.py) ... done
Created wheel for miniaudio: filename=miniaudio-1.31-cp38-cp38-linux_x86_64.whl size=522066 sha256=318e1965eae33bdff2091fff6c8966503cab77c60a29800f2a0c463cf8451d45
Stored in directory: /home/dev/.cache/pip/wheels/ce/ff/9a/844cb9c3f1fae09d3dad5731eeddedcc228938dca81e92eefa
Successfully built miniaudio
Installing collected packages: miniaudio
Successfully installed miniaudio-1.31
```
Answers:
username_1: Could you perhaps retry with the latest 1.36 version?
username_0: @cameronmaske I am using a workaround and I haven't tried with updated libraries. Were you able to recreate the bug? I have outlined all the steps to recreate it. If this works, then you can close it. I also suggest you add some unit tests to catch it, because it seems the API was untested.
username_1: You are right, I mistakenly assumed the problem was with the underlying miniaudio api itself. I have reproduced the problem and will fix the python wrapper interface
Status: Issue closed
|
aspnet/dnx | 124647822 | Title: dnu pack will fail if consuming nuget package `ServiceBus.V1_1`
Question:
username_0: Error message from compiller is
```The package ID 'ServiceBus.v1_1' contains invalid characters. Examples of valid package IDs include 'MyPackage' and 'MyPackage.Sample'.```
Answers:
username_1: Can you make a sample that repros the issue? Push it to github and send the url.
username_0: @username_1 Sample application is found here https://github.com/username_0/bugs/tree/DnuPack-3294
username_1: That's pretty weird. I wonder if NuGet pack fails the same way.
username_1: Yep, bug in `dnu pack`, this will be fixed in the dotnet CLI (github.com/dotnet/cli) as `dnu` and `dnx` are being retired.
username_0: I have added a nuspec that will work with `nuget pack` and that seems to work.
username_1: Fix is here https://github.com/dotnet/cli/pull/689
Status: Issue closed
|
FTBTeam/FTB-Sky-Adventures | 392805246 | Title: Ticking Particles
Question:
username_0: <!--
Thanks for wanting to report an issue you've found. Please delete
this text and fill in the template below. If unsure about something, just do as
best as you're able. Thank you!
Note: any external modifications to this modpack will render all support useless,
ie; adding mods like optifine to the modpack! So please remove all added content,
re-test bug/issue and resubmit!
-->
* **Modpack Version**: 1.3.0
* **Issue**: IC2 Mass Fabricator exploded and killed me in game, then the game stalled and crashed.
* **Link to Log or Crash File Paste**: I keep getting 522 error when accessing FTB paste website, therefore I think I can only attach the crash report. Sorry for the inconvenience.
[crash-2018-12-19_14.20.35-client.txt](https://github.com/FTBTeam/FTB-Sky-Adventures/files/2696729/crash-2018-12-19_14.20.35-client.txt)
* **Is it Repeatable?**: Yes; the game keeps crashing if I try to enter the world.
* **Mod/s Affected**: Industrial Craft 2, 2.8.73-ex112, according to "installed mod".
* **Known Fix**: <!-- optional; if you know of a fix please let me know! Thanks -->
Answers:
username_0: Adding known fix:
The problem is actually caused by rain particle. I used NBTexplorer to stop the rain, and it worked.
More specifically, the entry that is modified is under level.dat, Data, raining. The entry value was originally 1, setting it to 0 will fix the problem.
username_1: I will report this to the dev, also the pastebin site has been restored. Thanks
username_2: Works for me, pack version 1.5
Status: Issue closed
|
spring-petclinic/spring-framework-petclinic | 328742544 | Title: Use Spring Data JDBC
Question:
username_0: We could add a fourth implementation of the DAO layer with Spring Data JDBC
https://projects.spring.io/spring-data-jdbc/
Answers:
username_1: Hey @username_0. I developed Spring Data JDBC version of PetClinic - https://github.com/username_1/spring-petclinic-data-jdbc.
Would you like to move it into the `spring-petclinic` organisation?
username_0: Hi @username_1. Thanks for your proposal. I had a look at your Spring Data JDBC version of Petclinic and you've done a great job.
From reading your code and the https://spring.io/blog/2018/09/24/spring-data-jdbc-references-and-aggregates blog post, implementing Spring Data JDBC repositories requires splitting Many-to-One and Many-to-Many relationships. Thus I suppose we could not add a fourth DAO implementation to the `spring-framework-petclinic` version. So I'm open to creating a new repo in the `spring-petclinic` organisation. Thanks a lot.
username_1: That's on purpose, as far as I can see most forks are Boot-based. Do you prefer to base it on plain Spring Framework?
Regarding readme - roger that 👍
username_1: Although I am in the org, I don't have permissions to push to `spring-petclinic-data-jdbc`:
```
~ git push petclinic master
remote: Permission to spring-petclinic/spring-petclinic-data-jdbc.git denied to username_1.
fatal: unable to access 'https://github.com/spring-petclinic/spring-petclinic-data-jdbc.git/': The requested URL returned error: 403
```
username_0: Could you retry please? I gave you admin rights.
Status: Issue closed
username_0: Thanks a lot. I've twitted the news. I suppose we could close this issue. |
alexdobin/STAR | 335475910 | Title: Segmentation fault
Question:
username_0: There was one read pair that caused the "Segmentation fault" error. There is no other detailed log information about this error, only the "Segmentation fault".
I tested several parameters, and it looks like the problem is with --peOverlapNbasesMin.
If --peOverlapNbasesMin was set to a value from 1-7, the "Segmentation fault" error occurred, while if the parameter was set to 8 or larger, the mapping ran successfully.
Here is the read pair. It was mapped to hg19 with gencode v19 annotation.
@mate1
ATCTGAGGAGTGTGGGGATGGAGGCATCTGAGGAGTGTGGGGATGGCTCTCAGCTGGGCCTATGCTGGTCATGAACGGTCCTGGAAAATGACTCCCTTCCT
+
CCCFFFFFCFAFDGIJIGIJJJJJJJJJJJJJJGI?FHGIJJJIJJJIJJJIIHHHHFFFFEEEEEEDDDDDDDDCBDDDDDDDDCCDCCDDCDDDDDDDD
@mate2
CTCAGATGCCTCCATCAACACCAAGCAGCAGTTTCTTAACCACGAAAAGTGAAGACACAATCTCCAAAATGAATGACTTCATGAGGATGCAGATACTGAAG
+
CCCFFFFFHHHHHJJJJJJJJJJJJJJJJJJIHIIJJJIJJJJJJJJJJFFGIIJJJJJIJJJJJHHHHHHFFFFFFEEEEEEDDDDDDDDDDDDDDEDDD
Thank you.
Answers:
username_1: Hi Jie,
please try the patch on GitHub [master](https://github.com/username_1/STAR/archive/master.zip), I think I have fixed the problem.
Thanks!
Alex
Status: Issue closed
|
Ericsson/codechecker | 537595370 | Title: The database of some product cannot be connected
Question:
username_0: On the main product listing page, if you have more than 60 products, some of them randomly fail to connect when you reload the page.
Error message in the webui.
"Failed to connect to the database"
Answers:
username_1: @username_0 I think increasing the [max_connections](https://www.postgresql.org/docs/9.4/runtime-config-connection.html) parameter will solve your problem in case of PSQL.
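For example (a config sketch; the right value depends on how many connections the web server opens, and changing it requires a server restart):
```
# postgresql.conf
max_connections = 200   # the default is typically 100
```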
Status: Issue closed
|
lasote/conan-gtest | 175908269 | Title: Add an option to include symbols/pdbs with gtest.
Question:
username_0: I'm building a project that uses the gtest package and I would like to avoid LNK4099 (missing gtest.pdb) warnings when building with MSVC.
I've been working around this issue with a fork of the package, I'll have a pull request up shortly with an implementation.<issue_closed>
Status: Issue closed |
thadeusb/flask-cache | 257783365 | Title: Redis - Protocol error (invalid bulk length) from client
Question:
username_0: I'm trying to use flask cache with redis to memoize function calls with large outputs. However, on the Redis side, I'm getting errors like:
```
Protocol error (invalid bulk length) from client: id=18158 addr=10.12.345.678:55056 fd=8 name= age=0 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=13107 qbuf-free=19661 obl=42 oll=0 omem=0 events=r cmd=auth. Query buffer during protocol error: '*4..$5..SETEX..$30..flask_cache_view//v1/data/endpoint..$5..43200..$' (... more 12979 bytes ...) '\x7f\x00\x00\x00\x00\x00\x00\xf8\x7f\x00\x00\x00\x00\x00\x80@@\x'
```
Could this be a problem with how flask cache is encoding the redis commands? Or am I looking in the wrong place? Thanks! |
tyejae/msf.gg.public | 610732421 | Title: Show Total Power On War Defense Screen
Question:
username_0: A total number showing after the name on the war defense screen, which would basically just add together the strength of all the teams present.
Answers:
username_0: Also be able to sort the list by power
username_0: Maybe assign rooms from this screen? |
heroiclabs/nakama-godot | 1185419153 | Title: Support for setting Match Name on Match Create
Question:
username_0: Had a discussion with @novabyte about being able to set a name on match creation. It appears that the godot client doesn't support this. The use case is creating four digit room codes.
Answers:
username_0: Sounds great! I'm actually following your example tank game to build what I need. So thank you!
Ryujinx/Ryujinx-Games-List | 623140098 | Title: Croc's World
Question:
username_0: ## Croc's World
#### Current on `master` : 1.0.4593
Game loads and plays at a high FPS.
#### Screenshots :







#### Log file :
[CrocsWorld.zip](https://github.com/Ryujinx/Ryujinx-Games-List/files/4667717/CrocsWorld.zip) |
akkadotnet/akka.net | 920478483 | Title: Documentation: Appears to be some copy pasta in FSM.SubscribeTransitionCallback class docs
Question:
username_0: Apologies for not following the template, but it didn't really apply.
The class level documentation on `FSMBase.SubscribeTransitionCallback` doesn't make sense. It looks like the author copied the wrong type names in a couple of places.
See: https://github.com/akkadotnet/akka.net/blob/dev/src/core/Akka/Actor/FSM.cs#L157
``` csharp
/// <summary>
/// Send this to an <see cref="SubscribeTransitionCallBack"/> to request first the <see cref="UnsubscribeTransitionCallBack"/>
/// followed by a series of <see cref="Transition{TS}"/> updates. Cancel the subscription using
/// <see cref="CurrentState{TS}"/>.
/// </summary>
public sealed class SubscribeTransitionCallBack
```
* "Send this to an `SubscribeTransitionCallBack`" should probably be "to an `FSM`"
* ...to request first the `UnsubscribeTransitionCallback` - not sure what this is trying to tell me
* Cancel the subscription using `CurrentState<TS>` - unsubscribe using `UnsubscribeTransitionCallback`
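Putting the three bullets together, the corrected summary would presumably read something like the following (wording modeled on the analogous Akka JVM scaladoc; a suggestion, not the actual merged fix):

``` csharp
/// <summary>
/// Send this to an <see cref="FSM{TState,TData}"/> actor to request first the
/// <see cref="CurrentState{TS}"/> message and then a series of
/// <see cref="Transition{TS}"/> updates. Cancel the subscription using
/// <see cref="UnsubscribeTransitionCallBack"/>.
/// </summary>
public sealed class SubscribeTransitionCallBack
```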
Answers:
username_1: Yeah this doesn't seem right.... |
niaogege/web-front_end-develop-Interview | 397336629 | Title: Regular expressions
Question:
username_0: The first things that come to mind here are js's search and match; of these, match is the most common.
1. str.search(regexp): the search() method does not support global search, because it ignores the g flag of the regexp argument and also ignores the regexp's lastIndex property, always searching from the start of the string, so it always returns the position of the first match in str.
```
var str = "Javascript";
str.search(/script/); // returns 4, the position of the s in script
str.search(/j/i); // with the regexp flag i (ignore case) it matches J and returns position 0
```
2. str.match(regexp): the return value is an array containing the match results. (It behaves differently with and without the global flag g: without the global flag, it is not a global search and only retrieves the first match.)
With the global flag set:
```
// global match
var str = "1 plus 2 equals 3";
str.match(/\d/g); // matches every digit in the string and returns an array: [1,2,3]
```
Without the global flag:
```
// non-global match
var str = "visit my blog at http://www.example.com";
str.match(/(\w+):\/\/([\w.]+)/); // returns: ["http://www.example.com", "http", "www.example.com"]
// the whole regular expression matches: http://www.example.com
// the first subexpression (\w+) matches: http
// the second subexpression ([\w.]+) matches: www.example.com
```
Answers:
username_0: ```
var str = 'price: $3.6'
var pattern = new RegExp(/\$[0-9]/)
console.log(pattern.test(str))
console.log(str.match(pattern))
VM234:3 true
VM234:4 ["$3", index: 7, input: "price: $3.6", groups: undefined]
``` |
freshworkstudio/ChileanBundle | 291568334 | Title: validacion rut
Question:
username_0: Hi, I'm using the RUT validation (`cl_rut`). When I enter any number, it returns the message in the view saying "Debe ser un rut válido" (it must be a valid RUT), but when I submit the RUT field empty, the `InvalidFormatException` is thrown. I have the required validation on that same field, but what I get is the exception.
Answers:
username_0: Already solved it, my mistake... thanks for this package, it helps me in all my projects. Regards.
Status: Issue closed
username_1: Hi César,
Was it a problem with the package? Do you recommend making any change to it?
Regards!
temporalio/temporal | 624982741 | Title: FYI: python-betterproto compatibility
Question:
username_0: This isn't really an issue/feature request.
It's more of an observation on issues I had when integrating the temporal-proto into the Python SDK using python-betterproto.
The default GRPC libraries for Python leave a lot to be desired. So I was looking into using python-betterproto which gives us a much better Python experience for GRPC.
However, I ran into a few issues that I would like to highlight here. The issues are actually not the fault of temporal-proto at all. It's mainly due to the default behavior of python-betterproto. I looked into the code for python-betterproto and I think it's easy enough to fork and modify it for my needs. So there is no need for any action on Temporal's side.
Having said that, I am simply highlighting this before the window for backwards incompatible changes closes for the Temporal team's possible consideration:
1.) Recursive relationship: Failure
python-betterproto attempts to assign a default value to each field. This leads to the stack space being exhausted when a Failure object is initialized, because it attempts to build an object graph which never ends. I think this is because python-betterproto doesn't support HasField/ClearField functions like other protobuf implementations.
I already fixed this issue in my local copy of python-betterproto by simply not initializing sub-message fields when instantiating messages.
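A minimal sketch of the failure mode and the fix, using plain dataclasses to stand in for generated messages (class names are illustrative, not python-betterproto's actual generated code):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EagerFailure:
    """Eagerly defaults its sub-message field, as python-betterproto did."""
    message: str = ""
    def __post_init__(self):
        self.cause = EagerFailure()  # self-referential message: recurses without bound

@dataclass
class LazyFailure:
    """Leaves the sub-message field unset (None) until it is assigned."""
    message: str = ""
    cause: Optional["LazyFailure"] = None

try:
    EagerFailure()
except RecursionError:
    print("eager defaults exhaust the stack on a recursive message")

f = LazyFailure(message="outer", cause=LazyFailure(message="inner"))
print(f.cause.message)  # prints: inner
```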
2.) "None" in enum QueryRejectCondition is a keyword in Python
I believe this is also quite easy to fix:
https://github.com/danielgtaylor/python-betterproto/blob/f8203977513d5d7c821cbac63e6fa764d301c2e8/betterproto/casing.py
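For illustration, the keyword clash and the prefixing fix look like this (member names and values here are illustrative, following the protobuf style-guide convention, not necessarily Temporal's final proto):

```python
from enum import IntEnum

# class QueryRejectCondition(IntEnum):
#     None = 0        # SyntaxError: "None" is a Python keyword

# Prefixing each value with the enum type name sidesteps the clash:
class QueryRejectCondition(IntEnum):
    QUERY_REJECT_CONDITION_UNSPECIFIED = 0
    QUERY_REJECT_CONDITION_NONE = 1
    QUERY_REJECT_CONDITION_NOT_OPEN = 2

print(QueryRejectCondition.QUERY_REJECT_CONDITION_NONE.name)
```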
----------
I will need to fork python-betterproto whether or not any changes are made in temporal-proto for the two above issues --- in particular the generated RPC stub methods use parameters instead of request objects --- therefore it really is not much trouble at all for me to make accommodations for the two above issues.
Answers:
username_1: Thanks for reporting this, @username_0, at the very right moment!
We will fix the 2nd item soon, because we are going to prefix all enum values with the type prefix as recommended [here](https://developers.google.com/protocol-buffers/docs/style#enums).
For the 1st one, getting rid of the recursive relationship for `Failure` would break the semantics. The original idea was to use something like `FailureChain` with repeated failures inside. But this would add complexity to both SDKs and the server. I believe moving forward with your own fork of `python-betterproto` is a better option for now. Also, I hope this bug will be fixed there sooner or later.
Status: Issue closed
|
Kuzmin/node-cache-middleware | 189464378 | Title: Add as browsersync/connect middleware
Question:
username_0: Hey there.
I'm trying to get this to work as a middleware in connect (what BrowserSync uses). Here's the link to their docs https://browsersync.io/docs/options#option-middleware
I'm currently loading it like so:

Don't think it's working though. I have to add my own `req.header` function because the method didn't exist in the `req` object.
Answers:
username_1: @username_0: I have only used this with Express unfortunately. Is adding the `header` method a viable option to make this work for you? I'm currently sloshed with work so I will not be able to look into Connect middleware compatibility for some time, but if you want to submit a PR it's more than welcome! |
PaddlePaddle/Paddle | 244247129 | Title: Conv layer cheke "layer_type" err while use "cudnn_convt" or "cudnn_conv" in gpu mode
Question:
username_0: The conv layer will check the 'use_gpu' option when `layer_type` is set to `cudnn_conv` at [code](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/trainer/config_parser.py#L1825-L1826). But the conv layer can't get the `use_gpu` option correctly at [here](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/trainer/config_parser.py#L1819). I guess the solution of parsing `use_gpu` from the cmd line is not compatible with the v2 api.
We can fix this bug by:
1. Remove the use_gpu check
2. Put the `use_gpu` option into os.environ (iterated via os.environ.iteritems()) when [paddle.init()](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/v2/__init__.py#L60) is called.
3. Or another, better solution?
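A sketch of option 2 above, with the flag round-tripped through the environment (names like `PADDLE_INIT_USE_GPU` are illustrative, not the real Paddle API):

```python
import os

def init(**kwargs):
    # paddle.init(use_gpu=True, ...) could export each option so that
    # config_parser can later read it back from os.environ
    for key, value in kwargs.items():
        if isinstance(value, bool):
            value = int(value)
        os.environ["PADDLE_INIT_" + key.upper()] = str(value)

def use_gpu():
    # what the conv layer check in config_parser could consult
    return os.environ.get("PADDLE_INIT_USE_GPU") == "1"

init(use_gpu=True, trainer_count=1)
print(use_gpu())  # prints: True
```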
And another bug:
`cudnn_convt` should be added [here](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/trainer_config_helpers/layers.py#L157) for supporting trans_conv in gpu mode.
Answers:
username_1: I think we can consider removing the `exconv` and `cudnn_conv` types and only retaining the `conv` type, determining in the C++ code whether to use the exconv or cudnn_conv calculation method.
Status: Issue closed
|
uf-feedback/herpetology | 131487744 | Title: Monthly VertNet data use report for January, 2016, resource herpetology
Question:
username_0: Your monthly VertNet data use report is ready!
You can see the HTML rendered version of the reports through this link http://htmlpreview.github.io/?https://github.com/uf-feedback/herpetology/blob/master/reports/99010-herpetology_2016_02_02.html or you can see and download the raw report via GitHub as a text file (https://github.com/uf-feedback/herpetology/blob/master/reports/99010-herpetology_2016_02_02.txt) or HTML file (https://github.com/uf-feedback/herpetology/blob/master/reports/99010-herpetology_2016_02_02.html).
To download the report, please log in to your GitHub account and view either the text or html document linked above. Next, click the "Raw" button to save the page. You can also right-click on "Raw" and use the "Save link as..." option. The txt file can be opened with any text editor. To correctly view the HTML file, you will need to open it with a web browser.
You can find more information on the reporting system, along with an explanation of each metric, here: http://www.vertnet.org/resources/usagereportingguide.html
Please post any comments or questions to http://www.vertnet.org/feedback/contact.html
Thank you for being a part of VertNet. |
adokter/bioRad | 456040719 | Title: units interval_max argument in integrate_profile()
Question:
username_0: The units of the `interval_max` argument in `integrate_profile()` are seconds, while `regularize_vpts()` has an interval argument that is in minutes by default and adjustable via the `unit` argument.
These two should be made consistent.
Answers:
username_1: I would argue for seconds in both.
username_0: closed by commit https://github.com/username_0/bioRad/commit/ddbce83809a9acc89dbf8b0e50e2459b91394263
Status: Issue closed
|
Naereen/Objectif-Zero-Dechet-2018 | 821643820 | Title: Try to write a bit on this blog again?
Question:
username_0: - [ ] Change the URL to just "<NAME>"? There's no reason to keep 2018 in it, right?
- [ ] I've made a lot of progress since spring 2019, but by now it's so ingrained in my habits (two years without keeping track here) that I no longer thought about updating this blog;
- [ ] It would be enough for me to write one big article now, to report on my new habits,
- [ ] Then I could write one every 3 months (at the start of each season would be a good habit).
codenotary/immudb | 869935759 | Title: Single or multiple collection
Question:
username_0: I understood there is an option for multiple databases, but I didn't get how it handles collections. If I have 2 collections, one for 'purchase orders' and one for 'shipments', how can I tell it to save into the correct collection? I couldn't see anything at the [Get and Set](https://docs.immudb.io/master/sdk.html#get-and-set)
Answers:
username_1: Hi @username_0,
you can create and select different databases before starting to write data into immudb; [here](https://docs.immudb.io/master/sdk.html#multiple-databases) you will find more information.
We also have [secondary indexes](https://docs.immudb.io/master/sdk.html#secondary-indexes) that can be used to handle collections.
If you prefer, we also have a [discord channel](https://discord.com/invite/ThSJxNEHhZ) where you can get quick help.
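Since immudb is a key-value store, one common way to emulate named collections is to namespace keys with a collection prefix; a minimal sketch of that pattern (a plain dict stands in for the immudb client here, this is not the actual SDK):

```python
class PrefixedStore:
    """Emulate named collections over a flat key-value store
    by prefixing every key with '<collection>:'."""

    def __init__(self):
        self._kv = {}  # stand-in for an immudb client

    def set(self, collection, key, value):
        self._kv[f"{collection}:{key}"] = value

    def get(self, collection, key):
        return self._kv[f"{collection}:{key}"]

store = PrefixedStore()
store.set("purchase_orders", "po-1001", b"order payload")
store.set("shipments", "sh-2001", b"shipment payload")
print(store.get("purchase_orders", "po-1001"))  # b'order payload'
```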
Status: Issue closed
username_0: Thanks |
alexlafroscia/alexlafroscia.com | 574069094 | Title: Fix social media links hidden by ad blocker
Question:
username_0: Due to the class names on the social media links in the sidebar and header, ad blockers will sometimes hide these from the page. This is not ideal.
Re-naming these classes, or directly embedding the icons as SVGs rather than using CSS at all, will likely solve this issue.<issue_closed>
Status: Issue closed |
Azure/azure-libraries-for-net | 452792125 | Title: Is it possible to move resources using fluent azure client?
Question:
username_0: **Query/Question**
Is it possible to move resources using fluent azure client?
***Why is this not a Bug or a feature Request?***
I am not sure if it already exists.
**Setup (please complete the following information if applicable):**
NA
**Information Checklist**
NA
Answers:
username_1:
```csharp
ResourceManager
    .Configure()
    .Authenticate(AzureCredentials)
    .WithSubscription(TestParameters.SubscriptionId.ToString())
    .ResourceManager
    .Inner
    .Resources
    .MoveResourcesAsync("sourceRG", new ResourcesMoveInfoInner()
    {
        TargetResourceGroup = "destinationRG"
    });
```
username_2: This question appears to have been answered so I'm closing the issue. Please reopen if more followup is needed.
Status: Issue closed
|
urbit/docs | 241535493 | Title: Examples in Hoon Library 2i Page (Maps) largely don't work
Question:
username_0: /docs/hoon/library/2i/md
Discovered this while trying to learn about hoon maps. Many of the examples don't work because `(mo` is no longer valid hoon. I do have a PR to fix this coming very soon, though there is still one example I couldn't figure out how to fix.
Status: Issue closed
Answers:
username_2: Great work guys! |
JSQLParser/JSqlParser | 195635179 | Title: PostgreSQL "simple" form of CASE expression not supported
Question:
username_0: JSqlParser Version: 0.9.6 and 0.9.7-SNAPSHOT (master)
According to PostgreSQL documentation (https://www.postgresql.org/docs/current/static/functions-conditional.html) there is an additional case syntax, which they call "simple" form of CASE expression.
Examples:
`select case when 1=2 then 1 else 0 end` (supported)
`select case 1=2 when true then 1 else 0 end` (not supported, `CCJSqlParserUtil.parse()` throws `ParseException`)
Discovered this behavior using SQuirreL SQL client.
Could you please add this additional case syntax for the next release?
Thank you
Answers:
username_0: Thanks for the information. I can see some limitations in the expressions. What I really need is to parse a more complex case statement query which is issued by some SQL clients.
Can you enumerate what are the limitations in this kind of query? Perhaps I can help with some code to parse this kind of case statement. Please take a look in the following query:
`SELECT NULL AS TABLE_CAT, NULL AS TABLE_SCHEM, c.relname AS TABLE_NAME, CASE c.relname ~ '^pg_' WHEN true THEN CASE c.relname ~ '^pg_toast_' WHEN true THEN CASE c.relkind WHEN 'r' THEN 'SYSTEM TOAST TABLE' WHEN 'i' THEN 'SYSTEM TOAST INDEX' ELSE NULL END WHEN false THEN CASE c.relname ~ '^pg_temp_' WHEN true THEN CASE c.relkind WHEN 'r' THEN 'TEMPORARY TABLE' WHEN 'i' THEN 'TEMPORARY INDEX' WHEN 'S' THEN 'TEMPORARY SEQUENCE' WHEN 'v' THEN 'TEMPORARY VIEW' ELSE NULL END WHEN false THEN CASE c.relkind WHEN 'r' THEN 'SYSTEM TABLE' WHEN 'v' THEN 'SYSTEM VIEW' WHEN 'i' THEN 'SYSTEM INDEX' ELSE NULL END ELSE NULL END ELSE NULL END WHEN false THEN CASE c.relkind WHEN 'r' THEN 'TABLE' WHEN 'i' THEN 'INDEX' WHEN 'S' THEN 'SEQUENCE' WHEN 'v' THEN 'VIEW' WHEN 'c' THEN 'TYPE' ELSE NULL END ELSE NULL END AS TABLE_TYPE, NULL AS REMARKS FROM pg_class c WHERE true AND (false OR ( c.relkind = 'r' AND c.relname !~ '^pg_' ) OR ( c.relkind = 'r' AND c.relname ~ '^pg_' AND c.relname !~ '^pg_toast_' AND c.relname !~ '^pg_temp_' ) OR ( c.relkind = 'v' AND c.relname !~ '^pg_' ) ) ORDER BY TABLE_TYPE,TABLE_NAME`
Thanks again.
Status: Issue closed
username_0: Thanks very much. It`s working fine now. |
ianstormtaylor/slate | 802295122 | Title: Feature: Find the element by its unique id
Question:
username_0: #### Do you want to request a _feature_ or report a _bug_?
Feature
#### What's the current behavior?
Only able to find the path of a node with `ReactEditor.findPath()`
#### Explanation of Feature
In my project each Element has a unique `_id` field, and I would really love a feature where I can just find the path of an element with `ReactEditor.findPathById()`.
As of now I am using `Node.elements()`, which works fine, but if the element is at the end of the document it may take a long time to find the path. And I think ReactEditor maintains some sort of Map/WeakMap to store paths. So if you are able to implement something like that for `_id`, that would be great.
I am not sure if lot of people will get benefit from this feature. But still I think it is a great feature to have.
Answers:
username_1: Slate is really agnostic to whether any given node stores an id so I am not sure this type of feature will ever make it into the core library. As you've done though, any user can implement the feature on top of Slate, or feel free to release a plugin. Thanks!
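The `Node.elements()` approach the poster describes is a depth-first search over the document tree; a language-agnostic sketch of it (written in Python, with nested dicts standing in for Slate nodes, since `findPathById` is not a real Slate API):

```python
def find_path_by_id(node, target_id, path=()):
    """Depth-first search returning the path (tuple of child indexes)
    of the first element whose '_id' matches target_id, or None."""
    if node.get("_id") == target_id:
        return path
    for i, child in enumerate(node.get("children", [])):
        found = find_path_by_id(child, target_id, path + (i,))
        if found is not None:
            return found
    return None

doc = {"children": [
    {"_id": "a", "children": []},
    {"_id": "b", "children": [{"_id": "c", "children": []}]},
]}
print(find_path_by_id(doc, "c"))  # (1, 0)
```

Caching id-to-path mappings on top of this walk is what a plugin would add; the walk itself stays the worst case.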
Status: Issue closed
username_2: @username_1 when I try to implement it, I got an issue, please see: https://github.com/ianstormtaylor/slate/issues/4641 |
spacetelescope/jwst | 447327627 | Title: NIRISS SOSS regression test has input data in wrong location
Question:
username_0: The regression test
```
niriss.test_niriss_steps_single.TestNIRISSSOSS2Pipeline.[stable-deps] test_nirisssoss2pipeline1
```
has its data located in Artifactory in the wrong location
```
jwst-pipeline/dev/niriss/test_detector1pipeline/truth/jw10003001002_03101_00001-seg003_nis_calints_ref.fits
```
It should not be in the `detector1` area.<issue_closed>
Status: Issue closed |
icsharpcode/ILSpy | 501118232 | Title: Allow transformation pipeline to override CSharpDecompiler.MemberIsHidden
Question:
username_0: We need an API that allows us to tell the decompiler to ignore the result of `MemberIsHidden` and instead include a member or type in the decompilation result.
For example: In #1699 an auto property is not transformed, because it cannot be expressed as such in C#, however the decompiler does not include the backing field in the output.
Answers:
username_1: I don't think "an API" makes sense -- if each transform needs to explicitly report when it couldn't handle something, we still run into issues when a transform gets the reporting wrong.
The simpler solution is:
* first decompile only the non-hidden members
* if the decompiled code has references to hidden members, also decompile those
* iterate until fixed point is reached |
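That fixed-point iteration is a standard worklist closure; a small sketch of the shape (hypothetical member names and reference map, not ILSpy's actual data model):

```python
def members_to_decompile(visible, references):
    """Start from the visible members and keep pulling in any member
    the decompiled set references, until nothing changes.
    `references` maps a member to the members its body references."""
    result = set()
    worklist = list(visible)
    while worklist:
        member = worklist.pop()
        if member in result:
            continue  # already scheduled: fixed point for this member
        result.add(member)
        worklist.extend(references.get(member, ()))
    return result

# e.g. a property whose body references an otherwise-hidden backing field:
refs = {"Prop": ["<backing_field>"], "<backing_field>": []}
print(sorted(members_to_decompile(["Prop"], refs)))
# ['<backing_field>', 'Prop']
```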
satijalab/seurat | 774098288 | Title: integration error
Question:
username_0: I am trying to integrate two objects with one being a reference. I get the following error:
```
reference_dataset <-pbmc.list[[1]]
pbmc.list
```
[[1]]
An object of class Seurat
308079 features across 11186 samples within 5 assays
Active assay: SCT (25290 features, 3000 variable features)
4 other assays present: RNA, ATAC, peaks, sciPeaks
2 dimensional reductions calculated: pca, lsi
[[2]]
An object of class Seurat
280385 features across 2774 samples within 5 assays
Active assay: SCT (19042 features, 3000 variable features)
4 other assays present: RNA, ATAC, peaks, sciPeaks
2 dimensional reductions calculated: pca, lsi
```reference_dataset```
An object of class Seurat
308079 features across 11186 samples within 5 assays
Active assay: SCT (25290 features, 3000 variable features)
4 other assays present: RNA, ATAC, peaks, sciPeaks
2 dimensional reductions calculated: pca, lsi
```r
pbmc.anchors <- FindIntegrationAnchors(object.list = pbmc.list, normalization.method = "SCT",
    anchor.features = pbmc.features, reference = reference_dataset)
```
Error in h(simpleError(msg, call)): error in evaluating the argument 'i' in selecting a method for function '[': Incorrect number of logical values provided to subset features
Traceback:
```
1. FindIntegrationAnchors(object.list = pbmc.list, normalization.method = "SCT",
. anchor.features = pbmc.features, reference = reference_dataset)
2. unique(x = sort(x = reference))
3. sort(x = reference)
4. sort.default(x = reference)
5. x[order(x, na.last = na.last, decreasing = decreasing)]
6. order(x, na.last = na.last, decreasing = decreasing)
7. lapply(z, function(x) if (is.object(x)) as.vector(xtfrm(x)) else x)
8. FUN(X[[i]], ...)
9. as.vector(xtfrm(x))
10. xtfrm(x)
11. xtfrm.default(x)
12. as.vector(rank(x, ties.method = "min", na.last = "keep"))
13. rank(x, ties.method = "min", na.last = "keep")
14. x[!nas]
15. `[.Seurat`(x, !nas)
16. stop("Incorrect number of logical values provided to subset features")
17. .handleSimpleError(function (cond)
. .Internal(C_tryCatchHelper(addr, 1L, cond)), "Incorrect number of logical values provided to subset features",
. base::quote(`[.Seurat`(x, !nas)))
18. h(simpleError(msg, call))
```
<issue_closed>
Status: Issue closed |
Leoltron/liquid-log | 365211362 | Title: 2. Measure the speed of one more action - GetCatalogsAction (by 10.10)
Question:
username_0: To start with, in order to understand the application's structure, you need to add one more action, `GetCatalogsAction`, to the log parsing and to the display.
You can (and it is even best to) do this as simply as possible, i.e. by analogy with the other actions (if you take exactly this route, you will end up with only 4 changed files).
While working on this, try to answer the following questions:
* Was it easy to make sense of someone else's code?
* Was it easy to add a new action to the application? If yes, what made it easy; if not, why not?<issue_closed>
Status: Issue closed |
entaxy-project/entaxy | 340379165 | Title: usersnap - Box for Income Taxes and Portfolio menus should be clickable (not only arrow)
Question:
username_0: [Open #12 in Usersnap Dashboard](https://usersnap.com/a/#/twg/p/entaxy-a629d125/12)
<a href='https://usersnappublic.s3.amazonaws.com/2018-07-11/21-40/bfb84530-1696-4c88-af88-39ecd8911159.png'></a>
<a href='https://usersnappublic.s3.amazonaws.com/2018-07-11/21-40/bfb84530-1696-4c88-af88-39ecd8911159.png'>Download original image</a>
**Browser**: Chrome 67 (Macintosh)
**Referer**: [http://entaxy-staging.s3-website.ca-central-1.amazonaws.com/](http://entaxy-staging.s3-website.ca-central-1.amazonaws.com/)
**Screen size**: 2560 x 1440 **Browser size**: 1280 x 1321
Powered by [usersnap.com](https://usersnap.com/?utm_source=github_entry&utm_medium=web&utm_campaign=product)
ustc-zzzz/fmltutor | 278680100 | Title: Hello, ustc-zzzz
Question:
username_0: During Forge development, Eclipse keeps reporting that the configuration failed. Looking at the logs, Eclipse cannot load the project even when using an already configured one. Should I consider IDEA instead?
-- With that, thanks!
Answers:
username_1: What does it show?
username_0: Like this
username_0: org.eclipse.core.runtime.CoreException: The Class File Viewer cannot handle the given input ('org.eclipse.ui.ide.FileStoreEditorInput').
at org.eclipse.jdt.internal.ui.javaeditor.ClassFileEditor.doSetInput(ClassFileEditor.java:677)
at org.eclipse.ui.texteditor.AbstractTextEditor$19.run(AbstractTextEditor.java:3220)
at org.eclipse.ui.internal.WorkbenchWindow.run(WorkbenchWindow.java:2098)
at org.eclipse.ui.texteditor.AbstractTextEditor.internalInit(AbstractTextEditor.java:3238)
at org.eclipse.ui.texteditor.AbstractTextEditor.init(AbstractTextEditor.java:3265)
at org.eclipse.ui.internal.EditorReference.initialize(EditorReference.java:361)
at org.eclipse.ui.internal.e4.compatibility.CompatibilityPart.create(CompatibilityPart.java:319)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.eclipse.e4.core.internal.di.MethodRequestor.execute(MethodRequestor.java:56)
at org.eclipse.e4.core.internal.di.InjectorImpl.processAnnotated(InjectorImpl.java:898)
at org.eclipse.e4.core.internal.di.InjectorImpl.processAnnotated(InjectorImpl.java:879)
at org.eclipse.e4.core.internal.di.InjectorImpl.inject(InjectorImpl.java:121)
at org.eclipse.e4.core.internal.di.InjectorImpl.internalMake(InjectorImpl.java:345)
at org.eclipse.e4.core.internal.di.InjectorImpl.make(InjectorImpl.java:264)
at org.eclipse.e4.core.contexts.ContextInjectionFactory.make(ContextInjectionFactory.java:162)
at org.eclipse.e4.ui.internal.workbench.ReflectionContributionFactory.createFromBundle(ReflectionContributionFactory.java:104)
at org.eclipse.e4.ui.internal.workbench.ReflectionContributionFactory.doCreate(ReflectionContributionFactory.java:73)
at org.eclipse.e4.ui.internal.workbench.ReflectionContributionFactory.create(ReflectionContributionFactory.java:55)
at org.eclipse.e4.ui.workbench.renderers.swt.ContributedPartRenderer.createWidget(ContributedPartRenderer.java:129)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createWidget(PartRenderingEngine.java:971)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:640)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:746)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.access$0(PartRenderingEngine.java:717)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$2.run(PartRenderingEngine.java:711)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createGui(PartRenderingEngine.java:695)
at org.eclipse.e4.ui.workbench.renderers.swt.StackRenderer.showTab(StackRenderer.java:1306)
at org.eclipse.e4.ui.workbench.renderers.swt.LazyStackRenderer.postProcess(LazyStackRenderer.java:103)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:658)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:746)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.access$0(PartRenderingEngine.java:717)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$2.run(PartRenderingEngine.java:711)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createGui(PartRenderingEngine.java:695)
at org.eclipse.e4.ui.workbench.renderers.swt.SWTPartRenderer.processContents(SWTPartRenderer.java:71)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:654)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$1.run(PartRenderingEngine.java:525)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createGui(PartRenderingEngine.java:509)
at org.eclipse.e4.ui.workbench.renderers.swt.ElementReferenceRenderer.createWidget(ElementReferenceRenderer.java:69)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createWidget(PartRenderingEngine.java:971)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:640)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:746)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.access$0(PartRenderingEngine.java:717)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$2.run(PartRenderingEngine.java:711)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createGui(PartRenderingEngine.java:695)
at org.eclipse.e4.ui.workbench.renderers.swt.SWTPartRenderer.processContents(SWTPartRenderer.java:71)
at org.eclipse.e4.ui.workbench.renderers.swt.SashRenderer.processContents(SashRenderer.java:151)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:654)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:746)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.access$0(PartRenderingEngine.java:717)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$2.run(PartRenderingEngine.java:711)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createGui(PartRenderingEngine.java:695)
at org.eclipse.e4.ui.workbench.renderers.swt.SWTPartRenderer.processContents(SWTPartRenderer.java:71)
[Truncated]
at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:337)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.run(PartRenderingEngine.java:1018)
at org.eclipse.e4.ui.internal.workbench.E4Workbench.createAndRunUI(E4Workbench.java:156)
at org.eclipse.ui.internal.Workbench$5.run(Workbench.java:654)
at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:337)
at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:598)
at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:150)
at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:139)
at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:134)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:104)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:380)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:235)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:669)
at org.eclipse.equinox.launcher.Main.basicRun(Main.java:608)
at org.eclipse.equinox.launcher.Main.run(Main.java:1515)
username_0: I also get this; it shows this problem
username_0: I can also launch it with the gradlew command, but Eclipse just won't load it
username_0: 
username_0: !SESSION 2017-12-17 18:04:25.842 -----------------------------------------------
eclipse.buildId=4.5.0.I20150603-2000
java.version=1.8.0_73
java.vendor=Oracle Corporation
BootLoader constants: OS=win32, ARCH=x86, WS=win32, NL=zh_CN
Command-line arguments: -os win32 -ws win32 -arch x86 -data E:\新建文件夹 (2)\forge\eclipse
!ENTRY org.eclipse.core.resources 4 567 2017-12-17 18:04:27.514
!MESSAGE Could not read the project location for 'MDKExample'.
!STACK 0
java.io.EOFException
at java.io.DataInputStream.readUnsignedShort(Unknown Source)
at java.io.DataInputStream.readUTF(Unknown Source)
at java.io.DataInputStream.readUTF(Unknown Source)
at org.eclipse.core.internal.resources.LocalMetaArea.readPrivateDescription(LocalMetaArea.java:365)
at org.eclipse.core.internal.localstore.FileSystemResourceManager.read(FileSystemResourceManager.java:855)
at org.eclipse.core.internal.resources.SaveManager.restoreMetaInfo(SaveManager.java:902)
at org.eclipse.core.internal.resources.SaveManager.restoreMetaInfo(SaveManager.java:882)
at org.eclipse.core.internal.resources.SaveManager.restore(SaveManager.java:733)
at org.eclipse.core.internal.resources.SaveManager.startup(SaveManager.java:1588)
at org.eclipse.core.internal.resources.Workspace.startup(Workspace.java:2386)
at org.eclipse.core.internal.resources.Workspace.open(Workspace.java:2157)
at org.eclipse.core.resources.ResourcesPlugin.start(ResourcesPlugin.java:463)
at org.eclipse.osgi.internal.framework.BundleContextImpl$3.run(BundleContextImpl.java:771)
at org.eclipse.osgi.internal.framework.BundleContextImpl$3.run(BundleContextImpl.java:1)
at java.security.AccessController.doPrivileged(Native Method)
at org.eclipse.osgi.internal.framework.BundleContextImpl.startActivator(BundleContextImpl.java:764)
at org.eclipse.osgi.internal.framework.BundleContextImpl.start(BundleContextImpl.java:721)
at org.eclipse.osgi.internal.framework.EquinoxBundle.startWorker0(EquinoxBundle.java:941)
at org.eclipse.osgi.internal.framework.EquinoxBundle$EquinoxModule.startWorker(EquinoxBundle.java:318)
at org.eclipse.osgi.container.Module.doStart(Module.java:571)
at org.eclipse.osgi.container.Module.start(Module.java:439)
at org.eclipse.osgi.framework.util.SecureAction.start(SecureAction.java:454)
at org.eclipse.osgi.internal.hooks.EclipseLazyStarter.postFindLocalClass(EclipseLazyStarter.java:107)
at org.eclipse.osgi.internal.loader.classpath.ClasspathManager.findLocalClass(ClasspathManager.java:531)
at org.eclipse.osgi.internal.loader.ModuleClassLoader.findLocalClass(ModuleClassLoader.java:324)
at org.eclipse.osgi.internal.loader.BundleLoader.findLocalClass(BundleLoader.java:327)
at org.eclipse.osgi.internal.loader.sources.SingleSourcePackage.loadClass(SingleSourcePackage.java:36)
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:398)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:352)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:344)
at org.eclipse.osgi.internal.loader.ModuleClassLoader.loadClass(ModuleClassLoader.java:160)
at java.lang.ClassLoader.loadClass(Unknown Source)
at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:140)
at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:134)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:104)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:380)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:235)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:669)
at org.eclipse.equinox.launcher.Main.basicRun(Main.java:608)
at org.eclipse.equinox.launcher.Main.run(Main.java:1515)
!ENTRY org.eclipse.core.resources 4 567 2017-12-17 18:04:27.584
!MESSAGE Could not read the project location for 'MDKExample'.
!STACK 0
[Truncated]
at org.eclipse.osgi.internal.framework.BundleContextImpl.start(BundleContextImpl.java:721)
at org.eclipse.osgi.internal.framework.EquinoxBundle.startWorker0(EquinoxBundle.java:941)
at org.eclipse.osgi.internal.framework.EquinoxBundle$EquinoxModule.startWorker(EquinoxBundle.java:318)
at org.eclipse.osgi.container.Module.doStart(Module.java:571)
at org.eclipse.osgi.container.Module.start(Module.java:439)
at org.eclipse.osgi.framework.util.SecureAction.start(SecureAction.java:454)
at org.eclipse.osgi.internal.hooks.EclipseLazyStarter.postFindLocalClass(EclipseLazyStarter.java:107)
at org.eclipse.osgi.internal.loader.classpath.ClasspathManager.findLocalClass(ClasspathManager.java:531)
at org.eclipse.osgi.internal.loader.ModuleClassLoader.findLocalClass(ModuleClassLoader.java:324)
at org.eclipse.osgi.internal.loader.BundleLoader.findLocalClass(BundleLoader.java:327)
at org.eclipse.osgi.internal.loader.sources.SingleSourcePackage.loadClass(SingleSourcePackage.java:36)
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:398)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:352)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:344)
at org.eclipse.osgi.internal.loader.ModuleClassLoader.loadClass(ModuleClassLoader.java:160)
at java.lang.ClassLoader.loadClass(Unknown Source)
at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:140)
at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:134)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:104)
username_2: Er...
Have you run gradlew eclipse?
The log above says Eclipse could not find the project file
username_1: This tutorial is badly outdated, so it is no longer being updated; please look for a tutorial that is still maintained.
Status: Issue closed
|
phylogeny-explorer/explorer | 381359142 | Title: Clade View
Question:
username_0: **[<NAME>](https://github.com/username_1)** 7 months ago
Currently, the clade view is not very good. It spits out the data, but doesn't really have any design to it. We need to explore some design possibilities that will organize the data in a friendlier way.<issue_closed>
Status: Issue closed |
microsoft/vscode-github-issue-notebooks | 636515359 | Title: Feature Request: merge queries
Question:
username_0: It would be nice to search multiple repos at once like:
```
$my_repos=[repo:foo, repo:bar, ...]
$my_repos is:issue is:open assignee:@me
```
Answers:
username_1: You can do that, e.g. like so
```
$repos=repo:microsoft/vscode repo:microsoft/vscode-remote-release repo:microsoft/vscode-js-debug repo:microsoft/vscode-pull-request-github repo:microsoft/vscode-github-issue-notebooks
$repos is:open
```
This is a sample that uses this: https://github.com/microsoft/vscode/blob/b4da98ac074daec5549b7395dbedf1eb3551832b/.vscode/notebooks/my-work.github-issues
Status: Issue closed
|
wrathematics/dequer | 93237458 | Title: Not so fast
Question:
username_0: Hi.
I tested code from the README and got some interesting results.
```r
library(dequer)
fun1 <- function(n) {
l <- list()
for (i in 1:n) l[[i]] <- i
return(l)
}
# right way to work with loops
fun2 <- function(n) {
l <- vector(mode = "list", length = n)
for (i in 1:n) l[[i]] <- i
return(l)
}
fun3 <- function(n) {
dl <- deque()
for (i in 1:n) pushback(dl, i)
l <- as.list(dl)
return(l)
}
```
And comparison:
```r
library(microbenchmark)
n <- c(1e2, 1e3, 1e4)
results <- lapply(n, function(x) microbenchmark(fun1(x), fun2(x), fun3(x), times = 50))
[[1]]
Unit: microseconds
expr min lq mean median uq max neval cld
fun1(x) 211.449 237.547 268.3748 246.9790 310.202 326.399 50 b
fun2(x) 133.002 157.149 176.2252 162.0155 202.334 276.736 50 a
fun3(x) 974.118 1018.808 1144.0400 1047.4845 1304.037 1361.196 50 c
[[2]]
Unit: milliseconds
expr min lq mean median uq max neval cld
fun1(x) 7.152716 9.402594 16.012337 9.739639 18.960025 63.77235 50 b
fun2(x) 1.257557 1.429933 2.032583 1.509856 2.036579 10.98680 50 a
fun3(x) 9.416961 9.855888 12.951196 9.985354 10.332416 112.30584 50 b
[[3]]
Unit: milliseconds
expr min lq mean median uq max neval cld
fun1(x) 633.13061 644.55194 1942.9682 700.27453 2724.14892 13842.183 50 b
fun2(x) 12.26946 13.07718 262.6274 13.74065 15.11343 2264.404 50 a
fun3(x) 96.24794 99.72093 1086.8249 100.34199 105.03587 17598.198 50 ab
```
Regards.
Answers:
username_1: Hi, thanks for the report.
This is as expected. The deque (or stack or queue) is not a substitute for an array; they're used in different circumstances. The example in the README is just for simplicity. If you aren't sure how many elements the final structure should have, then preallocation of an array is impossible. The two strategies in this case are to grow an array by some factor (1.5x-2x) whenever it's full and you need to add a new element, or to use a linked list. dequer uses the latter strategy.
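The growth-by-factor alternative username_1 mentions can be sketched in a few lines; this is an illustration of the amortized strategy (in Python, not R, and not part of dequer):

```python
class GrowableArray:
    """Append in amortized O(1) by doubling capacity when full --
    the alternative strategy to dequer's linked list."""

    def __init__(self):
        self._buf = [None]   # start with capacity 1
        self._len = 0
        self.reallocations = 0

    def append(self, x):
        if self._len == len(self._buf):           # full: grow by 2x
            self._buf = self._buf + [None] * len(self._buf)
            self.reallocations += 1
        self._buf[self._len] = x
        self._len += 1

a = GrowableArray()
for i in range(1000):
    a.append(i)
print(a.reallocations)  # 10 reallocations for 1000 appends (2^10 = 1024)
```

With doubling, the number of reallocations grows only logarithmically in the number of appends, which is why either strategy beats growing a list one slot at a time.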
Status: Issue closed
username_0: You're right. Thanks. |
kalexmills/github-vet-tests-dec2020 | 764461763 | Title: bitleak/go-redis-pool: pool_test.go; 11 LoC
Question:
username_0: [Click here to see the code in its original context.](https://github.com/bitleak/go-redis-pool/blob/bda28908d7017b20d869a5e6d07ba74f9f2efa91/pool_test.go#L467-L477)
<details>
<summary>Click here to show the 11 line(s) of Go which triggered the analyzer.</summary>
```go
for _, pool := range pools {
go func() {
time.Sleep(100 * time.Millisecond)
pool.LPush(key, "e1", "e2")
}()
Expect(pool.BLPop(time.Second, key).Val()).To(Equal([]string{key, "e2"}))
Expect(pool.BLPop(time.Second, key).Val()).To(Equal([]string{key, "e1"}))
if pool == shardPool {
Expect(pool.BLPop(time.Second, key, noExistsKey).Err()).To(HaveOccurred())
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
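The hazard being classified here is range-variable capture: each goroutine closes over the same `pool` loop variable. Python closures show the same late-binding behavior, so the pattern and its usual fix can be sketched here in Python:

```python
# Buggy: every lambda closes over the same name `i`, which is 2 by the
# time the lambdas run -- the same hazard as capturing a Go range variable.
buggy = [lambda: i for i in range(3)]
print([f() for f in buggy])    # [2, 2, 2]

# Fixed: bind the current value as a default argument (the Python
# equivalent of passing the range variable into the goroutine as a parameter).
fixed = [lambda i=i: i for i in range(3)]
print([f() for f in fixed])    # [0, 1, 2]
```

In the snippet above the goroutines only read `pool` before the loop advances in most timings, which is why such instances often get classified as mitigated rather than as outright bugs.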
commit ID: bda28908d7017b20d869a5e6d07ba74f9f2efa91<issue_closed>
Status: Issue closed |
MicrosoftDocs/dynamics365smb-devitpro-pb | 542805291 | Title: Regarding "Add custom control add-ins to the server instance."
Question:
username_0: The step "Add custom control add-ins to the server instance" and its reference https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/upgrade/converting-a-database#controladdins in no way discuss how to handle references to old versions of DotNet components. A substantial number of standard objects reference DotNet components with a "version" (in the case of a NAV 2018 technical upgrade):
Version=172.16.31.10, Culture=neutral, PublicKeyToken=31bf3856ad364e35
You either (1) need to add the specific .dll(s) to the upgraded installation or (2) remove the "version" from the DotNet references in the objects (the latter I personally prefer, and I would encourage MS to leave the version out of their objects anyway - please also read https://dynamics.is/?p=2944).
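Option (2), removing the version, is mechanical enough to script; a sketch in Python (a simple regex over exported object text with a hypothetical assembly reference, not an official tool):

```python
import re

# Strip the ", Version=..." token from assembly-qualified DotNet
# references so the objects bind to whatever assembly version is present.
VERSION_RE = re.compile(r",\s*Version=[\d.]+")

def strip_versions(object_text):
    return VERSION_RE.sub("", object_text)

ref = ("'Some.Nav.ControlAddIn, "
       "Version=172.16.31.10, Culture=neutral, PublicKeyToken=31bf3856ad364e35'")
print(strip_versions(ref))
# 'Some.Nav.ControlAddIn, Culture=neutral, PublicKeyToken=31bf3856ad364e35'
```

Run over exported object text files before import, this implements the "leave version out" approach in one pass.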
Please add this information to the topic.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 5856d30d-b2c7-44b4-0aa9-45c09421f1e6
* Version Independent ID: b7fba357-9603-ac13-1191-c08b0faa34be
* Content: [Technical Upgrade Quick Reference - Business Central](https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/upgrade/technical-upgrade-checklist)
* Content Source: [dev-itpro/upgrade/technical-upgrade-checklist.md](https://github.com/MicrosoftDocs/dynamics365smb-devitpro-pb/blob/live/dev-itpro/upgrade/technical-upgrade-checklist.md)
* Service: **dynamics365-business-central**
* GitHub Login: @jswymer
* Microsoft Alias: **jswymer** |
mongodb/mongodb-enterprise-kubernetes | 700169536 | Title: Cannot specify the persistent volume claim in the ops manager yaml file
Question:
username_0: can anyone help me, please !!!
Answers:
username_0: can anyone help me, please !!!
username_1: hi @username_0
Could you describe in more details what you are trying to achieve? Do you want to use the custom PVC for appdb or OM or both?
Can you show the code for the `mongo-pvc` PVC?
In the example given I see some inconsistencies:
* `mongodb-ops-manager` container is mentioned for both `spec.applicationDatabase` and `spec`
* the same mount configuration is used for both `spec` and `spec.applicationDatabase`
username_0: Hi @username_1
Thanks for your reply.
Actually what I need is to define the persistent volumes for both the appdb and OM, because when I create the OM without defining the PVC, the created containers take a random persistent volume claim.
Here is my PVC definition
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-ops-manager-db-1
spec:
  storageClassName: mongodb-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 15Gi
```
The PVC is created successfully and it's based on Ceph block storage. The idea here: I have tried different things to define the volumes and volume mounts, but they are not working, so how can I define them inside the Ops Manager definition?
username_1: Not sure I understand this. Let me try to explain the architecture a bit.
For each MongoDBOpsManager resource the Operator creates at least two StatefulSets: one for the application database and one for the Ops Manager application. Optionally the third StatefulSet can be created for Backup Daemon.
The only `PersistentVolumeClaim` created is for the Application database in case if `persistent` is set to `true` (it's enabled by default). The other PVC is created for the Backup Daemon (for the HeadDB) but I guess it's not relevant for the given example.
There are some additional volumes/mounts for each pod which serve as internal mechanisms to pass some configuration/scripts to the pod but they don't relate to PVC.
If you want to configure the PVCs created for the Application database then you can set the `podSpec.persistence` configuration: https://docs.mongodb.com/kubernetes-operator/stable/reference/k8s-operator-om-specification/#opsmgrkube.spec.applicationDatabase and this will change the parameters of the PVC created by the Operator.
username_0: @username_1
Here is a brief summary of what I want to achieve.
I used the default template of Ops Manager to create a MongoDB Ops Manager. When I apply it in Kubernetes, it is applied, but when checking the OM status by running this command
```
kubectl get om -n mongodb
```
I see that the created OM has a reconciling status and never runs. When I checked the pods related to this OM, I found that their status is pending, and when I described them I found that the pod failed to initialize with this message:
```
Warning FailedScheduling default-scheduler running "VolumeBinding" filter plugin for pod "ops-manager-db-0": pod has unbound immediate PersistentVolumeClaims
```
This happens because this internal pod related to the created OM selects a PersistentVolumeClaim named `ops-manager-db-0`, matching the name of the pod.
My problem now:
I have an existing PVC and I want to reference this PVC by name in the MongoDB Ops Manager YAML file, to allow the internal pods to use this persistent volume claim to mount their data.
username_1: thanks for the update @username_0
I see the problem now.
So the PVC template for the AppDB is created by the Operator in case if `persistent` is set to `true` or omitted as I've described above. It has an empty `storageClass`, some default `size` and empty `labelSelector`.
The Kubernetes cluster usually creates a dynamic PV based on this PVC values. In your case it failed to create a dynamic PV because:
- either it doesn't support the dynamic volumes provisioning
- or it cannot create a dynamic volume based on the default parameters for the PVC
What I'd recommend to you is to play with a simplified example and create a pod with `busybox` and try to use the same PVC that is created by the Operator for the appdb and check if it gets mounted to the volume automatically and if not - why not.
Maybe you just need to specify special PVC configuration that will make things work
username_0: thanks for your response @username_1
I already checked the dynamic PVC that is created and found it fails to mount the data to the PV,
so please guide me on how I can specify a special PVC configuration in the MongoDB Ops Manager definition.
username_0: Hi @username_1
can you please help me because this blocks me
username_1: @username_0
My recommendation stays the same: first try playing with the PVC values using `spec.applicationDatabase.podSpec.persistence.single` (https://docs.mongodb.com/kubernetes-operator/stable/reference/k8s-operator-om-specification/#opsmgrkube.spec.applicationDatabase). This will allow you to change the PVC Template configuration for the AppDB that the Operator will create. You don't need to provide custom PVC or mounts for your need. In fact, it's not possible to provide custom PVC **Templates** for AppDB or for the MongoDB resources - only for OpsManager pods.
The second action is to simplify the task and get some sample pod with some PVC and make sure dynamic volume provisioning works. If you find such PVC values then you can replicate them into `podSpec.persistence.single` and the volume will be provisioned for your appdb.
username_0: @username_1
I'm actually trying to simplify the task. I followed the documentation you mentioned and added this definition for the MongoDB Ops Manager:
```
apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: ops-manager
  namespace: mongodb
spec:
  replicas: 3
  version: 4.4.1
  adminCredentials: ops-manager-admin-secret
  configuration:
    mms.testUtil.enabled: "true"
  externalConnectivity:
    type: NodePort
  applicationDatabase:
    members: 3
    persistent: true
    podSpec:
      persistence:
      multiple:
        data:
          storage: 15G
          storageClass: mongodb-storage
        journal:
          storage: 15G
          storageClass: mongodb-storage
        logs:
          storage: 15G
          storageClass: mongodb-storage
```
The problem still exists: my OM status is reconciling, and when checking its related pods I get this message
```
Warning FailedScheduling default-scheduler running "VolumeBinding" filter plugin for pod "ops-manager-db-0": pod has unbound immediate PersistentVolumeClaims
```
The only thing I want is to get the MongoDB Ops Manager to run successfully so I can go to the next step in setting up MongoDB on Kubernetes.
Thanks for your help.
username_1: @username_0
You have a wrong indentation for the `multiple` field in your spec.
As a temporary solution, you can specify `applicationDatabase.persistent: false`, and in this case the PVC won't be created. Of course, this should be used only for testing purposes.
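For reference, a sketch of the same `applicationDatabase` section with `multiple` nested under `persistence` (field names from the linked spec; values are illustrative and unverified):

```yaml
applicationDatabase:
  members: 3
  persistent: true
  podSpec:
    persistence:
      multiple:          # nested under persistence, not a sibling of it
        data:
          storage: 15G
          storageClass: mongodb-storage
        journal:
          storage: 15G
          storageClass: mongodb-storage
        logs:
          storage: 15G
          storageClass: mongodb-storage
```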
Status: Issue closed
username_2: In order to help us manage issues and requests we have switched from GitHub Issues to the [MongoDB Support Center](https://support.mongodb.com/). Please use your Enterprise support account to raise your requests.
If this issue is still affecting you, please open a [Support Case](https://support.mongodb.com/) or for non-blocking issues, you can make a [Feature Request](https://feedback.mongodb.com/forums/924355-ops-tools) |
hilongjw/vue-lazyload | 484263771 | Title: Cancelling load
Question:
username_0: Hello, is there a way to cancel or stop loading? I have an object that may or may not have an image source. I'm handling it this way
```js
<div v-lazy:background-image="obj.bg === 'image' ? obj.bg.src : undefined"></div>
```
I was trying to handle it in the adapter by checking if `src === undefined` and removing attributes, but it continues trying to process.
Is there a way for me to say "hey this is undefined stop trying to process it"?
Answers:
username_1: Did anyone find a solution to this issue? |
Canop/broot | 1180089187 | Title: capture_mouse: false doesn't work
Question:
username_0: Hi,
Thank you for this awesome project!
I want to select a file path using the mouse and copy it. I tried to do this by disabling mouse capture; the documentation says I can disable it by setting `capture_mouse: false`, but it doesn't work. I've tested it with broot 1.9.4 running in:
- xfce4-terminal on ubuntu 22.04
- cmd/PowerShell/Windows Terminal on Windows 10
Thank you. |
Shm013/certbot-dns-selectel | 691780904 | Title: Obtain though certbot
Question:
username_0: Hello. Thanks for the great plugin!))
I've tried to get a wildcard certificate through certbot but got an error:
```
sudo certbot certonly -a certbot-dns-selectel:dns-selectel --certbot-dns-selectel:dns-selectel-credentials /home/ubuntu/certbot/credentials.ini --certbot-dns-selectel:dns-selectel-propagation-seconds 30 -d domain.com -d "*.domain.com" -m <EMAIL> --agree-tos -n
usage:
certbot [SUBCOMMAND] [options] [-d DOMAIN] [-d DOMAIN] ...
Certbot can obtain and install HTTPS/TLS/SSL certificates. By default,
it will attempt to use a webserver both for obtaining and installing the
certificate.
certbot: error: unrecognized arguments: --certbot-dns-selectel:dns-selectel-credentials /home/ubuntu/certbot/credentials.ini --certbot-dns-selectel:dns-selectel-propagation-seconds 30
```
I also tried to connect the plugin to certbot as the [official documentation](https://certbot.eff.org/lets-encrypt/snap-nginx) says... but that didn't work either.
```
sudo snap connect certbot:plugin certbot-dns-selectel
error: snap "certbot-dns-selectel" has no "content" interface slots
```
Does this plugin require a specific version of certbot?
Answers:
username_1: Hi! Sorry for the long wait.
Yes, it might be so, but I forget why.
Anyway, you can use the Docker version of this plugin - it will work.
I hope to provide better support for this plugin soon. It was abandoned for a long time.
Sinotrade/Shioaji | 445667597 | Title: Unable to get Stock product data from Contracts
Question:
username_0: After logging in, I can see one futures account and one stock account via api.list_accounts().
Next, I set the stock account as the default via api.set_default_account(api.stock_account).
But after doing so, api.Contracts still has no stock product information:
Contracts(Stocks=(), Futures=(BRF, EXF, GDF, MXF, RHF, RTF, SPF, TGF, TXF, UDF, XAF, XBF, XEF, XJF), Options=(TXO))
At the same time, api.get_account_margin() also returns the futures account's information.
Is there any way to solve this problem and get the Contracts information for stock products?<issue_closed>
Status: Issue closed |
decred/decrediton | 520082954 | Title: [1.5.0-rc1] Testnet: Proposals are not being displayed
Question:
username_0: All tabs are empty despite proposals showing on https://test-proposals.decred.org/

Answers:
username_1: Steps to reproduce?
username_0: Open decrediton in testnet mode - click on the governance tab.
username_0: Potential cause, 403 response:

username_1: Weird, here it works.
can you try disabling and then enabling to see what happens?
Status: Issue closed
|
saltstack/salt | 222237948 | Title: State compilation errors exit with code 0 and errors written to stdout
Question:
username_0: ### Description of Issue/Question
I just noticed I've had some highstate cron jobs failing for some time silently, since state compilation errors don't exit with a non-zero status code or write anything to stderr (which is required for cron to send me an email).
### Setup
(Please provide relevant configs and/or SLS files (Be sure to remove sensitive info).)
```yaml
# mystate.sls
mystate:
  - list: True
```
### Steps to Reproduce Issue
```sh
$ sudo salt-call state.sls mystate 2>/dev/null
local:
    Data failed to compile:
----------
    ID mystate in SLS teststate is not a dictionary
$ echo $?
0
```
The output is written to stdout, as you can see since stderr is redirected. All error output should be written to stderr and the process should exit with a non-zero status code to signal error.
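As a stopgap until this is fixed, a cron wrapper can treat the error banner on stdout as a failure. This is only a sketch: the captured output below is hard-coded to stand in for the real `salt-call` invocation.

```shell
# Sketch of a cron wrapper: salt-call exits 0 even when state compilation
# fails, so scan its stdout for the error banner instead.
# In the real job this would be: output=$(sudo salt-call state.sls mystate 2>&1)
output='local:
    Data failed to compile:
----------
    ID mystate in SLS teststate is not a dictionary'

status=0
case "$output" in
  *"Data failed to compile"*) status=1 ;;
esac

printf '%s\n' "$output"
# A real wrapper would end with: exit "$status"
echo "wrapper status: $status"
```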
### Versions Report
(Provided by running `salt --versions-report`. Please also mention any differences in master/minion versions.)
2016.11.3.
Status: Issue closed
Answers:
username_1: This is a duplicate of https://github.com/saltstack/salt/issues/18510
We plan on fixing this in the Oxygen release later this year.
Thanks,
Daniel |
ManageIQ/manageiq-appliance | 849765764 | Title: bin/rpm-build.sh -t release -r jansa-3
Question:
username_0: Hi
My last successful build was a long time ago.
Can you advise where I went wrong?
```
[me@centos8t01 manageiq-appliance-build]$ cat /etc/redhat-release
CentOS Linux release 8.3.2011
[me@centos8t01 manageiq-appliance-build]$ ls
bin CHANGELOG.md config kickstarts lib LICENSE.txt OPTIONS Rakefile README.md rpm_cache scripts
[me@centos8t01 manageiq-appliance-build]$ bin/rpm-build.sh -t release -r jansa-3
+++ readlink -f bin/rpm-build.sh
++ dirname /build/home/me/github/manageiq-appliance-build/bin/rpm-build.sh
+ BUILD_DIR=/build/home/me/github/manageiq-appliance-build/bin/..
+ CONFIG_OPTION=/build/home/me/github/manageiq-appliance-build/bin/../OPTIONS
+ RPM_BUILD_IMAGE=manageiq/rpm_build
+ RPM_CACHE_DIR=/build/home/me/github/manageiq-appliance-build/bin/../rpm_cache
+ getopts t:r:h opt
+ case $opt in
+ BUILD_TYPE=release
+ getopts t:r:h opt
+ case $opt in
+ REF=jansa-3
+ getopts t:r:h opt
+ '[' release '!=' nightly ']'
+ '[' release '!=' release ']'
+ '[' -z jansa-3 ']'
+ '[' jansa-3 = master ']'
+ tag=latest-jansa
+ RPM_BUILD_IMAGE=manageiq/rpm_build:latest-jansa
+ cmd='build --build-type release --git-ref jansa-3 --update-rpm-repo'
+ docker pull manageiq/rpm_build:latest-jansa
latest-jansa: Pulling from manageiq/rpm_build
Digest: sha256:94adaccec8d7d6d14666de2953f34695095f28ccf771ed1518f98931c5835245
Status: Image is up to date for manageiq/rpm_build:latest-jansa
docker.io/manageiq/rpm_build:latest-jansa
+ docker run --rm -v /build/home/me/github/manageiq-appliance-build/bin/../OPTIONS:/root/OPTIONS:Z -v /build/home/me/github/manageiq-appliance-build/bin/../rpm_cache:/root/rpm_cache:Z manageiq/rpm_build:latest-jansa build --build-type release --git-ref jansa-3 --update-rpm-repo
Don't run Bundler as root. Bundler can ask for sudo if it is needed, and
installing your bundle as root will break this application for all non-root
users on this machine.
Fetching gem metadata from https://rubygems.org/.........
Resolving dependencies...
Fetching rake 13.0.3
Installing rake 13.0.3
Fetching concurrent-ruby 1.1.8
Installing concurrent-ruby 1.1.8
Fetching i18n 1.8.10
Installing i18n 1.8.10
Fetching minitest 5.14.4
Installing minitest 5.14.4
Fetching tzinfo 2.0.4
Installing tzinfo 2.0.4
Fetching zeitwerk 2.4.2
Installing zeitwerk 2.4.2
Fetching activesupport 6.1.3.1
Installing activesupport 6.1.3.1
Fetching public_suffix 4.0.6
Installing public_suffix 4.0.6
Fetching addressable 2.7.0
Installing addressable 2.7.0
[Truncated]
---> bundle _2.1.4_ install --jobs 4 --retry 3
Don't run Bundler as root. Bundler can ask for sudo if it is needed, and
installing your bundle as root will break this application for all non-root
users on this machine.
Fetching gem metadata from https://rubygems.org/.
Resolving dependencies...
Using bundler 2.1.4
Fetching bundler-inject 1.1.0
Installing bundler-inject 1.1.0
Installed plugin bundler-inject
Fetching source index from https://rubygems.manageiq.org/
Fetching gem metadata from https://rubygems.org/......
Your bundle is locked to mimemagic (0.3.5), but that version could not be found
in any of the sources listed in your Gemfile. If you haven't changed sources,
that means the author of mimemagic (0.3.5) has removed it. You'll need to update
your bundle to a version other than mimemagic (0.3.5) that hasn't been removed
in order to install.
[me@centos8t01 manageiq-appliance-build]$
```
Answers:
username_1: This is due to the major Rails breakage with the mimemagic gem. To fix this, you would have to unlock rails in the Gemfile.lock and upgrade it to at least 5.2.5. Are you using the https://github.com/ManageIQ/manageiq/blob/jansa/Gemfile.lock.release file? If so, that would need to be updated.
cc @bdunne - looks like we need to upgrade rails on a few of the old versions. |
rattias/mscviewer | 89198833 | Title: LogList should have fixed width/height for elements
Question:
username_0: We need to compute the maximum width of each line and the height of a line, and set those in the LogList; otherwise, when first updating the list after a load, the whole list is traversed to compute the actual line size. Also, non-fixed-size lists take more memory, as a height is maintained for each line.
It may be acceptable to approximate the actual line width with the width of a "large character" multiplied by the maximum line length.<issue_closed>
Status: Issue closed |
obophenotype/uberon | 833927659 | Title: NTR: lymph node interfollicular cortex
Question:
username_0: **Preferred term label:**
lymph node interfollicular cortex
**Synonyms**
interfollicular cortex of lymph node
**Definition (free text, please give PubMed ID)**
Follicules in the lymph node are surrounded and separated by the interfollicular cortex. Together with the lymph node follicle, the interfollicular cortex of the lymph node lobule make up the superficial cortex of the lymph node. Capillaries empty into the high endothelial venules located in the interfollicular cortex. The interfollicular cortex also serves as transit corriders for lymphocytes migrating between the B and T cell areas. PMID: 17067937
**Parent term (use https://www.ebi.ac.uk/ols/ontologies/uberon)**
lymph node: http://purl.obolibrary.org/obo/UBERON_0000029
**Your nano-attribution (ORCID)**
0000-0003-4183-8865
Answers:
username_1: Note to self:
Nancy requested lymph node interfollicular cortex;
1) UBERON:0010417 'lymph node T cell domain' is defined as "The paracortex and interfollicular cortex of the lymph node in which T lymphocytes home to survey dendritic cells";
so should the new term be part_of it?
2) The recently created, not-yet-released UBERON:8410036 'medullary venule of lymph node' is defined as "Medullary venules are a continuation of high endothelial venules which condense repeatedly in the interfollicular cortex and peripheral deep cortical unit and then transition to medullary venules at the corticomedullary junction. The medullary venules condense and return centripetally to the hilar vein.";
so should the new term be linked to it?
username_2: 1. I agree this new term should be part_of 'lymph node T cell domain', based on that term's definition.
2. I don't think there's a need to link 'lymph node interfollicular cortex' to 'medullary venule of lymph node' because the latter term refers to a blood vessel that seemingly may span multiple parts of the lymph node. But I don't fully understand the definition of 'medullary venule of lymph node': Are medullary venues only at the corticomedullary junction or are they also part of the interfollicular cortex and peripheral deep cortical unit? I think the definition favors the first interpretation but could be read to favor the second. Are medullary venules a type of high endothelial venule, or a separate type of entity?
-- Alex
username_1: Note to self:
- [ ] Add as subclass of (_not_ equivalent to) 'cortex' and part of 'lymph node T cell domain'
- [ ] Address Alex' question on medullary venules separately.
username_1: @username_0 and @sfexova
UBERON:8410067 lymph node interfollicular cortex
username_1: Note to self: left to do:
- [ ] Address Alex' question 2 here https://github.com/obophenotype/uberon/issues/1809#issuecomment-857165004.
username_1: Discussed at curators meeting today, @username_0 will address. Thanks.
username_1: I added the requested Uberon term previously, so I'll close this ticket.
@username_0 if you wish to address Alex' question as per last two comments above, please feel free to comment tagging both of us, and I'll reopen the ticket if needed.
Thanks,
Paola
Status: Issue closed
|
erigones/esdc-ce | 688687606 | Title: `Get latest version` button does not work with recent Firefox Nightly browser
Question:
username_0: There is (_probably_) a CORS issue with the `Get latest version` button in `System>Maintenance>Update Danube Cloud on mgmt`. The button does not work as expected in recent Firefox Nightly (and probably also in other browsers in the near future).
The error displayed in developer tools is:
```Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://danubecloud.org/api/releases?system_version=4.2. (Reason: CORS request did not succeed).```
After entering a particular version (like, `v4.3`, the upgrade as such works).
More information on CORS can be found here:
https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS/Errors
Answers:
username_0: After CORS adjustments, this works.
Status: Issue closed
username_1: Fixed by adding `Access-Control-Allow-Origin: *` to the service. |
backdrop/backdrop-issues | 516325177 | Title: EntityDatabaseStorageController::create() should require entity class
Question:
username_0: **Description of the bug**
```
public function create(array $values) {
  $class = isset($this->entityInfo['entity class']) ? $this->entityInfo['entity class'] : 'Entity';
  return new $class($values);
}
```
This implies that if `$this->entityInfo['entity class']` is not set, the function will then return `new Entity()`, but that's bogus, since Entity is abstract and trying to instantiate `new Entity()` will result in a PHP fatal.
**Steps To Reproduce**
Create an entity type and don't declare `'entity class'` in `hook_entity_info()`, then try to call `entity_create()` on your new entity.
We should either require `'entity class'` or throw an exception or something.
Answers:
username_0: Just realised this is almost a dupe of https://github.com/backdrop/backdrop-issues/issues/2558. We fixed the documentation but not this bit of code.
Would something like this be in order here?:
```
public function create(array $values) {
  $class = isset($this->entityInfo['entity class']) ? $this->entityInfo['entity class'] : NULL;
  if (empty($class)) {
    throw new EntityMalformedException(t('Missing Entity class'));
  }
  return new $class($values);
}
```
username_0: Immediately wondering if it would be better to use the entity machine name instead of its label, in case it also doesn't have a proper label. Thoughts?
username_1: Not sure if it's a big deal but then I haven't come across improper entity class labels.
username_0: Can we get a RTBC? Minor change I think.
username_1: I haven't tested this but the code change looks simple enough.
Status: Issue closed
username_2: Yep, looks good to me! Throwing an exception is much better than causing a PHP fatal! Thanks @username_0! Merged https://github.com/backdrop/backdrop/pull/2973 into 1.x and 1.14.x. |
haikuports/haikuports | 761660831 | Title: APFS: Implement for Haiku
Question:
username_0: APFS is Apple's next-generation '64-bit oriented' File System for Apple Watch, Apple TV, iPhone, iPad, MacBook, iMac, and Mac Pro optimized for flash and solid-state storage devices. APFS replaces Apple's HFS+ file system (i.e. on all current and newer devices).
NOTE: APFS implementation may not perform efficiently or properly on Haiku 32-bit platforms (i.e. if ported). Read/write/modify and file attributes may depend entirely on underlying APFS driver support.
1. Ref: https://dev.haiku-os.org/ticket/16333
2. Ref: https://github.com/sgan81/apfs-fuse |
easylist/easylist | 1163533829 | Title: EasyPrivacy: whitelist `fingerprint2.min.js` for finvasia.com (a finance website)
Question:
username_0: <!--
Easyprivacy requests:
** If a site implements any tracking or monitoring, UA/IP/Geo checks, browser detection, analytics, telemetry, linking to third-partys, pixels, referrers, fingerprinting, event/perf logging etc. Regardless how helpful or needed the script(s) are, it will be blocked in Easyprivacy. Privacy comes first and the block on these scripts will remain in place.
Any additions, changes or removals is at the Authors discretion.
You're free to counterargue (to a certain point) if you disagree with the decision.
To avoid being banned, don't constantly re-open or create new (related) issue reports.
-->
<!-- Just include the website URL in the Title line of this issue report -->
### List the website(s) you're having issues:
`shoonya.finvasia.com`
### What happens?
`https://shoonya.finvasia.com/fingerprint2.min.js` needs to be whitelisted; it is a finance website.
### List Subscriptions you're using:
<!-- Which adblock lists are you're using? -->
EasyPrivacy (Default one in uBlockOrigin)
### Your settings
<!-- Just to ensure there is no issues or conflicts with other webbrowser extensions.
Disable Noscript, Ghostery, Disconnect, HTTPS Everywhere, Privacy Badger before reporting (and re-test with them disabled).
Just ensure you're running just one Adblock extension only -->
- OS/version:
- Browser/version:
- Adblock Extension/version: uBlockOrigin
### Other details:
<!-- If you suspect certain filters (this helps spending time to debug it manually).
If you have a screen shot of the issue or advert, this will help to highlight it. --><issue_closed>
Status: Issue closed |
entria/react-native-fontawesome | 215177859 | Title: Please update NPM JS Readme
Question:
username_0: Hey @username_2,
can you please update the NPM JS Readme :)
That would be awesome for beginners to better get started with the awesome library.
Greetings,
Arne
Answers:
username_1: Hey @username_0 see https://github.com/entria/react-native-fontawesome#installation-process
username_0: Hey @username_1,
Thanks, but I wrote that documentation.
It's just a task reminder for Rafael (@username_2) because nobody else has access to the npm repo.
Greetings
username_2: Hi @username_0 will try to update as soon as possible
username_2: Including @entria team as owners of this NPM
https://docs.npmjs.com/cli/owner
username_3: A [new version](https://www.npmjs.com/package/react-native-fontawesome) was released. Please check whether this problem was fixed in the new version, and feel free to reopen this issue if it wasn't.
**PS: REMEMBER TO SEE THE NEW DOCUMENTATION BECAUSE THERE ARE SOME BREAKING CHANGES**
Status: Issue closed
|
fo-dicom/fo-dicom | 428868957 | Title: Photometric Interpretation updates on Transfer Syntax changes
Question:
username_0: I found a need to update the PHOTOMETRIC INTERPRETATION when converting images to a baseline jpeg syntax.
```
df = DicomFile.Open(SourceFile.FullName, DicomEncoding.Default)
Dim djp As New Dicom.Imaging.Codec.DicomJpegParams
djp.Quality = 100
Dim Transcoder As New Dicom.Imaging.Codec.DicomTranscoder(df.Dataset.InternalTransferSyntax, DicomTransferSyntax.ExplicitVRLittleEndian)
Dim newds As Dicom.DicomDataset = Dicom.Imaging.Codec.DicomCodecExtensions.Clone(df.Dataset, DicomTransferSyntax.JPEGProcess1, djp)
'need to update photometric interpretation
newds.AddOrUpdate(DicomTag.PhotometricInterpretation, "YBR_FULL_422")
Dim newdf As New DicomFile(newds)
newdf.Save(TargetFile.FullName)
```
Status: Issue closed
Answers:
username_0: Thanks for the fix!
username_1: See issue #921 : this fix caused some critical bugs when encoding in JpegLossless.
A more stable solution is required.
username_1: this is done by the Codec-library in fo-dicom.Codec. Has been fixed with version -beta6 for fo-dicom 5 |
WoWManiaUK/Blackwing-Lair | 444142310 | Title: [Daily JC] Nibbler! No!
Question:
username_0: **Links:**
https://www.wowhead.com/quest=25158/nibbler-no#see-also
**What is happening:**
It gives me progression only for crafting Solid Zephyrite
**What should happen:**
Should give me progression for both : cut+craft
Status: Issue closed
Answers:
username_2: refixed with next update |
cerner/terra-toolkit | 361302686 | Title: Add watch flag to tt-wdio
Question:
username_0: # Feature Request
## Description
I was thinking of ways to tighten the feedback loop when writing wdio tests. It currently takes ~1 minute and 45 seconds to run the CLI, running locally, find all the tests using the specified patterns, compile everything, and execute 6 basic assertions in one form factor. Although this is not unreasonable when just executing these tests, it can get a little unwieldy when first learning the framework, writing wdio tests the first time, and then realizing you have written the test incorrectly and have to redo this process multiple times.
Informal Time breakdown
* ~at 50 seconds before seeing anything appear in CLI, which then prints up to "[Terra-Toolkit:serve-static] Webpack compilation started"
* ~at 1:15, prints "Server started listening at port:8080"
* 6 assertions take 37 seconds.
The wdio CLI runner already accepts a --watch flag, which we could expose to tt-wdio. I've been looking into the scripts in this project and I'm not totally sure how much the watch flag would help, as I don't know enough about the wdio lifecycle to know what would and wouldn't run. If it just runs everything again when a file has been changed, it wouldn't be much help. But if it just executes the assertions on the files that were found on initial run of the file-search-pattern-match, it could possibly save on that ~50 seconds from above.
Answers:
username_1: @username_0 Are you wanted to see immediate output on the test execution?
username_0: I think wdio-spec-reporter is more for formatting how the specs look. It doesn't actually change how the tests are executed. What I'm really looking for is to reduce dead time when I'm writing tests. For example, if I'm writing my very first wdio test, my one and only assertion might take 5 seconds to execute. However, it takes 1 minute and 30 seconds to find all the test files in the project, compile the project, create the docker containers, start the static server, and then clean up after all that. I don't really want to hit that 1:30 cost every single time I try to introduce a change to my test. In an ideal environment, I might hit it one time, execute 5 second test, make change, execute 5 second test, rinse repeat until I'm happy with the tests, and then have some time for cleaning everything up.
username_1: I don't recommend running tt-wdio for all locales and form factors if you are trying to learn how to run wdio tests. You can use webdriver's bin directly with `--watch` if desired. I am not familiar with this flag and haven't tested how it works.
My recommendation for quickly running & rerunning tests (when not changing src code), would be to pack your site assets with webpack in production mode and then supply the build directory to your wdio.config.js with the `site` key. This will directly server your assets and remove compiling before tests start.
Status: Issue closed
|
openHPI/jenz | 240233320 | Title: m.e.i.n.e.l for statistics
Question:
username_0: the [m.e.i.n.e.l-project](https://github.com/openhpi/m.e.i.n.e.l) provides polymer components for easily displaying statistics from data apis.
ToDos:
- [ ] learn how one can use Polymer 1 components in Angular 2
- [ ] embed them
Answers:
username_0: which basically means that `PolymerElement('barchart-basic')` in `app.module.ts` fails.
If you google that, you get a couple hints, but angular-cli guys basically say that this [won't be supported anymore](https://github.com/angular/angular-cli/issues/3707#issuecomment-268864098). A bummer.
## How to fix
Either, figure out a way to bypass the mentioned error (maybe by exporting them by hand), or find a completely different way to turn Polymer components into angular2 ones in general.
Every change leading to this error is shown in the `add-meinel` branch.
username_0: ## How to bypass bootstrapModule() missing
This. is. ugly.
Yes.
But it works and I don't see a better way atm thanks to angular cli.
```
// Ugly hack to convince Angular CLI that we call bootstrapModule somewhere
const hackThis = false;
if (hackThis) {
platformBrowserDynamic().bootstrapModule(AppModule);
}
```
in `main-polymer.ts` |
MangoTheCat/goodpractice | 309402756 | Title: Test for CI
Question:
username_0: This is sort of meta, but a good check would be to look for CI infrastructure. This could include
- Checking for CI by looking for a `.travis.yml` or a `.circleci/config.yml`
- Looking for a call to **covr** in those files
- Looking for badges in the package README
- One could even ping the service APIs to check on package status on the services as well - there would be enough info in the badge URLs. |
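A sketch of what the file and **covr** checks could look like (all names here are illustrative, not **goodpractice**'s actual API; a real check would run from R, this only shows the logic):

```python
import re

# Hypothetical names; goodpractice itself is an R package.
CI_CONFIG_FILES = (".travis.yml", ".circleci/config.yml")

def present_ci_configs(files):
    """Given paths relative to the package root, return the known
    CI config files among them."""
    return sorted(set(CI_CONFIG_FILES).intersection(files))

def calls_covr(config_text):
    """Heuristic: does a CI config appear to invoke the covr package?"""
    return re.search(r"\bcovr::|library\(covr\)", config_text) is not None
```

A badge check would be a similar regex over the README, and the service-API ping could reuse the URLs extracted from those badges.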
waLLxAck/Rabbit-Population-Explosion | 731642021 | Title: User Story - one male fox can breed with max of one female at all times
Question:
username_0: As a user I want one male fox to be able to breed with only one female fox during each breeding period so that I can accurately represent the breeding pattern of these foxes.
**Acceptance Criteria**
- one male fox can impregnate only one female fox during one breeding period (12 months)
**Testing**
- check there is a *min(maleFoxes, femaleFoxes)* number of pregnant foxes
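The testing bullet above pins the whole behaviour down to one invariant; a tiny sketch (function name hypothetical):

```python
def max_pregnant_foxes(male_foxes, female_foxes):
    # One male can impregnate at most one female per 12-month breeding
    # period, so pregnancies are capped by the scarcer sex.
    return min(male_foxes, female_foxes)
```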
Status: Issue closed |
geneontology/go-ontology | 311519249 | Title: GO:0036396; C:RNA N6-methyladenosine methyltransferase complex: change definition
Question:
username_0: Hi,
The definition of the RNA N6-methyladenosine methyltransferase complex should be updated (1) to reflect that it does not only mediate mRNA methylation and (2) to extend the composition of the complex in vertebrates.
Thanks
Sylvain
FROM:
An mRNA methyltransferase complex that catalyzes the post-transcriptional methylation of adenosine to form N6-methyladenosine (m6A). In budding yeast, the MIS complex consists of Mum2p, Ime4p and Slz1p. In vertebrates, the complex consists of METTL3, METTL14 and WTAP. PMID:22685417 PMID:24316715 PMID:24407421
TO:
An RNA methyltransferase complex that catalyzes the post-transcriptional methylation of adenosine to form N6-methyladenosine (m6A). In budding yeast, the MIS complex consists of Mum2p, Ime4p and Slz1p. In vertebrates, the complex consists of METTL3, METTL14 and associated components WTAP, ZC3H13, VIRMA, CBLL1/HAKAI and RBM15 (RBM15 or RBM15B). PMID:22685417 PMID:24316715 PMID:24407421 PMID:29535189 PMID:29547716 PMID:29507755
Answers:
username_1: Thanks for the update, Sylvain. I requested the term for a complex I curated earlier this year, just missed the new papers!
I agree with adding ZC3H13, VIRMA and CBLL1/HAKAI, as the TAP-MS has been confirmed by small-scale CoIPs in various combinations, but I'm not sure about RBM15/RBM15B. PMID:29535189 only has large-scale evidence from TAP-MS, and even then doesn't seem to pull down METTL3-METTL14. Knockdowns also don't seem to affect the core complex.
Birgit
username_0: Hi Birgit,
concerning RBM15 there is plenty of evidence. In vertebrates:
PubMed=29535189, PubMed=27602518
and Drosophila: PubMed=27602518, PubMed=27919077
I think it is quite important to include it since it is probably the
RNA-binding bridging factor that will target METTL3-METTL14
Thanks
Sylvain
--
<NAME>, PhD
Swiss-Prot group
SIB Swiss Institute of Bioinformatics
1, rue <NAME>
CH-1211 Geneva 4
Switzerland
<EMAIL> - www.sib.swiss
username_1: PMID:29535189 is only large-scale from a whole cell extract, I wouldn't consider it evidence for a stable complex as they could be a mix of smaller complexes.
PMID:27602518: only has evidence for the RNA binding and RBM15(B) - METTL3 binding, not the whole complex.
Also PMID:24100041: large-scale TAP-MS for WTAP and WTAP-HAKAI binary CoIP - no confirmatory assay for the full complex.
And PMID:29507755: doesn't even find RBM15(B) despite performing TAP-MS for all 3 core subunits, METTL3, METTL14 and WTAP.
I agree, RBM15(B) appears to act as the **RNA binding adaptor or chaperone to the substrate RNA** but METTL3 and METTL14 have RNA binding function themselves (PMID:24316715).
Also, complexes may have cell-type- or substrate-specific composition, as the papers look at either mRNAs or ncRNAs and at different cell lines, and find subtle differences even in the large-scale experiments.
username_1: Yes, always tricky to find a clear delineation of where a complex starts and finishes...
Your new entry keeps the ambiguity, I'm happy with that. Some papers would regard WTAP as part of the core complex which is why I had added it originally. I guess I'll have to split mine into the core MAC and associated MACON complexes... At least we have now implemented the versioning and it will give us a use case for lateral cross-referencing as well!
username_1: Pascale,
none of the refs are wrong, but none provide the full picture either. Happy to keep them all in.
Thanks, Birgit
Status: Issue closed
|
boto/boto3 | 547841734 | Title: s3.generate_presigned_post() ignoring "content-length-range" condition
Question:
username_0: I am trying to create a presigned URL via Lambda for uploading short videos for demos for work. We have used generate_presigned_post with images before and not had issues. Now when trying to upload videos above 10MB we get a 403 error. We have tried using:
```python
response = s3_client.generate_presigned_post(
    BUCKET_NAME,
    fileName,
    Fields={"acl": "public-read", "Content-Type": fileType},
    Conditions=[
        {"acl": "public-read"},
        {"Content-Type": fileType},
        ["content-length-range", 1000000, 1000000000]
    ]
)
```
to get ~1MB to ~1GB of acceptable range. However when doing a put to the responded URL anything after 10,485,760B (10MB) returns a 403. From what I can tell from the [docs](https://boto3.amazonaws.com/v1/documentation/api/1.9.42/reference/services/s3.html#S3.Client.generate_presigned_post) this is how we should be doing it
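For reference, a presigned POST is meant to be consumed by sending `response['fields']` plus the file as a multipart form POST (that is the usage the boto3 docs show with `requests.post(url, data=..., files=...)`); a small helper sketch, with made-up values in the example:

```python
def build_post_payload(presigned, filename, file_bytes):
    """Split a generate_presigned_post() response into the pieces a
    multipart POST needs, e.g. requests.post(url, data=data, files=files).
    The file part must come after the policy fields in the form."""
    url = presigned["url"]
    data = dict(presigned["fields"])  # key, acl, policy, signature, ...
    files = {"file": (filename, file_bytes)}
    return url, data, files
```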
Status: Issue closed
Answers:
username_1: Does it enforce the `content-length-range` condition? In other words, does it refuse to upload files smaller than 1000000 or bigger than 1000000000? I have some code very similar to yours and it seems to ignore the `content-length-range` condition. |
Prettyhtml/prettyhtml | 356972684 | Title: Be framework aware
Question:
username_0: - Format template expression correctly
While template expressions like {{}} are handled correctly, in some frameworks you can nest them, e.g. with loops or conditions. Those inner expressions should be indented one level further.
Status: Issue closed |
BadIdeaFactory/corporate | 410727551 | Title: The On1on
Question:
username_0: # I HEREBY INVOKE MY RIGHTS...
As described in section 11.11.B of the BIFFUD Corporate Bylaws I hereby invoke
my rights as a sentient being who has not uploaded their mind to the cloud for consideration of this project application by the BIFFUD Hive Mind. With this
application I submit my interest in becoming a Member of BIFFUD and having this
project supported and adored by all who can 🤔.
## Project Information
- Project Name: The On1on
- Project Haiku:
Some news is so dumb
It belongs in The Onion
So we put it there
- Project Analogy: It's like The Onion but with actual current events that are truly occurring in our world, the same one we have to live in every day.
### Project Description
In 2013 or so Dan and I built this project to parody a parody newspaper. News from Not the Onion automatically populates our handy Onion template: http://theon1on.com/
## Bylaw Questions
### How is this project a bad idea?
Well, let's see.
- We already negotiated the legalese with the CEO of The Onion when he visited MIT Media Lab. The main point of contention was that we had accidentally stolen the expensive font they license from a fine font foundry. We've since replaced it with a less criminal font. Dan also offended their engineers by lazily hotlinking their images onto our site. But now that's fixed, too, and we're all good! Except that the company has been sold for parts like 4 times since this all went down. So maybe we'll get some more angry lawyer letters.
- Another reason this project is a bad idea is that its news comes from Reddit, where it occasionally picks up some regressive, misogynist, and/or xenophobic funk. Future work on the project, should anyone be inspired, could include a moderation queue for this news.
- And maybe even fixing the Justin Bieber twitter embed, which has been stuck on this one tweet from 21 February 2013: https://twitter.com/justinbieber/status/304554593331843072.
- The site's premier banner sponsor, http://newsjack.in/, links to a website and project that is no longer online.
The good news is that this site continues to attract thousands of unique views a month with literally no effort in the past ~5 years. This proposal is mainly to bring the project into the BIFFUD stable, put it on the website, and reimburse the annual domain fee.
### If this project were a D&D Character, what alignment would it be and why?
Alignment (CHOOSE ONE):
Chaotic good
### Where are the lulz?
Our cruel sad world continues to produce headlines like "Burn Injuries From Viral Boiling Water Challenge Sending People To The Hospital"
### How does this project make people thinking face emoji?
This project forces people to reconsider the very idea of parody news by appropriating (fair use!) the brand and design of one of the most popular parody products around. And then those people need to reconsider whether parody is still funny if they live in a world more bizarre than the dark, twisted minds of comedy writers can even create. It's great!
### Who is involved?
[For each member...]
- Name: <NAME>
- Twitter: @username_0
- Github ID: @username_0
- Skillz: Had the idea, convinced Dan to do it rather than his graduate coursework, made a rose logo in the style of an onion that everyone always totally notices
- Project role / expectations: renewing the domain name until the day I die
- Project stake: 50%
- Name: <NAME>
- Twitter: @slifty
- Github ID: @slifty
- Skillz: Built pretty much the entire thing, Belieber
- Project role / expectations: maaaaybe fixing the Twitter integration?
- Project stake: 50%
### Who will be the project's Comptroller?
<NAME>eck
### Is this realistic to implement via BIFFUD?
Yeah, it's had exactly 6 years to off-gas any noxious properties. Plus, ONE THOUSAND TWITTER FOLLOWERS: https://twitter.com/theon1on
## Next Steps
1. Attend the next scheduled BIFFUD plotting session to plead your case.
2. If approved, we will create an entry for this project on BIFFUD.com, and work with anyone who would like to help refresh it for today's modern era.
### How (often) will you be providing updates to the organization?
Updates will be action-oriented |
trek10inc/awsume | 269458808 | Title: when region not set for profile, awsume should set the default region
Question:
username_0: most of my roles/users in `.aws/config` do not have region set. I expect `awsume` to fall back to the `[default]` region and set the `AWS_REGION` env var appropriately.
Instead, the AWS_REGION remains unset.
```
=========================AWS Profiles========================
PROFILE TYPE SOURCE MFA? REGION
ca-master User None No ap-southeast-2
default User None No ap-southeast-2
itoc User None Yes None
itoc-preprod Role ca-master Yes ap-southeast-2
mycli User None No ap-southeast-2
myuser User None No ap-southeast-2
osssio-audit Role itoc Yes None
osssio-consbilling Role itoc Yes None
osssio-dev Role itoc Yes ap-southeast-2
osssio-ops Role itoc Yes None
osssio-prod Role itoc Yes None
osssio-qldonline Role itoc Yes None
osssio-sandpit Role itoc Yes None
osssio-staging Role itoc Yes ap-southeast-2
osssio-test Role itoc Yes ap-southeast-2
seeingmachines-preprod Role ca-master Yes ap-southeast-2
seeingmachines-prod Role ca-master Yes ap-southeast-2
```
Answers:
username_1: I have released an update, 1.3.2, that will use the default profile's `region` when the profile Awsume is using does not have a `region` listed, which will be set to the `AWS_REGION` environment variable.
Thank you for the suggestion!
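The fallback described here boils down to a two-level lookup; a sketch (not AWSume's actual code):

```python
def effective_region(profiles, profile_name):
    """Region for a profile, falling back to the [default] profile's
    region when the profile has none; the result is what would end up
    in the AWS_REGION environment variable."""
    profile = profiles.get(profile_name, {})
    return profile.get("region") or profiles.get("default", {}).get("region")
```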
Status: Issue closed
|
vertica/vertica-python | 588942758 | Title: [Flask integration] Vertica-python integration with flask is allowing to insert duplicate value for primary key/unique column
Question:
username_0: The vertica-python integration with the Flask framework allows inserting duplicate values for a primary key column. The browser did not show the duplicates, but the records were inserted into the database.
```python
import os

from flask import Flask
from flask import render_template
from flask import request
from flask import redirect
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config.update(
    SECRET_KEY='',
    SQLALCHEMY_DATABASE_URI='vertica+vertica_python://<dbuser>:<dbpassword>@<dbmachineip>:<dbport>/<dbname>',
    SQLALCHEMY_TRACK_MODIFICATIONS=False
)

db = SQLAlchemy(app)


class Employee(db.Model):
    emp_name = db.Column(db.String(80), unique=True, nullable=False, primary_key=True)

    def __repr__(self):
        return "<Employeename: {}>".format(self.emp_name)


@app.route("/", methods=["GET", "POST"])
def home():
    employees = None
    if request.form:
        try:
            employee = Employee(emp_name=request.form.get("emp_name"))
            db.session.add(employee)
            db.session.commit()
        except Exception as e:
            print("Failed to show employee name ")
            print(e)
    employees = Employee.query.all()
    return render_template("home.html", employees=employees)


@app.route("/update", methods=["POST"])
def update():
    try:
        new_name = request.form.get("newname")
        old_name = request.form.get("oldname")
        employees = Employee.query.filter_by(emp_name=old_name).first()
        employees.emp_name = new_name
        db.session.commit()
    except Exception as e:
        print("Couldn't update employee name")
        print(e)
    return redirect("/")


@app.route("/delete", methods=["POST"])
def delete():
    emp_name = request.form.get("delname")
    employees = Employee.query.filter_by(emp_name=emp_name).first()
    db.session.delete(employees)
    db.session.commit()
    return redirect("/")


if __name__ == "__main__":
    db.create_all()
    app.run(host='ServerHostIP')
```
Answers:
username_1: Please provide the version info for those packages you installed. Thanks.
username_0: I have used 0.10.2 driver version.
username_1: @username_0 I mean what packages have you installed? Such as Flask, and SQLAlchemy. I guess you also used a SQLAlchemy Vertica dialect, where does it come from?
username_0: I have these versions
```
Package                   Version
------------------------- -------
Flask                     1.1.1
Flask-Migrate             2.5.2
Flask-SQLAlchemy          2.4.1
Flask-WTF                 0.14.3
SQLAlchemy                1.3.13
sqlalchemy-vertica        0.0.5
sqlalchemy-vertica-python 0.4.4
vertica-python            0.10.2
```
I used the pip command to set this up. |
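Worth noting: depending on version and configuration, Vertica can accept a declared primary key or unique constraint without enforcing it at insert time, in which case no client library would surface an error; that would match the symptoms above. Until the root cause is confirmed, an application-side guard is one workaround; a minimal sketch of the check itself, kept outside SQLAlchemy for clarity:

```python
def add_if_new(existing_names, emp_name):
    """Guard against duplicate primary keys on the application side,
    independent of whether the database enforces the constraint.
    Returns True when the name was actually added."""
    if emp_name in existing_names:
        return False
    existing_names.add(emp_name)
    return True
```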
commaai/openpilot | 1047190765 | Title: Na navigation if ui restarts while onroad
Question:
username_0: If you hit a ui freeze/watchdog, the ui restarts but doesn't fetch the prime status properly leaving you with no navigation after the restart.
Answers:
username_1: Not sure if I was supposed to add my nav crashes here, but #22855 is likely the same issue. My nav UI did not restart without having to reboot the comma 3. I tried various things like clearing the route, which it let me do, and starting a new route, but nothing would bring back navigation.
username_0: Should be fixed by https://github.com/commaai/openpilot/pull/22850
Status: Issue closed
|
daviderusso1984/blazormaterialise | 734017089 | Title: Add to Blazor Projekt
Question:
username_0: How do I bind it to my Blazor project correctly?
_host.cshtml head:
```html
<link rel="stylesheet" href="_content/blazormaterialise/sass/materialize.css">
```
_host.cshtml body:
```html
<script src="_content/BlazorMaterialize/materialize.js"></script>
```
Answers:
username_1: Add the NuGet package and use
```html
<script src="_content/BlazorMaterialize/materialize.js"></script>
<link rel="stylesheet" href="_content/BlazorMaterialize/materialize.css">
```
If you use custom CSS, copy the files from the folder `blazormaterialise/test/wwwroot/sass/`.
sdispater/pendulum | 189189174 | Title: Adding Interval to Pendulum produces incorrect results
Question:
username_0: ```
p = Pendulum.now()
p2 = p + Interval(minutes=15)
assert p == p2
```
The reason is that `Pendulum.add_timedelta()` assumes a default python `timedelta` object and can't cope with the non-normalizing behavior of `Interval`.
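To see why that assumption bites, here is a toy model; neither class below is pendulum's real code, they only imitate normalizing vs. non-normalizing deltas:

```python
from datetime import datetime, timedelta

class NonNormalizingDelta(timedelta):
    """Toy stand-in for Interval: keeps minutes out-of-band instead of
    folding them into .seconds, so .days/.seconds/.microseconds stay 0."""
    def __new__(cls, minutes=0):
        self = super().__new__(cls)  # underlying timedelta is zero
        self._minutes = minutes      # stored where naive code won't look
        return self

def add_delta_naively(dt, delta):
    # Imitates an add_timedelta() that only reads the three fields a
    # plain timedelta normalizes everything into.
    return dt + timedelta(days=delta.days, seconds=delta.seconds,
                          microseconds=delta.microseconds)
```

With a plain `timedelta` this works, but with the non-normalizing delta the 15 minutes are silently lost, which is the reported behavior.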
Answers:
username_1:
```
False
```
This fix will land in the next `0.7.0` release.
Status: Issue closed
|
pytorch/pytorch | 565577873 | Title: Dropout of attention weights in function F.multi_head_attention_forward() breaks sum-to-1 constraint
Question:
username_0:
```
-> attn_output = torch.bmm(attn_output_weights, v)
(Pdb) attn_output_weights.sum(2)
tensor([[1.0307, 1.0081, 1.0560, ..., 1.0524, 1.0065, 0.9454],
        [1.0181, 1.0151, 1.0170, ..., 1.0090, 0.9986, 0.9864],
        [0.9757, 0.9827, 0.9963, ..., 1.0014, 0.9632, 0.9294],
        ...,
        [1.0049, 0.9946, 0.9487, ..., 1.0083, 1.0059, 0.9952],
        [0.9971, 1.0024, 1.0114, ..., 0.9894, 1.0492, 1.0207],
        [1.0050, 1.0084, 1.0159, ..., 0.9867, 0.9799, 1.0328]],
       device='cuda:0', grad_fn=<SumBackward1>)
```
What I expect to happen is that the weights still sum to 1, or sum to a value smaller than 1, after attention dropout.
## Environment
python 3.7, pytorch 1.4.0
## Additional context
Answers:
username_1: Hi! Can you please add the code showing how you construct `dropout` in this example? Is it a module or an imported function?
username_0: It is the PyTorch built-in function, i.e., `torch.nn.functional.dropout()`.
See https://github.com/pytorch/pytorch/blob/master/torch/nn/functional.py#L788-L807
username_0: Basically we need a non-scaling dropout function in the C++ backend, and then to call that function instead of F.dropout() in all cases that have sum-to-1 or similar constraints.
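The drifting row sums in the pdb output are exactly the inverted-dropout rescaling at work. A dependency-free illustration with a hand-picked mask (`F.dropout` draws its mask randomly; `p` below is only nominal since the mask is fixed):

```python
def dropout_row(weights, keep_mask, p, rescale=True):
    """Apply a fixed dropout mask to one row of attention weights.
    Standard (inverted) dropout rescales kept entries by 1/(1-p), so a
    row that summed to 1 can overshoot; the non-scaling variant only
    zeroes entries, so the sum can shrink but never exceed 1."""
    scale = 1.0 / (1.0 - p) if rescale else 1.0
    return [w * m * scale for w, m in zip(weights, keep_mask)]

row = [0.7, 0.1, 0.1, 0.1]   # a softmax output, sums to 1
mask = [1, 0, 1, 1]          # deterministic mask for illustration

scaled = sum(dropout_row(row, mask, p=0.25))                   # 1.2
unscaled = sum(dropout_row(row, mask, p=0.25, rescale=False))  # 0.9
```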
yegor256/netbout | 125148837 | Title: netbout-web installs bower each time it is built
Question:
username_0: Each time you build netbout-web by doing `mvn clean install -Pqulice`, the bower component is re-installed whether or not anything has changed.
Running "npm install bower"
I've also noted that there are at least 20 megabytes of additional maven package downloads when the build is done after some interval (like building again after a gap of 4-5 hours).
Answers:
username_1: @username_0 how can we fix it? what do you think?
username_0: @username_1 I think the easy approach is to check for an existing bower version first:
npm list bower
This returns the available version and if present, there is no need to re-install bower again. Agreed that this is a pretty trivial thing and even running `npm install bower` might get it from cache instead of actually downloading, but installing a package each time you build doesn't seem like a good workflow (or maybe I'm missing something, @username_1 ?)
username_1: @username_0 let's try to speed the build a little bit
username_1: @username_2 valid bug
username_2: @username_1 tag "bug" added
username_2: @username_0 I added milestone `3.1` to this issue, let me know if there has to be something else
username_2: @username_0 many thanks for the report, I topped your account for 15 mins, transaction 75042101
username_2: @username_3 can you please help? Keep in mind [this](http://www.xdsd.org/2014/04/17/how-xdsd-is-different.html). If you have any technical questions, don't hesitate to ask right here; The budget of this issue is **30 mins**, which is exactly how much will be paid when the task is *done* (see [this](http://www.xdsd.org/2014/04/17/how-xdsd-is-different.html) for explanation)
username_3: @username_2 What I figured out is that "npm ls | grep bower || npm install bower" / "npm ls | grep bower && npm update bower" works on the command line; however, the frontend-maven-plugin does not execute this properly when given as `<arguments>`. I see no way to implement it, at least not without using something else than the maven-frontend-plugin or hacking it.
username_2: @username_3 you should address your technical comments to the project architect (@username_1)
username_0: @username_1 My original idea was that if we just do a `npm ls|grep bower`, the program returns the version if the package exists on the first line:
└─┬ [email protected]
├─┬ [email protected]
├── [email protected]
├─┬ [email protected]
├── [email protected]
├─┬ [email protected]
Now, if our Java class does the same thing while building (using `java.lang.Runtime` or any other means), it should also get the same output, then depending on the output, it can decide whether to install `bower` or not.
@username_3 Here is a [StackOverflow example](http://stackoverflow.com/questions/3062305/executing-shell-commands-from-java) of how to use the `Runtime` object from java to execute a shell command.
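The parsing half of that check is simple to sketch; the version string below is hypothetical (the redacted tree output above can't be recovered), and actually shelling out to npm is left to `Runtime`/exec as described:

```python
import re

def installed_bower_version(npm_ls_output):
    """Version of bower reported by `npm ls`, or None when it is not
    listed, meaning `npm install bower` is still needed."""
    match = re.search(r"bower@(\d+(?:\.\d+)*)", npm_ls_output)
    return match.group(1) if match else None
```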
username_1: @username_0 I didn't get why we need java class in this case?
username_0: @username_1 I assumed that our build process might be calling a Java class, which might be installing the `bower` and other stuff, my mistake if that is not the case. In case the build process is just an `ant` script, then I don't see how this could be achieved. Maybe, we can delegate the installing part to a separate `bash` script which then does this check? But as @username_3 said, that method would be a bit hackish.
username_3: @username_0 @username_1 Sure, on bash it's a no-brainer. But how to do it with maven? The currently used maven-frontend-plugin does not support shell commands. Directions to go are:
* replace install via maven-frontend-plugin with a maven plugin that executes shell commands (and do this probably for our case)
* patch maven-frontend-plugin so that it offers a method to install if not present and update if outdated
Both are a day of work (or better: read and try) for me at least, as I am not a maven expert.
username_1: @username_3 @username_0 take a look at the example project
https://github.com/eirslett/frontend-maven-plugin/blob/master/frontend-maven-plugin/src/it/example%20project/pom.xml#L38
they use `package.json` instead of direct call of `npm install bower`
I guess `bower` would be installed as needed if we did the same
username_3: @username_1 Thank you, that worked. Unfortunately the travis build fails for a reason that is hopefully unrelated to my patch. Can you have a look?
username_1: @username_3 looks like a temporary network issue
try close/open the PR to trigger the build
username_3: @username_2 Please see PR #975
username_2: @username_3 thanks, I'll take a look
username_2: @username_0 the code made here contains a puzzle `914-ecab4401`/#982, which will be resolved soon
username_3: @username_2 Does closing this issue has to wait until #982 is solved?
username_2: @username_3 no, it's just an information message
Status: Issue closed
username_3: @username_2 Why isn't this task removed from my agenda?
username_0: @username_3 I have already closed this bug. Are you referring to the labeling? I don't think its up to us to add/remove those.
username_2: @username_4 please, review this ticket for compliance with our [QA rules](http://at.teamed.io/qa.html)
username_4: @username_2 Quality is good.
username_2: @username_4 thanks!
username_2: @username_3 paid 10 mins to @username_4 for QA review (payment ID is `76503769`). Many thanks! **30 mins** were added to your account in Transaction ID `AP-2F923971Y9576242X` (task took 216 hours and 48 mins). added +30 to your rating, now it is equal to [+120](http://www.netbout.com/b/36320?open=rating)
username_2: @username_0 the last puzzle `914-ecab4401`/#982 solved |
lspitzner/pqueue | 745884971 | Title: Get rid of insertBehind
Question:
username_0: It's extremely slow, breaks on `union`, and relies in a brittle fashion on the exact implementation of `minView`. Let's be rid of it! The operation makes much more sense in priority queues based on finger trees, where the elements are stored in insertion order and the structure is annotated with subtree minima. (Specifically, a 2-3 finger tree whose digits can be 0 or 1 should do the trick).
Answers:
username_0: Serious question: does this function even work at the moment? I have no idea if the new deletion implementation satisfies its brittle needs. If not, we may need to make a major release next just so we can kill this thing. It should never have gone in to begin with.
username_0: Maybe I've been unfair. I thought this broke on union, but maybe it doesn't. And I haven't been able to break it (yet) with the new deletion function. So... maybe it's okay? It's really slow though!
username_0: I just don't like how brittle it is. Every single operation in the module has to be written to preserve the order of `insertBehind` even though the module header explicitly documents unstable ordering. If it were *fast*, that might be worth doing (for stable sorts), but it's not, so I'd much rather not have that landmine. |
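For contrast, in a queue keyed by (priority, sequence number) pairs, "insert behind everything of equal priority" is just an ordinary insert with the next sequence number, so no other operation has to be written around it. A Python sketch of that idea (nothing to do with pqueue's actual internals):

```python
import heapq
import itertools

class StableMinQueue:
    """Min-queue with stable FIFO order among equal priorities, so
    'insert behind everything already queued at that priority' is a
    normal insert with the next sequence number."""
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()

    def insert(self, prio, item):
        heapq.heappush(self._heap, (prio, next(self._seq), item))

    insert_behind = insert  # same cost, no special-casing of minView

    def min_view(self):
        prio, _, item = heapq.heappop(self._heap)
        return prio, item
```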
chef/automate | 1020961318 | Title: View Environments if IAM permissions are available
Question:
username_0: ## User Story
As an Automate User I should view the list of environments if the view permission is available for me.
## Acceptance Criteria
- Admin should be able to view the tab
- Any user with view permission for environments should be able to view the list of environments
## Implementation Details
## UI Design
## Definition of Done
- All things specified in User Story Acceptance Criteria should be fulfilled.
- UI should exactly match UX design.
- Cypress Coverage for new Feature.
- All Exceptions are Handled Properly
- Ensure logs have no unnecessary data.
- Test coverage for the new feature is done to at least 70%.
- Docs changes PR is Raised.
- Swagger Documentation updated.
- Smoke Test done.
- Ensure Build and Integration Pipelines are Green.
- PR has 2 approvers.
- All Code Review Comments are Resolved.
- Screen shot of the feature developed is provided in the issue.
- Ensure compatibility with all supporting browser.
- Demo the feature to UX.
- Link to the UX wireframe/Requirement Document. |
mrtamm/sqlstore | 154104497 | Title: Add env properties to customize script file lookup.
Question:
username_0: SQL script file (*.sqls) lookup is implemented in the `ScriptReader.load(Class)` method.
Let's enhance it with two optional JVM properties:
* `sqlstore.path.prefix` - to specify a custom path prefix, e.g. "/db/postgresql". The class package path will be appended to it, so the final resource name would be like "/db/postgresql/cls/pkg/ClassName.sqls"
* `sqlstore.path.suffix` - to specify a custom path suffix instead of the default ".sqls". It can be used to activate DB-specific suffixes, e.g. "_Postgre.sql".
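The intended resolution is easy to pin down; Python is used here only for illustration (sqlstore itself is Java), with the property names and examples taken from the text above:

```python
def script_resource(class_name, props):
    """Resource path for a class's SQL script file, honoring the two
    optional JVM properties described above."""
    prefix = props.get("sqlstore.path.prefix", "")
    suffix = props.get("sqlstore.path.suffix", ".sqls")
    return prefix + "/" + class_name.replace(".", "/") + suffix
```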
callstack/react-native-paper | 651638186 | Title: Compile error with `react-native-web`
Question:
username_0: ### Current behaviour
With a brand new React app created with `create-react-app`, the current `react-native-paper` instructions result in a compile error. It occurs when following the [Using on Web](https://callstack.github.io/react-native-paper/using-on-the-web.html) instructions, after adding the following to the root component:
```
<PaperProvider>
<App />
</PaperProvider>
```
Error:
```
Failed to compile.
./node_modules/react-native-vector-icons/lib/tab-bar-item-ios.js
Attempted import error: 'TabBarIOS' is not exported from './react-native'.
```
### Expected behaviour
The build should succeed and the application should render as normal in a web browser.
### Code sample
App.js
```
import React from 'react';
import {Text, View} from 'react-native';
import {Provider as PaperProvider} from 'react-native-paper';
function App() {
return (
<PaperProvider>
<View style={{backgroundColor: 'pink'}}>
<Text>Success!</Text>
</View>
</PaperProvider>
);
}
export default App;
```
package.json
```
{
"name": "paperdemo",
"version": "0.1.0",
"private": true,
"dependencies": {
"@testing-library/jest-dom": "^4.2.4",
"@testing-library/react": "^9.3.2",
"@testing-library/user-event": "^7.1.2",
"react": "^16.13.1",
"react-dom": "^16.13.1",
"react-native-paper": "^3.10.1",
"react-native-vector-icons": "^7.0.0",
"react-native-web": "^0.13.1",
"react-scripts": "3.4.1"
},
"scripts": {
[Truncated]
}
});
return config;
};
```
### What have you tried
I have tried:
* starting from scratch with `create-react-app`
* using older versions of `react-native-paper` and `react-native-vector-icons`.
* searching for known issues
### Your Environment
| software | version
| --------------------- | -------
| node | v10.19.0
| npm or yarn | 1.22.4
Answers:
username_0: FWIW, I was able to get around this issue by downgrading `react-native-web` to version `0.12.3`. I see in the `0.13.0` release notes that `TabBarIOS` was removed; however, it seems that `react-native-vector-icons` still has a dependency on it.
Status: Issue closed
|
SignalR/SignalR | 258234469 | Title: SignalR only Working when both person is on Different type of browser
Question:
username_0: I am having a bad time here. I have a chat application. Our async functionality using SignalR only works when the two users are in different types of browsers. I need some expert advice here. I can show code if required. Please let me know if anyone faces the same issue. I tried logging SignalR in the browser; the logs still only appear when the two users are in different browser types (but I can see a successful connection-started response in the XHR tab).
Answers:
username_1: SignalR works just fine across different browsers. In fact, the client does not know what browser (if any; the other client can be a C# client, a C++ client, etc.) other connections are using, so it is unlikely a SignalR issue. I would recommend asking a question on Stack Overflow and providing more details.
Status: Issue closed
|
sahabe1/Purchasing-Power | 704392343 | Title: Please Remove or Make Repo Private
Question:
username_0: Sahabe,
I hope that you are doing well. I work for Purchasing Power and was wondering if you could remove or make this repo private.
We want to make sure that our code is not sitting in public repositories.
Please let me know if you have any questions.
Thank You,
<NAME>
director, enterprise security
<EMAIL>
PurchasingPower.com |
xJon/The-1.12.2-Pack | 561935140 | Title: tick error
Question:
username_0: <!--
Please fill in the following information.
-->
**Basic details**
1. Are you using a legitimate launcher and Minecraft account: yes
2. Are you using a computer with 8GB or more of RAM? Yes
3. Are you using Java 8, 64-bit? yes
<!--
Please post logs only using paste-tools like https://paste.ubuntu.com or https://pastebin.com/index.
-->
**Information required**
1. Is the modpack crashing: Yes, both the client and the server crash
2. Crash/latest [log](https://pastebin.com/FjCSxyBm)
3. Is Optifine installed or any other additional mods: tried both ways
**Describe the issue**
This is a tick-error crashing issue between Botania and VoxelMap. There's already an official post on this; I'll provide a link so you can see all the info: [Botania GitHub bug report](https://github.com/Vazkii/Botania/issues/2205)
We have resolved this issue on our own server by removing VoxelMap and switching to JourneyMap.
Status: Issue closed
Answers:
username_1: This is a known issue from https://github.com/username_1/The-1.12.2-Pack/issues/13
We are still trying to resolve it with the authors of Botania & VoxelMap. |
alanhogan/bookmarklets | 719817909 | Title: Neither text selection nor right-clicking work on donegaldaily.com
Question:
username_0: As per the title, neither text selection nor right-clicking work on donegaldaily.com.
Donegaldaily.com seems to be run in an extraordinarily nasty way by people who want your web traffic but believe that they should get to fight you for control of your computer, and they seem to be going to great lengths to disrespect your property rights and your equal and fair use rights.
Tested with an embarrassingly old Firefox version.
The add-on Enable Right Click and Copy 2017.4.12 does make this work, as does Absolute Enable Right Click & Copy.
I tried to have a poke around under the hood (which they're also trying to block), and maybe these are involved:
script#wpcp_disable_selection
script#wpcp_disable_Right_Click
However I didn't really get on top of things myself, and those two may also be red herrings, because there's loads that site tries to do to fight you for control. And just like devious spyware or malware authors, the nasty folks at donegaldaily.com see nothing wrong with their antisocial coding, using the same excuse of, "but you're accessing OUR stuff, so we can do whatever we like on YOUR system." Cue aggravation and support calls from people who think their browser is broken. Not that the entitled DonegalDaily delinquents care one bit for that, because those are other people's problems, and they only care for themselves. |
GodXuebi/Leetcode_study | 449848218 | Title: Binary Tree Right Side View leetcode_199
Question:
username_0: ```
/**
* Definition for a binary tree node.
* struct TreeNode {
* int val;
* TreeNode *left;
* TreeNode *right;
* TreeNode(int x) : val(x), left(NULL), right(NULL) {}
* };
*/
class Solution {
public:
    vector<int> rightSideView(TreeNode* root) {
        vector<int> result;
        queue<pair<TreeNode*, int>> Q; // breadth-first search over (node, depth) pairs
        if (root) {
            Q.push(make_pair(root, 0));
        }
        while (!Q.empty()) {
            TreeNode* node = Q.front().first;
            int depth = Q.front().second;
            Q.pop();
            if (depth >= (int)result.size()) {
                result.push_back(node->val); // first node seen at this depth
            }
            else {
                result[depth] = node->val; // overwrite, so the rightmost node at this depth wins
            }
            if (node->left) {
                Q.push(make_pair(node->left, depth + 1));
            }
            if (node->right) {
                Q.push(make_pair(node->right, depth + 1));
            }
        }
        return result;
    }
};
``` |
vueuse/vueuse | 998020595 | Title: feature: useFocusElement
Question:
username_0: I found that focusTrap isn't really what I wanted, so I put together a `useFocusElement` that is passed a ref to an element and returns a `Ref<boolean>` that indicates whether the element has focus, and that can be set to `true` to give focus to the element.
Is this something that could become part of vueuse?
Here is the code I have. I think if I put together a PR, I would change the target argument to a `MaybeElementRef` like `useFocusTrap`.
```ts
import { ref, Ref, watch } from '@vue/composition-api';

export interface UseFocusElementReturn {
  focus: Ref<boolean>;
}

export function useFocusElement(target: Ref<any>): UseFocusElementReturn {
  const focus = ref(false);
  let input: undefined | HTMLElement;

  const focusTrue = () => {
    focus.value = true;
  };
  const focusFalse = () => {
    focus.value = false;
  };

  if ('value' in target) {
    // watch for changes in the target
    watch(
      () => target.value,
      (newTarget) => {
        if (input) {
          input.removeEventListener('blur', focusFalse);
          input.removeEventListener('focus', focusTrue);
        }
        if (newTarget && newTarget.$el) {
          input = newTarget.$el.querySelector('input, select, textarea');
          if (input) {
            input.addEventListener('blur', focusFalse);
            input.addEventListener('focus', focusTrue);
          }
        }
      },
      { immediate: true }
    );

    // watch for someone setting focus to true in order to focus the element
    watch(
      () => focus.value,
      (newValue, oldValue) => {
        if (newValue && !oldValue && input) {
          input.focus();
        }
      }
    );
  }

  return { focus };
}
```
Answers:
username_1: Hey @username_0!
Just so you know - I'm neither the author nor a maintainer of vueuse. But I was also looking for a similar thing regarding focus/blur handling.
I gave your code some thought and I've reworked it a bit, have a look:
```ts
import { MaybeElementRef, unrefElement, useEventListener } from '@vueuse/core';
import type { MaybeRef } from '@vueuse/shared';
import { computed, ref, unref, watch } from 'vue-demi';
export const useFocused = (
  { target, initialValue = false }:
  { target?: MaybeElementRef, initialValue?: MaybeRef<boolean> },
) => {
  const element = computed(() => unrefElement(target) ?? window.document.body);
  const focused = ref(unref(initialValue));

  const onFocus = () => { focused.value = true; };
  const onBlur = () => { focused.value = false; };

  const cleanupFocus = useEventListener(element, 'focus', onFocus, { passive: true });
  const cleanupBlur = useEventListener(element, 'blur', onBlur, { passive: true });

  const stopWatch = watch(focused, (focused, oldFocused) => {
    if (focused) {
      if (!oldFocused) element.value.focus();
    } else {
      if (!focused) element.value.blur();
    }
  }, { immediate: true, flush: 'post' });

  return {
    focused,
    cleanup() {
      stopWatch();
      cleanupFocus();
      cleanupBlur();
    },
  };
};
```
To simplify, I've used some of the shared vueuse methods. Also, I've added `blur` handling and the possibility to clean up all of the event handlers and the watcher (I needed that when I used this composable in a directive scope where, unfortunately, there is no onUnmounted hook available :<). And lastly - the function now accepts an `initialValue` for the `focused` ref, and the watcher runs immediately using the [`flush: 'post'` option](https://v3.vuejs.org/guide/reactivity-computed-watchers.html#effect-flush-timing) (to make sure it always runs after the component updates).
Please write what do you think about that - IMO this implementation is a bit closer to what is already within vueuse codebase.
username_0: HI @username_1
Nice. First, I wasn't sure there would be others who wanted this functionality as well. I would like to hear your use cases, mostly because of your comment that the watcher might not "fit" with the vueuse approach. For me, my entire use case centres on the need for the watcher. To create a top-notch UI experience for my users, I need to programmatically set focus on certain input fields at the appropriate times. To do that, I need the watcher so that my code can just set `focused.value = true` and know that the input now has focus.
You mentioned using it in a directive. What does the directive do?
I didn't add the blur handling because I have never needed to remove focus from an element, but it might as well be there for consistency.
I think the initialValue is a good addition. I was going to convert to using things like MaybeElementRef if the maintainers said they wanted to add it. But I didn't know about the useEventListener. That makes things much easier than having to use a watch on the target.
And finally, I think that splitting out the input search is good for a library, I don't have a use case (yet) for focusing anything other than an input, but I could see a use case for wanting to focus a button or something similar.
On a style note, I am not a big fan of the way you did the parameters for the use function, but I see that some of the vueuse functions do that. It just isn't consistent though.
overall, nice improvements.
username_1: My use case is an implementation of a tooltip directive. The directive might be attached to buttons, inputs, anything with `tabindex=""` (or it will add a tabindex by itself). Tooltip should be shown not only on hover, but also on focus, so it's accessible for keyboard users as well.
And well, in directive I cannot use things like `@focus` on a DOM element, so I need some other way of attaching events - such as this composable function.
Okay, you make a valid argument about the watcher. IMO it could be added here, but I'm not sure how that fits with general architecture of the `vueuse`. For the first implementation it should be fine I guess.
Regarding the parameters - yes, I see that many `vueuse` methods have differences in that regard. In the PR I would probably write an interface like it's done [here](https://github.com/vueuse/vueuse/blob/main/packages/core/useMousePressed/index.ts#L7-L33) - I just didn't want to make the example longer. My comment was already long enough 😄
username_1: Okey, I've just reminded myself about the EffectScope API. So instead of returning cleanup method I would just rely on `tryOnScopeDispose` used in `useEventListener` and `watch` to clean up itself. Also, I've reworked the example so it's similar to the `useMousePressed` composable (the main point being possibility to test it in `window`-lacking environments). See the example:
```ts
import { computed, ref, unref, watch } from 'vue-demi';
import { MaybeElementRef, unrefElement } from '../unrefElement'
import { useEventListener } from '../useEventListener'
import { ConfigurableWindow, defaultWindow } from '../_configurable'
export interface FocusedOptions extends ConfigurableWindow {
  /**
   * Initial value
   *
   * @default false
   */
  initialValue?: boolean
  /**
   * Element target on which to capture the focus/blur events
   */
  target?: MaybeElementRef
}

/**
 * Reactive element focus boolean.
 *
 * @see https://vueuse.org/useFocused
 * @param options
 */
export function useFocused(options: FocusedOptions = {}) {
  const {
    initialValue = false,
    window = defaultWindow,
  } = options

  const focused = ref(initialValue)

  if (!window) {
    return { focused };
  }

  const onFocus = () => { focused.value = true; };
  const onBlur = () => { focused.value = false; };

  const target = computed(() => unrefElement(options.target) ?? window);

  useEventListener(target, 'focus', onFocus, { passive: true });
  useEventListener(target, 'blur', onBlur, { passive: true });

  watch(focused, (focused, oldFocused) => {
    if (focused) {
      if (!oldFocused) target.value?.focus();
    } else {
      if (!focused) target.value?.blur();
    }
  }, { immediate: true, flush: 'post' });

  return { focused };
}
```
username_0: This looks good. I think the watch should be this though:
```JS
watch(focused, (focused, oldFocused) => {
  if (focused) {
    if (!oldFocused) target.value?.focus();
  } else {
    if (oldFocused) target.value?.blur();
  }
}, { immediate: true, flush: 'post' });
```
in the else clause you were checking !focused, which was already tested as true by the outer if statement.
username_1: @username_0 Yea, that's exactly what I had in mind, thanks! 🤝
I'll file a PR with this one and link the issue. Let's see if `vueuse` team seems that as a valuable addition or not
username_0: Hi @username_1
I have started to create a PR for this, with the hopes that the maintainers will accept it. In setting up the demo, I have discovered that your code doesn't handle the initial value properly. The issue is that you lost my original code of setting up the watch on the target. The watch on the target was necessary to set up the event listeners, which you properly moved into `useEventListener`, but I also watched the target because when you use the `ref=""` directive in Vue, the target is initially undefined. Then, after the component renders, the ref is filled in; that is when we have a chance to focus or blur based on the initial value.
Here is the code I am going to submit in the PR:
```JS
import { computed, Ref, ref, watch } from 'vue-demi'
import { MaybeElementRef, unrefElement } from '../unrefElement'
import { useEventListener } from '../useEventListener'
import { ConfigurableWindow, defaultWindow } from '../_configurable'
export interface FocusOptions extends ConfigurableWindow {
/**
* Initial value. If set true, then focus will be set on the target
*
* @default false
*/
initialValue?: boolean
/**
* The target element for the focus and blur events.
*/
target?: MaybeElementRef
}
/**
* Reactive element focus boolean.
*
* @see https://vueuse.org/useFocus
* @param options
*/
export function useFocus(options: FocusOptions = {}): Ref<boolean> {
const {
initialValue = false,
window = defaultWindow,
} = options
const focused = ref(initialValue)
if (!window)
return focused
const onFocus = () => { focused.value = true }
const onBlur = () => { focused.value = false }
const target = computed(() => unrefElement(options.target) ?? window)
useEventListener(target, 'focus', onFocus, { passive: true })
useEventListener(target, 'blur', onBlur, { passive: true })
const setFocus = (focused: boolean, oldFocused: boolean) => {
console.log(`in watch: ${focused}, ${oldFocused}`)
if (focused) {
if (oldFocused !== true) {
target.value?.focus()
if (target.value)
console.log('focus called')
}
}
else {
if (oldFocused === true) target.value?.blur()
}
}
watch(focused, setFocus, { immediate: true, flush: 'post' })
watch(target, () => {
setFocus(focused.value, false)
}, { immediate: true, flush: 'post' })
return focused
}
```
username_0: Ah, I see we both replied at the same time. :)
I pretty much have the PR code ready to go.
Here is the demo that I was going to submit with the PR:
```JS
<script setup lang="ts">
import { ref } from 'vue-demi'
import { useFocus } from '.'
const text = ref()
const input = ref()
const button = ref()
const textFocus = useFocus({ target: text })
const inputFocus = useFocus({ target: input, initialValue: true })
const buttonFocus = useFocus({ target: button })
const setTextFocus = () => { textFocus.value = true }
const setInputFocus = () => { inputFocus.value = true }
const setButtonFocus = () => { buttonFocus.value = true }
const unsetTextFocus = () => { textFocus.value = false }
const unsetInputFocus = () => { inputFocus.value = false }
const unsetButtonFocus = () => { buttonFocus.value = false }
</script>
<template>
<div>
<p ref="text" :class="{'bg-green-200': textFocus}">
Some text that can be focused
</p>
<input ref="input" class="item" />
<button ref="button" class="item">
button
</button>
<hr />
<p v-if="textFocus">
The text has focus
</p>
<p v-if="inputFocus">
The input control has focus
</p>
<p v-if="buttonFocus">
The button has focus
</p>
<div>
<button @click="setTextFocus">
focus text
</button>
<button @click="setInputFocus">
focus input
</button>
<button @click="setButtonFocus">
focus button
</button>
</div>
<div>
<button @click="unsetTextFocus">
unfocus text
</button>
<button @click="unsetInputFocus">
unfocus input
</button>
<button @click="unsetButtonFocus">
unfocus button
</button>
</div>
</div>
</template>
<style scoped>
.item:focus, bg {
background-color: cadetblue;
}
</style>
```
username_0: You will notice two significant differences from what you had put together:
1. the name: I am suggesting useFocus, not useFocused. The reason is because I am coming at it from the control point of view, where you are coming at it from the status point of view. I think it makes more sense that the control point of view also has a status component. But it seems strange that the status point of view has a control mechanism.
2. I am returning the ref directly and not wrapping it into an object. When I went to use the object in a component that had multiple useFocus() calls, I found it more awkward to use the object. It was much easier to have the useFocus return the ref directly so that I could easily assign it to whatever variable name I wanted.
username_0: Ugg, I am on windows without WSL so contributing seems more difficult because a couple hundred files changed. Here is the .md file I would have submitted.
````md
---
category: Browser
---

# useFocus

Utility to track or set the focus state of a DOM element.

## Usage

```ts
import { useFocus } from '@vueuse/core'

const text = ref()
const textFocused = useFocus({ target: text })
```
````
username_1: Uf, good that I've commented here before doing any actual work then!
Okey, I think that part was missing form the original piece of code and I didn't think of that! It's a cool addition. I'm not sure if we couldn't make it a single watcher though, but maybe that'd be already too much pre-optimisation.
1. I get it, IMO it's a good explanation. I think it fits with the overall naming convention in the library.
2. This one I'm not sure - generally I prefer objects because they're more flexible when it comes to the possibility of returning additional APIs from the composable in the future. `const { focused: focusedInput } = useFocus(/* ... */);` for me looks pretty good.
Okay, can you commit the most important things you've got to the new branch in your fork repository and invite me to it with write privilege please? I'm on Mac, so maybe it'll be easier for me to rebuild everything and commit.
username_0: I have created the fork, checked in the code and invited you as a contributor
username_0: and I changed the return back to being an object.
I made sure the line endings were LF only.
username_1: @username_0
Sorry I got carried away with other things and could handle it only now.
I've split your work into two branches - one for `useFocus` and a second one for `useFocusInputElement`. I've finished up the work, added tests and opened up a PR for `useFocus` (#818).
Regarding `useFocusInputElement` - I think this one might be too narrow for the vueuse library itself. Not sure, but I propose to go in a bit different direction. My idea is to create `useQuerySelector` composable which would look for a child element and return it as a reactive value. The proposed solution would then look like this:
```ts
import { useQuerySelector, useFocus } from '@vueuse/core';
const target = ref<HTMLElement | undefined>();
const control = useQuerySelector('input,textarea,select', { target });
const { focused } = useFocus({ target: control });
```
I think that could be a way to go - `useQuerySelector` should be flexible enough to open up the door for other use cases.
I can already think of an extensible API for it - e.g. toggleable configuration option for tracking of DOM changes with mutation observers. Or possibility to define default value (when no element is found).
I'll probably open up a separate issue for it to track the idea.
username_1: Hey! Just to do an update on this one - [useFocus PR](https://github.com/vueuse/vueuse/pull/818) just got merged!
Now, the question about the `useQuerySelector` composable. IMO it could be a useful util for handling cases like this - you get a child input element as a ref and pass it to `useFocus` - voilà, you have your input focused (as visible in the code example in my message right above).
What do you think @username_0, @antfu - any comments on this one? If it's all good I can give it a try with an actual implementation |
TerriaJS/terriajs | 624061077 | Title: Mobx: Clip and ship
Question:
username_0: Port master's clip and ship to mobx
For: https://github.com/TerriaJS/terria-cube/issues/48
Similar: https://github.com/TerriaJS/terriajs/issues/3109
Answers:
username_0: "Download" is a high-level problem that DEA wants fixed in the next FY - they want different ways of accessing data, including download.
One easy way is a high-res image.
The second is the actual data.
username_0: Required for June30
username_1: Sorry I'm doing this now
Status: Issue closed
|
newrelic/newrelic-ruby-agent | 866095253 | Title: Add latest Pumas to Rack instrumentation test matrix
Question:
username_0: We're only testing up to Puma 3.11.x in our Rack instrumentation tests.
Add 4.x, 5.x to the matrix. 5.2.2 is latest Puma version at the time of this writing.
Answers:
username_1: Does this just involve adding [new Puma versions here](https://github.com/newrelic/newrelic-ruby-agent/blob/dev/test/multiverse/suites/rack/Envfile)? Or is it more complicated?
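If it is just the Envfile, the change might look something like this - a hedged sketch only: the `gemfile <<-RB ... RB` block style is what the multiverse Envfiles appear to use, and the exact version pins and companion gems here are assumptions, not verified against the repo:

```ruby
# Hypothetical additions to test/multiverse/suites/rack/Envfile.
# Version pins are illustrative only.
gemfile <<-RB
  gem 'puma', '~> 5.2.2'
  gem 'rack'
RB

gemfile <<-RB
  gem 'puma', '~> 4.3'
  gem 'rack'
RB
```

As I understand it, each `gemfile` block generates one gemfile the suite is run against, so adding a block per Puma series (4.x, 5.x) extends coverage while keeping the existing 3.11.x entries.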
Status: Issue closed
|