repo_name (string, 4 to 136 chars) | issue_id (string, 5 to 10 chars) | text (string, 37 to 4.84M chars)
---|---|---|
parkervcp/eggs | 608206152 | Title: Atlas Game
Question:
username_0: Service: Atlas
Does this expand an already existing service: N
Link to game: https://store.steampowered.com/app/834910/ATLAS/
Links for server downloads: https://atlas.gamepedia.com/Server_setup#Downloads
Links for install steps/docs: https://atlas.gamepedia.com/Server_setup#Downloads
Steam-Server-ID: 1006030
I experimented with the ark egg and changed some settings in the startup command, but I can't get it running.
Also, with command-line SteamCMD I didn't get the ShooterGame binary; maybe I'm just stupid :D
Answers:
username_1: Sorry, I don't have the game to test, so I can't create an egg :) But the game looks nice :)
username_0: You don't need the game to test the server startup or am I wrong :o
SteamCMD is anonymous.
It doesn't just look nice ;)
username_0: I experimented a bit more and got it mostly installed.
But if I want to start the game, the following error appears:
`[Pterodactyl Daemon] Running server preflight.
[Pterodactyl Daemon] Starting server container.
:/home/container$ ./ShooterGame/Binaries/Linux/ShooterGameServer Ocean?ServerX=0?ServerY=0?AltSaveDirectoryName=00?ServerAdminPassword=?MaxPlayers=20 -log -server -NoCrashDialog -NoBattlEye
./ShooterGame/Binaries/Linux/ShooterGameServer: error while loading shared libraries: libssl.so.1.0.0: cannot open shared object file: No such file or directory
[Pterodactyl Daemon] Server marked as OFF`
I added:
`apt -y --no-install-recommends install curl lib32gcc1 ca-certificates libssl1.0.0 libssl-dev
cd /lib/x86_64-linux-gnu
sudo ln -s libssl.so.1.0.0 libssl.so.10
sudo ln -s libcrypto.so.1.0.0 libcrypto.so.10`
to the install script for the server, but this also doesn't seem to work.
Wings was restarted...
username_2: The packages need to be added to the image that is used to run the server not the install script.
username_0: You mean the docker image itself?
username_2: the one running the server, yes.
username_1: I have the game now and will create an egg when I have more time
username_2: I have started looking at this and have it "somewhat" running
username_3: Atlas is not really easy to host.
You need a map grid for the servers.
You need main servers and expansion servers to use the full map.
The Blackwood map is made for a single server.
If you want to use the complete world of Atlas, you need a server grid of 225 servers linked together through a Redis database.
username_2: Someone got it working using my egg with this map set
username_4: is there an egg for atlas now?
username_1: not a working one |
taoneill/war | 750443882 | Title: Size for custom inventory must be a multiple of 9 and 54 slots (got 63) #858
Question:
username_0: Good morning,
Server version: Spigot 1.16.4
After running a new install of your plugin using your 2.0.0 release branch #858 , I noticed that after clicking the "Manage War" option, there is a stack trace saying that the chestUI being built is too large to be handled.
https://i.imgur.com/opnJP9V.png
After looking, I found the issue in WarAdminUI.java in getSize() was returning 9 * 7.
After changing that to be 9 * 6 (maximum size supported), I am now getting an ArrayIndexOutOfBoundsException due to there being too many items trying to be added to the ChestUI.
Answers:
username_1: Can confirm the new size limitation is causing an issue. I'm going to try moving the warzone default config out of the main war config UI I think.
username_1: Here's my new UI proposal to solve the issue:

username_1: Fixed in the latest 2.0.0-RC6 available at http://maven.username_1.me/wardevbuilds/devbuilds.html. Please let me know if you run into more issues with the 1.16 support! I don't have a server anymore so hard to test multiplayer features.
Status: Issue closed
|
mixxamm/SmartpassAndroid | 306149442 | Title: Bug login
Question:
username_0: ### Expected behavior: A message appears saying that the credentials are wrong
### Actual behavior: A message saying that logging in as a teacher failed
### Steps to reproduce: Choose to log in as a student and enter wrong credentials. Do this again and you get the message that logging in as a teacher failed. You have to go back to the start screen and choose to log in as a student again to be able to retry.
### Smartphone model: moto g4 plus
### Android version: Android 7.0 Nougat
Answers:
username_1: This has already been fixed; a new release is coming later today.
Status: Issue closed
|
agda/agda-stdlib | 289455425 | Title: We shouldn't pollute the namespace with index.agda
Question:
username_0: Cf. https://github.com/agda/agda/issues/2852#issuecomment-345299160
I'm marking it as a bug because the sole purpose of `index.agda` was to be used
to generate the library's landing page. It was never meant to be an exported module.<issue_closed>
Status: Issue closed |
martin9700/Surly.PowerShell.SQL.Tools | 161074415 | Title: Build failing loading Pester!
Question:
username_0: Appveyor build is failing
`Build started
git clone -q --branch=master https://github.com/username_0/Surly.PowerShell.SQL.Tools.git c:\ps.sql
git checkout -qf 78fa1aee74b5d17685ee4113276582044f895607
Running Install scripts
cinst pester
Installing the following packages:
pester
By installing you accept licenses for the packages.
pester not installed. An error occurred during installation:
Unable to read package from path 'pester.3.4.0.nupkg'.
The install of pester was NOT successful.
pester not installed. An error occurred during installation:
Unable to read package from path 'pester.3.4.0.nupkg'.
Chocolatey installed 0/1 package(s). 1 package(s) failed.
See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).
Failures:
- pester
Command exited with code 1`
Not something I can fix. Will wait and see if it gets resolved on its own or if I have to open a support ticket with Appveyor. |
ibis-project/ibis | 330506153 | Title: AttributeError: 'NoneType' object has no attribute 'lower' when using partition tables
Question:
username_0: Following the tutorial with partition tables, but unable to get anywhere as soon as I try to select the table.
Code:
schema = ibis.schema([('run_date', 'timestamp'),
('seq_num', 'int64'),
('month', 'int16')])
name = 'new_table'
hive_conn.create_table(name, schema=schema, partition=['run_date', 'seq_num'])
table = hive_conn.table(name)
print table.schema()
print table.partitions
It seems that when it runs description on the table, it is trying to parse the whole output of the description output. Output (after adding "print names, types" in client.py line 722):
('month', 'run_date', 'seq_num', '', '# Partition Information', '# col_name ') ('smallint', 'timestamp', 'bigint', None, None, 'data_type ')
The error is:
Traceback (most recent call last):
File "ois_hdfs.py", line 56, in <module>
table = hive_conn.table(name)
File "/usr/lib/python2.7/site-packages/ibis/client.py", line 119, in table
schema = self._get_table_schema(qualified_name)
File "/usr/lib/python2.7/site-packages/ibis/impala/client.py", line 1193, in _get_table_schema
return self.get_schema(tname)
File "/usr/lib/python2.7/site-packages/ibis/impala/client.py", line 726, in get_schema
t = t.lower()
AttributeError: 'NoneType' object has no attribute 'lower'
Ibis version:
ibis-framework==0.13.0
Hadoop:
hadoop version
Hadoop 2.7.3.2.6.4.0-91
Hive:
hive> !hive --version;
Hive 1.2.1000.2.6.4.0-91
Answers:
username_1: @username_0 Thanks for the report. Marking as a bug for 0.14.
username_1: @username_0 It looks like you're using Hive, which we do not explicitly support. Are you using an impala connection and you just happened to call the variable `hive_conn`?
username_1: @username_0 Can you show me the SQL output of `DESCRIBE new_table`? It looks like that is returning something different than what ibis expects. It looks like that is doing something more like `DESCRIBE EXTENDED new_table`.
Was this a change made in a version of Hive later than 1.1.0?
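As an illustration of the parsing problem (a hypothetical helper, not ibis's actual code or fix): the DESCRIBE output quoted above appends a blank row and a `# Partition Information` trailer after the real columns, and a schema builder needs to stop before that trailer.
```python
# Hypothetical helper, not ibis code: keep only the real column rows from the
# DESCRIBE output, stopping at the blank row / '# ...' trailer quoted above.
def strip_partition_trailer(names, types):
    columns = []
    for name, typ in zip(names, types):
        if not name or name.startswith('#'):
            break  # the remaining rows describe partitions, not columns
        columns.append((name.strip(), typ))
    return columns

# The values printed in the report above:
names = ('month', 'run_date', 'seq_num', '', '# Partition Information', '# col_name ')
types = ('smallint', 'timestamp', 'bigint', None, None, 'data_type ')
print(strip_partition_trailer(names, types))
# -> [('month', 'smallint'), ('run_date', 'timestamp'), ('seq_num', 'bigint')]
```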
username_1: @username_0 Right, so you're using actual Hive and not impala. Hive is not a backend that we support at the moment, though we would be extremely happy to have someone work on such a thing. Therefore, there's no guarantee that using ibis with hive will behave in any kind of reasonable way.
I'll move your issue to the Future milestone.
Just out of curiosity, does everything *except* this work? For example, can you execute complicated queries? If so, I'd be very surprised.
username_0: Surprisingly, the basic functionality works. My need was mostly for the table-creation part in Hive using DataFrame column types, which were generated from Oracle. If there is no partitioning, then create_table works flawlessly in Hive. There is currently no other library out there that does this.
The Hive alter and load queries I executed with raw_sql. The only thing that would make this better is if the temp table for the schema could be created directly in Hive so we don't depend on an hdfs_client connection.
Status: Issue closed
username_1: This is supported on a best effort basis, since Hive isn't explicitly supported. Happy to accept PRs. |
abunghez/super-duo | 99005678 | Title: Sporadic crash after scanning book (visible after issue #2)
Question:
username_0: Process: it.jaschke.alexandria, PID: 15859
java.lang.IllegalStateException: Fragment AddBook{2a18260f} not attached to Activity
at android.support.v4.app.Fragment.getLoaderManager(Fragment.java:881)
at it.jaschke.alexandria.AddBook.restartLoader(AddBook.java:149)
at it.jaschke.alexandria.AddBook.access$300(AddBook.java:34)
at it.jaschke.alexandria.AddBook$AddBookMessageReceiver.onReceive(AddBook.java:242)
at android.support.v4.content.LocalBroadcastManager.executePendingBroadcasts(LocalBroadcastManager.java:297)
at android.support.v4.content.LocalBroadcastManager.access$000(LocalBroadcastManager.java:46)
at android.support.v4.content.LocalBroadcastManager$1.handleMessage(LocalBroadcastManager.java:116)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:155)
at android.app.ActivityThread.main(ActivityThread.java:5696)
at java.lang.reflect.Method.invoke(Native Method)
at java.lang.reflect.Method.invoke(Method.java:372)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1028)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:823)
Status: Issue closed
Answers:
username_0: Reverted the fix for issue #2 and thought of a more clever one. |
PaddlePaddle/Paddle | 1080516213 | Title: PaddleNLP: size error when using ERNIE followed by a CRF model with with_start_stop_tag set to True
Question:
username_0: To get your issue resolved quickly, before opening an issue please first check whether a similar issue already exists via: [searching issue keywords] [filtering by labels] [the official documentation]
When opening an issue, to speed up resolution please provide the following information according to your situation:
- Title: describe your problem concisely and precisely, for example "ssd model with a leading lstm reports an error"
- Version and environment information:
1) PaddlePaddle version: please provide the PaddlePaddle version number, e.g. 1.1, or the CommitID
2) CPU: please provide the CPU model and whether math libraries such as MKL/OpenBlas/MKLDNN are used
3) GPU: please provide the GPU model and the CUDA and CUDNN version numbers
4) System environment: please state the OS type and version (e.g. Mac OS 10.14) and the Python version
Note: you can obtain the above information by running [summary_env.py](https://github.com/PaddlePaddle/Paddle/blob/develop/tools/summary_env.py).
- Model information
1) Model name 2) Dataset name 3) Algorithm name 4) Model link
- Reproduction information: if reporting an error, please provide the environment and steps to reproduce it
- Problem description: please describe your problem in detail and paste the error message, logs, and key code snippets
Thank you for contributing to PaddlePaddle.
Before submitting the issue, you could search issue in the github.Probably there was a similar issue submitted or resolved before.
If there is no solution,please make sure that this is a issue of models including the following details:
**System information**
-PaddlePaddle version (eg.1.1)or CommitID
-CPU: including CPUMKL/OpenBlas/MKLDNN version
-GPU: including CUDA/CUDNN version
-OS Platform (eg.Mac OS 10.14)
-Python version
-Name of Models&Dataset/details of operator
Note: You can get most of the information by running [summary_env.py](https://github.com/PaddlePaddle/Paddle/blob/develop/tools/summary_env.py).
**To Reproduce**
Steps to reproduce the behavior
**Describe your current behavior**
**Code to reproduce the issue**
**Other info / logs**
Answers:
username_1: Please provide the specific error message. |
DefinitelyTyped/DefinitelyTyped | 350087092 | Title: bluebird-global instance methods lose promise type information
Question:
username_0: If you know how to fix the issue, make a pull request instead.
- [x] I tried using the `@types/bluebird-global` package and had problems.
- [x] I tried using the latest stable version of tsc. https://www.npmjs.com/package/typescript
- [x] I have a question that is inappropriate for [StackOverflow](https://stackoverflow.com/). (Please ask any appropriate questions there).
- [x] [Mention](https://github.com/blog/821-mention-somebody-they-re-notified) the authors (see `Definitions by:` in `index.d.ts`) so they can respond.
- Authors: @username_1
The following should typecheck, but does not:
```ts
Promise.resolve([3]).map((n: number) => true);
```
See: https://github.com/DefinitelyTyped/DefinitelyTyped/issues/10801#issuecomment-412355785
This is a related, but slightly different reproduction:
https://www.typescriptlang.org/play/index.html#src=declare%20class%20Foo%3CU%3E%20%7B%0A%20%20map%3CU%3E(mapper%3A%20(u%3A%20U)%20%3D%3E%20boolean)%3A%20void%3B%0A%7D%0A%0Ainterface%20Bar%3CT%3E%20%7B%0A%20%20map%3A%20typeof%20Foo.prototype.map%3B%0A%7D%0A%0Adeclare%20var%20bar%3A%20Bar%3Cstring%3E%3B%0A%0Abar.map((n%3A%20number)%20%3D%3E%20false)
My suggestion of `map: typeof Bluebird<T>.prototype.map` isn't valid--I'm not sure how to do that in TS.
Answers:
username_1: Status:
I created a TypeScript Feature Request, that suggests allowing devs to write `typeof Bluebird<T>.prototype.map` (i.e. provide the generic type of the class in the `typeof` expression). See the linked issue above.
To work around this problem, the `strictFunctionTypes` option can be set to `false` in `tsconfig.json`. This is obviously a *bad* idea, since the issue is still there (it just allows the compilation to pass). The issue being: the type of the Promise is lost when `.map` (and possibly other instance methods) is used.
username_0: @username_1 Awesome, thanks for filing that. I wonder if they'd be open to a feature that would allow completely replacing/subverting the `Promise` and `PromiseConstructor` definitions instead since ultimately, that is more or less what we want, yeah? I'm sure that's a bad idea for reasons I haven't thought of...
username_0: Oh, and in the meantime, would it make sense to copy-paste the definitions for now so that they work properly?
username_0: @username_1 there's something to this, I think:
```ts
type Omit<T, K extends keyof T> = Pick<T, Exclude<keyof T, K>>
interface Promise<T> extends Omit<Bluebird<T>, 'call' | 'then' | 'catch' | 'finally'> {
```
It fixes the issue above. I'm still working through some problems with `call` that make `Bluebird` incompatible with `Promise`, but it's showing some potential.
[Here's a minimal example](https://www.typescriptlang.org/play/index.html#src=declare%20class%20Foo%3CT%3E%20%7B%0A%20%20map(t%3A%20T)%3A%20boolean%20%0A%7D%0A%0Ainterface%20Promise%3CT%3E%20extends%20Foo%3CT%3E%20%7B%20%7D%0A%0Adeclare%20var%20p%3A%20Promise%3Cnumber%3E%0A%0Ap.map(4)%0Ap.map('hi'))
username_1: I see how your `extend Omit<>` works, however I'd rather go with what Ryan suggested in the other ticket. Both ways achieve the same goal, but I find the verbose method of specifically listing all methods on the `Promise` global interface more controllable (white-listing) than saying "take all methods from Bluebird but the following few" (black-listing). Though, I admit it requires significantly more typing.
I'll create a PR that does what Ryan recommended (to all Promise's instance methods) and this will resolve this ticket's immediate issue with getting an error just for using one of the methods.
---
In regards to the inability to assign `Bluebird<any>` to `Promise<string>` -- for now I'm interested only in why it's possible to assign `Bluebird<any>` to `Bluebird<string>`, but not `Bluebird<any>` to `Promise<string>` (i.e. the example you gave in the linked TypeScript issue).
Let's leave this and the linked ticket open until it's found out what the reason is and then see what to do next.
Status: Issue closed
username_2: Hi thread, we're moving DefinitelyTyped to use [GitHub Discussions](https://github.com/DefinitelyTyped/DefinitelyTyped/issues/53377) for conversations the `@types` modules in DefinitelyTyped.
To help with the transition, we're closing all issues which haven't had activity in the last 6 months, which includes this issue. If you think closing this issue is a mistake, please pop into the [TypeScript Community Discord](https://discord.gg/typescript) and mention the issue in the `definitely-typed` channel. |
RandomFractals/vscode-data-preview | 713298632 | Title: Extension causes high cpu load
Question:
username_0: - Issue Type: `Performance`
- Extension Name: `vscode-data-preview`
- Extension Version: `2.2.0`
- OS Version: `Windows_NT x64 10.0.18362`
- VSCode version: `1.49.2`
:warning: Make sure to **attach** this file from your *home*-directory:
:warning:`C:\Users\<NAME>\username_1Inc.vscode-data-preview-unresponsive.cpuprofile.txt`
Find more details here: https://github.com/microsoft/vscode/wiki/Explain-extension-causes-high-cpu-load
Status: Issue closed
Answers:
username_1: duplicate of #173 |
parnustk/lisbackend | 137663162 | Title: GradingType in Core\Entity create unittest
Question:
username_0: Examples Core\Entity\Absence and CoreTest\Entity\AbsenceTest.
Cover all properties and methods.
Use typehinting, (LisUser $createdBy), and type casting, $this->trashed = (int) $trashed, where possible and needed.
Comment all properties and methods.
Answers:
username_1: Finished
Status: Issue closed
|
helix-toolkit/helix-toolkit | 246789190 | Title: Can not implement a UserControl in another project
Question:
username_0: I created a "Controls" project which contains a UserControl that includes some Viewport3DX instances. Both projects import the latest version.
Here is the picture:

However, when I tried to assign the ViewModel to the UserControl, the MainWindow would show the error "**Unable to load DLL 'sharpdx_direct3d11_1_effects_x86.dll': The specified module could not be found**"
But if I uncomment "this.DataContext=this" and comment out "this.DataContext = new ViewPort()", it does not show the error.

The designer of MainWindow shows

Does anybody know how I can solve this issue? Thanks in advance.
Answers:
username_1: Did you set those three effect dll to copy to the build location?
username_0: @username_1 Yes, I did put the three effect DLLs in the bin folder (for both projects). Actually, I put them everywhere I could think of.
username_1: What is in your view model?
username_0: @username_1 I thought it was the same issue as #522,
because it would show the error if I created an instance of DefaultEffectsManager in my viewmodel class.
However, it is all fine after I updated the library. I really appreciate your reply. Thank you.
Status: Issue closed
|
WoTTsecurity/agent | 505994005 | Title: Improve Snap detection
Question:
username_0: Currently the code decides that it's running in Snap only if the `SNAP` env var is set. However, in some cases, such as when running in PyCharm installed as a classic snap (meaning unconfined), it misdetects the Snap environment.
Something more reliable should be chosen to detect Snap environment. |
ipython/ipython | 3796481 | Title: Better LaTeX printing in the qtconsole with the sympy profile
Question:
username_0: Right now, the qtconsole uses matplotlib.mathtext to print LaTeX with the SymPy profile. This isn't very good looking, is slow, requires matplotlib, and doesn't support full LaTeX.
Better would be to support MathJax in the qtconsole, or to use LaTeX itself, using something like http://research.scios.ch/inet/doku.php?id=ipy_tex.
Also, it does not support printing lists of SymPy expressions with LaTeX a la #1399 correctly.
Answers:
username_1: ^ this was closed in https://github.com/sympy/sympy/pull/1776 |
AlexKhymenko/ngx-permissions | 299654247 | Title: No provider for InjectionToken USE_PERMISSIONS_STORE
Question:
username_0: ERROR Error: Uncaught (in promise): Error: StaticInjectorError[InjectionToken USE_PERMISSIONS_STORE]:
StaticInjectorError[InjectionToken USE_PERMISSIONS_STORE]:
NullInjectorError: No provider for InjectionToken USE_PERMISSIONS_STORE!
Error: StaticInjectorError[InjectionToken USE_PERMISSIONS_STORE]:
StaticInjectorError[InjectionToken USE_PERMISSIONS_STORE]:
NullInjectorError: No provider for InjectionToken USE_PERMISSIONS_STORE!
at _NullInjector.get (core.js:993)
at resolveToken (core.js:1281)
at tryResolveToken (core.js:1223)
at StaticInjector.get (core.js:1094)
at resolveToken (core.js:1281)
at tryResolveToken (core.js:1223)
at StaticInjector.get (core.js:1094)
at resolveNgModuleDep (core.js:10878)
at NgModuleRef_.get (core.js:12110)
at resolveDep (core.js:12608)
at _NullInjector.get (core.js:993)
at resolveToken (core.js:1281)
at tryResolveToken (core.js:1223)
at StaticInjector.get (core.js:1094)
at resolveToken (core.js:1281)
at tryResolveToken (core.js:1223)
at StaticInjector.get (core.js:1094)
at resolveNgModuleDep (core.js:10878)
at NgModuleRef_.get (core.js:12110)
at resolveDep (core.js:12608)
at resolvePromise (zone.js:821)
at resolvePromise (zone.js:785)
at eval (zone.js:870)
at ZoneDelegate.invokeTask (zone.js:421)
at Object.onInvokeTask (core.js:4744)
at ZoneDelegate.invokeTask (zone.js:420)
at Zone.runTask (zone.js:188)
at drainMicroTaskQueue (zone.js:594)
at <anonymous>
Answers:
username_1: A little more information will be better. Can't help You with this information.
Status: Issue closed
username_1: Closing cause of not enough information. If You still have problem reopen it. |
containerd/containerd | 512287918 | Title: Docker container process becomes a Zombie process to make k8s pod not ready.
Question:
username_0: **Description**
Docker container process becomes a zombie process, making the k8s pod not ready.
**Steps to reproduce the issue:**
1. With k8s to deploy scaled docker based application
2. Keep the system running for 2-3 days; some k8s pods will get stuck in not-ready status because the container process becomes a zombie process.
**Describe the results you received:**
- zookeeper-1 pod is not ready.
cloud-user@hm6-opshub-aio:~$ kubectl get pod -n opshub-data zookeeper-1
NAME READY STATUS RESTARTS AGE
zookeeper-1 0/1 Running 0 4d1h
- zookeeper-1 container is UP and the container process(661) is defunct but the parent shim process(662) is still alive.
cloud-user@hm6-opshub-aio:~$ docker ps | grep zookeeper-1
81cc1c688c86 d43be510ec44 "/custom-entrypoint.…" 4 days ago Up 4 days k8s_zookeeper_zookeeper-1_opshub-data_04f9ccf7-39a3-4cbd-b5cc-032759116f54_0
3fb3d3015ea2 k8s.gcr.io/pause:3.1 "/pause" 4 days ago Up 4 days k8s_POD_zookeeper-1_opshub-data_04f9ccf7-39a3-4cbd-b5cc-032759116f54_0
cloud-user@hm6-opshub-aio:~$ sudo ls -lart /run/docker/runtime-runc/moby | grep 81cc1c688c86
drwx--x--x 2 root root 60 Oct 19 23:34 81cc1c68<KEY>
cloud-user@hm6-opshub-aio:~$
cloud-user@hm6-opshub-aio:~$ sudo ls -lart /run/docker/runtime-runc/moby/81cc1c688c867dd051cb0356bdc60d2c6d79758b4bbc4c13d93957e86befc452/
total 16
-rw-r--r-- 1 root root 13135 Oct 19 23:34 state.json
drwx--x--x 2 root root 60 Oct 19 23:34 .
drwx------ 226 root root 4520 Oct 24 01:15 ..
cloud-user@hm6-opshub-aio:~$ sudo grep -nr "process" /run/docker/runtime-runc/moby/81cc1c688c867dd051cb0356bdc60d2c6d79758b4bbc4c13d93957e86befc452/state.json
1:{"id":"81cc1c688c867dd051cb0356bdc60d2c6d79758b4bbc4c13d93957e86befc452","init_process_pid":661,
.......
cloud-user@hm6-opshub-aio:~$ ps -ef | grep " 661 "
cloud-u+ 661 626 0 Oct19 ? 00:00:00 [custom-entrypoi] <defunct>
cloud-u+ 18196 18352 0 01:26 pts/1 00:00:00 grep --color=auto 661
cloud-user@hm6-opshub-aio:~$
cloud-user@hm6-opshub-aio:~$ sudo ls -lart /proc/661/ns/
ls: cannot read symbolic link '/proc/661/ns/net': No such file or directory
ls: cannot read symbolic link '/proc/661/ns/uts': No such file or directory
ls: cannot read symbolic link '/proc/661/ns/ipc': No such file or directory
ls: cannot read symbolic link '/proc/661/ns/pid_for_children': No such file or directory
ls: cannot read symbolic link '/proc/661/ns/mnt': No such file or directory
ls: cannot read symbolic link '/proc/661/ns/cgroup': No such file or directory
total 0
dr-xr-xr-x 9 cloud-user docker 0 Oct 23 07:04 ..
dr-x--x--x 2 root root 0 Oct 23 15:55 .
lrwxrwxrwx 1 root root 0 Oct 23 15:55 uts
lrwxrwxrwx 1 root root 0 Oct 23 15:55 user -> 'user:[4026531837]'
lrwxrwxrwx 1 root root 0 Oct 23 15:55 pid_for_children
lrwxrwxrwx 1 root root 0 Oct 23 15:55 pid -> 'pid:[4026538421]'
lrwxrwxrwx 1 root root 0 Oct 23 15:55 net
lrwxrwxrwx 1 root root 0 Oct 23 15:55 mnt
lrwxrwxrwx 1 root root 0 Oct 23 15:55 ipc
lrwxrwxrwx 1 root root 0 Oct 23 15:55 cgroup
cloud-user@hm6-opshub-aio:~$
cloud-user@hm6-opshub-aio:~$ ps -ef | grep " 626 "
root 626 2952 0 Oct19 ? 00:06:25 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/81cc1c688c867dd051cb0356bdc60d2c6d79758b4bbc4c13d93957e86befc452 -address /var/run/docker/containerd/docker-containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
cloud-u+ 661 626 0 Oct19 ? 00:00:00 [custom-entrypoi] <defunct>
cloud-u+ 28657 18352 0 01:26 pts/1 00:00:00 grep --color=auto 626
cloud-user@hm6-opshub-aio:~$
[Truncated]
API version: 1.38
Go version: go1.10.3
Git commit: d7080c1
Built: Wed Feb 20 02:28:10 2019
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 18.06.3-ce
API version: 1.38 (minimum version 1.12)
Go version: go1.10.3
Git commit: d7080c1
Built: Wed Feb 20 02:26:34 2019
OS/Arch: linux/amd64
Experimental: false
cloud-user@clab2-master-01:~$
**Any other relevant information:**
Answers:
username_1: Hi @username_0
I think the issue has been fixed by https://github.com/containerd/containerd/pull/3540. you can update your docker version to fix this issue. Thanks for reporting.
username_0: Thanks, Fu.
We are trying to upgrade docker-ce to 18.0.9 since that is the latest version claimed to be verified by k8s. I checked docker-ce 18.0.9 and it seems that it is still using containerd 1.2.6. So the issue is not fixed in docker-ce 18.0.9 and we need to wait for a new docker-ce to be published and verified by the k8s team, right?
username_1: Hi @username_0 , sorry for the late reply. I don't think the k8s community will do this check; it is about the container runtime. I checked the moby code base and the containerd vendor has been updated to v1.3.0. I think we just wait for docker-ce to update the containerd version.
cc @thaJeztah
username_0: Hi, Fu,
Here seems to be the k8s clarification about the docker version verified.
https://kubernetes.io/docs/setup/release/notes/
The list of validated docker versions remains unchanged.
The current list is 1.13.1, 17.03, 17.06, 17.09, 18.06, 18.09. (#72823, #72831)
We double-checked that docker-ce 18.09.9 is still using containerd v1.2.6. We will try to upgrade the docker version to 19.03 with the latest k8s setup and keep monitoring the issue for some time.
Thanks again for taking care of the issue.
username_1: Hi @username_0 , I checked that docker-ce 18.09.9 requires containerd version >=1.2.2, and the package repository has containerd 1.2.10. If you re-install docker-ce, I think containerd will be 1.2.10.
https://download.docker.com/linux/centos/7/x86_64/stable/Packages/
username_0: Hi @username_1,
After we upgraded docker to 18.09.9 with containerd 1.2.10, we didn't observe zombie container processes. But we still observed some other issues.
The 1st issue is that the zookeeper-0 container process seems to be still alive while the container appears to be hung and can't be inspected or exec'd into after the kubelet health probe detected the failure "code = Unknown desc = operation timeout: context deadline exceeded".
The 2nd issue is that the zookeeper-2 container process seems to be alive and the container is not hung, but the container is not restarted by kubelet after the kubelet health probe detected the failure "code = DeadlineExceeded desc = context deadline exceeded".
I would think both issues are kubelet health-probe issues. The 1st is that the container status is not updated to False after the health probe detected the failure. The 2nd is that even though the container status is updated to False after the health probe detected the failure, the container process is not killed.
1. The zookeeper-0 container process seems to still be alive but the container can't be inspected. The kubelet health probe failed, but the container is not removed and recreated by kubelet, even though it should be restarted since the restartPolicy is set to "Always".
cloud-user@hm6-opshub-aio:~$ kubectl get pod -n opshub-data | grep zookeeper
zookeeper-0 1/1 Running 1 45h
zookeeper-1 1/1 Running 10 45h
zookeeper-2 0/1 Running 7 45h
cloud-user@hm6-opshub-aio:~$ kubectl describe pod -n opshub-data zookeeper-0 | grep "healthy\|True"
Ready: True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Warning Unhealthy 3m26s (x73 over 7h15m) kubelet, hm6-opshub-aio Readiness probe errored: rpc error: code = Unknown desc = operation timeout: context deadline exceeded
Warning Unhealthy 3m22s (x73 over 7h15m) kubelet, hm6-opshub-aio Liveness probe errored: rpc error: code = Unknown desc = operation timeout: context deadline exceeded
cloud-user@hm6-opshub-aio:~$
cloud-user@hm6-opshub-aio:~$ docker ps | grep zookeeper-0
7d1dbb28dac3 d43be510ec44 "/custom-entrypoint.…" 7 hours ago Up 7 hours k8s_zookeeper_zookeeper-0_opshub-data_ca906dcf-c623-460a-9271-1389b09912b0_1
fb0855d4e2e6 k8s.gcr.io/pause:3.1 "/pause" 46 hours ago Up 46 hours k8s_POD_zookeeper-0_opshub-data_ca906dcf-c623-460a-9271-1389b09912b0_0
cloud-user@hm6-opshub-aio:~$ docker inspect 7d1dbb28dac3
^C
cloud-user@hm6-opshub-aio:~$
cloud-user@hm6-opshub-aio:~$ ps -ef | grep 7d1dbb28dac3
cloud-u+ 5797 2133 0 00:31 pts/0 00:00:00 grep --color=auto 7d1dbb28dac3
root 29106 1640 0 Nov11 ? 00:00:03 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/7d1dbb28dac3216b2b5cbafa5c6b9dbb4a6565e56d9bd8c6f060ba7487b55501 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
cloud-user@hm6-opshub-aio:~$ pstree -Aps 29106
systemd(1)---containerd(1640)---containerd-shim(29106)-+-custom-entrypoi(28972)---java(27621)-+-{java}(9130)
| |-{java}(11029)
| |-{java}(11286)
| |-{java}(12763)
| |-{java}(13747)
| |-{java}(28433)
| |-{java}(28894)
| |-{java}(28967)
| |-{java}(28974)
| |-{java}(28978)
| |-{java}(14529)
| |-{java}(15240)
| |-{java}(15293)
| |-{java}(15301)
| |-{java}(9400)
| |-{java}(15534)
| |-{java}(15629)
| |-{java}(15729)
| |-{java}(15734)
| |-{java}(15916)
| |-{java}(16188)
| |-{java}(18701)
| |-{java}(15917)
| |-{java}(26674)
| |-{java}(12770)
| |-{java}(12851)
| |-{java}(31435)
| |-{java}(32660)
[Truncated]
cloud-user@hm6-opshub-aio:~$ sudo journalctl -u docker -l | grep "0a50977d41d1\|d43be510ec44"
cloud-user@hm6-opshub-aio:~$ sudo journalctl -u kubelet -l | grep "0a50977d41d1\|d43be510ec44"
Nov 12 00:11:02 hm6-opshub-aio kubelet[1608]: E1112 00:11:02.995813 1608 remote_runtime.go:351] ExecSync 0a50977d41d12b458ad853f2bae5976b7d275faa4d195559cb06e588ee78cf0a 'sh -c zookeeper-ready 2181' from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Nov 12 00:11:11 hm6-opshub-aio kubelet[1608]: E1112 00:11:11.142359 1608 remote_runtime.go:351] ExecSync 0a50977d41d12b458ad853f2bae5976b7d275faa4d195559cb06e588ee78cf0a 'sh -c zookeeper-ready 2181' from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Nov 12 00:13:07 hm6-opshub-aio kubelet[1608]: E1112 00:13:07.995993 1608 remote_runtime.go:351] ExecSync 0a50977d41d12b458ad853f2bae5976b7d275faa4d195559cb06e588ee78cf0a 'sh -c zookeeper-ready 2181' from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Nov 12 00:13:16 hm6-opshub-aio kubelet[1608]: E1112 00:13:16.142612 1608 remote_runtime.go:351] ExecSync 0a50977d41d12b458ad853f2bae5976b7d275faa4d195559cb06e588ee78cf0a 'sh -c zookeeper-ready 2181' from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Nov 12 00:15:12 hm6-opshub-aio kubelet[1608]: E1112 00:15:12.996215 1608 remote_runtime.go:351] ExecSync 0a50977d41d12b458ad853f2bae5976b7d275faa4d195559cb06e588ee78cf0a 'sh -c zookeeper-ready 2181' from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Nov 12 00:15:21 hm6-opshub-aio kubelet[1608]: E1112 00:15:21.142848 1608 remote_runtime.go:351] ExecSync 0a50977d41d12b458ad853f2bae5976b7d275faa4d195559cb06e588ee78cf0a 'sh -c zookeeper-ready 2181' from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Nov 12 00:17:17 hm6-opshub-aio kubelet[1608]: E1112 00:17:17.996402 1608 remote_runtime.go:351] ExecSync 0a50977d41d12b458ad853f2bae5976b7d275faa4d195559cb06e588ee78cf0a 'sh -c zookeeper-ready 2181' from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Nov 12 00:17:26 hm6-opshub-aio kubelet[1608]: E1112 00:17:26.143059 1608 remote_runtime.go:351] ExecSync 0a50977d41d12b<KEY>cb06e588ee78cf0a 'sh -c zookeeper-ready 2181' from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Nov 12 00:19:22 hm6-opshub-aio kubelet[1608]: E1112 00:19:22.996563 1608 remote_runtime.go:351] ExecSync 0a50977d41d12b458ad853f2bae5976b7d275faa4d195559cb06e588ee78cf0a 'sh -c zookeeper-ready 2181' from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Nov 12 00:19:31 hm6-opshub-aio kubelet[1608]: E1112 00:19:31.143296 1608 remote_runtime.go:351] ExecSync 0a50977d41d12b458ad853f2bae5976b7d275faa4d195559cb06e588ee78cf0a 'sh -c zookeeper-ready 2181' from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Nov 12 00:21:27 hm6-opshub-aio kubelet[1608]: E1112 00:21:27.996782 1608 remote_runtime.go:351] ExecSync 0a50977d41d12b458ad853f2bae5976b7d275faa4d195559cb06e588ee78cf0a 'sh -c zookeeper-ready 2181' from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Nov 12 00:21:36 hm6-opshub-aio kubelet[1608]: E1112 00:21:36.143505 1608 remote_runtime.go:351] ExecSync 0a5<KEY>faa4d195559cb06e588ee78cf0a 'sh -c zookeeper-ready 2181' from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Nov 12 00:23:32 hm6-opshub-aio kubelet[1608]: E1112 00:23:32.996973 1608 remote_runtime.go:351] ExecSync 0a50977d41d12b458ad853f2bae5976b7d275faa4d195559cb06e588ee78cf0a 'sh -c zookeeper-ready 2181' from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Nov 12 00:23:41 hm6-opshub-aio kubelet[1608]: E1112 00:23:41.143690 1608 remote_runtime.go:351] ExecSync 0a50977d41d12b458ad853f2bae5976b7d275faa4d195559cb06e588ee78cf0a 'sh -c zookeeper-ready 2181' from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Nov 12 00:25:37 hm6-opshub-aio kubelet[1608]: E1112 00:25:37.997161 1608 remote_runtime.go:351] ExecSync 0a50977d41d12b458ad853f2bae5976b7d275faa4d195559cb06e588ee78cf0a 'sh -c zookeeper-ready 2181' from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Nov 12 00:25:46 hm6-opshub-aio kubelet[1608]: E1112 00:25:46.144075 1608 remote_runtime.go:351] ExecSync 0a50977d41d12b458ad853f2bae5976b7d275faa4d195559cb06e588ee78cf0a 'sh -c zookeeper-ready 2181' from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
I didn't observe any container removed/recreated/restarted logs in kubelet or docker. Not sure if it is related to the latest docker change. Any ideas about this?
username_0: @username_1. We found the trigger of our issue: a thread leak in some of our micro-services. When the thread count reaches the kernel pid_max limit, new threads or processes cannot be created, which may cause the container to fail to be stopped by kubelet. After we fixed the thread leak, the issue has never been observed again, even with docker-ce 18.06.
So please go ahead and close this issue, and thanks for the help.
Status: Issue closed
username_1: @username_0 thank you! The root cause information is helpful! |
Cardiox12/devis-automation | 663929786 | Title: Create user asking interface
Question:
username_0: Create a webpage asking the user to enter useful information:
Client :
- Name
- First Name
- Society Name
- Address
- SIRET id
- Phone number
Service provider :
- Name
- First Name
- Society Name
- Address
- SIRET id
- Phone number |
testshallpass/react-native-dropdownalert | 328593746 | Title: Not visible when react-native-modal is open, with a higher zIndex
Question:
username_0: #### Short Description
I am using react-native-modal, with a zIndex of 9, and react-native-dropdownalert with zIndex of 10. It seems to not appear above the react-native-modal unfortunately. I have also tried this with a regular Modal from react-native.
#### Steps to Reproduce / Code Snippets / Usage
Have a view using the react-native-modal that takes the full screen. Call the dropDownAlert.
#### Expected Results
drop down alert visibility. Unfortunately, not visible.
### Additional Information
* React Native version: 0.55.4
* react-native-dropdownalert version: ^3.3.0
* Platform(s) (iOS, Android, or both?): iOS
* iOS version: 11.3
Answers:
username_1: Try placing <DropdownAlert /> as the last element of the document tree inside the modal component.
username_1: close stale
Status: Issue closed
|
kubernetes-sigs/controller-runtime | 666680559 | Title: unable to update resource when using generic channel
Question:
username_0: I am running into a situation in which multiple events, from normal watches on kinds and also from a watch on an event channel, form a race condition that generates an error when updating a resource.
So a reconcile cycle is triggered on a resource, and the resource is updated; this changes its `resourceVersion` in etcd, but not in the local cache. Immediately after, another reconcile cycle is triggered via the event channel.
The first thing that happens in the reconcile cycle is looking up the resource for the current request. This uses the manager-provided client, which finds the resource in the cache. The cache has not been updated yet and the resource still has the old resourceVersion. The reconcile cycle then completes, and when the code tries to update the status it gets an error.
This is generally innocuous because the error will just trigger another reconcile cycle and eventually the resource will be correctly updated in the local cache. However, it's annoying and confuses users.
I'd like to know if it's possible to fix this. I suppose one way would be to not use the manager's client when looking up the resource and accept the performance hit that comes from that, but I wanted to ask if there is a way for the framework to provide a solution to this, for example by enforcing a cache invalidation if an event is coming from the generic channel. |
LiyuanLucasLiu/LM-LSTM-CRF | 1080869575 | Title: Line 530 in utils.py is too slow with huge datasets
Question:
username_0: Line 530 in the **construct_bucket_vb_wc** function in **utils.py** is too slow with huge datasets. It even freezes if the dataset is larger than 300k objects.
I propose to change line
`forw_corpus = [pad_char_feature] + list(reduce(lambda x, y: x + [pad_char_feature] + y, forw_features)) + [pad_char_feature]`
to
```
forw_corpus = [pad_char_feature]
for i in forw_features:
forw_corpus.extend(i + [pad_char_feature])
```
Which works considerably faster with no freezes.
Answers:
username_1: thanks and fixed in https://github.com/username_1/LM-LSTM-CRF/commit/4f35e0a18ade83ee46718a3a8b4f6f0790f9da58
PS: a more up-to-date lib is available at https://github.com/username_1/Vanilla_NER
Status: Issue closed
|
jupyterlab/jupyterlab | 598696980 | Title: Create new notebook via shortcuts w/out mouse bait clicks
Question:
username_0: Hi,
I would like to know why there is currently no shortcut available for creating new notebooks
without needing to use the mouse and click here and there.
(For instance, see classic instructions here: https://jupyterlab.readthedocs.io/en/stable/user/notebook.html . I don't want to go through 10 clicks per session to have a new notebook created. It's frustrating and lame.)
Additionally, I would like to be able to rename the new notebook without having to use my mouse to left/right click on the 'Untitled.ipynb' name.
I understand that there might be a way to create a shortcut? If so, I would be grateful for a detailed description that enables me to go ahead and produce such a keystroke shortcut.
Answers:
username_1: You can make keyboard shortcuts in the Settings > Advanced Settings interface. Click on the Keyboard Shortcuts page. It lists the available commands in the system to which you can assign shortcuts. For example, a new notebook shortcut looks like:
```json5
{
command: "notebook:create-new",
selector: "body",
keys: ["Ctrl Shift N"]
}
```
username_1: Rename looks like:
```json5
{
command: "docmanager:rename",
selector: "body",
keys: ["Ctrl Shift N"], // default shortcut
}
```
Status: Issue closed
username_1: Closing as answered.
username_1: The answer to this is that no one has implemented it as a default shortcut.
username_1: Note that you can create a new Launcher panel with the default shortcut of `Shift Accel L`, and then it's one click to create a notebook with a given kernel in that panel.
username_1: You can also activate the command palette with `Shift Accel C` and type "notebook", and for me the first item is to create a new notebook.
username_1: To be clear, that example shows it is 2 clicks to create a notebook: one to create a launcher, then one to select a notebook with a given kernel. If 2 clicks is too much, of course feel free to create a shortcut - that's why we made the shortcuts customizable. But I think we'd all appreciate being clear and accurate in your statement rather than resorting to hyperbole. |
stewrav/py1 | 121755023 | Title: Password cracking
Question:
username_0: My password is hashed using the MD5 algorithm. The checksum is e358efa489f58062f10dd7316b65649e. Can you find my password?
Answers:
username_1: password is t
username_0: Nice. Next hash is e22428ccf96cda9674a939c209ad1000.
username_0: Password was S8. Excellent!
Next hash is *e2cc35d756ad1652e5ecd1b3417ef564*. The password is obviously at least one character and I'll help you out and say that it's less than twenty characters.
username_0: I wrote my password cracking program and I realized that the above hash is too difficult with the computers we have. Try db6c0caafd2c276603e0252dcfed61db instead. It's less than 10 characters.
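For reference, a minimal sketch of the kind of cracking program described above, using Python's hashlib and itertools. The alphabet and the small maximum length are assumptions; an exhaustive search over long passwords quickly becomes infeasible, which matches the comment above about the longer hash.
```python
# Minimal brute-force sketch: try every candidate string up to max_len
# characters, hash it with MD5, and compare against the target checksum.
import hashlib
import itertools
import string

TARGET = "db6c0caafd2c276603e0252dcfed61db"  # hash quoted above
ALPHABET = string.ascii_letters + string.digits  # assumed character set

def crack(target, alphabet=ALPHABET, max_len=4):
    for length in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=length):
            candidate = "".join(combo)
            if hashlib.md5(candidate.encode()).hexdigest() == target:
                return candidate
    return None  # not found within max_len

if __name__ == "__main__":
    print(crack(TARGET))
```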
Status: Issue closed
|
ossrs/srs | 584213695 | Title: Live streaming CDN for SRS
Question:
username_0: **Description**
Using SRS as the origin server, can you recommend a CDN service (acting as an edge) that supports HTTP-FLV live stream pulling? (CDNs inside or outside China are both fine.)
Any ideas about Live streaming CDN services/providers to support SRS/HTTP-FLV?
Thanks.
Answers:
username_1: Alibaba Cloud CDN, Tencent Cloud CDN, and Wangsu all support it.
Status: Issue closed
username_2: @username_1 I didn't see any documentation on setting up the CDN. Where can I find it? |
tandriamil/hashcode2018 | 302378102 | Title: Another solution using only sqlite3
Question:
username_0: Hello !
I did not officially participate in Hashcode 2018, but I implemented a solution for it that only requires the sqlite3 command line and the data files, and thought it might interest you to know about it and maybe give you ideas to improve yours, and maybe mine.
https://github.com/username_0/sqlite3-hashcode-2018
Cheers !<issue_closed>
Status: Issue closed |
JuliaImages/ImageSegmentation.jl | 1062460672 | Title: prune_segments - ERROR: MethodError: no method matching add_vertices!(::SimpleWeightedGraph{Int64, Float64}, ::Int64)
Question:
username_0: I'm trying to run some examples from the documentation, such as the following:
```
using ImageSegmentation, TestImages
img = testimage("camera")
seg = fast_scanning(img, 0.1)
seg = prune_segments(seg, i->(segment_pixel_count(seg,i)<50), (i,j)->(-segment_pixel_count(seg,j)))
```
Unfortunately this returns the following error:
```
ERROR: MethodError: no method matching add_vertices!(::SimpleWeightedGraphs.SimpleWeightedGraph{Int64, Float64}, ::Int64)
Closest candidates are:
add_vertices!(::LightGraphs.AbstractGraph, ::Integer) at /Users/christopher/.julia/packages/LightGraphs/IgJif/src/core.jl:45
Stacktrace:
[1] region_adjacency_graph(s::SegmentedImage{Matrix{Int64}, ColorTypes.Gray{Float64}}, weight_fn::ImageSegmentation.var"#12#13")
@ ImageSegmentation ~/.julia/packages/ImageSegmentation/MX8ga/src/core.jl:109
[2] prune_segments(s::SegmentedImage{Matrix{Int64}, ColorTypes.Gray{Float64}}, is_rem::var"#1#3", diff_fn::var"#2#4")
@ ImageSegmentation ~/.julia/packages/ImageSegmentation/MX8ga/src/core.jl:221
[3] top-level scope
@ REPL[4]:1
```
Any ideas as to what might be going on here?
Julia version 1.6.4 on macOS 12.0.1.
Answers:
username_1: Can you checkout to #85 and see if it works?
username_0: Thanks for getting back to me. Can I switch branch if I installed with Pkg, or do I need to install the repo manually?
username_0: I was literally just coming back to say I had figured out how to install the branch with pkg. Unfortunately, doing so gives the following error:
```
(@v1.6) pkg> add ImageSegmentation#jc/compat
Updating git-repo `https://github.com/JuliaImages/ImageSegmentation.jl.git`
Updating registry at `~/.julia/registries/General`
Resolving package versions...
ERROR: Unsatisfiable requirements detected for package ImageCore [a09fc81d]:
ImageCore [a09fc81d] log:
├─possible versions are: 0.7.0-0.9.3 or uninstalled
├─restricted to versions 0.9 by ImageSegmentation [80713f31], leaving only versions 0.9.0-0.9.3
│ └─ImageSegmentation [80713f31] log:
│ ├─possible versions are: 1.6.0 or uninstalled
│ └─ImageSegmentation [80713f31] is fixed to version 1.6.0
└─restricted by compatibility requirements with Images [916415d5] to versions: 0.7.0-0.8.22 — no versions left
└─Images [916415d5] log:
├─possible versions are: 0.17.3-0.24.1 or uninstalled
├─restricted to versions * by an explicit requirement, leaving only versions 0.17.3-0.24.1
└─restricted by compatibility requirements with ImageMorphology [787d08f9] to versions: 0.18.0-0.24.1 or uninstalled, leaving only versions: 0.18.0-0.24.1
└─ImageMorphology [787d08f9] log:
├─possible versions are: 0.1.0-0.3.0 or uninstalled
├─restricted to versions 0.2.6-0.3 by ImageSegmentation [80713f31], leaving only versions 0.2.6-0.3.0
│ └─ImageSegmentation [80713f31] log: see above
└─restricted by compatibility requirements with Images [916415d5] to versions: 0.1.1-0.2.12, leaving only versions: 0.2.6-0.2.12
└─Images [916415d5] log: see above
```
username_1: Yeah... We're currently having a hard time with Images compatibility. Maybe you can temporarily remove `Images` or use this branch https://github.com/JuliaImages/Images.jl/pull/971
username_0: The branches you have specified for Images and ImageSegmentation appear to solve the problem with `prune_segments` 👍 Thanks very much for the help.
username_1: Okay. Then I think we should release both versions soon.
Status: Issue closed
username_1: With Images v0.25 and ImageSegmentation v1.7 released, it should work now. |
go-gitea/gitea | 983555517 | Title: wrong comment count on PRs (caused by reviews?)
Question:
username_0: - Gitea version (or commit ref): 1.16.0+dev-109-gd17f555fe
- Git version: ?
- Operating system: linux
<!-- Please include information on whether you built gitea yourself, used one of our downloads or are using some other package -->
<!-- Please also tell us how you are running gitea, e.g. if it is being run from docker, a command-line, systemd etc. --->
<!-- If you are using a package or systemd tell us what distribution you are using -->
- Database (use `[x]`):
- [x] MySQL
- Can you reproduce the bug at https://try.gitea.io:
- [x] Yes (provide example URL): https://gitea.com/gitea/tea/pulls/391, https://gitea.com/gitea/tea/pulls
- Log gist:
<!-- It really is important to provide pertinent logs -->
<!-- Please read https://docs.gitea.io/en-us/logging-configuration/#debugging-problems -->
<!-- In addition, if your problem relates to git commands set `RUN_MODE=dev` at the top of app.ini -->
## Description
A PR is listed as having comments, while none show up in the PR detail view (see gitea.com example above.)
This may be about approving reviews, and is probably caused by a fairly recent change, as [an older PR](https://gitea.com/gitea/tea/pulls/388) has an approval too, but does not list additional comments that are not visible on the PR detail page.
The API also returns a higher comment count, while not returning any comments
## Screenshots


Answers:
username_1: Could it be that `approval` counts as a comment, even if it does not have any comment associated?
username_0: hmm, now the comment count on that specific issue is correct.. 🤔
username_0: This is a regression of #16075:
It adds reviews to the comment counter on a PR, but
- this does not check if the review has a comment body
- this deviates from the logic of [CheckRepoStats()](https://github.com/go-gitea/gitea/blob/7bcbdd07072d375eb9f24a64a047879ae2aa7aed/models/repo.go#L1917-L1921), which runs via cronjob and corrects the counter again
username_0: github does the following:
- in the UI, they count any comment (including code comments) that has a body.
- the `/pulls/{index}/comments` API endpoint only returns non-review comments
- the `/pulls/{index}` endpoint returns two separate counters `comments` + `review_comments`
this feels like a reasonable solution, that could even be backported
username_2: It seems it's correct now. I think maybe we could close this one and remove it from v1.15.7.
username_0: No this bug is not fixed yet, just reproduced on 1.15.6 and gitea.com.
I'm currently short on time, but I'll try to come up with a fix in the coming days
username_2: If we follow GitHub's method, we need a new number column, but I think it's difficult to add a new column in a stable version. Maybe we can do it in v1.16 if possible. I'll move this to v1.15.8.
username_2: Do you have time to confirm whether it's gone?
username_0: fyi: bug is still present.
@username_2 I'll submit a bugfix for 1.15.10 that reverts [this two line change of #16075](https://github.com/go-gitea/gitea/pull/16075/files#diff-78c2c18ca14c9554e7f5a73716a4c65b659d751ea76dfbc57d87c68d91a0f37fR765-R766) so we at least have consistent state back.
In a followup for 1.16 or 1.17 we can aim at implementing PR comment counts similar to github with separate counters.
Status: Issue closed
|
dotnet/cli | 366876036 | Title: Flexibility in SDK install folder
Answers:
username_1: As per dotnet/core-setup#4904 it might be beneficial to add the highest installed runtime and SDK version number into the registry as well. I could see app installers looking for the latest installed dotnet so that they can decide whether to use what's on the machine versus installing dotnet as a dependency.
username_2: Hi,
In reference to #4904, I would appreciate a static place to store the latest .NET runtime version, preferably as a DWORD value in the registry, as is the case for .NET Framework. The reason I believe this is useful is that many companies use installers which don't embed the required dependencies but check whether they are missing and then load them from the web. These installers rely on having a static place to get this information from.
For users creating installers, we mostly cannot run commands or enumerate files and load libraries in these installers, etc., but rather look for registry values or files to exist. Looking for files is not a good choice as the installed path may differ from version to version and retrieving the exact .NET Core version from a path might not always be possible. I would also like to avoid having to parse a string to obtain the version number.
A registry key similar to .NET framework (https://docs.microsoft.com/en-us/dotnet/framework/migration-guide/how-to-determine-which-versions-are-installed) is clearly the easiest solution for programs (installers etc) to check if .NET core runtime is available and which version. This request is for windows only.
username_1: @username_2 what version number you're looking for?
Unlike the .NET Framework, .NET Core can have many different versions of both the runtime and the SDK installed side-by-side. So it's kind of unclear which one should go to the registry.
I can imagine:
- Latest non-preview runtime (Microsoft.NetCore.App)
- Latest non-preview SDK
- Latest runtime (including previews)
- Latest SDK (including previews)
Which one would satisfy your scenario?
username_2: Assuming .NET Core is backward compatible, it is only important to get the latest installed **runtime** version (previews included). The point is that applications crash or won't start at all if the required .NET Core version is missing on the user's system, and we get tons of complaints from customers asking why the software isn't working. I wish there were a native message box telling them .NET Core is missing, but that isn't the case. Therefore we need to make sure:
a) .NET Core is installed at all
b) Installed .NET Core runtime version is greater than X
And we do this from the installer when the user is installing our products. The SDK version is not important for us because customers mostly only have the runtime installed. It should be a static non-version-related path in registry. For example
`Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Core\Version`
and NOT something like
`Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Core\3.0.2100\Version`
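For illustration only, a rough sketch (Python, Windows) of the kind of check described in a) and b) above, done without any registry support: it infers installed runtime versions from the default `%ProgramFiles%\dotnet\shared\Microsoft.NETCore.App` folder layout. The install location, the folder-name-equals-version assumption, and the required version used here are all assumptions, not a supported contract.
```python
# Rough installer-side sketch (assumptions: default install location, folder
# names equal to runtime versions). Checks a) any runtime installed, and
# b) the newest installed runtime is at least a required version.
import os
import re

def installed_runtimes():
    shared = os.path.join(os.environ.get("ProgramFiles", r"C:\Program Files"),
                          "dotnet", "shared", "Microsoft.NETCore.App")
    if not os.path.isdir(shared):
        return []
    return [d for d in os.listdir(shared) if re.match(r"\d+\.\d+\.\d+", d)]

def version_key(v):
    # Compare only the numeric major.minor.patch prefix; ignore preview suffixes.
    return tuple(int(p) for p in re.match(r"(\d+)\.(\d+)\.(\d+)", v).groups())

def meets_requirement(required="2.1.0"):  # "2.1.0" is just an example value
    versions = installed_runtimes()
    return bool(versions) and max(map(version_key, versions)) >= version_key(required)

if __name__ == "__main__":
    print("installed:", installed_runtimes())
    print("ok:", meets_requirement("2.1.0"))
```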
username_3: Please, keep in mind that .NET Core does not just roll forward to latest. It rolls forward first on patch versions. Meaning, if you target 2.2, we will rollforward from 2.2.0, to 2.2.1, 2.2.2, etc. If we can't find a major.minor, we will roll forward on the minor version, for instance, 2.1.X to 2.2.X. Though, in this case, it is not really recommended and things may break during runtime on your application. The same is true for major version roll-forward, but that one you need to opt-in, I believe.
Also, Why not chain the runtime installer into your own installer then? Or, if this is windows only, have you considered distributing a self-contained app?
As I explained above, you would need this version per major.minor and not just latest. Also, this is not the right issue to discuss this. Please, file a new one asking for this additional registry key and we can discuss there.
username_2: Because we release many small apps, and building them as self-contained bloats every app by +100mb in size, so that's not an option.
Regardless of my situation, I still believe there might be great interest in such a version indicator, especially when .NET Core gets a popularity boost with the .NET Core 3.0 update (at the moment it's mostly used for cross-platform or ASP.NET). And I had created an issue but then got redirected by vitak-karas to discuss it here.
username_1: I personally don't know what the `sharedhost` key should specify, but the versions are very likely of two different things:
For example on my machine `dotnet --info` reports:
```
.NET Core SDK (reflecting any global.json):
Version: 3.0.100-preview-010184
```
And also
```
Host (useful for support):
Version: 3.0.0-preview-27324-5
```
The first is the version of the SDK, while the second is the version of the host.
Then there's also the version of the runtime, which is part of the `Microsoft.NETCore.App` so this line (in my case):
```
Microsoft.NETCore.App 3.0.0-preview-27324-5 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
```
There's no rule that these versions should be the same. SDK is definitely versioned differently than host/runtime. Host is also different as there's only one on the machine (so typically the latest version), while there can be many "runtimes".
username_4: I see, thank you. So as per your response to #4904, I can get:
* Host version from the registry (there is only one host)
* Installed SDK versions by interop call to `hostfxr_get_available_sdks` in hardcoded `%ProgramFiles%\dotnet\host\fxr\*\hostfxr.dll` libraries.
* Available runtime versions by checking hardcoded `%ProgramFiles%\dotnet\shared\Microsoft.NETCore.App\*\.version` files.
Correct?
username_1: @username_0 might know the meaning of all the registry keys - do we have a document describing those? (if not, we definitely should).
@username_4 Just curious what is the scenario you're trying to solve:
* All this information is already available if you run `dotnet --info`
* Do you need this as an API - so that you can get this info without parsing the text output of `dotnet --info`?
* What is the scenario where you need to know all the installed versions of SDKs?
* What is the scenario where you need to know all the installed runtimes?
* What about the other frameworks - runtime is in the `Microsoft.NETCore.App`, but there are other shared frameworks (ASP.NET, with .NET Core 3 there will be the WinForms/WPF one as well)
With the proposed changes by this issue these should work:
* Host lives in the location specified by the `HKLM\Software\dotnet\Setup\InstalledVersions\x64\InstallLocation` (this is a new key) - there you should find the `dotnet.exe` - you can look at its version if you need to, but I would be very curious what is the scenario for that.
* `hostfxr` - lives next to the host (See above) in the `host/fxr` subdirectory - the rule is to always use the latest available version (the older versions are effectively ignored)
* Frameworks should also live next to the host in the `shared` subdirectory. I don't know about the `.version` file, the host doesn't use that. The host solely relies on the directory names to recognize the versions. |
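For anyone who needs this programmatically, here is a rough sketch of what the layout described above implies. This is not an official API; it assumes the proposed `InstallLocation` registry value exists and uses the default Windows paths.
```python
# Sketch only: resolve the proposed InstallLocation registry value, then list
# installed Microsoft.NETCore.App runtime versions by directory name.
import os
import winreg

def dotnet_install_location() -> str:
    key_path = r"SOFTWARE\dotnet\Setup\InstalledVersions\x64"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
        value, _ = winreg.QueryValueEx(key, "InstallLocation")
        return value

def installed_runtimes(install_location: str) -> list:
    shared = os.path.join(install_location, "shared", "Microsoft.NETCore.App")
    return sorted(os.listdir(shared)) if os.path.isdir(shared) else []

location = dotnet_install_location()
print("host:", os.path.join(location, "dotnet.exe"))
print("runtimes:", installed_runtimes(location))
```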
apache/airflow | 936951273 | Title: Wrong template field types for AWS operators
Question:
username_0: **Apache Airflow version**: 2.1.1
**Environment**:
- **Cloud provider or hardware configuration**: AWS
**What happened**:
After migrating from Airflow 2.0.1 to 2.1.1, the Rendered Template tab stopped working for dictionaries. I checked it for the SageMaker and ECS operators; the code is not shown.
<img width="400" src="https://user-images.githubusercontent.com/2885779/124456386-ead53a00-dd8a-11eb-9b30-f1a627c8f8f2.png">
However, for some operators where I have a customized list of `template_fields`, it partly works (only for the extra fields).
<img width="400" alt="Screenshot 2021-07-05 at 12 18 37" src="https://user-images.githubusercontent.com/2885779/124457148-c9288280-dd8b-11eb-9c2c-082bbafaa99c.png">
So then I checked the release notes and it looks like it's related to #15130: for some AWS operators the type of the fields should be not `py` but `json`, since those fields are mostly just propagated to boto3 as JSON.
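As a sketch of the kind of change this implies, assuming the `template_fields_renderers` mapping referenced by #15130; the operator and field names below are only illustrative, not the exact AWS operators:
```python
from airflow.models import BaseOperator

class ExampleSageMakerOperator(BaseOperator):
    template_fields = ("config",)
    # "config" is passed to boto3 as a dict, so render it as JSON rather than Python source
    template_fields_renderers = {"config": "json"}

    def __init__(self, config: dict, **kwargs):
        super().__init__(**kwargs)
        self.config = config

    def execute(self, context):
        # the boto3 call is omitted in this sketch
        return self.config
```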
Answers:
username_0: It might not be just AWS operators, but these are the ones I use and could test
username_1: Hi! Would you be interested in submitting a PR to fix this?
username_0: @username_1 I can make a PR, but only for the operators I use
username_1: That’s perfectly fine! If some other operators also have this issue, people who use them can submit fixes for them.
username_2: As mentioned previously, this is not specific to AWS operators. This issue also happens with @task decorated (TaskFlow API) operators, where op_kwargs is a dict and is set up to be rendered as py.

Possible solution is to update the "get_python_source" in utils/code_utils.py to handle list and dict. Possibly something like:
```python
if isinstance(x, list):
    return [str(v) for v in x]
if isinstance(x, dict):
    return {k: str(v) for k, v in x.items()}
```
Converting the values to strings seems necessary in order to avoid errors when this is passed to the Pygments lexer.

username_3: Closed by https://github.com/apache/airflow/pull/16820
Status: Issue closed
|
tkuri/papers | 970779532 | Title: Impact of Aliasing on Generalization in Deep Convolutional Networks
Question:
username_0: ## Paper Summary
Investigates the impact of aliasing (fold-over) in the feature space of DNNs and argues that data augmentation alone cannot prevent aliasing. Shows that this can be solved by inserting non-trainable low-pass filters (LPFs) at specific positions in the network. Compared with other methods, it is simple to implement and has low computational cost.

https://arxiv.org/abs/2108.03489
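A rough sketch of the core idea as summarized above (not the paper's code; the kernel size and placement are illustrative assumptions):
```python
import torch
import torch.nn.functional as F

def blur_downsample(x: torch.Tensor, stride: int = 2) -> torch.Tensor:
    # fixed, non-trainable 3x3 binomial blur applied per channel before subsampling
    channels = x.shape[1]
    kernel = torch.tensor([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]])
    kernel = (kernel / kernel.sum()).repeat(channels, 1, 1, 1)  # shape (C, 1, 3, 3)
    x = F.conv2d(x, kernel.to(x), padding=1, groups=channels)
    return x[:, :, ::stride, ::stride]
```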
## Code
Not checked yet.
segmentio/kafka-go | 1086481970 | Title: Regression after upgrading from v0.4.23: unexpected EOF
Question:
username_0: **Describe the bug**
If the topic has no messages, the kafka reader starts to report "the kafka reader got an unknown error reading partition x of my-topic at offset y: unexpected EOF"
**Kafka Version**
2.3.0
**To Reproduce**
Run kafka reader in consumer group mode when the partition has no more messages.
**Expected behavior**
No unexpected EOF reported, as pre 0.4.24
**Additional context**
The problem only occurs in v0.4.24 and above (tested with v0.4.23, v0.4.24, v0.4.25). I believe the regression is caused by PR #788, where the high-water mark check was removed just to pass a test. The check seems to have been added intentionally in prior PRs.
Answers:
username_1: We had a similar error that disappeared as soon as we reverted the version from 0.4.25 to 0.4.23.
username_2: yes, found the same error, and reverting to 0.4.23 solved this problem
username_3: Hello, thanks for submitting this issue!
Is anyone able to provide a reproduction of this issue? I haven't had any success in replicating it.
username_4: @username_3 here you go https://gist.github.com/username_4/a7cc16279e990a8d42c41fd9f557b341.
username_3: Thanks for the reproduction @username_4 For now I'm going to revert #788
username_3: The revert of #788 is available in https://github.com/segmentio/kafka-go/releases/tag/v0.4.26. Based on the reproduction provided by @username_4 I believe that this regression should now be fixed.
Status: Issue closed
|
jcarsique/maven-incremental-build | 809218923 | Title: Save the plugin files after deleting the target directory
Question:
username_0: Some files stored too early, before the directory deletion, are lost, breaking the change detection.
```
$ mvn net.java.maven-incremental-build:incremental-build-plugin:1.6-NX1:incremental-build -X
[INFO] resources updated, module have to be cleaned
[DEBUG] Saving file /.../target/resourcesList ...
[DEBUG] Module updated, cleaning module
[INFO] Deleting /.../target
[DEBUG] Saving timestamps..
[DEBUG] Saving file /.../target/timestamp ...
```
```
$ mvn package -X
[INFO] Verifying resources...
[DEBUG] Using file : /.../target/resourcesList
[DEBUG] Previous file /.../target/resourcesList not found.
``` |
citusdata/django-multitenant | 516794452 | Title: PK (field name 'id') of the tenant model ('Store') not auto-incrementing
Question:
username_0:
```python
class Store(TenantModel):
    tenant_id = 'id'
    name = models.CharField(max_length=50)
    address = models.CharField(max_length=255)
    email = models.CharField(max_length=50)

class Product(TenantModel):
    store = models.ForeignKey(Store)
    tenant_id = 'store_id'
    name = models.CharField(max_length=255)
    description = models.TextField()

class Purchase(TenantModel):
    store = models.ForeignKey(Store)
    tenant_id = 'store_id'
    product_purchased = TenantForeignKey(Product)
```
Status: Issue closed |
opencontainers/image-spec | 152976215 | Title: The "image JSON"/"Container config JSON" does have a media-type but no schema, no headers
Question:
username_0: The "image JSON" also described as "Container config JSON" https://github.com/opencontainers/image-spec/blob/master/serialization.md#image-json-description does have a media-type `application/vnd.oci.image.serialization.config.v1+json` but no schema defined.
Also, the example image/config JSON does not have the `schemaVersion` and `mediaType` headers defined. I am aware this was carried over, but it seems not to be consistent with the manifest/manifest list schemas.
Finally, I suggest settling on a single term for this JSON file, either "Image JSON" or "Container config JSON", and using it consistently across the spec.
Answers:
username_1: @username_0 is this a dupe of #56?
Status: Issue closed
username_0: @username_1 yep, good catch. #56 refers to all the JSON formats in `serialization.md` which is even better, hence closing this in favor of #56. |
sairion/buble-loader | 183947277 | Title: Remove support for Webpack 1
Question:
username_0: Buble doesn't transpile `import`/`export` statements, making it *impossible* for Buble to work with Webpack 1. Moreover, Webpack 2 already does this.
Alternatively, we can add a warning for Webpack 1 users about this.
Answers:
username_1: +1 for warning
username_2: Good idea!
Status: Issue closed
username_0: I was supposed to make a PR to add a warning, but found it closed. Didn't find any PRs either. Should I continue? :D
username_2: @username_0 Oh, sorry. Let me reopen :)
username_3: I've just created a PR for this loader to provide support for ES6 modules in Webpack 1: https://github.com/username_2/buble-loader/pull/22
Status: Issue closed
|
beyondcode/laravel-confirm-email | 356363746 | Title: Users can't log in after logout
Question:
username_0: Sometimes my users have this issue where, after they log out, they can't log back in! They get the "These credentials do not match our records." error even though they are registered and have confirmed their registration! I think this has something to do with sessions maybe. Did you ever have this problem?
Answers:
username_1: This should usually come from the Laravel authentication. I don't see how this package could affect that, as it has its own error messages regarding verification.
Status: Issue closed
|
fractal-code/meteor-azure | 713267869 | Title: force-ssl
Question:
username_0: info: Targetting 32-bit Node architecture
error: The "force-ssl" package is not supported. Please read the docs to configure an HTTPS redirect in your web config.
npm ERR! code ELIFECYCLE
npm ERR! errno 1
Status: Issue closed
Answers:
username_1: You need to point to the custom web config file in your deploy command:
```sh
meteor-azure --settings 'example/path/settings.json' --web-config 'example/path/web.config'
```
If you want to configure HTTPS redirect, remove the force-ssl package & add the rule to your web config. This is documented [here](https://meteor-azure.readthedocs.io/en/latest/configuration.html#custom-web-config) and there are [samples available](https://github.com/fractal-code/meteor-azure-web-config/tree/master/samples) to copy directly if you are new to IISNode.
Hope that helps. |
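For reference, the redirect rule usually takes roughly this shape. This is only a sketch; the samples linked above are the authoritative versions, and on Azure App Service the HTTPS check may need to use the forwarded/`X-ARR-SSL` headers instead of `{HTTPS}`:
```xml
<rule name="Force HTTPS" stopProcessing="true">
  <match url="(.*)" />
  <conditions>
    <add input="{HTTPS}" pattern="^OFF$" />
  </conditions>
  <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
</rule>
```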
rfordatascience/tidytuesday | 452684950 | Title: Meteorite Landings
Question:
username_0: [https://data.nasa.gov/Space-Science/Meteorite-Landings/gh4g-9sfh/data](url)
For understanding meteorite classifications, the wikipage is pretty helpful: [https://en.wikipedia.org/wiki/Meteorite_classification](url)
Answers:
username_1: Using this week thanks!
Status: Issue closed
|
lykoss/lykos | 188298789 | Title: Cannot view old stats with !gstats if game mode player limits are changed
Question:
username_0: The alpha game mode was recently changed to require 10p, which makes it impossible to view 7 to 9p alpha game stats with `!gstats`:
```
17:57:04 -- lykos: Please enter an integer between 10 and 24.
```
Answers:
username_1: This happens with removed roles and game modes too. For player counts it is a little simpler: we can probably just allow invalid numbers, and it would return no results, which should be fine (perhaps it should only say the range is invalid after getting no results back).
For roles / game modes we do autocorrect. Maybe on startup we could get a list of valid things to autocorrect to for those. Or just always allow anything to go through (while still attempting autocorrection - I don't think we have any fake roles / game modes which are a partial match of real ones)
username_2: Fixed in 4cf81714262b4190104b8db669c7504b48f166f3 (removed roles and removed game modes will now show up too provided there's some local bot config to re-enable stats for them)
Status: Issue closed
|
prometheus/mysqld_exporter | 393286500 | Title: Can mysqld_exporter read a config file?
Question:
username_0: I use a cloud database service (like AWS RDS).
I want to run multiple mysqld_exporter programs on one cloud host to monitor multiple RDS instances.
But mysqld_exporter reads the environment variable DATA_SOURCE_NAME.
So I must use Docker to run multiple mysqld_exporter programs.
Can mysqld_exporter read a config file to get DATA_SOURCE_NAME?
Answers:
username_1: For usage questions/help, please use our [community](https://prometheus.io/community/). On the mailing list, more people are available to potentially respond to your question, and the whole community can benefit from the answers provided.
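For what it's worth, a common way to do this without Docker is simply one exporter process per DSN, each on its own port; a sketch (hosts, credentials and ports are placeholders):
```sh
DATA_SOURCE_NAME="user:password@(rds-host-1:3306)/" ./mysqld_exporter --web.listen-address=":9104" &
DATA_SOURCE_NAME="user:password@(rds-host-2:3306)/" ./mysqld_exporter --web.listen-address=":9105" &
```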
Status: Issue closed
|
pagehelper/pagehelper-spring-boot | 262081249 | Title: ClassCastException error on startup when using YAML configuration
Question:
username_0: The project uses Spring Boot 1.5.7. With the following pagehelper configuration in application.yml,
```yaml
pagehelper:
auto-dialect: true
close-conn: true
```
startup fails with org.springframework.beans.factory.BeanCreationException: Error creating bean with name '**com.github.pagehelper.autoconfigure.PageHelperAutoConfiguration**': Invocation of init method failed; nested exception is **java.lang.ClassCastException**: java.lang.Boolean cannot be cast to java.lang.String.
After puzzling over this for several days, I found that the problem is in the **PageHelperProperties** class.
My current workaround is
```yaml
pagehelper:
auto-dialect: !!str true
close-conn: !!str true
```
But I would still ask the author to take a look at this issue.
Answers:
username_1: OK, I will handle this in the next version.
username_2: With pagehelper-spring-boot-starter:1.2.3 this error is still reported; my current workaround is also
``` yaml
reasonable: !!str true
support-methods-arguments: !!str true
```
username_3: Adding the com.google.guava:guava:19.0 dependency also solves this problem.
username_4: With pagehelper-spring-boot-starter:1.2.3 the following configuration throws the error
```yml
pagehelper:
  page-size-zero: true
```
I see the following code snippet
```java
@Bean
@ConfigurationProperties(prefix = PageHelperProperties.PAGEHELPER_PREFIX)
public Properties pageHelperProperties() {
return new Properties();
}
```
I'm not sure what purpose the author has for writing these extra properties. When Spring processes `application.yml` it registers many type-conversion plugins, including `StringToBooleanConverter`, which translates `true` into a `Boolean` and injects it into the `Properties` object. Note that when handling the properties the author does not convert the keys but simply adds everything to the properties:
```java
properties.putAll(pageHelperProperties());
properties.putAll(this.properties.getProperties());
```
When MyBatis reads the properties, it force-casts the values to String before injecting them according to the method type, and that cast is where the Boolean-to-String error occurs:
jar: mybatis-3.4.5.jar, class org.apache.ibatis.mapping.CacheBuilder, line 147
```java
String value = (String) entry.getValue();
```
username_1: This is a Spring Boot version problem; don't use versions 1.5.0 or 1.5.1.
username_5: This problem also occurs with Spring Boot 1.5.14: `java.lang.ClassCastException: java.lang.Boolean cannot be cast to java.lang.String`.
```yaml
pagehelper:
  helper-dialect: mysql
  reasonable: true
  support-methods-arguments: true
  params: count=countSql
```
The strange thing is that `reasonable: true` works fine, while `support-methods-arguments: true` throws the error. Using `support-methods-arguments: 'true'`, or `support-methods-arguments: !!str true`, or even `supportMethodsArguments: true` works around it. Please take another look at this issue.
department-of-veterans-affairs/va.gov-team | 715147156 | Title: Project Kickoff [Feature-Name]
Question:
username_0: ## Steps to complete Project Kickoff
- [ ] Product Manager: create this issue and fill in feature name in the title and other bolded information appropriately
- [ ] Attach all relevent artifacts to the ticket
- [ ] Link to this issue once created in [#vfs-platform-support](https://dsva.slack.com/channels/vfs-platform-support) in Slack; tag @ <NAME> and @ AndreaHewitt
- [ ] Shira will schedule meeting with VSP reviewers and **requesting team** attendees (as listed below)
- [ ] VSP <> **requesting team** kickoff meeting completed and [Platform Collaboration Point Tracker](https://docs.google.com/spreadsheets/u/1/d/1d219oL1zCvCvnv1Bx-dI-GMzwgbarLv9_bzMSa3ULjA/edit#gid=1341642809) is updated
- [ ] All requesting team attendees complete brief [VSP Collaboration Cycle Feedback](https://adhoc.optimalworkshop.com/questions/20260uu8-0-0/questions/before) survey
## Artifacts - _please bring the following to the kickoff meeting_
- Link to rough draft of product outline (All our product outlines live in Confluence so I don't have it in GH)
- Explanation of the problem space
- We are creating a list of Oauth Applications that Veterans can connect to in order to improve discoverability for Veterans, as well as give them information on the different applications.
- Any other artifacts you have so far
- There are a few, but I will link to relevant info:
- https://github.com/department-of-veterans-affairs/va.gov-team/tree/master/products/identity-personalization/profile/connected-apps-data
- Original design ticket: https://github.com/department-of-veterans-affairs/va.gov-team/issues/9238
- New ticket to incorporate within the Learning Center: https://github.com/department-of-veterans-affairs/va.gov-team/issues/14279
- Draft PR for App Directory: https://github.com/department-of-veterans-affairs/vets-website/pull/14149
- You **do not** need to prepare a presentation
## Meeting attendees from **requesting team**
- Product Manager (required): **@username_0**
- DEPO Product Lead (required): **<NAME>**
- Entire VFS team (recommended): **@LeKeve, @bradenshipley, @ **
**Do you anticipate this initiative requiring updates to and/or creation of unauthenticated static content pages?** NO
## Scheduling
- Please reference the [VSP Collaboration Meetings Calendar](https://calendar.google.com/calendar/embed?src=adhocteam.us_4dn3o77gcm5e3vbiedlha96tc0%40group.calendar.google.com&ctz=America%2FNew_York) (times shown are ET)
- Note an available time slot within a recurring "[CALENDAR BLOCK] Review Group Office Hours" slot that works for your team and is at least 2 business days from now
- Write that preferred time slot here (on this ticket), and VSP will send you a meeting invite
- _If no available time slots work for your team, write your availability and requirements here and Shira/Andrea will reach out as needed_
The preferred time slot is Thursday, 1:30-2:30 pm ET.
Answers:
username_1: @username_0 - we need to see the Product Outline to schedule the kickoff. Would you be able to download it and attach here as a comment ?
username_0: sure
[VAExternal-2152183501-051020-1701-14.pdf](https://github.com/department-of-veterans-affairs/va.gov-team/files/5330063/VAExternal-2152183501-051020-1701-14.pdf)
username_1: @username_0 could you also add more info to the title about your team? I don't think "App Directory" is enough information for our review team.
username_0: Hello @username_1, what kind of information are you looking for in the title? Happy to provide whatever is needed.
username_0: How is that?
username_1: Perfect. Thank you!
Status: Issue closed
|
Opentrons/opentrons | 366458184 | Title: Manage Robot Connection: Select Type of 802.1x Authentication
Question:
username_0: As a Run App user trying to connect my robot to a wifi network, I would like to be able to select my network's authentication method, so that I can enter the appropriate auth info.
## Acceptance Criteria
- [ ] If user selects a network with 802.1x authentication, display a modal asking them to select their network's authentication method.
- [ ] Modal includes a dropdown with supported 802.1x auth methods (will be returned by API)
- Note this list currently includes: EAP-TLS, PEAP/EAP-MSCHAPv2 (eduroam), TTLS/EAP-MSCHAPv2, TTLS/EAP-TLS, TTLS/MD5
## Design and Copy
#### Copy
- Title: "Select Authentication Method"
- Body: "The network '[network name]' requires 802.1x authentication. Please select your network's authentication method:"
- Design:
https://app.zeplin.io/project/5aa97729db58a2192f10d0d6/screen/5baa41a9398c9e25fb4ac8b6
Status: Issue closed |
scarfacedeb/rails_admin_globalize_field | 507769754 | Title: Translated field names
Question:
username_0: Hi. Is it possible to use localized attribute names inside tab here?

`Name` and `subtitle` are translated for my default locale (not en) via `ru/active_record/attributes/MY_MODEL/name` and `ru/active_record/attributes/MY_MODEL/subtitle`. Should I add anything else in locale files?
Answers:
username_1: @username_0 Yes, it's possible. See: #32.
username_0: Oh, so sorry for my inattention. Thanks a lot!
Status: Issue closed
|
serde-rs/serde | 330429932 | Title: Conditionally enabling tagged enums for non "self-describing" (de)serializers.
Question:
username_0: Consider the following `enum` declaration which I am using for dynamic typing:
```rust
#[derive(Serialize, Deserialize)]
pub enum Property {
Boolean(bool),
Integer(i64),
Number(f64),
Text(String),
}
```
This is used in many places in my program, usually within a `Properties` struct which encapsulates a `String` -> `Property` map. I have implemented `Serialize` and `Deserialize` for `Properties` which simply delegates to its inner map. The rest of `Properties` implementation is irrelevant at this time, but I can add it later if needed.
Here are two scenarios where `Properties` is commonly used:
1. **Small configuration or metadata files.** Human-readable and human-editable formats like JSON or TOML would be nice in this scenario. In such formats, using tagged enums would make this less human-friendly and a bit redundant, as each enum variant can get unambiguously (de)serialized as its inner value if I'm using the formats I've previously noted, since they both can reasonably support `Deserializer::deserialize_any`. I can provide my own implementation to convert most types to one of those variants, or even potentially use `#[serde(untagged)]`, though I don't think it will automatically cast other integer/float layouts and would therefore not work in my situation.
2. **Files containing massive arrays where each element contains a `Properties` map.** Since the most reasonable way for users to interact with the elements in the array is through a specialized program, I would like to be space-efficient by serializing these arrays using `Bincode` or another compact binary format. Many of the most efficient binary formats do not support `deserialize_any` because they aren't "self-describing" (the term used in `deserialize_any` documentation); they assume you know the structure of the data, and I do... for the most part. In this scenario, tags are necessary; untagged enums would fail because they don't sufficiently describe their contents.
Currently, I can not provide a single implementation of `Serialize` and `Deserialize` to cover both cases in the way that I have described, as it is impossible for the traits' functions to determine whether tags are necessary for the given format.
This has implications for situations other than mine. Right now, if you serialize an untagged enum using Bincode, it will actually write the enum values without a tag. Once you try to deserialize it, it won't know what to do because it can't support `deserialize_any`. While this can be written off as a logic error by the user of the library who should know better, this is a reasonable thing that could happen to an unsuspecting user who may not actually know what went wrong.
Now that the problems are documented, here's my potential solution. Take advantage of Rust's strong typing. Deprecate `Deserializer::deserialize_any` and move the function to its own trait which is a child of `Deserializer`. Instead of having to document the fact that it may be unsupported by some deserializers, make the Rust compiler enforce that it is not accidentally called by an unsuspecting user. Then, a second method could be added to `Deserialize` and `Serialize` to handle cases where `deserialize_any` is (or will be) available, with a default implementation that calls the regular function.
Answers:
username_1: For deserialization you could run `deserialize_any` and if that fails, do the other deserialization. I don't think serialization allows similar tricks though.
We could add something in the spirit of `is_human_readable`. A `is_self_describing` method that has a reasonable default (`false`) seems totally doable.
username_0: @username_1 I was aware of the first option you suggest, but I didn't think that was reliable enough. Once a deserializer returns an error, some of them may assume (reasonably) that you're done. You're going to propagate that error and they won't have to properly put back the data they just tried to parse, which means your next deserialize may not be operating on the data that it should. It feels a lot like a hack, and like you say, this doesn't solve the problem for serialization.
Your second suggestion was somewhere along the lines of the idea that I had while writing this. It would work, but taking advantage of the right language features would definitely lead to better design. Adding a method to tell you whether a method is available does exactly the same thing as making a trait to tell you whether a method is available, but it is handled at run-time instead of compile-time.
username_0: Except for the behavior change that I proposed for `#[serde(untagged)]`, I'm not aware of how this could break any existing usages. To me it seems like anything still using the old interface would still compile and run normally as long as that deprecated method exists, so a major version bump wouldn't be required.
username_2: Thanks for the issue. Data structures that behave differently across different formats is something Serde's design has never supported well. Would you be interested in prototyping your suggestion with an any-deserializable separate trait in a PR?
username_0: Sure, it'll be my first time going through the serde source, but I think I can put something together.
username_0: It's been a while and I've been focusing on other things, but I'd eventually like to solve this problem. I don't think this is the right way to do it at this time, and I have another idea. Since this issue is specific to the way enums are (de)serialized, formats should be able to override the serialization method used, for example, if they require enums to be tagged.
A quick and dirty way would be to make an attribute for such formats to set, but the default provided schemes aren't efficient in many formats (bincode would be a lot better if integer tags can be used). For future portability, the best solution would be to add a provided method, `serialize_enum`, to `Serializer`, which would default to the implementation we currently have. The issue (which I suspect is the reason why we can only derive for enums) is that they can't know about all of the variants at runtime, much less access the internal ordinals or assign a unique key to each one.
username_0: Perhaps there is a way to do that with procedural macros, but that would mean `derive`ing another thing to use enum serialization.
Status: Issue closed
username_0: I think I've been overthinking this a bit too much - I'm going to try to come up with a better solution and propose it later.
username_3: @agausmann
Sorry to rehash a 3 year-old (closed) issue, but I'm currently running into the exact same problem and from what I've seen there are no documented solutions.
I have an enum that has to be deserialized from TOML for user configuration and also serialized/deserialized using Bincode internally. When I add internal tagging to the enum, Bincode _deserialization_ fails at runtime with the error `Bincode does not support the serde::Deserializer::deserialize_any method`. If I remove internal tagging, I can't use it with TOML.
```rust
#[derive(Serialize, Deserialize, Debug, Clone)]
// #[serde(tag = "type")]
// ^^ need this to (de)serialize the "Specified" variant for TOML
// ... all other ones work by default
pub enum Foo {
Specified { string_field: String },
Active,
All
}
```
This is a table with all the interactions.
| | internal tagging | no internal tagging |
| --- | --- | --- |
| toml serialization | yes | no |
| toml deserialization | yes | no |
| bincode serialization | yes | yes |
| bincode deserialization | no | yes |
The only "solution" I could think of was duplicating the enum, using internal tagging on one and not the other, and then using `mem::transmute` to convert it into the original type. However, this wouldn't actually work since the stored tag would change the memory layout of the object.
I'm at a loss for other solutions, and would like to avoid writing any custom deserialization code if possible.
Did you end up finding a reasonable solution for your problem?
Thanks
username_3: Thanks for the advice! I found [this](https://gitlab.com/mexus/fields-converter/) crate that lets you auto-impl the `From` trait between two structs/enums, and now the code looks like this:
```rust
// NOTE KEEP THESE ENUMS IN SYNC
#[derive(Serialize, Deserialize, Debug, Clone)]
pub enum Foo {
Specified { string_field: String },
Active,
All
}
// NOTE KEEP THESE ENUMS IN SYNC
#[derive(Serialize, Deserialize, Debug, Clone, MoveFields)]
#[destinations("Foo")]
#[serde(tag = "type")]
pub enum FooTOMLCompatible {
Specified { string_field: String },
Active,
All
}
```
The crate also allows you to duplicate the definition with a different name, but unfortunately there was no way to give the duplicate public visibility so I couldn't use it.
The default `Foo` value had to be read into a `Config` struct, so I made this function:
```rust
pub fn parse_default_foo<'de, D>(input: D) -> Result<Foo, D::Error> where D: serde::Deserializer<'de> {
let foo_toml: FooTOMLCompatible = Deserialize::deserialize(input)?;
Ok(foo_toml.into())
}
```
and put this in the config struct
```rust
#[derive(Serialize, Deserialize)]
struct Config {
...
#[serde(deserialize_with = "parse_default_foo")]
pub default_foo: Foo
}
``` |
wesaou/Zion | 296603210 | Title: Feedback
Question:
username_0: You're making a lot of progress! Some quick observations
- [ ] Checkout line 59 of Zion.py, update code to use user.txt, and change pw
- Investigate pre-commit hooks that will help save you from yourself https://github.com/awslabs/git-secrets looks promising
- Setup.py is valuable, seems intuitive. Look at encrypting the user.txt file so that if someone accesses it, they can't do much with it
- User.txt is just plain lines, is it worthwhile to store as YAML or JSON? If you needed to share the creds with another system, it may be easier to read the file as one of those formats
- Instantiate your user credentials as an object rather than plain vars; it'll provide you greater control over the data (see the sketch after this list)
- break your new functionality into separate files
- https://github.com/wesaou/Zion/blob/master/Zion.py#L159 what if the user is a woman? |
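A minimal sketch of the structured-file and credentials-object suggestions together; the file name, keys and fields here are assumptions for illustration, not taken from the repository:
```python
import json

class Credentials:
    def __init__(self, username: str, password: str):
        self.username = username
        self.password = password

    @classmethod
    def load(cls, path: str = "user.json") -> "Credentials":
        # credentials live in one structured file instead of plain lines
        with open(path) as fh:
            data = json.load(fh)
        return cls(data["username"], data["password"])

creds = Credentials.load()
print(creds.username)
```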
ros-perception/image_pipeline | 454203926 | Title: Is it possible to save the image timestamps?
Question:
username_0: Is it possible to save the image timestamps and the name of the each image associated with it?
Answers:
username_0: Is it possible to save the image timestamps and the name of the each image associated with it?
username_1: Can you give more information? This is a little ambiguous, a few examples would be helpful.
An image is also like any other file in your filesystem, there will be a write time associated with it.
Status: Issue closed
|
Tsjippy/ChromecastPlugin | 496786485 | Title: Sets volume to 50% on start
Question:
username_0: Hi!
I have 2 absolutely same chromecast devices (MiBox S).
And when plugin starts it sets volume on one of them to 53%, but another one always works correctly.
Tried to set "Adjust volume" to true or false, but it doesn't help.
Logs:
2019-09-22 17:21:29.526 Status: (Chromecast Экран) Entering work loop.
2019-09-22 17:21:29.527 Status: (Chromecast Экран) Initialized version 4.5.0, author 'Tsjippy'
2019-09-22 17:21:29.779 Status: (Chromecast Экран) Checking if images are loaded
2019-09-22 17:21:29.830 Status: (Chromecast Экран) Starting file server on port 8000
2019-09-22 17:21:29.831 Status: (Chromecast Экран) Checking for available chromecasts
2019-09-22 17:21:29.779 Error: (Chromecast Экран) Error on line 328 Error is module 'pip' has no attribute 'get_installed_distributions'
2019-09-22 17:21:39.530 Status: (Chromecast Экран) Found these chromecasts: 'Телевизор в гостиной', 'Экран'
2019-09-22 17:21:39.649 Status: (Chromecast Экран) Connected to 'Экран'
2019-09-22 17:21:39.649 Status: (Chromecast Экран) Registering listeners for 'Экран'
2019-09-22 17:21:39.706 Status: (Chromecast Экран) Done registering listeners for 'Экран'
2019-09-22 17:21:45.674 Status: (Chromecast Гостиная) Started.
2019-09-22 17:21:47.806 Status: (Chromecast Гостиная) Entering work loop.
2019-09-22 17:21:47.807 Status: (Chromecast Гостиная) Initialized version 4.5.0, author 'Tsjippy'
2019-09-22 17:21:48.057 (Chromecast Гостиная) Checking devices for 'Телевизор в гостиной'
2019-09-22 17:21:48.071 (Chromecast Гостиная) Created uservariable for 'Телевизор в гостиной'
2019-09-22 17:21:48.071 (Chromecast Гостиная) Created 'Status' device for chromecast 'Телевизор в гостиной'
2019-09-22 17:21:48.073 (Chromecast Гостиная) Device Image update: 'Chromecast', Currently 0, should be 144
2019-09-22 17:21:48.088 (Chromecast Гостиная) Created 'Volume' device for chromecast 'Телевизор в гостиной'
2019-09-22 17:21:48.089 (Chromecast Гостиная) Device Image update: 'Chromecast', Currently 0, should be 144
2019-09-22 17:21:48.104 (Chromecast Гостиная) Created 'Title' device for chromecast 'Телевизор в гостиной'
2019-09-22 17:21:48.105 (Chromecast Гостиная) Device Image update: 'Chromecast', Currently 0, should be 144
2019-09-22 17:21:48.119 (Chromecast Гостиная) Created 'App' device for chromecast 'Телевизор в гостиной'
2019-09-22 17:21:48.121 (Chromecast Гостиная) Device Image update: 'Chromecast', Currently 0, should be 144
2019-09-22 17:21:48.137 (Chromecast Гостиная) Devices check done
2019-09-22 17:21:48.150 (Chromecast Гостиная) Found uservariable for 'Телевизор в гостиной'
2019-09-22 17:21:48.151 (Chromecast Гостиная) Port 8000 is already in use
2019-09-22 17:21:48.044 Status: (Chromecast Гостиная) Checking if images are loaded
2019-09-22 17:21:48.150 Status: (Chromecast Гостиная) Starting file server on port 8000
2019-09-22 17:21:48.151 Status: (Chromecast Гостиная) Checking for available chromecasts
2019-09-22 17:21:48.043 Error: (Chromecast Гостиная) Error on line 328 Error is module 'pip' has no attribute 'get_installed_distributions'
**2019-09-22 17:21:57.690 (Chromecast Гостиная) Updated volume to 53**
2019-09-22 17:21:57.532 Status: (Chromecast Гостиная) Found these chromecasts: 'Телевизор в гостиной', 'Экран'
2019-09-22 17:21:57.630 Status: (Chromecast Гостиная) Set volume of 'Телевизор в гостиной' to 50%
2019-09-22 17:21:57.630 Status: (Chromecast Гостиная) Connected to 'Телевизор в гостиной'
2019-09-22 17:21:57.630 Status: (Chromecast Гостиная) Registering listeners for 'Телевизор в гостиной'
2019-09-22 17:21:57.630 Status: (Chromecast Гостиная) Done registering listeners for 'Телевизор в гостиной' |
txperl/PixivBiu | 1056254386 | Title: Need help: callback not found
Question:
username_0: When I get to the step of finding the code, the filter doesn't show any callback (I made sure to open the Network tab before sending the request; without the filter there are plenty of responses).


Answers:
username_1: Did the page redirect after you logged in? If it did, check whether you have enabled the preserve-log option.
username_0: It redirected; the page looks like this

Could you tell me where the preserve-log option is? I'm not familiar with it.

username_1: Turning on that "Preserve log" option should do it.
username_0: I tried it and force-refreshed the page again, but there is still nothing

username_1: Did you click "Continue using this account" on the page?
username_0: I hadn't clicked it (⊙﹏⊙), thanks for the reminder
username_1: Is it solved?
username_0: Solved, thanks
Status: Issue closed
username_1: ok |
LN-Zap/zap-desktop | 294203897 | Title: Connect dialog remains open when request payment link (from web) is clicked
Question:
username_0: 
It should hide itself when that happens, or stay open but invisible, and once the payment is done it can come back again.
For cases like this I personally use a popup stack: the payment would be just another popup that is removed from the stack once completed, without messing with whatever was opened before.
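A tiny sketch of the popup-stack idea (names and shapes are illustrative, not Zap's actual components):
```typescript
type Popup = { id: string; component: string };

const stack: Popup[] = [];

const openPopup = (popup: Popup): void => {
  stack.push(popup);
};

const closeTopPopup = (): Popup | undefined => stack.pop();

// opening the payment request does not disturb whatever was open underneath
openPopup({ id: 'connect', component: 'ConnectDialog' });
openPopup({ id: 'pay', component: 'PaymentRequest' });
closeTopPopup(); // payment done, the connect dialog is on top again
```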
Answers:
username_1: 👍
Status: Issue closed
|
processing/processing-android | 409630915 | Title: Copy parameters are reversed.
Question:
username_0: Version 4.0.4 of the mode
According to processing's reference,
`copy(sx, sy, sw, sh, dx, dy, dw, dh)`
is meant to take the region of the screen designated by a rect with the position of `sx, sy` and a size of `sw, sh` and copy it over to `dx, dy` with a size of `dw, dh`
In processing for android version 4.0.4, the order of parameters are
`copy(dx, dy, dw, dh, sx, sy, sw, sh)`
Demo code
```java
void setup(){
fullScreen();
background(0);
point(30,height-1);
}
void draw(){
stroke(color(random(255),random(255),random(255)));
point(random(width),height-1);
copy(0,1,width,height-1,0,0,width,height-1);
//copy(0,0,width,height-1,0,1,width,height-1); //fix for android
}
```
Answers:
username_1: Thanks for the bug report. I can confirm this happens with the default renderer. Also, no output is generated when one uses the OpenGL renderer, either with the original order of parameters, or with the fix. So this may be pointing to another bug.
username_1: Fixed with https://github.com/processing/processing-android/commit/dbacaba1fb36e63e13c254501448b32fe0e683d3
Status: Issue closed
|
googleapis/google-cloud-python | 444891081 | Title: google.api_core.exceptions.ServiceUnavailable: 503 Connect Failed during automl_v1beta1
Question:
username_0: The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "Code.py", line 27, in <module>
print (get_prediction(content, project_id, model_id))
File "Code.py", line 19, in get_prediction
request = prediction_client.predict(name, payload, params)
File "C:\ProgramData\Anaconda3\lib\site-packages\google\cloud\automl_v1beta1\gapic\prediction_service_client.py", line 311, in predict
request, retry=retry, timeout=timeout, metadata=metadata
File "C:\ProgramData\Anaconda3\lib\site-packages\google\api_core\gapic_v1\method.py", line 143, in __call__
return wrapped_func(*args, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\google\api_core\retry.py", line 270, in retry_wrapped_func
on_error=on_error,
File "C:\ProgramData\Anaconda3\lib\site-packages\google\api_core\retry.py", line 179, in retry_target
return target()
File "C:\ProgramData\Anaconda3\lib\site-packages\google\api_core\timeout.py", line 214, in func_with_timeout
return func(*args, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\google\api_core\grpc_helpers.py", line 59, in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
File "<string>", line 3, in raise_from
google.api_core.exceptions.ServiceUnavailable: 503 Connect Failed
```
Answers:
username_1: @beccasaurus Are there any known issues with the Auto ML API which would explain this issue? The GAPIC `Predict` method is marked as `non_idempotent`, which means that we don't automatically retry for 503 responses.
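Until that is resolved, one caller-side workaround is to pass an explicit retry that treats 503 as retryable; a sketch against the v1beta1 client from the traceback (project/model IDs and the payload are placeholders):
```python
from google.api_core import exceptions, retry
from google.cloud import automl_v1beta1

client = automl_v1beta1.PredictionServiceClient()
name = client.model_path("my-project", "us-central1", "my-model")

# retry only on ServiceUnavailable (503), since Predict is not retried by default
retry_on_unavailable = retry.Retry(
    predicate=retry.if_exception_type(exceptions.ServiceUnavailable),
    initial=1.0, maximum=30.0, deadline=120.0,
)

payload = {"text_snippet": {"content": "some text", "mime_type": "text/plain"}}
response = client.predict(name, payload, retry=retry_on_unavailable)
print(response)
```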
Status: Issue closed
username_2: The API has now gone GA, but I've been unable to reproduce this and haven't seen this behavior on beta or v1 endpoints. Going to close this out unless you are still having issues.
malpedia/feedback | 729674520 | Title: Filter inferred references by exclusiveness
Question:
username_0: **Is your feature request related to a problem? Please describe.**
For some actors, references may be impure because they have common tools (mimikatz, Cobalt Strike, Meterpreter, ...) attributed to them. If malware families are easily identified as non-exclusive (reused among several actors), they should be listed below the "core" references imported from MISP or inferred from signature tools.
**Describe the solution you'd like**
Split the references into "core" references and "related" references depending on the exclusiveness of families. |
jlippold/tweakCompatible | 524353987 | Title: `LatchKey` working on iOS 12.4
Question:
username_0: ```
{
"packageId": "ch.mdaus.latchkey",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "ch.mdaus.latchkey",
"deviceId": "iPhone11,6",
"url": "http://cydia.saurik.com/package/ch.mdaus.latchkey/",
"packageName": "LatchKey",
"packageVersionIndexed": true,
"iOSVersion": "12.4",
"category": "Tweaks",
"repository": "Maxwell Dausch's Repo",
"name": "LatchKey",
"installed": "2.1",
"packageInstalled": true,
"packageStatusExplaination": "This package version has been marked as Not working based on feedback from users in the community. The current positive rating is 0% with 0 working reports.",
"id": "ch.mdaus.latchkey",
"commercial": false,
"packageIndexed": true,
"tweakCompatVersion": "tweakcompatible-zebra-1.1.3",
"shortDescription": "Move and theme the Face-ID lock glyph",
"latest": "2.1",
"author": "<NAME>",
"packageStatus": "Not working"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": ""
}
``` |
scrapy/scrapy | 1092335721 | Title: Ja3 algorithm
Question:
username_0: + When I visited a website, it recognized me as a crawler
+ It is based on the ja3 algorithm. [ja3](https://ja3er.com/json)
+ Because scrapy has a fixed hash value every time it visits
+ Is there a way to modify ssl like requests?
---
+ Like this
```python
import random
import requests
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.ssl_ import create_urllib3_context
ORIGIN_CIPHERS = ('DH+3DES:RSA+3DES:ECDH+AES256:DH+AESGCM:DH+AES256:DH+AES:ECDH+AES128:'
'DH+HIGH:RSA+AESGCM:ECDH+3DES:RSA+AES:RSA+HIGH:ECDH+AESGCM:ECDH+HIGH')
class DESAdapter(HTTPAdapter):
def __init__(self, *args, **kwargs):
CIPHERS = ORIGIN_CIPHERS.split(':')
random.shuffle(CIPHERS)
CIPHERS = ':'.join(CIPHERS)
self.CIPHERS = CIPHERS + ':!aNULL:!eNULL:!MD5'
super().__init__(*args, **kwargs)
def init_poolmanager(self, *args, **kwargs):
context = create_urllib3_context(ciphers=self.CIPHERS)
kwargs['ssl_context'] = context
return super(DESAdapter, self).init_poolmanager(*args, **kwargs)
def proxy_manager_for(self, *args, **kwargs):
context = create_urllib3_context(ciphers=self.CIPHERS)
kwargs['ssl_context'] = context
return super(DESAdapter, self).proxy_manager_for(*args, **kwargs)
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36 Edg/92.0.902.67'}
session = requests.Session()
session.headers.update(headers)
ssl = DESAdapter()
for _ in range(5):
session.mount('https://', adapter=ssl)
result = session.get('https://ja3er.com/json').json()
print(result)
```
Status: Issue closed
Answers:
username_0: Sorry, I think [this](https://docs.scrapy.org/en/latest/topics/settings.html#downloader-client-tls-method) can solve my problem
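For reference, a minimal sketch of that direction: Scrapy (1.8+) also exposes the cipher string used by the default HTTPS handler via the `DOWNLOADER_CLIENT_TLS_CIPHERS` setting, documented near the setting linked above, and changing the cipher order changes the resulting JA3 hash. Unlike the requests adapter shown in the issue, this is fixed once per process rather than per request.
```python
# settings.py (sketch): shuffle the cipher list once at startup
import random

ORIGIN_CIPHERS = ('DH+3DES:RSA+3DES:ECDH+AES256:DH+AESGCM:DH+AES256:DH+AES:'
                  'ECDH+AES128:DH+HIGH:RSA+AESGCM:ECDH+3DES:RSA+AES:RSA+HIGH:'
                  'ECDH+AESGCM:ECDH+HIGH')

ciphers = ORIGIN_CIPHERS.split(':')
random.shuffle(ciphers)

DOWNLOADER_CLIENT_TLS_CIPHERS = ':'.join(ciphers) + ':!aNULL:!eNULL:!MD5'
```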
|
WelsyMC/MinecraftServerInformations | 1009034022 | Title: Can't return the correct value
Question:
username_0: MinecraftServerInformations » Building a new Pinger...
MinecraftServerInformations » Pinging server mypkm.cn with port 25566...
MinecraftServerInformations » Pinged successfully, calling callback !
0/0
null |
Azure/azure-sdk-for-java | 595921757 | Title: Add whitelist include entries to the versioning tooling in order to pin down specific versions
Question:
username_0: Right now the whitelist includes are of the form
`<include><groupId>:<artifactId></include> `
and we'd like to be able to pin a specific version to each include which would change the entries to be
`<include><groupId>:<artifactId>:<version></include>`
The current versioning tooling would be modified to support these with a slightly different tag (x-include-update instead of x-version-update)
`<!-- {x-include-update;<groupId>:<artifactId>;external_dependency} -->`
The reason for this slight modification is that these entries are going to have to be processed a little differently than version entries. Changes would need to be made to the [update_version.py](https://github.com/Azure/azure-sdk-for-java/blob/master/eng/versioning/update_versions.py) script to deal with version changes and the [pom_file_version_scanner.ps1](https://github.com/Azure/azure-sdk-for-java/blob/master/eng/versioning/pom_file_version_scanner.ps1) would need to be updated for verification of entry correctness.
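Put together, a pinned entry under the proposed format would look something like this (groupId/artifactId/version are placeholders):
```xml
<include>com.example:example-lib:1.2.3</include> <!-- {x-include-update;com.example:example-lib;external_dependency} -->
```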
Status: Issue closed |
Ebiroll/RAK811_BreakBoard | 341954083 | Title: Possible to Change this project for WisNode-Lora?
Question:
username_0: Hi username_1
Thanks for your great effort on this project. I would like to know if it is possible and how I could go about changing this project for RAK WisNode-Lora. Its using the same RAK811 and STM32 chip set. It's just the board layout that's different.
Thanks for your response
Answers:
username_1: Hello @username_0 .
I have used it on the WisNode-Lora.
Just remove the init() calls to the GpsInit() and LIS3DH_Init( ); in the function BoardInitPeriph() located in
src/RAK811BreakBoard/board.c
Then you will have to find something else to send in PrepareTxFrame() in src/main.c
GL
username_1: Also this schematic might be helpful. https://github.com/username_1/beer_tracker/blob/master/RAK-wisnode/wisnode-lora_schematic_diagram.pdf
username_0: Thanks, I will try it. I recompiled the app using pio run and found a lot of warnings. When I program my Tracker Board I don't seem to get a response. Is there a blinky app to make sure I didn't kill the board?
username_1: At the end of src/main.c you have a blinky app. Just rename the other main(). Unless you overwrote the bootloader I don't think you ruined your board. And if you did, you can probably flash a new bootloader by using the SWD pins. https://github.com/username_1/esp32_blackmagic. Maybe you just forgot to move the strap after flashing. P1. Boot Switch Pin,
username_0: Is there a better form in which to have these discussions?
I stripped out the whole program in an attempt to get the LED's to blink. I don't think the bootloaded was overridden. It seems to write the bin from 0x08000000. Do you know what the entrypoint for the application is?
The output from running stm32flash from my ununtu pc is:
stm32flash /dev/ttyUSB0 -e 0 -w .pioenvs/rak811/firmware.bin
stm32flash 0.5
http://stm32flash.sourceforge.net/
Using Parser : Raw BINARY
Interface serial_posix: 57600 8E1
Version : 0x31
Option 1 : 0x00
Option 2 : 0x00
Device ID : 0x0429 (STM32L1xxx6(8/B)A)
- RAM : 32KiB (4096b reserved by bootloader)
- Flash : 128KiB (size first sector: 16x256)
- Option RAM : 32b
- System RAM : 4KiB
Write to memory
Wrote address 0x08005ce4 (100.00%) Done.
username_1: Are you using the Trackerboard or the WisNode-Lora? The blinky app only works on the Breakboard/tracker board.
On the wisnode I did some change in src/peripherals/lis3dh.c
In the init function I did this.
//return 0;
I use screen to check serial output. Press reset after starting screen.
screen /dev/ttyUSB0 115200
Also check pio --version
more ~/.platformio/platforms/ststm32/platform.json
I used the flash application located in the stm32flash directory
[olof@atrash RAK811_BreakBoard]$ ./stm32flash -w .pioenvs/rak811/firmware.bin /dev/ttyUSB0
stm32flash 0.5
http://stm32flash.sourceforge.net/
Using Parser : Raw BINARY
Interface serial_posix: 115200 8E1
Version : 0x31
Option 1 : 0x00
Option 2 : 0x00
Device ID : 0x0429 (STM32L1xxx6(8/B)A)
- RAM : 32KiB (4096b reserved by bootloader)
- Flash : 128KiB (size first sector: 16x256)
- Option RAM : 32b
- System RAM : 4KiB
Write to memory
Data size: 60880 bytes
Erasing memory
Wrote address 0x0800edd0 (100.00%) Done.
----------------
Latest versions of everything.. Here is serial output on Wisnode-Lora without any changes in the code.
LIS3DH no ack
LIS3DH no ack
LIS3DH no ack
+++++++++++++++++++++++++++++++
RAK811 BreakBoard soft version: 1.0.2
Selected LoraWAN 1.0.2 Region: EU868
ABP:
Dev_EUI: 60 XX XX XX
DevAddr: 26011FDA
NwkSKey: <KEY> XX
AppSKey: DD XX XX XX XX XX XX
GpsGetLatestGpsPositionDouble ret = 0
username_0: I use Ubuntu, so I installed stm32flash using apt-get. My PlatformIO version is 3.6.0a7.
I'm using the tracker board v2.0. I also have a WisNode, but that's another story.
I managed to compile the code a couple of days ago and had the tracker running, connected to TTN. Then I wanted to get the WisNode to work. When that did not succeed I thought I would see if I could at least recompile the Tracker software in an attempt to get back to some point where things worked. I have now managed to get a tracker board with both LEDs on and nothing happening on the serial port. Is there a .bin I could burn just to make sure my board is still fine? How do I get back to a controllable state?
username_2: For the STM32 you must have this patch: https://github.com/Formlabs/stm32flash/commit/8c4aa650bffaf98e96d1b6065ab6e76c43150d8a
username_3: I also have the RAK WisNode. It would be nice to have a separate project folder without the hassle of adapting code.
username_4: I can run it on my WisNode just by applying https://github.com/username_1/RAK811_BreakBoard/issues/5#issuecomment-405612921. I use Debian Bullseye.
Flashing:
```
stm32flash -b 115200 -e 255 -w ~/RAK811_BreakBoard/.pio/build/rak811/firmware.bin -v /dev/ttyUSB0
```
Reading logs:
```
picocom --baud 115200 --omap crcrlf --echo /dev/ttyUSB0
``` |
LiskHQ/lisk-sdk | 559268752 | Title: Block header cache should be refilled if it falls below a certain threshold
Question:
username_0: ### Description
- Currently, if a block is deleted, there is no re-caching until a new block is added, but we want to maintain at least 303 block headers in the cache
### Motivation
In order to compute BFT we use the last 303 block headers, therefore this amount of cache should be maintained.
### Acceptance Criteria
- The threshold should be a constructor option, defaulting to 350 (more than 3 rounds for us); see the sketch below
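A sketch of the intended behaviour (names and shapes are illustrative, not the actual lisk-sdk API):
```typescript
interface BlockHeader {
  height: number;
}

class BlockHeaderCache {
  private headers: BlockHeader[] = [];

  constructor(
    private readonly fetchBelow: (belowHeight: number, count: number) => BlockHeader[],
    // default of 350 keeps more than the 303 headers BFT needs (3+ rounds)
    private readonly minCachedItems: number = 350,
  ) {}

  remove(height: number): void {
    this.headers = this.headers.filter(h => h.height !== height);
    // refill from storage when the cache drops below the threshold
    if (this.headers.length > 0 && this.headers.length < this.minCachedItems) {
      const lowest = Math.min(...this.headers.map(h => h.height));
      const missing = this.minCachedItems - this.headers.length;
      this.headers = [...this.fetchBelow(lowest, missing), ...this.headers];
    }
  }
}
```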
### Additional Information
Status: Issue closed |
pyannote/pyannote-audio | 243311138 | Title: Example with OS database?
Question:
username_0: This looks really nice. I was wondering if you have or plan to make available a quickstart example using an open source database?
Answers:
username_1: Definitely.
Yet, I still have to find such an open-source database. It needs to have the following features:
- 10+ hours of speech (the longer the better)
- 100+ different speakers (the higher the better)
- fine "who speaks when" manual temporal annotations
Any suggestions?
Then, it would just be a matter of creating a `pyannote.database` plugin for this database: https://github.com/pyannote/pyannote-db-template
username_2: Hello,
I've done the db plugin for LibriSpeech corpus. Please try to use it:
https://github.com/username_2/pyannote-db-librispeech
username_1: That's awesome. Thanks @username_2. Will definitely have a a try!
username_1: Same here, except this will probably have to wait until september...
Status: Issue closed
username_1: Latest release of pyannote.audio (v1.0) has tutorials relying on the AMI database.
Closing as I believe this solves the original issue.
I still have to find some time trying the Librispeech plugin, though :( |
geosolutions-it/MapStore2 | 207191350 | Title: Unclear how to add WFS layer to config
Question:
username_0: I want to add a WFS service to a project created with `createdProject.js.`
Adding WMS-services to config.json works like a charm, but I can't figure out how to add a WFS.
Since there is not a lot of documentation, I took a look at the examples in `web/client/examples.`
In `examples/styler/config.json`, a couple of layer objects have a value "WFS" under the property `describeLayer.owsType`, while the properties `type,` `url` and `format` are used to describe a WFS.
```json
{
"map": {
"center": {
"x": 1250000.000000,
"y": 5370000.000000,
"crs": "EPSG:900913"
},
"layers": [
// some WMS layers ..
{
"describeLayer": {
"owsType": "WFS",
"geometryType": "Point"
},
"format": "image/png",
"group": "Vector",
"name": "tiger:poi",
"opacity": 1,
"title": "NY PoI",
"type": "wms",
"url": "http://demo.geo-solutions.it/geoserver/wms",
"visibility": true
},
]
}
}
```
**How can I add a WFS layer to the config file that is queried and displayed in the map?**
It would be handy just to state `"type":"wfs"` and the url, but that does not work.
BTW: First of all thanks guys, great project. I am currently trying to dig into the code and I am not very experienced with web dev.
Answers:
username_1: This is a question for the [developers mailing list](https://github.com/geosolutions-it/MapStore2#communication) not an issue.
I am going to close this one, please report this on the ML.
Status: Issue closed
|
tensorflow/models | 159188638 | Title: SyntaxNet - error trying to train the POS tagger on a new treebank
Question:
username_0: I would like to train the SyntaxNet POS tagger on a new treebank.
Following the instructions in Training the SyntaxNet POS Tagger, first I have edited syntaxnet/context.pbtxt so that the inputs training-corpus, tuning-corpus, and dev-corpus point to the location of my training data (in 10-column CoNLL format). Then, I have run the command as shown, substituting the names of my training/tuning corpus.
This results in the following error:
INFO:tensorflow:Computing lexicon...
F ./syntaxnet/proto_io.h:147] Check failed: input.record_format_size() == 1 (0 vs. 1)TextReader only supports inputs with one record format: name: "/home/username_0/isdt_train.conll"
Any idea about what I'm doing wrong?
Thanks
Answers:
username_1: When you run the training command, don't substitute --training_corpus and --tuning_corpus for the names of your training corpora. The names of the inputs you want to use in the context.pbtxt file should still be 'training-corpus' or 'tuning-corpus', unless you've changed them; the only difference is that they point to a different file. You should be able to run the training command exactly as given.
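For reference, an edited input entry in context.pbtxt normally keeps its name and record_format and only changes the file pattern, roughly like this (based on the stock SyntaxNet context.pbtxt layout; the path is the one from the error above). The "(0 vs. 1)" in the original check failure suggests the record_format line had gone missing from the edited entry:
```
input {
  name: 'training-corpus'
  record_format: 'conll-sentence'
  Part {
    file_pattern: '/home/username_0/isdt_train.conll'
  }
}
```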
username_2: @username_0: Does that fix your problem?
username_0: Yes, thanks. Now I close the issue.
Status: Issue closed
username_3: @username_0 I ran into same issue. I am not sure I understand clearly the fix.
Could you give ma the content of your context.pbtxt ? Especially the part defining input for training, dev and test data ?
And the command you used ?
Thx
username_0: In attachment my context.pbtxt (I changed the file extension because otherwise github complains). The only differences with the original one concern the pathnames of the training-, tuning-, and dev-corpus files.
I have used the command exactly as provided in the documentation.
Hope this helps.
[context.txt](https://github.com/tensorflow/models/files/419918/context.txt) |
spinnaker/spinnaker | 222258677 | Title: Enable/disable instances in service discovery (AWS)
Question:
username_0: Follow spinnaker/clouddriver#1017 as an example.
Answers:
username_1: this already works but relies on eureka, are you asking for a consul integration?
username_0: Yes, consul integration is the ask. It doesn't look like a whole lot of work, potentially an opportunity to make our first contribution to the project.
username_2: We'd love to see this as well. I'm willing to take a whack at it if someone can give me some guidance for the preferred integration style. I noticed that the existing `clouddriver-consul` project is integrated into `clouddriver-google` differently than `clouddriver-eureka` is integrated with `clouddriver-aws`. Any thoughts on approaching this?
username_3: `clouddriver-eureka` & `clouddriver-consul` have differences stemming from how they work at a platform level IIRC. I'd follow the `clouddriver-google` platform if/where possible.
If you have any questions ping me on slack (@username_3 on [join.spinnaker.io](http://join.spinnaker.io))
username_2: I'm sure I'll ping you on slack this week but I just want to clarify that my question was more about the integration points between `clouddriver-eureka`/`clouddriver-aws` and `clouddriver-consul`/`clouddriver-google`. It seems as though both AWS and GC are hooked into in fundamentally different ways and it's unclear to me which pattern I should be following.
username_1: I'd like to see the consul integration follow the ExternalHealthProvider interface used by Eureka and refactor the openstack/gcp integrations as it would reduce these kind of disconnects and provide one way of providing external health.
username_2: It also begs the question as to what Consul integration really looks like. I realized that not everyone uses Consul in the same way. Case in point, the existing `clouddriver-consul` assumes that the user is using the DNS interface to query the agent on other nodes rather than using the `/v1/catalog` and `/v1/health` endpoints on the local agent.
There's also the question of how to abstract network topology. We map a Datacenter in Consul to a VPC in AWS and use WAN communication to let the DCs talk to each other. Any node in any datacenter can have an understanding of what any other node in any other datacenter is and what its status is simply by providing the `?dc=` query parameter. I doubt everyone else is doing it this way.
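For reference, a rough sketch of the local-agent HTTP queries being discussed (service name and datacenter are placeholders):
```sh
# Ask the local agent for healthy instances of a service
curl 'http://127.0.0.1:8500/v1/health/service/my-service?passing'

# The same query scoped to another datacenter via the dc parameter
curl 'http://127.0.0.1:8500/v1/health/service/my-service?dc=us-east-1&passing'
```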
I'm sure there are many other ways that companies are using Consul and it's difficult for me to get my head around generalizing the solution in such a way that we're not writing for our use case. Or maybe that's ok if we're making a first pass at it. I dunno, what do you guys think?
username_0: Another use case example:
We map the datacenter to the spinnaker stack name (and sometimes that means different VPCs, sometimes not).
We primarily utilize the API (via consul template) for service discovery, and use consul DNS in other cases.
username_4: @username_0 @username_2 @username_3,
Is there any active plan to support Consul with AWS in spinnaker? This would be very nice.
username_0: We don't have any active plans, but we've been toying with the idea of supporting development in Q2 2018 if nobody else gets around to it. Right now we're working around it with a combination of lambda functions, jenkins jobs, and canary strategies.
username_4: @username_0 the mix sounds interesting (lambda+jenkins+canary). But how are you able to disable/enable server groups registered in Consul this way :-)?
username_5: Is there an update on this given the suggestion that it was on the roadmap? |
ionepub/ionepub.github.io | 175746251 | Title: Using before and after actions in ThinkPHP
Question:
username_0: 1. They can only be used in controllers.
2. They only take effect for URL access; they do not fire when the method is called directly inside a class or invoked via the A() helper.
3. Parameters cannot be passed to the before/after methods unless the before/after action is called directly.
```php
class demo{
// Before action: runs before the index method executes; "index" can be any other method name
public function _before_index(){
}
public function index(){
}
// After action: runs after the index method
// If the index method contains exit or die, the after action is not executed
public function _after_index(){
}
}
``` |
benadida/helios-server | 53266385 | Title: Add LICENSE.md file etc (if desire open source license).
Question:
username_0: So glad to see this software here! It looks like it's meant to be open source. If so, it would be good to add a LICENSE.md file containing the specific open source license, and add license header comments to the source flies (basically as per http://opensource.org/faq#apply-license).
By the way, I'd be happy to contribute the pull request for all this, if you say what license you'd like (http://opensource.org/licenses/ is a good starting place; see also http://choosealicense.com/).
Best,
-Karl
Answers:
username_1: @username_0 yes thank you! If you're up for providing a PR, I'd love it. Apache License please.
username_0: @username_1, hey -- am actively working on this; PR coming soon, probably next week (delayed by some client work right now).
username_1: thanks @username_0 ! No worries, I'll take your contribution when you have it.
username_2: Okay, @username_1, handing over to you to finish this one but let me or @username_0 know if you have any questions.
username_1: looks good, I have emailed the mailing list to make sure no objections to Apache 2.0, and will merge if no objections.
username_0: Sounds good! Note there's more to be done than just merging this PR -- it adds the LICENSE file, which is the main thing, and then adds a few source file copyright notice headers to show how that's done. But it'll be easier for you than for us to add those notices to the other source files (and the copyright-diviniation.sh script is meant to partly automate that process).
username_1: I have the license file; I think this will be good enough (I know the best practice is every file, but ... one top-level file seems good enough for most projects).
Status: Issue closed
|
LambdaNote/professional-ipv6-feedbacks | 284345794 | Title: Feedback on page 286
Question:
username_0: Please point out any problems. Alternative wording is not necessary. Note that, due to limited manpower, we will most likely not respond on the issue itself. Sorry.
* Points we would especially like you to point out
- Errors in the content
- Topics that need more explanation
- Paragraphs whose meaning cannot be understood
* The following are also welcome (we will grep for them, so just noting that they exist is enough)
- Inconsistent technical terminology
- Spelling mistakes, typos, missing characters
* The following do not need to be pointed out (they will be fixed naturally by March)
- Inconsistent Japanese orthography (kanji and okurigana)
- Layout problems
- Other localized Japanese-language issues<issue_closed>
Status: Issue closed |
minbrowser/min | 214236507 | Title: Forward/Backward swiping not working
Question:
username_0: This app is frustrating. Many times, I cannot go back to the previous page, or forward. Swiping does nothing. There should be another option to move around between web pages
Answers:
username_1: It just seems that it requires a fairly long swipe
username_1: https://github.com/minbrowser/min/blob/d1db4753aff7f4a8c038c1ef54371714e95a6cc9/js/webview/swipeEvents.js#L21
username_2: The swipe gestures have been completely rewritten in the unreleased master branch, so the code @username_1 linked above isn't the same as what's included in 1.5.1.
@username_0 Could you try installing the master branch of Min and seeing if that works better?
username_3: This seems to be fixed in the new version(s), can you confirm @username_2?
Status: Issue closed
|
JuniorUdale/MoreFluff | 540678715 | Title: Meteor Hammer missing a sword icon
Question:
username_0: 
I'll probably linger here a bit, will try to keep reports for known issues to a minimum though
Status: Issue closed
Answers:
username_1: Fixed for the next release, thanks for the report! |
mko-x/docker-clamav | 718693046 | Title: Adjust build for dockerhub to use multi-arch manifest
Question:
username_0: **Actual:**
`docker pull mkodockx/docker-clamav:buster-slim` just has amd64 arch.
**Expected:**
`docker pull mkodockx/docker-clamav:buster-slim` should pull the correct arch for the platform I execute the command on
**Proposal:**
- have a build for each base folder (buster, stretch, main, edge)
- configure the `hooks/post_push` to execute the manifest tool (see below)
```sh
#!/bin/bash
curl -Lo manifest-tool https://github.com/estesp/manifest-tool/releases/download/v0.9.0/manifest-tool-linux-amd64
chmod +x manifest-tool
./manifest-tool push from-args \
--platforms linux/amd64,linux/arm/v7,linux/arm64/v8 \
--template ${repo}docker-clamav:buster-slim-ARCHVARIANT \
--target ${repo}docker-clamav:buster-slim
```
Not sure if this is how the hooks can work, never worked with them myself ;-) Commands for all images can be found in `build-all.sh`
Answers:
username_1: I tried to manually run manifest-tool, that worked so far.
How do you recommend to configure hooks in `hooks/post_push`?
username_1: You may try to install one and check the arch delivered
username_0: I had a look into it and tried to figure out how it could be done. Might not be elegant, I am an Azure DevOps guy ;-)
So for each base image platform an action needs to be created, like the following:
```
# This is a basic workflow to help you get started with Actions
name: CI
# Controls when the action will run. Triggers the workflow on push or pull request
# events but only for the master branch
on:
push:
branches: [ master ]
paths:
- "debian/buster"
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
# This workflow contains a single job called "build"
build:
# The type of runner that the job will run on
runs-on: ubuntu-latest
# Steps represent a sequence of tasks that will be executed as part of the job
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- uses: actions/checkout@v2
- name: Prepare for QEMU
run: sh -c "docker run --rm --privileged multiarch/qemu-user-static:register --reset"
- name: Build and push Docker images
# You may pin to the exact commit or the version.
# uses: docker/build-push-action@ab83648e2e224cfeeab899e23b639660765c3a89
uses: docker/build-push-action@v1
with:
# Username used to log in to a Docker registry. If not set then no login will occur
username: # optional
# Password or personal access token used to log in to a Docker registry. If not set then no login will occur
password: # optional
# Server address of Docker registry. If not set then will default to Docker Hub
registry: # optional
# Docker repository to tag the image with
repository:
# Comma-delimited list of tags. These will be added to the registry/repository to form the image's tags
tags: # optional
# Automatically tags the built image with the git reference as per the readme
tag_with_ref: # optional
# Automatically tags the built image with the git short SHA as per the readme
tag_with_sha: # optional
# Path to the build context
path: # optional, default is .
# Path to the Dockerfile (Default is '{path}/Dockerfile')
dockerfile: # optional
# Sets the target stage to build
target: # optional
# Always attempt to pull a newer version of the image
always_pull: # optional
# Comma-delimited list of build-time variables
build_args: # optional
# Comma-delimited list of images to consider as cache sources
cache_froms: # optional
# Comma-delimited list of labels to add to the built image
labels: # optional
# Adds labels with git repository information to the built image
add_git_labels: # optional
# Whether to push the image
push: # optional, default is true
- name: Push manifest
run: curl -Lo manifest-tool https://github.com/estesp/manifest-tool/releases/download/v0.9.0/manifest-tool-linux-amd64 && chmod +x manifest-tool && ./manifest-tool push from-args --platforms linux/amd64,linux/arm/v7,linux/arm64/v8 --template mkodockx/docker-clamav:buster-slim-ARCHVARIANT --target ${repo}docker-clamav:buster-slim
```
Should trigger on master only if `debian/buster` changes were committed. It prepares the QEMU requirements and then executes a docker build + push (this should be multiplied per `Dockerfile`), and at the end the manifest is pushed to DockerHub. It seems it could all be done within GitHub Actions itself. Pay attention to the docker credentials for DockerHub; they should be stored in repo Settings->Secrets.
username_1: Ah, I just realized that you bumped alpine to version 3.12
There shouldn't be any problems, but it may cause one or another.
username_0: Wrong issue, I assume ;-) But it seems alpine 3.12 was not really the issue
username_0: #80 can solve this as well :)
username_1: This one should be fixed already, shouldn't it?
Status: Issue closed
|
boacausa/webplatform | 621006994 | Title: Configure Jest for JavaScript tests
Question:
username_0: - [ ] Adjust the test configuration to allow testing whether component layouts are correct
* Research Jest configuration options
* Research how to do snapshot tests with styles (see the sketch below)
- [ ] Configure Code Climate so that it receives JavaScript test coverage reports
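A minimal sketch of what the Jest side could look like (all option values here are assumptions, not decisions):
```js
// jest.config.js
module.exports = {
  collectCoverage: true,
  coverageReporters: ['lcov', 'text'], // lcov output is what the Code Climate test reporter consumes
  snapshotSerializers: [],             // add a style-aware serializer here for snapshot tests with styles
};
```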
himmAllRight/himmAllRight-source | 176525300 | Title: Mobile Friendly
Question:
username_0: The css should be edited so that the generated website will be mobile friendly
Status: Issue closed
Answers:
username_0: The css should be edited so that the generated website will be mobile friendly
username_0: This was fixed when I [switched to hugo](http://ryan.himmelwright.net/post/website-transition-to-hugo/).
Status: Issue closed
|
alexbosworth/balanceofsatoshis | 734031686 | Title: [Feature] Add the ability to sync backups to cloud services
Question:
username_0: The current channel backup strategies implemented are:
1. Telegram push
2. Report piping to email
A more comprehensive backup strategy would push to cloud storage providers
This could be implemented with `inotify`: https://gist.github.com/username_0/2c5e185aedbdac45a03655b709e255a3
Ideally backups would be even more comprehensive and low setup than this
A potential package to assist with this would be https://github.com/pkgcloud/pkgcloud#storage but this has the disadvantage of being more cloud storage oriented and does not support common consumer backup services like Dropbox, Box, and iCloud. |
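As a rough illustration only (not something balanceofsatoshis provides), the inotify-style approach could look like this in Node; the paths are placeholders:
```js
const { watch, copyFileSync } = require('fs');

// Example path to lnd's channel backup file; adjust for your node
const backupPath = '/home/user/.lnd/data/chain/bitcoin/mainnet/channel.backup';

watch(backupPath, () => {
  // Replace this local copy with an upload to the storage provider of your choice
  copyFileSync(backupPath, `/mnt/backups/channel-${Date.now()}.backup`);
});
```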
jisaacks/GitGutter | 73894968 | Title: Distinguish untracked and ignored files
Question:
username_0: `show_markers_on_untracked_file` setting affects both untracked files and ignored files. I think there should be another setting for ignored files like `show_markers_on_ignored_file`. Because you usually (if not always) don't want to see markers on ignored files. Status bar notification maybe, but not gutter markers.
Thanks.
Answers:
username_1: +1 for idea
username_2: I think if you don't want to see markers on ignored files, odds are you don't want to see them on untracked files either (since in both these cases they do not tell you anything about your edits just that the file is in fact ignored/untracked.) I don't see any compelling reason to make the user have to change 2 settings.
username_0: Agreed. So maybe we should put a statusbar indicator for both.
On the other hand, I think "untracked" indicator should be more noticeable than the "ignored" indicator in some way. But I don't know how.
username_2: @username_0 I would not be opposed to adding it to [show_status](https://github.com/username_2/GitGutter/blob/master/GitGutter.sublime-settings#L43-L49).
username_0: Makes sense. But with its current string based style, it may become inflexible. Maybe there should be an array like this?
status_info: ["diff_stats", "branch", "ignored_flag", "untracked_flag"]
username_2: @username_0 I would prefer an object:
```javascript
"status_info" {
"lines_changed": true,
"branch": false,
// etc.
}
```
username_0: I love the discussion :) Is object ordered in python? With array, users may change the order of the items.
username_0: Hi. Is there any improvement on this?
`[UNTRACKED]` and `[ignored]` badges could be added to the status bar for this purpose. Note the emphasis via case.
username_2: This isn't something I have had time to work on. It's not high on my list of priorities. However, it shouldn't be a very difficult thing to do if you (or anyone else) want to take a stab at it.
username_0: Ok, thanks. I may give it a try.
username_3: GitGutter 1.5.0 shows file status in the status bar, if jinja2 library is available.
So I set `"show_markers_on_untracked_file": false` to hide the icons and use the status message.
Is that ok?
Status: Issue closed
|
kks32/phd-thesis-template | 93032939 | Title: How to use it with differents language?
Question:
username_0: Hi there, this template is really amazing and I'm starting to do my thesis based on this awesome template.
I was wondering how I can work with the Spanish language? Maybe I need to configure something in the class part, but I'm not sure what the best way to do that would be.
I need to translate the titles in the environments, e.g. "Dedicatoria" instead of "Dedication" and so on. Any help is appreciated :D Thanks in advance.
Answers:
username_0: I got it now. Is it possible to close this ticket myself?
Thanks.
username_1: Hi @username_0 I assume you used the babel package. Did you have to do some customisation? It would be useful for others, if you could describe how you achieved it. Thanks for your help!
username_0: Hi there @username_1, I didn't do anything. I just started writing in Spanish, and when I compiled the files it worked really well.
As for adding a title to an environment, I just looked for the one I wanted to add, which in this case is "Dedication", and I just added this:
\chapter*{\centering \Large Dedicación}
I didn't have to use the babel package to get Spanish accents, which is great :+1:
Thanks for all!
Status: Issue closed
username_0: After working for several months with this awesome template, I have to add a comment to this thread for those who are working in Spanish or another non-default language.
Just set the babel language so that words are split (hyphenated) correctly.
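For anyone searching later, a minimal sketch of that preamble line (assuming the standard babel package is acceptable):
```latex
% Load babel with Spanish so hyphenation/word splitting works correctly
\usepackage[spanish]{babel}
```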
LibraryManager/QA-Fundamental---Libary-Manager | 153648175 | Title: Second Secure Bug about Clients
Question:
username_0: - Steps to reproduce:
1. Open the site and Login
2. Click button "Lends".
3. Click button "Add new Lend".
4. Write First Name, Last Name, Birth Date and PID.
5. Click button "Clients".
6. Delete Client with Lends.
- Priority: Very High
- Expected result: Error message: "This client has lends and cannot be deleted."
- Actual result: The client is deleted.
hfuuss/algorithm-JS | 448732417 | Title: Practice problems
Question:
username_0: 1. https://github.com/username_0/algorithm-JS/issues/5
- [ ] Implement the Josephus problem with a circular linked list
- [ ] Implement an LRU cache (linked list plus data)
- [ ] Check whether a string is a palindrome
Answers:
username_0: 2. https://github.com/username_0/algorithm-JS/issues/7
Five common linked-list problems (a sketch of the first one follows the list):
- [ ] Reverse a singly linked list
- [ ] Detect a cycle in a linked list
- [ ] Merge two sorted linked lists
- [ ] Delete the n-th node from the end of a linked list
- [ ] Find the middle node of a linked list
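A minimal sketch of the first item, assuming nodes shaped like `{ val, next }`:
```js
function reverseList(head) {
  let prev = null;
  let curr = head;
  while (curr) {
    const next = curr.next; // remember the rest of the list
    curr.next = prev;       // point the current node backwards
    prev = curr;
    curr = next;
  }
  return prev; // prev is the new head
}
```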
username_0: I don't understand red-black trees yet
lbryio/lbry-desktop | 440370266 | Title: H264+AAC video in MP4 container doesn't play on beta.lbry.tv in Firefox
Question:
username_0: My video at `lbry://@BrendonBrewer/six` has the most LBRY-friendly codecs and container (h264 video, aac audio, mp4 container) and is web-optimised. It plays fine in the LBRY app and in beta.lbry.tv in Brave/Chromium. However, it fails in Firefox, saying "The media could not be loaded, either because the server or network failed or because the format is not supported".
Not sure how many people this will affect but it could be investigated.
Answers:
username_1: Seeing the same, and our new SDK streaming is having a similar issue in FF (it keeps re-requesting) but works in Chrome also.
Can you provide the parameters used to generate the file? I don't think it's web optimized (it doesn't load right away in the app), but that should not matter on its own. I'm reading similar issues on sites like https://www.reddit.com/r/firefox/comments/6t8efe/openload_videos_not_running_the_media_could_not/ / https://www.reddit.com/r/firefox/comments/6jz5gd/videos_showing_media_could_not_be_loaded_error/
username_0: What parameters do you mean? I'm not sure if I have access to what I did in the past, except that I thought I did this flag with ffmpeg (from the top answer) to get the web optimisation: https://stackoverflow.com/questions/21686191/can-ffmpeg-place-mp4-metainfo-at-the-beginning-of-the-file

If other sites are having trouble it sounds like a Firefox bug.
username_1: It most likely is...but if we knew that web optimizing helps it, we could do that on the sdk side (I found a plug-in for this). Can you try the ffmpeg command and then verify the moov atom is at the front (via atomic parsley or ffprobe)? We can then put a higher priority on that issue.
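For reference, the commands in question look roughly like this (file names are placeholders):
```sh
# Re-mux with the moov atom moved to the front of the file
ffmpeg -i input.mp4 -c copy -movflags faststart output.mp4

# Verify the atom order; 'moov' should now appear before 'mdat'
AtomicParsley output.mp4 -T
```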
username_0: Okay, the atomic parsley test revealed that the `movflags faststart` thing doesn't actually work. I used `qt-faststart` to move the atom instead, and atomic parsley said it was good. Published the new one at `lbry://test#d956840a662a45783af93ddb2f2e5edab5e35581` and it plays on Firefox! Yay!
IMHO, one of the good things about LBRY is that it's a _file marketplace_ and not just a video thing. So if you're going to modify people's files on upload, I'd strongly suggest making it optional (but maybe strongly recommended for video uploads), so people still have the option of uploading the original file as-is.
username_1: I put a consider soon tag on https://github.com/lbryio/lbry/issues/1988, closing this issue for now. Not sure there's anything we can do about firefox not properly being able to read correct range requests at this time.
Status: Issue closed
|
chuangbo/meteor-marked | 111140231 | Title: Helper does not work in Meteor 1.2
Question:
username_0: A fix I found is to do this:
````
return HTML.Raw(marked(content.trim()));
````
i.e. ```trim``` the content before passing it to ```marked``` etc.
Answers:
username_1: That's surprising.. I was tempted to just go with such an easy fix but I think trimming the unprocessed markdown could have unintended consequences (e.g. consider text that begins with code using indentation rather than a code fence).
But also, I wasn't able to reproduce this. Does this affect every string for you or just some strings? Can you give an example and the exact error you get?
username_0: It was actually my bad. I wasn't using markdown returned from a helper or data context, but rather just pasted in markdown to see if it would work. The markdown was indented (so it nests nicely with the surrounding HTML). Once I removed the indents so it was flush with the left side, it worked. Might be worth documenting that; perhaps it's just common knowledge I wasn't privy to (I've never done much with markdown or this sort of thing). Anyway, that's all I got.
Status: Issue closed
|
mlange-42/yarner-bib | 818367221 | Title: How to combine numbered citation style with a single refs file?
Question:
username_0: When writing to a single refs file (option `refs-file` set), document order and thus ordering of references is undefined.
Possible solutions:
* Forbid the combination of numbered references and option `refs-file`
* Leave it undefined and document it in the docs
* Make Yarner emit documents in a defined order
* Define the order somehow in the config |
longitachi/ZLPhotoBrowser | 286686366 | Title: Is there a way to launch only the camera?
Question:
username_0: Thanks for submitting an issue. Before submitting, please search by keyword for existing or already-resolved issues to avoid duplicates.
- [ ] I have searched the existing issues and did not find the same one
- [ ] I want to make a suggestion or report a bug, not ask a question
#### About the Issue
1. The CocoaPods version currently in use
2. The problem encountered (if possible, please describe it in detail; attaching screenshots is recommended)
Answers:
username_1: Not at the moment. If you need it, you can refer to the code in `ZLCustomCamera` and write one yourself; just pull it out and modify it.
username_2: Yes. However, the custom camera class should add a callback for cancelling the photo capture.
Status: Issue closed
username_1: @username_2 The new version already provides a method to launch only the camera.
MiteshSharma/OpenVPNAnsible | 526569100 | Title: does not generate certificate
Question:
username_0: fatal: [xxx.xxx.xx.xx]: FAILED! => {"changed": true, "cmd": "source vars; ./clean-all; yes \"\" | ./build-ca;", "delta": "0:00:00.012112", "end": "2019-11-21 13:29:26.228509", "msg": "non-zero return code", "rc": 127, "start": "2019-11-21 13:29:26.216397", "stderr": "You appear to be sourcing an Easy-RSA 'vars' file.\nThis is no longer necessary and is disallowed. See the section called\n'How to use this file' near the top comments for more details.\n/bin/bash: ./clean-all: No such file or directory\n/bin/bash: ./build-ca: No such file or directory", "stderr_lines": ["You appear to be sourcing an Easy-RSA 'vars' file.", "This is no longer necessary and is disallowed. See the section called", "'How to use this file' near the top comments for more details.", "/bin/bash: ./clean-all: No such file or directory", "/bin/bash: ./build-ca: No such file or directory"], "stdout": "", "stdout_lines": []} |
dask/distributed | 1004817014 | Title: NVML monitoring fails on Window Subsystem for Linux w/ GPU support
Question:
username_0: <!-- Please include a self-contained copy-pastable example that generates the issue if possible.
Please be concise with code posted. See guidelines below on how to provide a good bug report:
- Craft Minimal Bug Reports http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports
- Minimal Complete Verifiable Examples https://stackoverflow.com/help/mcve
Bug reports that follow these guidelines are easier to diagnose, and so are often handled much more quickly.
-->
**What happened**:
When using an Insider build of Windows Subsystem for Linux (WSL) with GPUs, one quirk is that while GPU metrics are being actively monitored, compute utilization is not made available:
```
→ nvidia-smi
Wed Sep 22 17:41:09 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.00 Driver Version: 510.10 CUDA Version: 11.6 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Quadro RTX 8000 On | 00000000:15:00.0 Off | Off |
| 34% 36C P8 18W / 260W | 444MiB / 49152MiB | N/A Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 Quadro RTX 8000 On | 00000000:2D:00.0 On | Off |
| 36% 61C P0 70W / 260W | 2492MiB / 49152MiB | N/A Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
```
Because of this, when trying to start a Distributed cluster on WSL, we immediately see a `NVMLError_NotSupported` as the worker's `SystemMonitors` attempt to fetch the utilization:
```python
In [2]: from distributed import Client
In [3]: client = Client()
---------------------------------------------------------------------------
NVMLError_NotSupported Traceback (most recent call last)
<ipython-input-3-84f8000c6f67> in <module>
----> 1 client = Client()
~/distributed/distributed/client.py in __init__(self, address, loop, timeout, set_as_default, scheduler_file, security, asynchronous, name, heartbeat_interval, serializers, deserializers, extensions, direct_to_workers, connection_limit, **kwargs)
756 ext(self)
757
--> 758 self.start(timeout=timeout)
759 Client._instances.add(self)
760
~/distributed/distributed/client.py in start(self, **kwargs)
[Truncated]
742 if (ret != NVML_SUCCESS):
--> 743 raise NVMLError(ret)
744 return ret
745
NVMLError_NotSupported: Not Supported
```
**What you expected to happen**:
It would be nice if, after getting the device handles for the GPUs, we did a check that all desired metrics can be fetched for them. Depending on this check, we could either disable NVML's monitoring altogether, or disable monitoring of the specific metrics that aren't supported.
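A rough sketch of such a probe using pynvml (how it would be wired into the worker's `SystemMonitor` is left out and assumed):
```python
import pynvml

def supported_metrics(handle):
    """Probe a device handle once and record which metrics NVML supports."""
    supported = {}
    try:
        pynvml.nvmlDeviceGetUtilizationRates(handle)
        supported["utilization"] = True
    except pynvml.NVMLError_NotSupported:
        supported["utilization"] = False
    try:
        pynvml.nvmlDeviceGetMemoryInfo(handle)
        supported["memory"] = True
    except pynvml.NVMLError_NotSupported:
        supported["memory"] = False
    return supported
```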
**Anything else we need to know?**:
Since this issue is occurring on a Windows Insider build, there's no guarantee that it will persist through future builds.
**Environment**:
- Dask version: latest `main`
- Python version: 3.8
- Operating System: Ubuntu 20.04 (WSL2)
- Install method: source<issue_closed>
Status: Issue closed |
colbymillerdev/react-native-progress-steps | 751323736 | Title: When I use Flatlist in it, I get the following error?
Question:
username_0: When I use FlatList in it, I get the following error:
"VirtualizedLists should never be nested inside plain ScrollViews with the same orientation - use another VirtualizedList-backed container instead"

Answers:
username_1: getting the same error
username_1: Add the scrollable={false} prop to the ProgressStep component |
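For anyone else hitting this, a rough sketch of that workaround; everything except the `scrollable` prop is a placeholder:
```jsx
<ProgressSteps>
  <ProgressStep label="Items" scrollable={false}>
    {/* With scrollable={false} the step should not wrap its children in a ScrollView,
        so the FlatList is no longer nested inside another scroll container */}
    <FlatList
      data={items}
      keyExtractor={(item) => item.id}
      renderItem={({ item }) => <ItemRow item={item} />}
    />
  </ProgressStep>
</ProgressSteps>
```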
alexcojocaru/elasticsearch-maven-plugin | 443454450 | Title: logLevel
Question:
username_0: logLevel is not working for me
plugin version: 6.13
elasticsearch version: 6.6.2
a workaround is to use instanceSettings instead of logLevel
Answers:
username_1: Just to be clear, the `logLevel` param controls the log level on the plugin; it has nothing to do with the Elasticsearch configuration. I just tested the plugin with the log level set to `DEBUG` and the execution output quite a few debug messages.
If you still think it's an issue, show me your plugin config.
Status: Issue closed
username_0: ok, i misunderstood |
andrewcmyers/civs | 1028166801 | Title: opt-in text needs translations
Question:
username_0: The page https://civs1.civs.us/cgi-bin/opt_in.pl exists in an English version, but it's a page seen by a lot of users whose first language is not English. So it would be good to add translations for the several languages that CIVS supports.
Answers:
username_0: French, German, Russian, Chinese are now covered. |
MicrosoftDocs/windowsserverdocs | 632685484 | Title: https://docs.microsoft.com/en-us/windows-server/administration/manage-windows-server contains a dead link for Windows PowerShell
Question:
username_0: https://docs.microsoft.com/en-us/windows-server/administration/manage-windows-server contains a dead link for Windows PowerShell
Steps to reproduce:
1. Go to https://docs.microsoft.com/en-us/windows-server/administration/manage-windows-server .
2. Click on Windows PowerShell.
3. Get a 404.
Please fix this.
Answers:
username_1: OK, fair enough. I will see what I can do.
username_1: OK, so the following seems to be the case:
The page https://docs.microsoft.com/powershell/scripting/powershell-scripting?view=powershell-5.1 (broken, error 404)
moved to https://docs.microsoft.com/powershell/scripting/overview
This is what the original web page looked like, by way of the WayBack machine (Web Archives):
https://web.archive.org/web/20180221051851/https://docs.microsoft.com/en-us/powershell/scripting/powershell-scripting?view=powershell-6&viewFallbackFrom=powershell%3D5.1
Would you mind taking a look and see if the new page (https://docs.microsoft.com/powershell/scripting/overview) is an acceptable replacement?
There are 4 PowerShell versions to choose from, but they don't seem to affect the page much (as far as I can see):
- https://docs.microsoft.com/powershell/scripting/overview?view=powershell-5.1
- https://docs.microsoft.com/powershell/scripting/overview?view=powershell-6
- https://docs.microsoft.com/powershell/scripting/overview?view=powershell-7 (LTS)
- https://docs.microsoft.com/powershell/scripting/overview?view=powershell-7.1 (preview)
username_1: By the way, I will open a Pull Request (PR) to have the link replaced. Feel free to join in, once I have posted my PR with the link change.
username_1: Anyway, looking at the archived versions of the PowerShell Scripting page, the archive only contains cmdlet info for versions 3.0, 4.0, and 5.0.
For version 5.1, we are forced to use the new page, https://docs.microsoft.com/powershell/scripting/overview .
username_1: Off topic: Would it be OK to ask you to add some minimalistic info to your GitHub profile page, like city/country and maybe full name? (Feel free to ignore this, if you are concerned about privacy.)
username_1: There you go. PR #4463 is now ready for review. Feel free to join in to comment, approve or suggest further improvement changes.
Status: Issue closed
|
hpi-swa-teaching/RichTextEditing | 449944043 | Title: Edit templates
Question:
username_0: As someone who writes many texts, I want to be able to edit text structure type templates, in order to more easily write many documents of a similar type.
**Acceptance criteria:**
- [ ] #37 must be done first
- [ ] I can select a template to edit
- [ ] The same editor window to edit the available text structure types is opened
- [ ] All the edits apply to the template, not to the currently available text structure type
Answers:
username_1: Details on how to design this must first be cleared up with the client. Also, since #37 is yet to be implemented, we cannot estimate the effort of this as of now. |
raiden-network/raiden | 564838668 | Title: `make check-pip-tools` is very verbose and prints lots of false positives
Question:
username_0: ## Abstract
Running `make check-pip-tools` will print something like:
```
ERROR: raiden 0.100.5a1.dev1472+g87297b599 requires aniso8601==7.0.0, which is not installed.
ERROR: raiden 0.100.5a1.dev1472+g87297b599 requires asn1crypto==0.24.0, which is not installed.
ERROR: raiden 0.100.5a1.dev1472+g87297b599 requires attrdict==2.0.1, which is not installed.
ERROR: raiden 0.100.5a1.dev1472+g87297b599 requires cachetools==3.1.1, which is not installed.
ERROR: raiden 0.100.5a1.dev1472+g87297b599 requires certifi==2019.3.9, which is not installed.
ERROR: raiden 0.100.5a1.dev1472+g87297b599 requires cffi==1.12.3, which is not installed.
ERROR: raiden 0.100.5a1.dev1472+g87297b599 requires chardet==3.0.4, which is not installed.
ERROR: raiden 0.100.5a1.dev1472+g87297b599 requires coincurve==13.0.0, which is not installed.
ERROR: raiden 0.100.5a1.dev1472+g87297b599 requires colorama==0.4.1, which is not installed.
ERROR: raiden 0.100.5a1.dev1472+g87297b599 requires cytoolz==0.10.1, which is not installed.
ERROR: raiden 0.100.5a1.dev1472+g87297b599 requires decorator==4.4.0, which is not installed.
ERROR: raiden 0.100.5a1.dev1472+g87297b599 requires eth-abi==1.3.0, which is not installed.
ERROR: raiden 0.100.5a1.dev1472+g87297b599 requires eth-account==0.3.0, which is not installed.
ERROR: raiden 0.100.5a1.dev1472+g87297b599 requires eth-hash[pycryptodome]==0.2.0, which is not installed.
```
Which at first looks like a problem, but it just the tool being to verbose.
## Motivation
It would be nice to remove the false positives, it can be scary to look at.
Answers:
username_1: Output looks clean (empty), now.
Status: Issue closed
|
dkpro/dkpro-lab | 76384086 | Title: getStorageLocation() always returns a folder
Question:
username_0: ```
TaskContext.getStorageLocation() always creates/returns a folder. But
sometimes, I just want to create a file, not a full folder. It would be nice to
have a getStorageFile() and getStorageFolder() instead and to deprecate the
getStorageLocation().
```
Original issue reported on code.google.com by `richard.eckart` on 19 Jan 2015 at 1:48
Status: Issue closed
Answers:
username_1: The methods getFolder() and getFile() are present now.
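Roughly, usage would look like the sketch below; the storage keys and the AccessMode argument are assumptions rather than the verified signature:
```java
// Obtain a folder (created if needed) and a single file within the task's storage
File modelFolder = taskContext.getFolder("model", AccessMode.READWRITE);
File modelFile = taskContext.getFile("model/classifier.bin", AccessMode.READWRITE);
```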
username_1: ```
TaskContext.getStorageLocation() always creates/returns a folder. But
sometimes, I just want to create a file, not a full folder. It would be nice to
have a getStorageFile() and getStorageFolder() instead and to deprecate the
getStorageLocation().
```
Original issue reported on code.google.com by `richard.eckart` on 19 Jan 2015 at 1:48
username_1: getFolder() is not backwards compatible with getStorageLocation() - the old behavior should be restored for backwards compatibility.
Status: Issue closed
|
jackyliang/Pollen-Buddy-PHP | 215858558 | Title: No Longer Current
Question:
username_0: Jacky,
First off thanks for making this. You are a life saver! Currently I have just implemented this on a test page. Unfortunately it seems to be stuck in December 2016. Is this an issue with Wunderground? Maybe I haven't set something correctly? Here is my code currently for the test page:
```php
<?php
require_once("Pollen-Buddy-PHP/PollenBuddy.php");
?>
<!DOCTYPE html>
<html>
<head>
<title>testing page</title>
</head>
<body>
<?php
$data = new PollenBuddy(42101);
// echo $data->getSiteHTML();
var_dump($data->getFourDayForecast());
?>
</body>
</html>
```
This currently returns:
`array(4) { ["December 14, 2016"]=> string(3) ".10" ["December 15, 2016"]=> string(3) ".10" ["December 16, 2016"]=> string(3) ".10" ["December 17, 2016"]=> string(3) ".10" }`
Please let me know. Thanks, again!
Answers:
username_1: Hey - I actually am not sure. This library has not been maintained in awhile. If you want to check the source (should be pretty straightforward) and see what's going on, totally go ahead and do that. I have a feeling it might just be Wunderground being the issue.
username_0: Thanks! I will check the source and see what I can find! Looks pretty simple.
username_0: Yes, wunderground seems to be the problem. I will see what I can do to remedy the issue.
username_0: The new health tab seems to have a consistent logic so getting the data shouldn't be too difficult. I am going to start working on this soon!
username_1: Thanks! Feel free to submit a PR and I'll merge it if it looks good.
username_0: Will do!
pmaji/crypto-whale-watching-app | 314355876 | Title: Time Stamping - Alt Coin Buzz
Question:
username_0: You guys have a great tool here! I haven't seen any work lately. I'd like to promote you on my segments that I do on Alt Coin Buzz. It's a TA segment along with the news. Are you still working on this? If so can I promote, or would it be better to wait?
Also wondering if you guys are working on a time stamp for the trades to be included in the roll-over. Certainly would be helpful to see if the larger orders are set up to affect the price.
Thank you in advance!
Status: Issue closed
Answers:
username_1: Hey @username_0! Thanks for reaching out. Definitely still an active project, just a slow period as of late because we're coming off the cliff of a lot of major updates. Let's talk more. Can you reach out to me via telegram (username @username_1 -- same as my GitHub)? I can give you any details you need, answer your questions, etc. |
squidfunk/mkdocs-material | 442791516 | Title: Inconsistent search behavior
Question:
username_0: ## Description
Search appears to work with beginning of word, but then not when the full word is queried.
### Expected behavior
**Query**: "where"
**Expected preview options**: anything heading/paragraph/etc containing "where"
### Actual behavior
[video](https://youtu.be/K_W118lrvEU)
### Steps to reproduce the bug
1. Create pages containing "which"
2. Create pages containing "where"
3. Create pages containing "what"
4. Search for "wh" – you get appropriate results
5. Search for "what" – you get appropriate results
6. Search for "which" – you get no results.
6. Search for "where" – you get no results.
### Package versions
* Python: Python 3.7.3
* MkDocs: mkdocs, version 1.0.4
* Material: Version: 4.2.0
### Project configuration
``` yaml
site_name: 'DEDocketResearch'
theme: 'material'
extra_css: [extra.css]
extra_javascript: [extra.js]
nav:
- '': 'index.md'
- '1-1-DELADocketSurvey': '1-1-DELADocketSurvey/1-1-DELADocketSurvey.md'
- '1-2-DEJMTScrapeAndSurvey': '1-2-DEJMTScrapeAndSurvey/1-2-DEJMTScrapeAndSurvey.md'
- '1-3-LAJMTInitialComparison': '1-3-LAJMTInitialComparison/1-3-LAJMTInitialComparison.md'
```
### System information
* OS: macOS Sierra
* Browser: Chrome
---
I've seen the related issues, but couldn't figure out if/how this was related exactly.
This project is awesome and its author @username_1 is so responsive!
Answers:
username_1: Have you seen #1097? It's definitely related to Lunr.js stemmer. Furthermore, which/what/where may be stopwords.
username_0: Stop words. Right. That makes sense: "wh" stems correctly, but "which" is filtered out.
I'm looking at this https://github.com/olivernn/lunr.js/issues/212 – Any guidance on doing this within the theme? Do you expect I will have to rebuild?
username_1: For future reference, I will try to lay out the process which Material currently uses for localization and in the end sketch out how to achieve what you're asking for:
The English localization file `partials/language/en.html` is the base from which all other languages _extend_, which means that if a localization file does not specify a value for a placeholder, it will always fall back to the respective English translation. This is particularly true for the placeholders that were introduced after some of the localizations where submitted, like for example the `skip.to.content` placeholder for the equally-titled button. French, for example, will show "Skip to content", as it doesn't specify a translation for the placeholder. Now, this file contains three placeholders that are used to configure search behavior:
https://github.com/username_1/mkdocs-material/blob/367fef75b26d3007df1f05f4b75dc7d41407c883/material/partials/language/en.html#L11-L13
Lunr.js provides stemmers for some languages through [lunr-languages](https://github.com/MihaiValentin/lunr-languages) (which is integrated with Material), but not for all (currently 36) supported by Material, but as I wanted to support search in those languages, I fiddled around with Lunr.js and found out that if I disable stemming and the stopword filter, those languages could be searched, too. The search experience may not be as smooth as it is with English, but it's better than nothing, like for example Hebrew:
https://github.com/username_1/mkdocs-material/blob/367fef75b26d3007df1f05f4b75dc7d41407c883/material/partials/language/he.html#L11-L13
This was the reason why I pulled search configuration into the localization files. As those values must be accessible from JavaScript, they are defined as `meta` tags within the `head` section:
https://github.com/username_1/mkdocs-material/blob/367fef75b26d3007df1f05f4b75dc7d41407c883/material/base.html#L30-L42
This approach seems to work reasonably well. Some languages use stemmers from other languages, as they _seem_ to work well enough (Chinese and Korean use `jp`, Serbo-Croatian uses `ro`, etc.). Why _seem_? Because I don't speak those languages, but when integrating them I always try to search some of the localized terms to see whether Lunr.js catches them on a best effort basis.
So, answering your question, how could you disable stemming and stopword filtering? You just need to override `partials/language/en.html` and unset the three placeholders. Why is this not possible via `mkdocs.yml`? Because up to now, nobody needed it. In theory, we could make it configurable by adjusting `partials/language.html`, which is the entry point for localization, but I think it won't be necessary. I want to keep configuration as lean as possible.
I hope that this shows that a lot of thought was put into how search works and how it can be localized and scaled to so many languages without much effort.
username_0: This does indeed show a lot of thought, and setting `"search.pipeline.stopwords": false,` does fix the issue.
Thanks again for work, and continued attention, on this project.
Status: Issue closed
|
zawy12/difficulty-algorithms | 473337339 | Title: Ethereum's difficulty algorithm is a messed-up EMA
Question:
username_0: This was posted to Ethereum's Stack Exchange, where it will probably be deleted. It explains what's wrong with the algorithm, and how it was almost correct, as a way of explaining how it works.
Ethereum's difficulty algorithm without the difficulty bomb is:
diff = parent_diff + parent_diff / 2048 *
max(1 - (block_timestamp - parent_timestamp) // 10, -99)
The max really adds complexity as shown in [a stack exchange answer](https://ethereum.stackexchange.com/a/5914/55107) and this messes it up a lot. If the max is not used it is:
diff = parent_diff * (1 + 1/2048 - t/2048/10)
The terms can be expanded to prevent the integer division that would always result in the last two terms being zero:
diff = parent_diff + parent_diff/N - parent_diff*t/T/N
where
t = parent solvetime
T = target solvetime
N = extinction coefficient, aka "mean lifetime", aka the number of blocks used to "temper" or "buffer" the size of the response. It can't be too small or a negative difficulty can result from long solvetimes.
This is very close to the theoretically best algorithm. This is the result of being a close approximation to the exponential moving average (EMA) that I and others have investigated. It's an approximation of the EMA by the taylor series expansion of the exponential function:
e^x = 1 + x + x^2/2! + ...
Where you use the approximation e^x = 1 + x in the EMA algorithm:
diff = A*D*T/t + (1-A)*D
where
D = parent D
t = parent solvetime
T = target solvetime
A = alpha = 1-e^(-t/T/N)
See https://github.com/username_0/difficulty-algorithms/issues/17
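A small sketch of this update rule in code (the numeric defaults are placeholders only):
```python
import math

def ema_difficulty(parent_diff, t, T=13.0, N=50.0):
    # A = 1 - e^(-t/(T*N)); diff = A*D*T/t + (1-A)*D
    alpha = 1.0 - math.exp(-t / (T * N))
    return alpha * parent_diff * T / t + (1.0 - alpha) * parent_diff
```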
This algorithm was discovered by Jacob Eliosoff, who was already very familiar with EMAs for stock prices. He needed to modify it to fit difficulty, and the result turns out to be a known version that's mentioned in Wikipedia in regard to estimating computer performance:
https://en.wikipedia.org/wiki/Moving_average#Application_to_measuring_computer_performance
I say it's theoretically best because you can reduce N all the way down to "1" and the mean and median solvetimes are close to the expected T and ln(2)*T. So it's the best estimator (I know of) for guessing the current hashrate based on only the previous block.<issue_closed>
Status: Issue closed |
randyzwitch/OAuth.jl | 140112472 | Title: Question: OAuth2
Question:
username_0: Hi @username_1 ,
I am doing some work on OAuth 2.0 for use with Azure. I wonder how we might combine these two packages, or should I just create a separate, but similar OAuth2.jl?
I'm just learning OAuth 2.0 and haven't worked with OAuth 1.0, so not sure how different they are.
Answers:
username_1: Hey @username_0, I think combining efforts in a single OAuth package makes great sense. I created this package solely to factor out the OAuth 1.0 code from Twitter.jl, so I'm not really familiar with what the overlap might be, but it makes sense to have them both together.
I don't have a ton of time to work on this right now, but do you want to branch off of the current master, then start adding in OAuth2?
username_2: Any further dev on OAuth 2.0? I am trying to use an API that requires it right now.
username_1: Not on my end. While it might seem like OAuth 2.0 is an update to the original, it's actually a different (and, in practice, ad-hoc) specification that's completely different from OAuth 1.0a.
username_3: I too am interested in OAuth2... Any progress on that @username_0? |
tiodb/tiodb | 183569139 | Title: HTTP protocol GET
Question:
username_0: We should support getting items using the HTTP protocol, so tiodb can be accessed right from HTML apps. Supporting HTTP will also speed up development of clients for new languages, since it's easier to support HTTP than tiodb's binary or text protocol. |
Automattic/node-canvas | 142321712 | Title: Installation on Heroku needs updating
Question:
username_0: I followed https://github.com/Automattic/node-canvas/wiki/Installation-on-Heroku but it's now out of date. Heroku now supports multiple buildpacks out of the box.
The simplest way to get node-canvas working is to add the cairo buildpack using the following heroku command:
```
heroku buildpacks:add --index 1 https://github.com/mojodna/heroku-buildpack-cairo.git
```
Answers:
username_1: Thank you @username_0
username_2: note that *this* workaround is no longer working, as it hasn't been updated for heroku-16 stack, and cedar-14 is no longer available:
https://github.com/mojodna/heroku-buildpack-cairo/issues/16
username_3: I just tested this with version `2.0.0-alpha.12`, and got it working with this buildpack: https://github.com/username_3/heroku-buildpack-cairo
Just add it before nodejs buildpack and it should be working.
username_4: @username_3 I was able to get `node-canvas` running on Heroku without any issues (see: https://github.com/Automattic/node-canvas/issues/843#issuecomment-386801470)
Although it looks like you are using `cedar-14` stack, while I'm using `heroku-16` - maybe that's the difference
username_5: In addition to the other solutions commented above... I just successfully installed canvas 2.0.0-alpha.12 and 1.6.11 on the vanilla heroku-16 stack. Both just worked thanks to prebuilds. If anyone has a need to make a from-source build work, I can look into it.
Status: Issue closed
|
ImpulseAdventure/GUIslice-Builder | 611243967 | Title: Support for Teensy fonts
Question:
username_0: **Describe the solution you'd like**
The Teensy (and several other devices) use the extended font modes of GUIslice to select between various fonts (eg. internal, external ROM, custom, etc.). It would be very useful if the Builder were to add support for the insertion of the gslc_FontSetMode() API so that these devices/fonts could be supported directly
**Additional context**
Ideally, the `arduinofonts.csv` file could add a column to record the `FONTREF_MODE` setting. If the `FONTREF_MODE` value were non-blank, then the gslc_FontSetMode() API would be called after gslc_FontSet().
Please see **ex04_ard_ctrls** for an example.
In the case of the Teensy example, a corresponding arduinofonts.csv row might show:
`Teensy,Arial,font_Arial.h,NULL,E_T3_ARIAL12,GSLC_FONTREF_PTR,&Arial_12,GSLC_FONTREF_MODE_1`
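As a sketch only, the generated code for that row might pair the two calls as below; the exact gslc_FontSet() argument list can differ between GUIslice versions:
```c
// Register the Teensy font and then select the extended font mode from the CSV column
if (gslc_FontSet(&m_gui, E_T3_ARIAL12, GSLC_FONTREF_PTR, &Arial_12, 1)) {
  gslc_FontSetMode(&m_gui, E_T3_ARIAL12, GSLC_FONTREF_MODE_1);
}
```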
Reference: Builder issue list item #166
Answers:
username_1: Under development and will be included with release 0.14.b005
Paul--
Status: Issue closed
username_1: Addressed in Release 0.15.0 |
PrismarineJS/mineflayer | 204311850 | Title: Installation doesnt work.
Question:
username_0: mineflayer_external 1.8.8
downloading and starting server
1) "before all" hook
2) "after all" hook
mineflayer_external 1.9
downloading and starting server
3) "before all" hook
4) "after all" hook
mineflayer_external 1.10
downloading and starting server
5) "before all" hook
6) "after all" hook
mineflayer_external 1.11.2
downloading and starting server
7) "before all" hook
8) "after all" hook
mineflayer_internal 1.8.8
9) "before each" hook
10) "after each" hook for "chat"
mineflayer_internal 1.9
11) "before each" hook
12) "after each" hook for "chat"
mineflayer_internal 1.10
13) "before each" hook
14) "after each" hook for "chat"
mineflayer_internal 1.11.2
15) "before each" hook
16) "after each" hook for "chat"
0 passing (2s)
16 failing
1) mineflayer_external 1.8.8 "before all" hook:
Uncaught Error: ENOENT: no such file or directory, open 'undefined/minecraf t_server.1.8.8.jar'
2) mineflayer_external 1.8.8 "after all" hook:
TypeError: Cannot read property 'quit' of undefined
at Context.<anonymous> (test/externalTest.js:97:10)
3) mineflayer_external 1.9 "before all" hook:
Uncaught Error: ENOENT: no such file or directory, open 'undefined/minecraf t_server.1.9.jar'
4) mineflayer_external 1.9 "after all" hook:
TypeError: Cannot read property 'quit' of undefined
at Context.<anonymous> (test/externalTest.js:97:10)
5) mineflayer_external 1.10 "before all" hook:
Uncaught Error: ENOENT: no such file or directory, open 'undefined/minecraf t_server.1.10.jar'
[Truncated]
at Server._listen2 (net.js:1262:14)
at listen (net.js:1298:10)
at doListening (net.js:1397:7)
at _combinedTickCallback (internal/process/next_tick.js:77:11)
at process._tickCallback (internal/process/next_tick.js:98:9)
16) mineflayer_internal 1.11.2 "after each" hook for "chat":
TypeError: Cannot read property 'on' of undefined
at Context.<anonymous> (test/internalTest.js:26:10)
npm ERR! Test failed. See above for more details.
`
Also I tried to use this: `var mineflayer = require('mineflayer');`
but it also throws this error in chrome: `Uncaught ReferenceError: require is not defined`
Soooo can someone help me to get it installed properly?
And not only "npm install mineflayer" because this doesn't work...
Answers:
username_1: What version of node are you using ?
What did you try do to ?
What error do you get when running npm install mineflayer ?
username_0: What version of node are you using ? -> v7.4.0
What did you try do to ? -> Install the bot and then using it for pathfinding / other tasks
What error do you get when running npm install mineflayer ? -> This is everything it shows in the log: http://pastebin.com/raw/VxaTXscx
username_1: Ok so npm install works.
Again "What did you try do to ?"
I mean, not your objective, what did you run exactly, what code, what directory, etc
username_0: This is my script: http://pastebin.com/raw/61wC809y (Its from here: https://github.com/andrewrk/mineflayer-navigate/)
username_1: ok the problem is `<script>`
This will not work in the browser.
You need to run this with `node`
username_0: module.js:472
throw err;
^
Error: Cannot find module '../'
at Function.Module._resolveFilename (module.js:470:15)
at Function.Module._load (module.js:418:25)
at Module.require (module.js:498:17)
at require (internal/module.js:20:19)
at Object.<anonymous> (/var/www/html/Bot/run.js:3:22)
at Module._compile (module.js:571:32)
at Object.Module._extensions..js (module.js:580:10)
at Module.load (module.js:488:32)
at tryModuleLoad (module.js:447:12)
at Function.Module._load (module.js:439:3)
username_1: replace `../` by `mineflayer-navigate`
username_1: oh and you need to use https://github.com/andrewrk/mineflayer-navigate/pull/28 if you're using mineflayer 2.0.0
username_0: So like this? http://pastebin.com/raw/g2BGZmGn
Because this will still give me this error:
module.js:472
throw err;
^
Error: Cannot find module 'mineflayer-navigate'
at Function.Module._resolveFilename (module.js:470:15)
at Function.Module._load (module.js:418:25)
at Module.require (module.js:498:17)
at require (internal/module.js:20:19)
at Object.<anonymous> (/var/www/html/Bot/run.js:3:22)
at Module._compile (module.js:571:32)
at Object.Module._extensions..js (module.js:580:10)
at Module.load (module.js:488:32)
at tryModuleLoad (module.js:447:12)
at Function.Module._load (module.js:439:3)
username_1: ok run `npm install username_1/mineflayer-navigate#update_vec3`
username_0: npm ERR! Linux 3.16.0-4-amd64
npm ERR! argv "/usr/bin/nodejs" "/usr/bin/npm" "install" "username_1/mineflayer-navigate#update_vec3"
npm ERR! node v7.4.0
npm ERR! npm v4.0.5
npm ERR! code ENOSELF
npm ERR! Refusing to install package with name "mineflayer-navigate" under a package
npm ERR! also called "mineflayer-navigate". Did you name your project the same
npm ERR! as the dependency you're installing?
npm ERR!
npm ERR! For more information, see:
npm ERR! <https://docs.npmjs.com/cli/install#limitations-of-npms-install-algorithm>
npm ERR!
npm ERR! If you need help, you may report this error at:
npm ERR! <https://github.com/npm/npm/issues>
npm ERR! Please include the following file with any support request:
npm ERR! /var/www/html/Bot/npm-debug.log
npm-debug.log: http://pastebin.com/raw/BsRwf38u
And sorry if this is a bit annoying :s But I really don't know any better :l
username_1: well.... where did you run this ?
Run it in your "Bot" dir
username_0: I have run it in the Bot dir: ***@*********:/var/www/html/Bot# npm install username_1/mineflayer-navigate#update_vec3
username_0: I changed nothing in it; it's still this:
{
"name": "mineflayer-navigate",
"version": "0.0.9",
"description": "mineflayer plugin which adds 3d pathfinding",
"main": "index.js",
"devDependencies": {
"mineflayer": "0.0.18"
},
"repository": "git://github.com/superjoe30/mineflayer-navigator.git",
"keywords": [
"mineflayer",
"minecraft",
"bot",
"path",
"finding",
"navigation"
],
"author": "<NAME>",
"license": "BSD",
"dependencies": {
"a-star": "~0.1.0"
}
}
username_1: Ok that shouldn't be in the Bot dir.
Remove that file and run npm init.
username_2: @username_0 rm -rf everything. Do this :
```
mkdir my_minecraft_bot
cd my_minecraft_bot
npm init # It will ask you a few questions, answer them. the defaults should be fine
npm i --save mineflayer
npm i --save username_1/mineflayer-navigate#update_vec3
curl http://pastebin.com/raw/NayLXamq > index.js
node index.js
```
username_0: @username_2 Firstly thanks! It hasn't thrown any error but sadly the bot isn't connecting :s
username_1: @username_0 put the host address and port, and your username/password in the createBot function instead of the example one in your script ;)
username_0: @username_1 I have changed them ^^
username_2: @username_0 is your server in online or offline mode ? And what minecraft version is it ?
username_0: - Online mode.
- 1.8.x
username_1: Ah yes you need to specify the version in createBot if it's not the default (`1.11.2`)
username_1: ok so put `version:"1.8.8"` in createBot
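For anyone landing here later, a minimal sketch of those createBot options (host and credentials are placeholders):
```js
const mineflayer = require('mineflayer');

const bot = mineflayer.createBot({
  host: 'play.example.com',
  port: 25565,
  username: 'myBotAccount',
  password: 'secret',
  version: '1.8.8', // must match the server when it differs from the default
});
```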
username_0: @username_1 Jep it was the `version:"1.8.8"`. Now it works. Thanks a lot!
username_2: Cool :D Closing now.
Status: Issue closed
|
ministryofjustice/cloud-platform | 834933656 | Title: Look into why nginx ingress integration tests fail
Question:
username_0: The following nginx integration tests regularly fail; the majority of the time it is because the creation of an ingress takes too long and the test times out. This ticket is to find out why it takes so long and to find a solution where ingresses can be created in a timely fashion so that the HTTP response can be tested.
Below are the 4 nginx ingress tests:
4) nginx ingress when ingress is not deployed fails http get
# Temporarily skipped with xdescribe
# ./spec/ingress_spec.rb:23
5) nginx ingress when ingress is deployed using 'nginx' ingress controller returns 200 for http get
# Temporarily skipped with xdescribe
# ./spec/ingress_spec.rb:47
6) nginx ingress when ingress is deployed using 'integration-test' ingress controller responds to http get
# Temporarily skipped with xdescribe
# ./spec/ingress_spec.rb:70
7) nginx ingress when ingress is deployed with invalid syntax is rejected by the admission webhook
# Temporarily skipped with xdescribe
# ./spec/ingress_spec.rb:80
Status: Issue closed
Answers:
username_0: The failures were not down to the tests themselves but due to rspec not getting the information it needs for the tests.
There are sometimes failures in the kubectl get -o yaml commands that are used to gather the info required to run the tests.
It was agreed as a team that we will accelerate the migration to Go for our long-term integration tests rather than spend time resolving the rspec issues.
hyperledger/fabric | 167123953 | Title: sdk/node: newChain(42) breaks registrar.js test
Question:
username_0: ## Description
This error occurs while running `make node-sdk-unit-tests`:
enroll again
enrollAgain
enrollAgain: calling newChain(42)
enrollAgain: back from newChain()
/opt/gopath/src/github.com/hyperledger/fabric/sdk/node/test/unit/registrar.js:134
chain.setKeyValStore(hfc.newFileKeyValStore('/tmp/keyValStore'));
^
ReferenceError: chain is not defined
at enrollAgain (/opt/gopath/src/github.com/hyperledger/fabric/sdk/node/test/unit/registrar.js:134:4)
at Test.<anonymous> (/opt/gopath/src/github.com/hyperledger/fabric/sdk/node/test/unit/registrar.js:48:5)
at Test.bound [as _cb] (/opt/gopath/src/github.com/hyperledger/fabric/sdk/node/node_modules/tape/lib/test.js:63:32)
at Test.run (/opt/gopath/src/github.com/hyperledger/fabric/sdk/node/node_modules/tape/lib/test.js:82:10)
at Test.bound [as run] (/opt/gopath/src/github.com/hyperledger/fabric/sdk/node/node_modules/tape/lib/test.js:63:32)
at Immediate.next [as _onImmediate] (/opt/gopath/src/github.com/hyperledger/fabric/sdk/node/node_modules/tape/lib/results.js:70:15)
at processImmediate [as _immediateCallback] (timers.js:367:17)
ERROR running registrar tests!
END running registrar tests
Here is the documentation for the newChain() function:
newChain
newChain(name: **any**): any
Defined in src/hfc.ts:2669
Create a new chain. If it already exists, throws an Error.
Parameters
name: **any**
Name of the chain. It can be any name and has value only for the client.
Returns any
I reckoned 'name:any' means any value, so I tried value 42 and got this error.
## Describe How to Reproduce
<!-- If an issue, provide sufficient context and steps to reproduce the issue -->
1. Run `make node-sdk-unit-tests` to establish a baseline success.
2. Edit sdk/node/test/unit/registrar.js, function enrollAgain(cb).
3. Change hfc.newChain("testChain2") to hfc.newChain(42). For example (see the sketch after these steps):
   //var chain = hfc.newChain("testChain2");
   //var chain = hfc.newChain("");
   var chain2 = hfc.newChain(42)
4. Save registrar.js.
5. Run `make node-sdk-unit-tests` again.
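For clarity, a sketch of what the enrollAgain() fragment presumably looks like after the edit in step 3. This is a fragment of test/unit/registrar.js, not a standalone script; `hfc` is assumed to be required at the top of the file as in the original test.

```js
// Fragment of enrollAgain() in sdk/node/test/unit/registrar.js after step 3:

// var chain = hfc.newChain("testChain2");   // original line, commented out
var chain2 = hfc.newChain(42);               // replacement used for the experiment

// Line 134 of registrar.js still references `chain`, which matches the
// "ReferenceError: chain is not defined" shown in the output above.
chain.setKeyValStore(hfc.newFileKeyValStore('/tmp/keyValStore'));
```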
Answers:
username_1: @username_0 please migrate to Jira. Thanks
username_0: The Jira issue for this problem is FAB-127
Status: Issue closed
|
ionic-team/ionic-framework | 1070441474 | Title: feat: Search feature for vue ion-select
Question:
username_0: ### Prerequisites
- [X] I have read the [Contributing Guidelines](https://github.com/ionic-team/ionic-framework/blob/main/.github/CONTRIBUTING.md#creating-an-issue).
- [X] I agree to follow the [Code of Conduct](https://ionicframework.com/code-of-conduct).
- [X] I have searched for [existing issues](https://github.com/ionic-team/ionic-framework/issues) that already include this feature request, without success.
### Describe the Feature Request
A search feature in ion-select for the Vue framework. It would be helpful to have an ion-searchbar inside the select, like we already have the action sheet and popover interfaces.
It's difficult to find an item in a huge list. Something like https://www.npmjs.com/package/ionic-selectable would work, but that package is for Angular and vanilla JS.
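This is not something @ionic/vue provides out of the box today; the following is just a rough sketch of the kind of workaround that is possible with built-in components (a button that opens an ion-modal containing an ion-searchbar and a filtered ion-list). The component and prop names from @ionic/vue (IonModal's `is-open`, `didDismiss`, ion-searchbar's `ionInput`) are real; the `items` prop, the `select` event, and the file name are made up for illustration.

```html
<!-- SearchableSelect.vue - illustrative sketch only -->
<template>
  <ion-button @click="open = true">{{ selected || placeholder }}</ion-button>

  <ion-modal :is-open="open" @didDismiss="open = false">
    <ion-header>
      <ion-toolbar>
        <ion-searchbar
          :value="query"
          @ionInput="query = $event.target.value"
          placeholder="Search"
        ></ion-searchbar>
      </ion-toolbar>
    </ion-header>
    <ion-content>
      <ion-list>
        <ion-item v-for="item in filtered" :key="item" button @click="choose(item)">
          <ion-label>{{ item }}</ion-label>
        </ion-item>
      </ion-list>
    </ion-content>
  </ion-modal>
</template>

<script>
import { defineComponent } from 'vue';
import {
  IonButton, IonModal, IonHeader, IonToolbar, IonSearchbar,
  IonContent, IonList, IonItem, IonLabel,
} from '@ionic/vue';

export default defineComponent({
  components: {
    IonButton, IonModal, IonHeader, IonToolbar, IonSearchbar,
    IonContent, IonList, IonItem, IonLabel,
  },
  props: {
    items: { type: Array, default: () => [] },              // strings to choose from
    placeholder: { type: String, default: 'Pick an item' },
  },
  emits: ['select'],
  data() {
    return { open: false, query: '', selected: null };
  },
  computed: {
    // Case-insensitive filter over the passed-in items
    filtered() {
      const q = this.query.toLowerCase();
      return this.items.filter((i) => String(i).toLowerCase().includes(q));
    },
  },
  methods: {
    choose(item) {
      this.selected = item;
      this.$emit('select', item);
      this.open = false;
    },
  },
});
</script>
```

Usage would then be something like `<searchable-select :items="countries" @select="onPick" />`.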
### Describe the Use Case
Searching items in ion-select.
### Describe Preferred Solution
_No response_
### Describe Alternatives
https://www.npmjs.com/package/ionic-selectable
### Related Code
_No response_
### Additional Information
_No response_
Answers:
username_1: Thanks for the issue. I am going to close this as a duplicate of https://github.com/ionic-team/ionic-framework/issues/23799.
Status: Issue closed
|