holoviz/panel
809092286
Title: Cannot layout Pipeline Question: username_0: Panel 0.10.3 If I put a `Pipeline` in a column I see a `pn.Param` version instead of the pipeline. ```python import webbrowser import param import panel as pn pipeline = pn.pipeline.Pipeline() class GetPassword(param.Parameterized): password = param.String() _panel = param.Parameter(precedence=-1) _widgets = { "password": pn.widgets.PasswordInput } def __init__(self, **params): super().__init__(**params) self._panel = pn.Param( self, widgets=self._widgets ) @param.output('password', param.String) def panel(self): return self._panel get_admin_password_from_pam = GetPassword pipeline.add_stage("Get Pam Password", get_admin_password_from_pam) pn.Column(pipeline).servable() ``` Answers: username_0: #### A Workaround is `pn.Column(pipeline.layout).servable()`. Status: Issue closed
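A minimal sketch of the workaround from the answer, assuming Panel 0.10.x: put the pipeline's `.layout` in the `Column` instead of the `Pipeline` object itself (everything besides `.layout` is taken from the example above).

```python
# Minimal sketch of the workaround above, assuming Panel 0.10.x:
# lay out pipeline.layout rather than the Pipeline object itself.
import panel as pn

pn.extension()

pipeline = pn.pipeline.Pipeline()
# ... add stages exactly as in the example above ...

# pn.Column(pipeline) renders a pn.Param view of the object;
# wrapping .layout renders the actual pipeline UI.
pn.Column(pipeline.layout).servable()
```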
ClickHouse/ClickHouse
1107248080
Title: Expressions in DDL dictionaries are not passed 'as is' Question: username_0: I need to use data from a MySQL table. Unfortunately some values in the key column are negative, so I have to convert them to positive ones in order to be able to create a hashed dictionary. My expression is: `cast(country_key as unsigned)` I have an XML dictionary and it works great. ``` <yandex> <include_from>/etc/clickhouse-server/dict_sources.xml</include_from> <dictionary> <name>dim_country</name> <source> <mysql incl="mysql_ds_config"> <table>dim_country</table> </mysql> </source> <lifetime> <min>0</min> <max>0</max> </lifetime> <layout> <hashed /> </layout> <structure> <id> <name>key</name> <expression>cast(country_key as unsigned)</expression> </id> <attribute> <name>country_key</name> <type>Int32</type> <null_value>0</null_value> </attribute> <attribute> <name>country_code</name> <type>String</type> <null_value/> </attribute> <attribute> <name>country_name</name> <type>String</type> <null_value/> </attribute> </structure> </dictionary> </yandex> ``` I decided to replace it with a DDL dictionary. ``` CREATE DICTIONARY dw.dim_country ( `key` UInt64 expression cast(country_key as unsigned), `country_key` Int32, `country_code` String, `country_name` String ) PRIMARY KEY key SOURCE(MYSQL( name mysql_ds_config table 'dim_country' )) LAYOUT(HASHED()) LIFETIME(MIN 0 MAX 0) ``` However the expression does not work in this dictionary because it gets parsed by ClickHouse: ``` -- SELECT (CAST(country_key, 'unsigned')) AS `key` 2022.01.18 17:36:09.736396 [ 568555 ] {262abc99-e8b6-42f7-9e21-188e30311fb1} <Trace> MySQLDictionarySource: SELECT (CAST(country_key, 'unsigned')) AS `key`, `country_key`, `country_code`, `country_name` FROM `ds`.`dim_country`; -- mysqlxx::BadQuery: You have an error in your SQL syntax near ' 'unsigned' 2022.01.18 17:36:09.736921 [ 568555 ] {262abc99-e8b6-42f7-9e21-188e30311fb1} <Error> ExternalDictionariesLoader: Could not load external dictionary 'dw.dim_country', next update is scheduled at 2022-01-18 17:36:14: Poco::Exception. Code: 1000, e.code() = 1064, mysqlxx::BadQuery: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ' 'unsigned')) ``` Status: Issue closed Answers: username_0: It works with quotes. ``` CREATE DICTIONARY dw.dim_country ( `key` UInt64 expression 'cast(country_key as unsigned)', `country_key` Int32, `country_code` String, `country_name` String ) PRIMARY KEY key SOURCE(MYSQL( name mysql_ds_config table 'dim_country' )) LAYOUT(HASHED()) LIFETIME(MIN 0 MAX 0) ```
noflo/noflo-nodejs
90654174
Title: Nodejs v0.12.4 server requires reset to catch graph changes Question: username_0: I notice this behavior in nodejs v0.12.4 with the webserver/writeresponse component. Changes to the IIP string value for writeresponse are not reflected until I restart the service. Once I realized this was not intended behavior, I tried using nodejs v0.10.2 and so far I have had no issues. Answers: username_1: Have not seen this problem on a recent Node.js. Do you know if it affects nodejs 4.6 or 6.10? username_2: or 7.10... Status: Issue closed username_3: Closing since there hasn't been new information.
veg/hyphy
133407599
Title: Does PRIME help with saturation? Question: username_0: Hi Sergei, I got about 50 genes from about 30 insect species that are prone to saturation. I calculated the dS value with KaKs Calculator. On average, it's about 5. Then I use Kr/Kc (radical vs conservative) values rather than dN/dS values to test for positive selection. And the values look fine. Do you think PRIME is more tolerant to saturation? Also, I was trying to run PRIME locally but didn't find a tutorial about it. I wonder whether it is possible to run PRIME locally? Thank you for any advice on this YY Answers: username_1: Dear @username_0, I have never tested PRIME in the context of saturation. I suppose it **could** extract more signal by looking only at __conservative__ substitutions, but I am not prepared to defend this claim. You can run PRIME locally, yes (it will be slow, however). Give me a day or two to write a little tutorial and make sure everything still works; I am working on v2.3 (and v3) releases, so my working branches of the code may have broken PRIME scripts. Sergei username_0: Thank you Sergei. I'm looking forward to the PRIME scripts. There is usually a pretty long queue for PRIME on the web server. YY Status: Issue closed
Timson020/react-native-umeng-api
438359664
Title: UMSocialCore/UMSocialCore.h' file not found Question: username_0: UMSocialCore/UMSocialCore.h' file not found Answers: username_1: https://github.com/username_1/react-native-umeng-api#ios安装 Can you do step 1? Status: Issue closed username_2: @username_1 step 1: I tried both the iOS and React Native SDKs but both failed; could you rewrite the API docs more clearly?
pombase/fypo
201047387
Title: PMID:19570908 actin etc. Question: username_0: 1 normal rate of actin filament-based movement 2 increased rate of actin filament-based movement 3 decreased rate of actin filament-based movement 4 incomplete actomyosin contractile ring contraction (not sure about xp) 5 normal rate of protein exchange in actomyosin contractile ring (no xp) 6 normal onset of actomyosin contractile ring assembly Answers: username_0: also add synonyms: FYPO:0004653 increased dwell time before actomyosin contractile ring contraction FYPO:0004430 decreased dwell time before actomyosin contractile ring contraction FYPO:new6 normal dwell time before actomyosin contractile ring contraction username_0: normal rate of actin filament-based movement FYPO:0005899 abnormal actin filament-based movement FYPO:0005900 increased rate of actin filament-based movement FYPO:0005901 decreased rate of actin filament-based movement FYPO:0005902 incomplete actomyosin contractile ring contraction FYPO:0005903 normal rate of protein exchange in actomyosin contractile ring FYPO:0005904 normal onset of actomyosin contractile ring assembly FYPO:0005905 normal onset of actomyosin contractile ring contraction FYPO:0005906 edit file: 4228b2a736a48ea67746f1522ac95744eacd49e8 release: 24e51dc0e2a4f9da07bc673ce2c61e6c314d91ba Status: Issue closed
snap-stanford/GraphGym
912058314
Title: Error when run "base run_single.sh" at step 6 Test the installation Question: username_0: ``` Thank you Answers: username_1: I have the same issue with PyTorch1.4 and PyG installation, I installed PyTorch 1.7+cpu, when I run the .sh file I have following error ``` Traceback (most recent call last): File "main.py", line 11, in <module> from graphgym.loader import create_dataset, create_loader File "/data/huyuanzheng/GraphGym/graphgym/loader.py", line 6, in <module> from deepsnap.dataset import GraphDataset File "/data/huyuanzheng/anaconda3/envs/graphgym/lib/python3.7/site-packages/deepsnap/__init__.py", line 5, in <module> import deepsnap.graph File "/data/huyuanzheng/anaconda3/envs/graphgym/lib/python3.7/site-packages/deepsnap/graph.py", line 9, in <module> from torch_geometric.utils import to_undirected File "/data/huyuanzheng/anaconda3/envs/graphgym/lib/python3.7/site-packages/torch_geometric/__init__.py", line 5, in <module> import torch_geometric.data File "/data/huyuanzheng/anaconda3/envs/graphgym/lib/python3.7/site-packages/torch_geometric/data/__init__.py", line 1, in <module> from .data import Data File "/data/huyuanzheng/anaconda3/envs/graphgym/lib/python3.7/site-packages/torch_geometric/data/data.py", line 8, in <module> from torch_sparse import coalesce, SparseTensor File "/data/huyuanzheng/anaconda3/envs/graphgym/lib/python3.7/site-packages/torch_sparse/__init__.py", line 15, in <module> f'{library}_{suffix}', [osp.dirname(__file__)]).origin) AttributeError: 'NoneType' object has no attribute 'origin' ``` username_2: Thanks for pointing out the bug! The bug is due to PyTorch Geometric updates the installation script. I've updated GraphGym to support the latest PyTorch (1.8.0) and the latest PyTorch Geometric. I deployed a fresh copy of GraphGym on my side and it works fine. You may follow the new README to reinstall the environment. @username_0 @username_1 Status: Issue closed username_0: Thank you Jiaxuan. I will try it out. BTW, will you be able to have a Windows 10 based installation as well? Best regards Chris username_0: Hi username_2, I tried torch1.8.1+cu102 (my cuda version for nvidia driver) today while following the instructions closely. 
When I preceded to `pip install -r requirements.txt`, it resulted in the following errors after auto Downloading torch-1.8.1-cp37-cp37m-manylinux1_x86_64.whl (804.1 MB) ``` Collecting torch-scatter Downloading torch_scatter-2.0.7.tar.gz (21 kB) ERROR: Command errored out with exit status 1: command: /home/hdd2nd/dev/miniconda3/envs/graphgym/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-l79nk47e/torch-scatter_f16bdd4dab584f82896ad52880657ad7/setup.py'"'"'; __file__='"'"'/tmp/pip-install-l79nk47e/torch-scatter_f16bdd4dab584f82896ad52880657ad7/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-d0nd14mr cwd: /tmp/pip-install-l79nk47e/torch-scatter_f16bdd4dab584f82896ad52880657ad7/ Complete output (5 lines): Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-install-l79nk47e/torch-scatter_f16bdd4dab584f82896ad52880657ad7/setup.py", line 8, in <module> import torch ModuleNotFoundError: No module named 'torch' ---------------------------------------- WARNING: Discarding https://files.pythonhosted.org/packages/fa/d1/0bade0c3b9222710528de0458ad48407dab46efd7ad3d4fd1be82b68ac2b/torch_scatter-2.0.7.tar.gz#sha256=369184948c838f756eea10464a3fbf8e103e22dc94d7045dbab85b5748bf85f9 (from https://pypi.org/simple/torch-scatter/) (requires-python:>=3.6). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. Using cached torch_scatter-2.0.6.tar.gz (21 kB) ERROR: Command errored out with exit status 1: command: /home/hdd2nd/dev/miniconda3/envs/graphgym/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-l79nk47e/torch-scatter_15948f5c5b1e4e8d97d597fb879707db/setup.py'"'"'; __file__='"'"'/tmp/pip-install-l79nk47e/torch-scatter_15948f5c5b1e4e8d97d597fb879707db/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-837l6i5b cwd: /tmp/pip-install-l79nk47e/torch-scatter_15948f5c5b1e4e8d97d597fb879707db/ Complete output (5 lines): Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-install-l79nk47e/torch-scatter_15948f5c5b1e4e8d97d597fb879707db/setup.py", line 8, in <module> import torch ModuleNotFoundError: No module named 'torch' ---------------------------------------- WARNING: Discarding ... ... ---------------------------------------- WARNING: Discarding https://files.pythonhosted.org/packages/08/09/07b106f3e74246f4ecf6517013a053b6dd7486c4f889d81f39adc662431f/torch_scatter-1.0.3.tar.gz#sha256=e626993194819ba65cdf89a52fbbb7780569d9e157bc63dbef13ead6b7a33930 (from https://pypi.org/simple/torch-scatter/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. 
Downloading torch_scatter-1.0.2.tar.gz (13 kB) ERROR: Command errored out with exit status 1: command: /home/hdd2nd/dev/miniconda3/envs/graphgym/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-l79nk47e/torch-scatter_036d08b07d4b4e4dab0e3b1d7393e4ca/setup.py'"'"'; __file__='"'"'/tmp/pip-install-l79nk47e/torch-scatter_036d08b07d4b4e4dab0e3b1d7393e4ca/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-6bats41s cwd: /tmp/pip-install-l79nk47e/torch-scatter_036d08b07d4b4e4dab0e3b1d7393e4ca/ Complete output (25 lines): Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-install-l79nk47e/torch-scatter_036d08b07d4b4e4dab0e3b1d7393e4ca/setup.py", line 26, in <module> cffi_modules=[osp.join(osp.dirname(__file__), 'build.py:ffi')], File "/home/hdd2nd/dev/miniconda3/envs/graphgym/lib/python3.7/site-packages/setuptools/__init__.py", line 153, in setup return distutils.core.setup(**attrs) File "/home/hdd2nd/dev/miniconda3/envs/graphgym/lib/python3.7/distutils/core.py", line 108, in setup _setup_distribution = dist = klass(attrs) File "/home/hdd2nd/dev/miniconda3/envs/graphgym/lib/python3.7/site-packages/setuptools/dist.py", line 433, in __init__ k: v for k, v in attrs.items() File "/home/hdd2nd/dev/miniconda3/envs/graphgym/lib/python3.7/distutils/dist.py", line 292, in __init__ self.finalize_options() File "/home/hdd2nd/dev/miniconda3/envs/graphgym/lib/python3.7/site-packages/setuptools/dist.py", line 708, in finalize_options ep(self) File "/home/hdd2nd/dev/miniconda3/envs/graphgym/lib/python3.7/site-packages/setuptools/dist.py", line 715, in _finalize_setup_keywords ep.load()(self, ep.name, value) File "/home/hdd2nd/dev/miniconda3/envs/graphgym/lib/python3.7/site-packages/cffi/setuptools_ext.py", line 219, in cffi_modules add_cffi_module(dist, cffi_module) File "/home/hdd2nd/dev/miniconda3/envs/graphgym/lib/python3.7/site-packages/cffi/setuptools_ext.py", line 49, in add_cffi_module execfile(build_file_name, mod_vars) File "/home/hdd2nd/dev/miniconda3/envs/graphgym/lib/python3.7/site-packages/cffi/setuptools_ext.py", line 25, in execfile exec(code, glob, glob) File "/tmp/pip-install-l79nk47e/torch-scatter_036d08b07d4b4e4dab0e3b1d7393e4ca/build.py", line 4, in <module> [Truncated] exec(code, glob, glob) File "/tmp/pip-install-l79nk47e/torch-scatter_95b898cd279f45d192781e0d3f65a547/build.py", line 3, in <module> import torch ModuleNotFoundError: No module named 'torch' ---------------------------------------- WARNING: Discarding https://files.pythonhosted.org/packages/29/96/566ac314e796d4b07209a3b88cc7a8d2e8582d55819e33f72e6c0e8d8216/torch_scatter-0.3.0.tar.gz#sha256=9e5e5a6efa4ef45f584e8611f83690d799370dd122b862646751ae112b685b50 (from https://pypi.org/simple/torch-scatter/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. 
ERROR: Could not find a version that satisfies the requirement torch-scatter (from versions: 0.3.0, 1.0.2, 1.0.3, 1.0.4, 1.1.0, 1.1.1, 1.1.2, 1.2.0, 1.3.0, 1.3.1, 1.3.2, 1.4.0, 2.0.2, 2.0.3, 2.0.4, 2.0.5, 2.0.6, 2.0.7) ERROR: No matching distribution found for torch-scatter ``` Meanwhile, I also tried 1.8.0+cu111 by updating cuda to the latest 11.3 beforehand, and then following the instructions successfully. But running 'bash run_single.sh' got the error: ``` OSError: libcudart.so.10.2: cannot open shared object file: No such file or directory ``` It still looks for the old version. I am wondering whether all combinations of torch+cuda listed in your instructional comment are tested. Would you be able to investigate the issue further? Thank you. Chris
megahertz/electron-log
987797630
Title: contextBridge and how to prevent logs Question: username_0: I wanted to have a log system that makes it possible to turn all logs off when the app is in production mode. I followed the guideline and did something like this in the preload script (which works smoothly, btw...): ``` contextBridge.exposeInMainWorld("electronApp", { log: log.functions, ... }); ``` Then if in the main process I add the following two lines: ``` log.transports.file.level = false; log.transports.console.level = false; ``` I still see the logs from the renderer. Did I miss something or is it a bug? Answers: username_1: You need to do the same in the renderer process too. Status: Issue closed
ShokoAnime/ShokoServer
272751112
Title: Unable To Run Remove Missing Files In Succession Question: username_0: Something I've noticed while cleaning up my collection: if I remove some files and then run Remove Missing Files, those files are removed from my collection. However, if I remove more files and then run Remove Missing Files, nothing happens; I have to restart Server in order for Shoko to detect the removed files. Answers: username_1: What version? username_1: Should be fixed in latest username_0: Ah, I was running 3.8.1 so I'll try again with latest when I'm done importing. username_1: @username_0 any news? username_0: Haven't tried on daily yet, will report back when I do. username_1: @username_0 can you check this so that we can close it? username_0: No longer an issue. The only thing I would recommend we change is the fact that Shoko leaves an empty group behind if you delete all episodes from a series. Status: Issue closed username_1: That can be arranged
bfricka/less-preview
69295951
Title: compile on mobile Question: username_0: It doesn't compile on mobile (looks like the event is not triggered). I think it would be great if you added a mobile version without the pretty textarea, just a simple textarea plus a button to compile. Answers: username_0: Tested on Android 5: Chrome, Firefox, native. LG G3 username_1: did you fix this issue @username_0 ??
rmrmg/tree-of-life
718024537
Title: missing nxCycleSearch.py Question: username_0: Dear authors! I can't find the file you mentioned, "nxCycleSearch.py". How can I load the data as a networkx graph? Thank you for your help in advance! Answers: username_1: Thanks for the feedback. The file is now in the repo. Anyway, if you want to search for cycles (or do some other time-consuming graph operation) I strongly suggest switching to graph-tool. NetworkX is terribly slow.
XX-net/XX-Net
245669290
Title: Release version 3.3.6 Question: username_0: Fixes: * When version 3.3.5 was released, the version.txt file was not updated, which caused an update loop. No functional changes. Answers: username_1: Has vimeo.com blocked the GAE IPs? Please, how do I access it... with xx-net... username_0: GAE has its limitations; use x-tunnel for all of those cases. username_2: First!..... The boss has appeared. username_3: I am using version 3.3.4. The earlier IP blocking has eased a lot and everything is back to normal; only my YouTube speed stays steady at 2000-3000 and won't go higher, I don't know why. It is very stable, the speed just won't go up, but that is only a minor issue. Also, this has been a sensitive period recently; the authors of several circumvention tools have shut down their projects, so please stay safe too! username_4: Fine, I'll update once more. username_5: Thanks for sharing, may good people live long in peace!!!!!!!!!!!!! username_6: Awesome! username_7: After upgrading to 3.3.6, YouTube still opens with garbled text. username_8: Same here: on Windows 10 with Edge it is not garbled, and on Mac with Chrome it is not garbled. I also noticed the garbled text after upgrading today. username_7: For me it is garbled on Win10 with Edge too, it drives me crazy. username_0: I reproduced the garbled text as well; I will pull 3.3.6 for now and look into the cause. username_0: The garbled-text problem comes from YouTube adopting br (Brotli) compression; see: https://www.chromestatus.com/feature/5420797577396224 The workaround is simple: use the Chrome browser for now; a complete solution will be investigated later. username_3: Just saw that over at SSR, breakwa11 has shut down the project; what a blow. Now only xx is still standing; times are getting harder and harder. username_9: A project being shut down does not mean the software stops working. username_10: I hope the audio-stutter problem can be fixed. username_11: Wasn't this project unmaintained? And there is another update! So happy. username_12: Why hasn't the 3.3.6 release been posted in the download section of the XX-Net project homepage? username_13: Could an apk be rebuilt based on xx-net 3.3.1? username_14: Supported browsers: http://caniuse.com/#feat=brotli
Facepunch/Facepunch.Steamworks
398030122
Title: LobbyList.Lobbies is null Question: username_0: After my Client is fully initialized, I try to call `Client.Instance.LobbyList.Refresh();` But then I receive this error message ``` NullReferenceException: Object reference not set to an instance of an object Facepunch.Steamworks.LobbyList+<>c__DisplayClass14_0.<OnLobbyDataUpdated>b__0 (Facepunch.Steamworks.LobbyList+Lobby x) (at .../Facepunch.Steamworks/Facepunch.Steamworks/Client/LobbyList.cs:131) ``` Is there a way to initialize Lobbies so it cannot be null? Status: Issue closed
intellij-rust/intellij-rust
335091593
Title: --all-targets not always desirable Question: username_0: When building a project for a specific no-std target (in my case for `thumbv6m-none-eabi`), the argument `--all-targets` causes `cargo check` to fail because there is no crate `test`. The same thing happens when I build from the command line. Please make it configurable or even just a tickbox or something! I've had to go back to an older version of the plugin before the `--all-targets` change was introduced and am stuck there. An example log is shown: ``` C:/Users/David/.cargo/bin/cargo.exe build --all --all-targets Compiling coffee-compass v0.1.0 (file:///C:/Users/David/Documents/code/coffee-compass) error[E0463]: can't find crate for `test` --> src\main.rs:1:1 | 1 | #![no_std] | ^ can't find crate error: aborting due to previous error ``` Answers: username_1: Another case where `--all-targets` is undesired: I have a stable Rust project, with a benchmark utilizing `#![feature(test)]`. `cargo check --all-targets` fails building the benchmark (because it requires nightly) and reports an error, even though I don't want to build the benchmark normally (when I do I can issue `cargo +nightly bench` manually).
lbryio/lbry-desktop
377965383
Title: Fix Travis runs on forked repos/PRs Question: username_0: ## The Issue Currently, Travis fails to run on PRs that are from forked repositories due to lack of code signing access. We want to disable code signing for these builds and allow them to complete.
agda/agda-stdlib
1101853648
Title: Introduce `NonZero` to `Data.Fin` as well Question: username_0: Just noticed we have similar constructor issues in various `Fin` operations and proofs. Answers: username_1: Oh? I found only `pred<` in `Data.Fin.Properties`, and that seems used nowhere else... username_1: Interesting corridor discussion. My eyes are somewhat more open about this now. username_0: In summary we want to avoid arguments of the form `Fin (suc n)` as that makes it very difficult to use the functions when the `Fin` index is known to be non-zero but is not of that form, e.g. `Fin (n !)`... username_1: Yes, OK. In the first instance, though, moving to `.{{_ : NonZero n}}` arguments on indices does/would generate a lot of explicit use/mention of `pred`... maybe that is simply a pain for the library (and its developers) but better for downstream clients? is that an argument for a (more?) disciplined separation of public/private interfaces to modules... with the private interfaces being more dependently-typed/constructor-form index-aware, with the public interface being more destructor-form index focused? In miniature, I guess that was part of the `#1709` discussion, with `Data.Nat.Properties.Core.≤-pred` being precisely such a form... Sorry for not having had a more nuanced sense of these things until our discussion yesterday. Hmmm. username_0: Yes it would mean more uses of `pred`, and I agree with your analysis about the cost/gain benefits for devs vs users. One thing to say is that it also makes the proofs slightly harder to read for the users... but I think the gain is usability outweighs that. As for the more general point, yes that sounds reasonable. Although unfortunately we have no real (enforced) notion of private/public interfaces... username_1: Worker/wrapper idiom: I'd suggest leaving in the old 'worker' proofs then, and only add new ones expressed in terms of`pred`... not least because these latter will most likely be easy 'wrapper's around the former... and then any client module importing these can decide which they need/prefer to use?
octomation/makefiles
791775535
Title: find way to update cache of pkg.go.dev Question: username_0: when I published [email protected] the cache of https://pkg.go.dev/github.com/username_0/breaker was a stale long time Answers: username_0: e.g. https://dev.to/koddr/how-to-update-version-s-cache-of-your-package-in-pkg-go-dev-39ij username_0: maybe I can open https://pkg.go.dev/github.com/username_0/[email protected] Status: Issue closed
summernote/summernote
575603989
Title: How to determine current fullscreen mode / state of summernote ? Question: username_0: #### Description of your Issue or Request: There is no documented way to determine summernote's current state, specifically fullscreen mode on / off. Is there a way to determine whether summernote is in fullscreen mode or not, other than checking for the `fullscreen` class in the DOM element? Answers: username_1: Not as such, as far as I know; I would have to look at the source to see if there is a flag being set other than adding the class to the DOM. username_0: For the fullscreen there is a function, not documented though: `isFullscreen()` Check: https://github.com/summernote/summernote/blob/83fb867d36501296591ddabc9400a4eb1297effa/src/js/base/module/Fullscreen.js#L51 So I managed to use it like this: `$('#summernote').summernote('fullscreen.isFullscreen');` username_1: lol, I never noticed that before. Wonder why that's not in the documentation. username_1: I've just added this to the documentation, thank you for bringing this to our attention. Status: Issue closed
Practice-Hacker/main-app
698562784
Title: User Story #19 Question: username_0: Feature: User - Delete Practice Tips To Pieces Story: As an authenticated user, When I visit a piece’s show page, I should be able to delete any tip(s) that I previously contributed and when the page refreshes, I should no longer see the tip(s) on the page. Status: Issue closed
BetaMasaheft/Documentation
274806262
Title: Standardize spelling Question: username_0: The spelling of some Gǝʿǝz words should be consistently given, since there exist competing forms, especially due to the presence of a shwa after ʾ or ʿ. I have myself sometimes altered an existing spelling form, but not regularly. In the small list below, the former is etymologically better, because it relies on the Semitic pattern, whereas the latter has been used by EAe. Yāʿqob or Yāʿǝqob Tǝʾzāz or Tǝʾǝzāz ʾAʾlāf or ʾAʾǝlāf tǝsbǝʾt or tǝsbǝʾǝt, etc. One more instance: Taʾammǝra (as used in the Encyclopaedia Aethiopica) or Taʾamra (as used in Ethio-SPaRe) Answers: username_1: If these are variants in title transcriptions, both should be given in the data, instead of replacing them. username_2: It would probably make sense for us to adopt the same solution as Traces in these cases? I believe that they generally follow Leslau, which would give us: tǝʾǝzāz ʾaʾǝlāf tǝsbǝʾt taʾammǝr We also had inconsistencies with ba-ʾənta/baʾənta username_0: It seems, according to Traces (I only had an informal conversation with Susanne), that ba-ʾənta/baʾənta is considered one single lexeme. So, contrary to other prepositions like ʾəm-wəsta and so on, it should not be hyphenated. username_1: @username_0 will update the guidelines after checking the Traces decisions with @SusanneHummel; the guidelines will include examples, as in the current Traces documentation, for use in BM, in the lexicon/parser and in the Dillmann app username_2: I think that this can be closed. Here are the relevant guideline pages for reference: http://betamasaheft.eu/Guidelines/?id=transliteration-principles http://betamasaheft.eu/Guidelines/?id=dubious-spelling As always, new issues should be opened for any particular new uncertain cases Status: Issue closed
pszklarska/FlutterPubVersionChecker
526425584
Title: False Positives Question: username_0: Android Studio 3.5.2: ![image](https://user-images.githubusercontent.com/5154664/69317958-34ddda80-0c90-11ea-8388-6287fb2858c8.png) Seems to report many false positives. Like ![image](https://user-images.githubusercontent.com/5154664/69317743-baad5600-0c8f-11ea-8996-c4d249f41a42.png) On pub.dev the latest version for a long time is 0.1.1, see here https://pub.dev/packages/outline_material_icons/versions More examples: ![image](https://user-images.githubusercontent.com/5154664/69317929-21cb0a80-0c90-11ea-8e9e-e976b0663174.png) ![image](https://user-images.githubusercontent.com/5154664/69318062-79697600-0c90-11ea-82f4-5e2cd7346aaf.png) ![image](https://user-images.githubusercontent.com/5154664/69318094-8be3af80-0c90-11ea-8208-ce1d46544571.png) ![image](https://user-images.githubusercontent.com/5154664/69318109-969e4480-0c90-11ea-85b3-3d828fd33038.png) ![image](https://user-images.githubusercontent.com/5154664/69318148-a74eba80-0c90-11ea-84b9-007e79c5cb5c.png) Flutter v1.9.1+hotfix.6 ``` flutter doctor -v [✓] Flutter (Channel stable, v1.9.1+hotfix.6, on Mac OS X 10.14.6 18G1012, locale en-AU) • Flutter version 1.9.1+hotfix.6 at /Users/gamma/Documents/flutter • Framework revision 68587a0916 (10 weeks ago), 2019-09-13 19:46:58 -0700 • Engine revision b863200c37 • Dart version 2.5.0 [✓] Android toolchain - develop for Android devices (Android SDK version 29.0.2) • Android SDK at /Users/gamma/Library/Android/sdk • Android NDK location not configured (optional; useful for native profiling support) • Platform android-29, build-tools 29.0.2 • ANDROID_HOME = /Users/gamma/Library/Android/sdk • Java binary at: /Applications/Android Studio 3.5 Preview.app/Contents/jre/jdk/Contents/Home/bin/java • Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405) • All Android licenses accepted. [✓] Xcode - develop for iOS and macOS (Xcode 11.2.1) • Xcode at /Applications/Xcode.app/Contents/Developer • Xcode 11.2.1, Build version 11B500 • CocoaPods version 1.8.4 [✓] Android Studio (version 3.5) • Android Studio at /Applications/Android Studio 3.5 Preview.app/Contents • Flutter plugin version 41.1.2 • Dart plugin version 191.8593 • Java version OpenJDK Runtime Environment (build 1.8.0_202-release-1483-b49-5587405) [✓] VS Code (version 1.40.1) • VS Code at /Applications/Visual Studio Code.app/Contents • Flutter extension version 3.6.0 [✓] Connected device (1 available) • Nexus 6P • CVH7N15A17000241 • android-arm64 • Android 8.1.0 (API 27) • No issues found! ``` Answers: username_1: Hi @username_0! Thanks for your feedback 👍 The issue you reported is probably caused by the fact that the plugin checks for the dependencies only at the start of the IDE, to avoid multiple API calls. Can you check if, after restarting Android Studio, you still see wrong versions? username_0: Hi @username_1 I've checked and yes, it warns that every version is incorrect, i.e. ... there is a newer version X, when the version mentioned is already X. I know the pub.dev API changed a while ago ... possibly the cause? Status: Issue closed username_1: I'm closing it for now, @username_0 feel free to reopen if you still see that issue
spyzhov/ajson
613604190
Title: Results do not match other implementations Question: username_0: The following queries provide results that do not match those of other implementations of JSONPath (compare https://username_0.github.io/json-path-comparison/): - [ ] `$[7:10]` Input: ``` ["first", "second", "third"] ``` Expected output: ``` [] ``` Actual output: ``` null ``` - [ ] `$[1:3]` Input: ``` {":": 42, "more": "string", "a": 1, "b": 2, "c": 3} ``` Expected output: ``` [] ``` Actual output: ``` null ``` - [ ] `$[2:1]` Input: ``` ["first", "second", "third", "forth"] ``` Expected output: ``` [] ``` Actual output: ``` null ``` - [ ] `$[0:0]` Input: ``` ["first", "second"] ``` Expected output: ``` [] ``` Actual output: ``` null ``` - [ ] `$["key"]` [Truncated] ``` null ``` - [ ] `$[?(@.key=="some.value")]` Input: ``` [{"key": "some"}, {"key": "value"}, {"key": "some.value"}] ``` Expected output: ``` [{"key": "some.value"}] ``` Error: ``` wrong request: wrong request: ?(@.key=="some.value") ``` For reference, the output was generated by the program in https://github.com/username_0/json-path-comparison/tree/master/implementations/Golang_github.com-username_1-ajson. Answers: username_0: I've filed those on quite a few projects. Let me know if this is helpful. Feedback is welcome. username_1: Thanks! It's really very helpful. I'll take care of this, asap. A couple of weeks ago I implement big pack of tests for json decode, if you need it. https://github.com/username_1/ajson/blob/master/decode_test.go#L787-L2875 username_0: I might have a look at the test suite, it does seem though that a lot of them are also checking for valid JSON. That's taken for granted and is not part of checks. I have taken the time to go through all the other implementations' issue tracker, to find issues filed by users. I hope this already gives us a good overview of frequent/likely errors. username_1: Sorry, but can you please fix test suite and run again? https://github.com/username_0/json-path-comparison/blob/Golang_github.com-username_1-ajson/implementations/Golang_github.com-username_1-ajson/main.go#L27 Instead of `var results []interface{}` write `results := make([]interface{}, 0)` Or you can simplify code, like: `results, err := ArrayNode("", nodes).Unpack()` username_0: Ah, thanks for looking into this. I've updated the findings above. Using the ArrayNode way directly gave me a panic, so I went with the make option. Status: Issue closed username_1: @username_0 , please check it once again. Version: [v0.2.2](https://github.com/username_1/ajson/releases/tag/v0.2.2) username_0: Those look good, but it seems there have been regressions? 
See https://github.com/username_0/json-path-comparison/blob/master/bug_reports/Golang_github.com-username_1-ajson.md username_1: The following queries provide results that do not match those of other implementations of JSONPath (compare https://username_0.github.io/json-path-comparison/): - [ ] `$["key"]` Input: ``` {"key": "value"} ``` Expected output: ``` ["value"] ``` Actual output: ``` [] ``` - [ ] `$[?(@['key']==42)]` Input: ``` [{"key": 0}, {"key": 42}, {"key": -1}, {"key": 41}, {"key": 43}, {"key": 42.0001}, {"key": 41.9999}, {"key": 100}, {"some": "value"}] ``` Expected output: ``` [{"key": 42}] ``` Error: ``` wrong symbol '=' at 12 ``` - [ ] `$[?(@.key=="some.value")]` Input: ``` [{"key": "some"}, {"key": "value"}, {"key": "some.value"}] ``` Expected output: ``` [{"key": "some.value"}] ``` Error: ``` wrong request: wrong request: ?(@.key=="some.value") ``` For reference, the output was generated by the program in https://github.com/username_0/json-path-comparison/tree/master/implementations/Golang_github.com-username_1-ajson. username_1: Ah, thanks a lot! Yes, I tried to fix some issues, but it seems that made things worse. Will fix it soon. username_0: If you think it helps you can use https://github.com/username_0/json-path-comparison/blob/master/regression_suite/regression_suite.yaml as a regression suite. This will be updated when new queries are added or a consensus emerges or changes. Status: Issue closed username_1: @username_0 fixed at [v0.2.3](https://github.com/username_1/ajson/releases/tag/v0.2.3). Thanks again!
dmsl/anyplace
697527038
Title: utils Status: Issue closed Question: username_0: Information:java: User-specified option "-g" is ignored for "anyplace". This compilation parameter is set automatically according to project settings. Information:java: Errors occurred while compiling module 'anyplace' Information:javac 11.0.6 was used to compile java sources Information:2020-09-10 15:10-Build completed with 16 errors and 1 warning in 2 s 955 ms Warning: scala: skipping Scala files without a Scala SDK in module(s) anyplace C:\Users\17812\Desktop\Indoor_position\anyplace-master\server\app\location\RadioMap.java Error: (39, 13) java: package utils does not exist Error: (78, 13) java: Symbol not found Symbol: variable LPLogger Location: class location.RadioMap C:\Users\17812\Desktop\Indoor_position\anyplace-master\server\app\oauth\provider\v2\granttype\AbstractGrantHandler.java Error: (38, 16) java: package accounts does not exist Error: (39, 32) java: Package oauth.provider.v2.models does not exist Error: (40, 32) java: Package oauth.provider.v2.models does not exist Error: (53, 49) java: Symbol not found Symbol: Class IAccountService Location: Class oauth.provider.v2.granttype.AbstractGrantHandler Error: (53, 81) java: Symbol not found Symbol: class AuthInfo Location: Class oauth.provider.v2.granttype.AbstractGrantHandler Error: (53, 15) java: Symbol not found Symbol: Class AccessTokenModel Location: Class oauth.provider.v2.granttype.AbstractGrantHandler C:\Users\17812\Desktop\Indoor_position\anyplace-master\server\app\oauth\provider\v2\granttype\IGrantHandler.java Error: (38, 16) java: package accounts does not exist Error: (39, 32) java: Package oauth.provider.v2.models does not exist Error: (40, 32) java: Package oauth.provider.v2.models does not exist Error: (41, 16) java: Package play.mvc does not exist Error: (57, 33) java: Symbol not found Symbol: Class OAuth2Request Location: Interface oauth.provider.v2.granttype.IGrantHandler Error: (57, 56) java: Symbol not found Symbol: Class IAccountService Location: Interface oauth.provider.v2.granttype.IGrantHandler Error: (57, 88) java: Symbol not found Symbol: Class AccountModel Location: Interface oauth.provider.v2.granttype.IGrantHandler Error: (57, 12) java: Symbol not found Symbol: class Result Location: Interface oauth.provider.v2.granttype.IGrantHandler Status: Issue closed
youichiro/rails-multiple-db-sandbox
732789060
Title: Images Question: username_0: index <img width="1277" alt="Screenshot 2020-10-30 9 18 01" src="https://user-images.githubusercontent.com/20487308/97646358-c3f39500-1a92-11eb-8d5e-da37b67b457e.png"> show <img width="875" alt="Screenshot 2020-10-30 9 14 01" src="https://user-images.githubusercontent.com/20487308/97646361-c655ef00-1a92-11eb-8dd0-a128b1c9c68c.png"> create <img width="1278" alt="Screenshot 2020-10-30 9 17 17" src="https://user-images.githubusercontent.com/20487308/97646363-c81fb280-1a92-11eb-89cd-beb7cef25cd8.png"> update <img width="1073" alt="Screenshot 2020-10-30 9 21 22" src="https://user-images.githubusercontent.com/20487308/97646368-c9e97600-1a92-11eb-879f-a10ffd6793f2.png"> destroy <img width="969" alt="Screenshot 2020-10-30 9 23 05" src="https://user-images.githubusercontent.com/20487308/97646373-cc4bd000-1a92-11eb-9cde-2576a183cb55.png">
ioBroker/ioBroker.zigbee
555708817
Title: Finding no new device Question: username_0: After updating to 1.0.1 I could not find any new device. I have done a soft reset of the stick and reloaded the adapter. I have tested: Ikea Motion Sensor E1745, Aqara Cube, Aqara Motion Sensor, Hue Remote Dimmer and Aqara Opple. Some devices are completely new for the stick. Some devices were paired in the past and were deleted. Does somebody have the same problem and could help? Answers: username_1: What firmware stack are you using? Original 1.2 or 3.x? For 1.2: are you sure you replugged the stick before you searched for devices? Pairing only works within the first minutes! username_0: Coordinator firmware version: {"type":"zStack12","meta":{"transportrev":2,"product":0,"majorrel":2,"minorrel":6,"maintrel":3,"revision":20190608}} Which is that? username_0: Update: After some resets I get the message "Joining already permitted". After that I started the next pairing mode and now I can find the IKEA motion sensor. But it lost the connection after I brought it to the bath (long way). But some repeaters (Osram Plugs) are close. username_0: The next device was also not found... I don't know what's happening... username_0: Today I tried my cc2531 stick flashed with the repeater SW. It was not found. After deleting one device, I found the new motion sensor. At the moment I have 40 devices. Is there a limit? username_2: 40 devices.. it's a lot.. I recommend you switch the cc2531 to a CC2538+CC2592 PA or cc26x2r1; the link quality is better there Status: Issue closed
simatec/ioBroker.backitup
610146215
Title: deCONZ Backup Integration Question: username_0: I would appreciate another integrated backup process to back up the deCONZ (Phoscon ConBee/RaspBee) project. It would be a bonus to have automated restore functionality, if the default restore functionality of the Phoscon app itself would work. Answers: username_1: A hint for the developers: this should be possible to integrate using an undocumented API call in the deCONZ software. I use it like this via a cron job: curl -k -S -v -H "Content-Type: application/json" http://####IP####:8080/ -X "POST /api/####APIKEY####/config/export" This results in the creation of a backup in /home/pi/.local/share/dresden-elektronik/deCONZ/deCONZ.tar.gz username_2: I am already familiar with this method, but unfortunately it is not a solution that works for multiple systems. However, Backitup works across platforms. username_3: Sorry, but I couldn't quite understand this answer. Since I would also appreciate a solution to automatically create and secure a backup of the Phoscon gateway via ioBroker, it would be nice if you could clarify your answer a little :-) username_2: deCONZ does not currently support any sensible backup solution. We have already opened an issue at deconz
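A rough Python equivalent of the curl call above, in case a script is easier to wire into a backup job; the endpoint path is taken from that curl command, while the address, port, and key values are placeholders to substitute, not known defaults.

```python
# Rough Python equivalent of the curl call above; the endpoint path
# comes from that command, the IP and key below are placeholders.
import requests

DECONZ_IP = "192.168.1.2"   # your gateway address (placeholder)
API_KEY = "YOUR_API_KEY"    # your deCONZ/Phoscon API key (placeholder)

resp = requests.post(
    f"http://{DECONZ_IP}:8080/api/{API_KEY}/config/export",
    headers={"Content-Type": "application/json"},
    timeout=30,
)
resp.raise_for_status()
# deCONZ then writes deCONZ.tar.gz under
# ~/.local/share/dresden-elektronik/deCONZ/ on the gateway host.
```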
RDFLib/rdflib
335155063
Title: . in bnode does not parse Question: username_0: "The character . may appear anywhere except the first or last character." (https://www.w3.org/TR/n-triples/#BNodes) and (https://www.w3.org/TR/n-quads/#BNodes) I think we need `r'_:([A-Za-z0-9][A-Za-z0-9\.]*[A-Za-z0-9])'` in https://github.com/RDFLib/rdflib/blob/9da21dbcec8bea433827dc042536173db28c21d7/rdflib/plugins/parsers/ntriples.py#L40 Status: Issue closed
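To sanity-check a pattern like this outside rdflib, here is a small standalone sketch. Two assumptions on my part: the tail is made optional so that single-character labels such as `_:a` (which the grammar also allows) still match, and like the proposal it covers only ASCII alphanumerics, not the full PN_CHARS range.

```python
# Standalone check of the proposed bnode pattern; the optional tail is
# an assumption so single-character labels still match, and this stays
# ASCII-only like the proposal (the full grammar allows more chars).
import re

bnode = re.compile(r'_:([A-Za-z0-9](?:[A-Za-z0-9.]*[A-Za-z0-9])?)')

for label in ['_:a', '_:a.b', '_:ab.', '_:.ab']:
    m = bnode.match(label)
    print(label, '->', m.group(1) if m else 'no match')
# _:a   -> a
# _:a.b -> a.b
# _:ab. -> ab        (trailing dot excluded, as the spec requires)
# _:.ab -> no match  (leading dot rejected)
```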
tilezen/tapalcatl
197043725
Title: Proxying responsibilities Question: username_0: At the moment, tapalcatl is responsible for reverse proxying to tileserver when s3 doesn't have the tile. Can we move this responsibility back to fastly? This should both simplify tapalcatl and leave the burden of keeping more connections open at fastly. Status: Issue closed
simonw/datasette.io
995545132
Title: Asset file on /desktop should be Datasette-mac.app.zip Question: username_0: This is required by the auto-update mechanism, see https://github.com/username_0/datasette-app/issues/106#issuecomment-918766485 Answers: username_0: No code changes necessary, just needs a re-deploy after renaming the asset. username_0: Re-deploying didn't work because the code isn't re-fetching the releases for some reason. username_0: Relevant code: https://github.com/username_0/datasette.io/blob/126a53b09a2cb0296756c09cae5ec9412dfbae1d/build_directory.py#L140-L156 username_0: I'm going to add a `--always-fetch-releases` option and set it to `username_0/datasette-app`. username_0: https://datasette.io/desktop now links to the asset with the new name. Status: Issue closed
PeaceGeeksSociety/servicesadvisor-3.0
204452456
Title: Remove paging for Taxonomy list Question: username_0: see https://github.com/PeaceGeeksSociety/ServicesAdvisor-2.0/issues/45 Answers: username_1: It's not possible to remove paging entirely, but I set it to an absurd 1,000 terms per page, which should effectively remove the pager for us. username_0: Verified on Test Status: Issue closed
bibstha/termidoro
222490789
Title: Allow a way to add the title of focus Question: username_0: When I start a 25-minute timer, it should optionally allow me to add a description of what the focus time is dedicated to. That way, when a distraction happens, you can get back on track and realize: oh, I was doing this.
DerrickWood/kraken2
380513101
Title: How to make taxonomy for custom database Question: username_0: Hello. I'm running kraken2 to classify some microbiome data, and I'm using a custom database (bacteria, viruses, human, built with kraken2-build). Then I found bacteria reported only as incomplete shotgun sequences when checking the unclassified reads. Therefore, I'd like to add some sequences whose prefixes are "NZ_" to the custom database. But I could not prepare the taxonomy for the custom database after adding the new data. I would appreciate it if you could tell me how to make the taxonomy for the custom database. Answers: username_1: I believe if you have downloaded the full NCBI taxonomy already (for example with a standard Kraken install) then it suffices to make a link from any new database to that taxonomy. Example: ``` ralf@ark:~/krakendir> ls -l taxonomy total 28161448 drwxr-xr-x 2 ralf users 4096 Aug 22 10:29 ./ drwxr-xr-x 13 ralf users 266240 Oct 17 09:15 ../ -rw-r--r-- 1 ralf users 0 Aug 18 08:17 accmap.dlflag ... etc. ralf@ark:~/krakendir> mkdir new_db ralf@ark:~/krakendir> cd new_db/ ralf@ark:~/krakendir/new_db> ln -s ../taxonomy . ralf@ark:~/krakendir/new_db> ls -l total 268 drwxr-xr-x 2 ralf users 4096 Nov 14 07:39 ./ drwxr-xr-x 14 ralf users 266240 Nov 14 07:38 ../ lrwxrwxrwx 1 ralf users 11 Nov 14 07:39 taxonomy -> ../taxonomy/ ``` Now you can proceed with `kraken2-build` and add your fasta files to the database named `new_db`. username_0: Thank you for your reply. When I tried it, the output was as follows, and an "unmapped.txt" file was created. $ kraken2-build --build --db ./database/new_db/ Creating sequence ID to taxonomy ID map (step 1)... Found 12895322/13365584 targets, searched through 682186361 accession IDs, search complete. lookup_accession_numbers: 470262/13365584 accession numbers remain unmapped, see unmapped.txt in DB directory Sequence ID to taxonomy ID map complete. [13m11.526s] …... Are the species described in "unmapped.txt" classified by kraken2? username_1: Can you please give examples of what you have in `unmapped.txt`? Alternatively, is it possible that you have accessions that were submitted to the NCBI **after** your taxonomy was downloaded, i.e., was your taxonomy download already some time ago? For a different reason I personally encountered this myself when I tried to build a database with UniProt accessions, which are not supported by the NCBI taxonomy. username_0: The first five lines of `unmapped.txt` are as follows. ``` NZ_UKZL01000018 NZ_UKZL01000017 NZ_UKZL01000016 NZ_UKZL01000012 NZ_UKZL01000011 ``` The new database consists of 13,365,584 sequences. There are 470,262 lines in `unmapped.txt`. I downloaded the taxonomy using `kraken-build --download-taxonomy`. username_1: It seems these sequences are a sort of quick adoption by NCBI from Genbank sequences. In the GFF they are associated with _K. pneumoniae_ but as far as I understand it there is little use in adding that sequence to Kraken if you have only a species classification (because that species is already covered enough). Rather, you want the strain level taxon (EuSCAPE_ES050) classified. However, the current NCBI taxonomy does not have that strain, I checked on the website. One thing you can do at the moment is take that list in `unmapped.txt` and remove all these accessions from the bulk you are trying to build a database from. username_0: I will try to create a database only for the mapped sequences for the present. Thank you for your kind correspondence.
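A small sketch of the last suggestion in the thread, dropping the accessions listed in `unmapped.txt` from a library FASTA before rebuilding; the file names and the version-suffix stripping are assumptions about your layout, so adjust them.

```python
# Sketch of the suggestion above: filter the accessions listed in
# unmapped.txt out of the library FASTA before rebuilding the database.
# File names and the version-suffix stripping are assumptions; this
# also assumes well-formed FASTA headers like ">NZ_UKZL01000018.1 desc".
with open('unmapped.txt') as fh:
    unmapped = {line.strip() for line in fh}

keep = True
with open('library.fna') as src, open('library.filtered.fna', 'w') as dst:
    for line in src:
        if line.startswith('>'):
            # First header token, version suffix removed:
            # "NZ_UKZL01000018.1" -> "NZ_UKZL01000018"
            accession = line[1:].split()[0].split('.')[0]
            keep = accession not in unmapped
        if keep:
            dst.write(line)
```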
catchorg/Catch2
381410019
Title: ASSERT macro from afx.h not firing in Catch2 2.4.2; worked OK in Catch2 2.2.3 Question: username_0: ASSERT(false); does not fire when running Catch2 unit tests. This had worked OK in 2.2.3. assert(false) is working OK. * Catch version: 2.4.2 * Operating System: Windows 10 * Compiler+version: Visual Studio 2015
corona-warn-app/cwa-documentation
798600984
Title: Question on cwa statistics numbers as published by RKI Question: username_0: ## Your Question RKI publishes weekly statistics on cwa [here](https://www.rki.de/DE/Content/InfAZ/N/Neuartiges_Coronavirus/WarnApp/Archiv_Kennzahlen/WarnApp_KennzahlenTab.html) I am a bit confused about these published numbers. The meaning is a bit unclear and confusing. **Is this the correct place to ask such questions?** If not, where could I get more information? My current assumption is that these numbers are captured "somewhere" on a cwa server. 1) Are the numbers of users that have shared their keys from **Germany** (= excluding keys from the European federation gateway), or from **Europe** (= including these keys from the gateway)? 2) In the last section of the data sheet, two accumulated numbers are given for how many users have shared their keys. These numbers are confusing and seem inconsistent. Or am I missing something? a) Data sheet 2020-10-12, bottom left: "In the period from 1 September to 11 October 2020 [...] 7922 users chose to share their positive test result with the other users." b) same data sheet, bottom right: "Since the launch of the Corona-Warn-App, a total of 10504 users have shared their positive test result." c) Data sheet 2021-01-29, bottom left: "In the period from 01 September 2020 to 27 January 2021 [...] 222,486 users chose to share their positive test result with the other users." d) same data sheet, bottom right: "Since the launch of the Corona-Warn-App, a total of 227,985 users have shared their positive test result" So when I calculate (b) - (a), I get the number of users that shared their keys before 01 Sept 2020, since the start of cwa - right? 10504 - 7922 = 2582 users And now I calculate (d) - (c) and expect the same number - right? 227985 - 222486 = 5499 users What am I missing here? Answers: username_1: @username_0 ok, interesting. We will raise this internally. Thanks.
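For what it's worth, both subtractions in the question are computed correctly, so any inconsistency lies in the published figures rather than in the arithmetic; a tiny sketch just restating the four quoted numbers:

```python
# The question's arithmetic, restated; both results match those above,
# so the apparent inconsistency is in the published figures themselves.
since_launch_oct = 10504   # (b) shared since launch, sheet 2020-10-12
since_sept_oct = 7922      # (a) shared 01 Sep - 11 Oct 2020
since_launch_jan = 227985  # (d) shared since launch, sheet 2021-01-29
since_sept_jan = 222486    # (c) shared 01 Sep 2020 - 27 Jan 2021

print(since_launch_oct - since_sept_oct)  # 2582 users before 01 Sep 2020
print(since_launch_jan - since_sept_jan)  # 5499 users before 01 Sep 2020
```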
kubo25/Diabolik-Lovers-STCM2L-Editor
860537823
Title: How do I turn this into an executable and run the program? Will there be an executable release? Question: username_0: Hello, I'm trying to use this editor! I've cloned the repo with git cmd but I'm unsure how to run it. I can't find a tutorial on how to compile/build repos (they all lead to a dead end) and I'm pretty new to all of this; would you help me with it? I'm using Windows 10 by the way. Answers: username_0: I forgot to say, my discord is username_0#2528, you can DM me on there if you'd like username_1: Hi, I've added a release for you. It should just work, but I can't test it because I don't have any of the files anymore. If you want to take a look at the code, maybe make some changes and compile, you only need to download Visual Studio and open `Diabolik Lovers STCM2L Editor.sln` in it. This will open the project and in theory you should just be able to press start and it'll work. Status: Issue closed
IFRCGo/go-frontend
667090942
Title: Creating Mapbox tileset from go database Question: username_0: With the work we have been doing in https://github.com/IFRCGo/go-api/pull/815 we will soon have all geometries for countries and districts in the Go database. The goal is to have go database become the canonical source of geo data for go-api and front-end. This means that we can allow admins to edit names, fix centroid, adjust bbox, overwrite geometries etc from within the go-api Django Admin. To make sure these changes are reflected on the front-end maps, we need a workflow to generate Mapbox tilesets from the database and push them to Mapbox so it updates all maps. What we should do: * Write a management command that creates a mapbox vector tileset using [Tippecanoe](https://github.com/mapbox/tippecanoe) * Upload this to the an existing or versioned map id * Update style as necessary. We can only do this once all geometries are imported and inconsistencies like #1456, #1473 and #1464 are resolved first in the shapefile, then imported to the database. In the meantime, we will work on the pipeline while the data is all getting ready perhaps by mid-late August. cc @username_1 Answers: username_1: Moving to 4.4. username_0: We decided to use the [Mapbox Tiling Service](https://docs.mapbox.com/mapbox-tiling-service/) for this purpose. I've built out the workflow in this PR https://github.com/IFRCGo/go-api/pull/933. Country tileset is processed as expected, but districts are failing. Unfortunately there's no error or logs from the Mapbox side that's visible to us using the [tilesets-cli](https://github.com/mapbox/tilesets-cli). When I run the status command, this is what comes back: ``` root@b36819f481d0:~/go-api# tilesets status go-ifrc.go-districts {"id": "go-ifrc.go-districts", "latest_job": "cki4j413u001207ma8cmo0dy5", "status": "failed"} ``` This is after many hours (> 10 hours) I ran tippecanoe manually on my computer with the same dataset and was able to generate an mbtiles file successfully. @username_2 @username_1 I plan to email Mikel to see if he can help us get some more information about this. username_0: We resolved the MTS error by reducing the coordinate precision and setting a maxzoom of 10 based on Mapbox's suggestion. MTS does fail occasionally with little helpful errors but the script should work and when run as a management command will update the tilesets. I think all items in this ticket are done. username_0: Tileset generation code is now in staging. Once this is merged to production, we will import and update all data changes in the database and then run the management command to update the tileset. username_2: Hey @username_0 - can we change the zoom levels for the tilesets, especially districts as you can't see them until zoomed too far in. Not certain what best here. The disputed boundaries uploaded direct to Mapbox are z0-8 (and cannot be changed unless we use API). Tempted to make all layers the same so they continue to line up (more or less) as you zoom closer? Could be overkill though. username_0: @username_2 yes! I highly recommend to make sure we pick a zoom level that's needed and not go fully overboard. It introduces more friction points and higher chances of MTS failure and a Mapbox bill. Can you tell me what zoom levels are needed? username_2: Proposing the zoom level and naming for the MTS layers: - GO_Countries_v1 - z0-8 - GO_Districts_v1 - z3-8 - GO_Country_Centroids_v1 - z0-4 - GO_District_Centroids_v1 - z3-6 username_0: @username_2 This is now the zoom levels for the tilesets. 
They cover a slightly bigger range than listed above:
Countries minzoom 0, maxzoom 10
Districts minzoom 3, maxzoom 10
Countries centroids minzoom 0, maxzoom 10
Districts centroids minzoom 3, maxzoom 10
https://github.com/IFRCGo/go-api/pull/1028
username_2: When building the larger districts tileset, the VM with 4GB RAM was insufficient and crashed. This is now a blocker to finishing the Mapbox build. @username_1 @username_0 - I hope this captures the issue, but perhaps we need a new ticket for the VM issue?
username_2: Started building the styles in Mapbox with the new tilesets. A few issues to note as discussed @username_1 @username_0:
- Nepal districts missing
- Morocco and Serbia missing (we updated the geometries)
- Add country ISO (and maybe name) to district attributes - currently used to focus on countries of interest on the emergency page
- Add disputed attribute to countries (we need it to style them)
username_0: @username_2 I can do 3, 4, 5 tomorrow. @username_1 let me know if we can coordinate with Zoltan on checking whether the Nepal and other geometries are actually missing from the database.
username_3: Nepal is in the country table:
```
...
api_country c where cd.country_id = c.id and c.record_type=1 and name like '%epal%'
 country_id | name  | iso
------------+-------+-----
        123 | Nepal | NP
```
(to check id)
username_0: Here are things to do to wrap this up:
1. Merge https://github.com/IFRCGo/go-api/pull/1103 and deploy
2. Update the Serbia and Morocco geometry projection *SHOULD BE TESTED ON STAGING FIRST*:
```
UPDATE api_countrygeoms SET geom=ST_Transform(ST_SetSRID(geom, 3857), 4326) WHERE country_id=271;
UPDATE api_countrygeoms SET geom=ST_Transform(ST_SetSRID(geom, 3857), 4326) WHERE country_id=119;
```
3. Update the Nepal geometries with the [new shapefile](https://drive.google.com/file/d/1LjNDbBuAJ3kDldwgbIb6CV0x5aEo1QE9/view?usp=sharing), by running `python manage.py import-admin1-data 20210610_NepalRC_Nepal-adm1_CODE-UPDATE.shp --update-geom --update-centroid --update-bbox` *SHOULD BE TESTED ON STAGING FIRST*
username_3: Did you mean "UPDATE api_countrygeoms..." in the 2nd point?
username_0: @username_3 indeed! I meant `UPDATE`. I just fixed the query above.
username_3: The two updates are run successfully on staging:
```
postgres=> UPDATE api_countrygeoms SET geom=ST_Transform(ST_SetSRID(geom, 3857), 4326) WHERE country_id=271;
UPDATE 1
postgres=> UPDATE api_countrygeoms SET geom=ST_Transform(ST_SetSRID(geom, 3857), 4326) WHERE country_id=119;
UPDATE 1
```
username_0: @username_3 excellent! can you paste the output of the following query in a txt file and share? `SELECT ST_AsGeoJSON(geom) as geom FROM api_countrygeoms WHERE country_id=271`
username_3: I've run it, but getting the file from the remote environment takes some effort...
username_3: Shall I run this?
```
python manage.py import-admin1-data 20210610_NepalRC_Nepal-adm1_CODE-UPDATE.shp --update-geom --update-centroid --update-bbox
```
username_3: @username_0 – So steps 1, 2, 3 are ready on staging.
username_0: @username_3 ⚡ fast! could you run a query to fetch all the Nepal districts from `api_districtgeoms` on staging and share it here so @username_2 can confirm it looks good.
username_3: [NepalDistricts.zip](https://github.com/IFRCGo/go-frontend/files/6630825/NepalDistricts.zip) This is the result of the below query: ``` select id, name from api_country where name like '%epal%'; id | name -----+------- 123 | Nepal (to check id) select d.name, geom from api_districtgeoms g join api_district d on(g.district_id=d.id and country_id=123); username_0: Looks like the `UPDATE` to fix projection didn't work as expected. I will do some more experiments and confirm when this is ready for production username_2: Very happy that this is now DONE! Thanks to @username_0 @username_1 and @vdeak!! This is a satisfying ticket to close. 😄 Status: Issue closed
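For anyone reconstructing this pipeline, here is a minimal sketch of the kind of management command discussed above, assuming a Django project that can already export the geometries to GeoJSON; the command name, file paths and tileset naming are hypothetical, while tippecanoe's `-o`, `-Z`, `-z` and `--force` flags are the real ones:

```python
# Hypothetical sketch, not the actual go-api code.
import subprocess
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Build a Mapbox tileset from exported GeoJSON using tippecanoe"

    def handle(self, *args, **options):
        geojson = "/tmp/go-countries.geojson"   # assumed to be exported beforehand
        mbtiles = "/tmp/go-countries.mbtiles"
        # Capping maxzoom at 10 and keeping coordinate precision modest were
        # the fixes that made MTS stop failing in the thread above.
        subprocess.run(
            ["tippecanoe", "-o", mbtiles, "-Z", "0", "-z", "10", "--force", geojson],
            check=True,
        )
        self.stdout.write("Tileset written to %s" % mbtiles)
```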
wearefine/fae
356742964
Title: Generator doesn't insert routes Question: username_0: Having the `Category` model already present in my app, I type `rails g fae:scaffold Category name:string`. Everything is generated correctly except `routes.rb`, where `resource :categories` is not added in `namespace :admin`, even though the output `insert config/routes.rb` indicates that it was done.<issue_closed> Status: Issue closed
velopert/learning-react
349387117
Title: Unnecessary code Question: username_0: On page 348, in post.js, regarding `import { handleActions, createAction } from 'redux-action';` — since createAction is unnecessary, I think it would be right to remove it. It seems like it would only cause confusion. Answers: username_1: createAction gets used later on, once redux-pender comes into play, so I will leave it as is. Status: Issue closed
expo/expo
792723064
Title: Expo- Lottie-React-native error/bug Question: username_0: TypeError: _reactNative.Platform.select(...) is not a function ![Capture](https://user-images.githubusercontent.com/64904115/105621739-db8faf00-5e30-11eb-94cd-3f3ce5505767.PNG) The same issue occurs on web and Android. Answers: username_1: Hey @username_0, please follow our issue template and fill out all the required information. Once you've done so, we can re-open this issue and investigate it properly. Status: Issue closed
katanoZ/project-cards-app
325563401
Title: Scenario tests (feature specs) Question: username_0: Simulate a user's typical sequence of operations in the app and test that everything works as expected
- [ ] Scenario where the user creates a project from the top page
- [ ] Scenario where the user edits a project from the project detail page
- [ ] Scenario where the user deletes a project from the project detail page
- [ ] Scenario where the user creates a column from the project detail page
- [ ] Scenario where the user edits a column from the project detail page
- [ ] Scenario where the user deletes a column from the project detail page
- [ ] Scenario where the user moves a column from the project detail page
- [ ] Scenario where the user creates a card from the project detail page
- [ ] Scenario where the user edits a card from the project detail page
- [ ] Scenario where the user deletes a card from the project detail page
- [ ] Scenario where the user moves a card from the project detail page<issue_closed> Status: Issue closed
naver/searchad-apidoc
539559312
Title: Estimate (Average position bid) inquiry Question: username_0: Hello. When calling POST /estimate/average-position-bid/id with the advertiser's access license and secret key, it returns bid, keyword, position and nccKeywordId normally, but when calling with the AE's access license and secret key, only position and nccKeywordId are returned and no bid value comes back. It used to return bid values normally even when called with the AE's permissions. Since when was this changed, and going forward, will this API only be usable when called with the advertiser's account? Answers: username_0: For reference, on the searchad.naver.com web interface an estimate is possible even with an AE account. It seems to be blocked only in the API. username_1: Hello, this is the Search Ad API team. If you let us know the TransactionID, we will check and get back to you. username_0: It is 'x-transaction-id': 'BF3HLOK0PHH79'. username_1: When issuing the AccessToken, you must enter the target advertiser's ID as the CustomerID. This is a case where the estimate data could not be generated because the registered keywords could not be looked up for the CustomerID you entered. username_0: I'm not sure what you mean by the X-Customer value being wrong. Isn't X-Customer the ID of whoever calls the API? I am calling the API with the AE's customerId (1171307) as the X-Customer value 1171307, signing for that id and setting X-Signature accordingly. I have confirmed that if I set X-Customer to the advertiser's customerId (957874), sign accordingly and call, it returns normally. That was my original question. When the AE logs into searchad.naver.com, it can fetch the advertiser's estimate values as in the screenshots below (bid values exist):
<img width="1269" alt="Screenshot 2019-12-19 4:44:47 PM" src="https://user-images.githubusercontent.com/1294162/71154493-f6cbda80-227e-11ea-88f0-786cdf219f17.png">
<img width="1265" alt="Screenshot 2019-12-19 4:44:59 PM" src="https://user-images.githubusercontent.com/1294162/71154507-00edd900-227f-11ea-9736-88d4f434bce3.png">
Through the API, when the AE calls, there is no bid value (there is one when the advertiser calls):
<img width="488" alt="Screenshot 2019-12-19 4:56:46 PM" src="https://user-images.githubusercontent.com/1294162/71155216-9d64ab00-2280-11ea-9829-3049f27d968d.png">
1. Is this situation (the internal API and external API behaving differently) the intended, normal situation?
2. If it is settled policy that an AE cannot fetch estimate values through the external API, I'd like to know whether that will remain the case going forward.
* Internal API: https://manage.searchad.naver.com/api/estimate/average-position-bid/id
* External API: https://api.naver.com/estimate/average-position-bid/id
username_1: The ID of the target advertiser you want to access must be entered in the `X-Customer` header. The policy has not changed. The CustomerID is not used when generating the Signature. Please try calling again with the target advertiser's ID in `X-Customer`. username_1: To elaborate a bit more: translating into API terms what you do on the web (logging in with the AE account and then switching to the managed advertiser account; I'm not sure exactly what AE stands for, but I understand it to mean an agency account): using the AE account's API key corresponds to the 'logged-in' account, and the customer id you enter in X-Customer corresponds to setting the managed account. In other words, the API key is the logged-in account and so never changes, while the managed account is something you switch between, which means X-Customer is meant to be used like a variable. username_0: Yes, I understand now. Thank you for the kind answer. Status: Issue closed
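To illustrate the explanation above (the API key plays the role of the logged-in account while `X-Customer` selects the managed advertiser), here is a hedged Python sketch; the header names appear in the thread, but the exact signing scheme and request payload are assumptions to be verified against the official docs:

```python
# Hedged sketch; the signing scheme and payload shape are assumptions.
import base64, hashlib, hmac, time
import requests

API_KEY = "AE-access-license"      # stays fixed: the "logged-in" account
SECRET_KEY = "AE-secret-key"       # placeholder
ADVERTISER_ID = "957874"           # varies: the managed advertiser

def sign(timestamp: str, method: str, path: str) -> str:
    # Assumed scheme: HMAC-SHA256 over "timestamp.method.path", base64-encoded.
    msg = f"{timestamp}.{method}.{path}"
    digest = hmac.new(SECRET_KEY.encode(), msg.encode(), hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

path = "/estimate/average-position-bid/id"
ts = str(round(time.time() * 1000))
headers = {
    "X-API-KEY": API_KEY,
    "X-Customer": ADVERTISER_ID,   # target advertiser's id, not the AE's own id
    "X-Timestamp": ts,
    "X-Signature": sign(ts, "POST", path),
}
resp = requests.post("https://api.naver.com" + path, headers=headers, json={})
```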
bespoken/bst
570752954
Title: Add --env property to indicate the location of the file Question: username_0: ### Summary As a user of bespoken tools I want to be able to have the env file in any location ### Acceptance Criteria 1. GIVEN: I have my env file in a location other than the root folder WHEN: I add the --env + correct path THEN: The test uses the env properties set 2. GIVEN: I have my env file in a location other than the root folder WHEN: I add the --env + invalid path THEN: The test doesn't use the env properties and a warning message is displayed @dmarvp ### NOTES None. Answers: username_1: Can be tested on bst 2.4.60 The final warning message is: `"A .env file could not be found on: " + envLocation` username_0: Marked as verified Build = [email protected] Status: Issue closed
AdobeDocs/target.en
504859902
Title: Request Table type Missing Question: username_0: Issue in help/c-implementing-target/c-implementing-target-for-client-side-web/adobe-target-getoffers-atjs-2.md The Request table does not specify the accepted types for the fields. The limitations of `order > purchasedProductIds` make it seem that `purchasedProductIds` should be passed as a string value. Sending a string value results in a 400 error - invalid JSON (itself not a very useful error). Only after viewing the documentation for `sendNotification` was it apparent that `purchasedProductIds` should be an array of strings. The request table on this page should also have a column for `type`. Answers: username_1: Thanks for your suggestion. Added a link to the API doc that contains that information. You can drill down to each field to see its allowable type. https://docs.adobe.com/content/help/en/target/using/implement-target/client-side/functions-overview/adobe-target-getoffers-atjs-2.html Status: Issue closed
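A hedged sketch of the payload shape this implies; only the type of `purchasedProductIds` comes from the docs, the other fields are illustrative:

```python
# purchasedProductIds must be an array of strings, not one comma-joined string.
order = {
    "id": "order-123",                    # illustrative field
    "total": 125.0,                       # illustrative field
    "purchasedProductIds": ["p1", "p2"],  # correct type: list of strings
}
# order = {"purchasedProductIds": "p1,p2"}  # wrong: triggers the 400 "invalid JSON" error
```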
sublimehq/Packages
156143209
Title: [PHP] incorrect snippets and comments shown for HTML inside PHP control structures Question: username_0: A few issues have been reported on the forum, regarding the incorrect snippets and comments being shown for HTML inside PHP control structures. I think all the PHP snippets (and the HTML autocompletion plugin) need to be updated to use a new scope selector to prevent this problem. Examples: - Toggle comment function: https://forum.sublimetext.com/t/toggle-comment-bug-between-php-brackets/20341 - `php` -> `<?php ?>` snippet: https://forum.sublimetext.com/t/php-tag-shortcut-not-available-in-unclosed-tags/20302 - `html` tag autocompletion: https://forum.sublimetext.com/t/3114-php-syntax-highlighting-change/20016/10 You can see in those posts the suggestions I have made about what selector to use, but I'm no expert - maybe someone has some better ideas? :) Status: Issue closed Answers: username_1: These should be solved by cd7f8c24ef8c785fc9ff01c1c157beadb35278d8
ThunderModder/bionisation3
419498414
Title: [suggestion] Cyberware compatibility Question: username_0: If the player installs an augment from Cyberware, they should be immune to everything. Or make an API for Bionisation so that authors of other mods can do... things with your mod's API :). Like adding an augment for Bionisation. Answers: username_1: Same problem with an API as in #10. An author can use the full mod like an API. About the augment from Cyberware... maybe, but not at this moment. Status: Issue closed
jonlab/NewAtlantis
120558644
Title: Helps set to the ear Question: username_0: Helps ("hearing helps" in Bacon's text) could be implemented as a special kind of "Tools" that could change perception. Bacon: "We have certain helps which set to the ear do further the hearing greatly." Answers: username_1: They are tools. They may be visible on the avatar username_0: Question: can sonification devices be considered as helps? For example, a Geiger counter or a headphone transforming electromagnetic waves into audio. username_1: I would define HELPS as all devices attached to the AVATAR influencing the sound rendering of the world AudioSources. In short = sound effects. They would be different from actuators or sound triggers which activate the sound on specific AudioSources. Your examples are right in the middle, in an intermediary zone. One more reason to merge the two categories. We may file the TOOLS between these two poles. ACTIVATORS < - - - - - - - -> HELPS ... > username_0: "influencing sound rendering of the world", OK. But what about "turns something that is not sound into sound"? (Sonification) username_2: Sonification sounds like a nice project (colour to filter for instance) but maybe later. Otherwise, as mentioned in the post about listener position, maybe helps could also be considered as different types of microphones which are attached to the avatar even when in third person.
rauenzi/BetterDiscordAddons
360052633
Title: Better Redux is not working Question: username_0: I enabled it and I am not seeing it! ![image](https://user-images.githubusercontent.com/26362558/45513714-6fd1ad80-b771-11e8-9d3a-a30112b5575f.png) Answers: username_0: That isn't working! username_1: Do you see something like the screenshot below when you enable it ? ![image](https://user-images.githubusercontent.com/13572456/45515001-403d9800-b7a7-11e8-8641-11e996e8d7d9.png) username_0: I had startup errors turned off, I turned it on and: ![image](https://user-images.githubusercontent.com/26362558/45515230-8c6fe480-b775-11e8-9093-b0890c2063fd.png) Status: Issue closed
nwchemgit/nwchem
796991390
Title: Quantum EOMCCSD calculation of the excitation energy of uracil is very high Question: username_0: The quantum EOMCCSD calculation of the excitation energy of uracil is much higher than that of the first excited state of the QA test tce_uracil_creomact using the same geometry. Very Best Regards! [uracilq.log](https://github.com/nwchemgit/nwchem/files/5894791/uracilq.log) Answers: username_0: Replacing 6-31G with aug-cc-pVDZ in the QA test cytosine gets stuck, and employing 6-31+G** still gives very high excitation energies. [cytosine.log](https://github.com/nwchemgit/nwchem/files/5971345/cytosine.log) Very Best Regards! username_0: Is it only suitable for low-lying states? Very Best Regards! username_0: I am trying to use second-order FCI and quantum computing to calculate the excitation energies of a small radical. Very Best Regards! username_1: Since I do not have inputs/outputs to examine, I can venture to guess that the problem with the calculations is a combination of the following things: 1. From the manual: "If the number of electrons and orbitals do not correspond to the total numbers of electrons and orbitals, then the calculation will perform a corresponding frozen core/frozen virtual CC/EOMCC calculation. This is to ensure that leading CC/EOMCC excitations do not correspond to orbitals outside of the printed integrals." If you are printing the standard Hamiltonian, you are likely freezing several orbitals and not doing an equivalent calculation to compare with the QA test. 2. EOMCCSD and CR-EOMCCSD(T) are different methods, so you can expect different solutions for excitation energies. The first issue is likely the problem that you are encountering. Status: Issue closed
aigoncharov/cls-proxify
678824457
Title: Recommended way to log with hapi-pino Question: username_0: I read this [article](https://itnext.io/nodejs-logging-made-right-117a19e8b4ce) about adding tracing to logs and it inspired me to want to enable this inside my hapi/hapi-pino project. I registered hapi-pino as a hapi plugin just like the [docs](https://github.com/pinojs/hapi-pino) recommend (below). I'm wondering if it would make more sense to register a separate cls-pino-logger plugin or if it would be recommended to do it somehow inside of hapi-pino? Thanks in advance for any guidance.
```
await server.register({
  plugin: require('hapi-pino'),
  options: {
    prettyPrint: process.env.NODE_ENV !== 'production',
    // Redact Authorization headers, see https://getpino.io/#/docs/redaction
    redact: ['req.headers.authorization']
  }
})
```
I created a plugin per the hapi plugin [docs](https://hapi.dev/tutorials/plugins/?lang=en_US):
```
import { clsProxify, clsProxifyNamespace, setClsProxyValue } from 'cls-proxify'
import * as Pino from "pino";

const logger = Pino();
const loggerCls = clsProxify('clsKeyLogger', logger)

const handler = function (request, h) {
    clsProxifyNamespace.bindEmitter(request);
    clsProxifyNamespace.bindEmitter(request.response);

    clsProxifyNamespace.run(() => {
        const headerRequestID = request.headers.Traceparent
        // this value will be accesible in CLS by key 'clsKeyLogger'
        // it will be used as a proxy for `loggerCls`
        const loggerProxy = {
            info: (msg: string) => `${headerRequestID}: ${msg}`,
        }
        setClsProxyValue('clsKeyLogger', loggerProxy)
    })
};

exports.plugin = {
    name: 'cls-trace-logger',
    register: function (server, options) {
        server.route({
            method: 'GET',
            path: '/test/cls',
            handler
        });
        loggerCls.info('My message!');
    }
};
```
But I get this error:
```
[1597365833550] ERROR (55306 on ip-192-168-2-50.ec2.internal): request error
    err: {
      "type": "AssertionError",
      "message": "can only bind real EEs",
      "stack":
          AssertionError [ERR_ASSERTION]: can only bind real EEs
```
Status: Issue closed Answers: username_0: Duplicate of #14
w3c/N3
769362685
Title: use of the <#> namespace for the tests in https://w3c.github.io/N3/tests/N3Tests/manifest-reasoner.ttl Question: username_0: Most of the tests in https://w3c.github.io/N3/tests/N3Tests/manifest-reasoner.ttl have mf:action and mf:result which both use the <#> namespace, and so <#abc> in mf:action does not corefer with <#abc> in mf:result. They should rather use, for instance, <test#> or an absolute URI. Answers: username_1: Is this a problem in the manifest itself, or in some specific test result files? Can you provide an example of where this doesn’t produce the right results? username_0: The problem is in the files mentioned in the manifest, e.g. https://w3c.github.io/N3/tests/N3Tests/cwm_includes/list-in.n3 is (implicitly) using `@prefix : <#> .` and https://w3c.github.io/N3/tests/N3Tests/cwm_includes/list-in-ref.n3 is also using `@prefix : <#> .`, so `:Pythagorean` in the former is not the same as `:Pythagorean` in the latter. The problem seems to be in many entries, and I guess it stems from the -ref.n3 files being made as copies of the cwm result. username_1: Yes, that's indeed not the typical pattern for these tests. list-in-ref should have `@prefix : <list-in.n3#> .` It was hidden in my test runner as I set the base of the expected file to the action file, but this is probably not appropriate. I'll see what needs to be addressed and update PR #61. Otherwise, there will be a number of conflicts.
HenrikJoreteg/getconfig
392139632
Title: Local is overwriting NODE_ENV Question: username_0: I have these env json files in my config/ dir:
- default
- dev
- local
- production
- staging
Although I have NODE_ENV set to 'staging', the 'local' config seems to be included too and overwrites the values.
v: 3.1.0
Answers: username_0: Same on v4.5.0 too
username_0: Renaming it to localbox.json works
username_1: this is intended behavior, see: https://github.com/HenrikJoreteg/getconfig#where-to-put-your-config-and-what-to-call-it specifically the list of what configuration files are attempted and the order they're attempted in. each of those files is attempted and layered on top of the previous config layer. that said, i do think that the `local` concept is kind of moot right now given the `.env` file support, so i'll likely remove that special case soon.
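The layering behaviour described in that README link is easy to picture with a generic sketch (plain Python for illustration, not the getconfig source; the file names mirror the thread):

```python
# Generic illustration of "attempt each file, layer on top of the previous".
import json
import os

def load_layered(config_dir, env):
    layers = ["default", env, "local"]  # 'local' is applied last, hence the surprise
    merged = {}
    for name in layers:
        path = os.path.join(config_dir, name + ".json")
        if os.path.exists(path):
            with open(path) as f:
                merged.update(json.load(f))  # later layers override earlier ones
    return merged

# With NODE_ENV=staging, values from local.json still win:
# load_layered("config", "staging")
```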
cyasam/youtube-video-app
262462482
Title: Where is README Question: username_0: Hello, Please add a readme file. Answers: username_1: Hi, I'm going to write a README ASAP about how to install. Thanks. Cagdas username_1: Hi again, I have added a README to the project. You can check it out. Best. Cagdas username_0: Thanks ( : Status: Issue closed
pytorch/pytorch
864769711
Title: Incorrect example output in sparse_csr_tensor doc-string Question: username_0:
```
tensor(crow_indices=tensor([0, 2, 4]),
       col_indices=tensor([0, 1, 0, 1]),
       values=tensor([1., 2., 3., 4.]), size=(2, 2), nnz=4,
       dtype=torch.float64)
```
Notice the lack of a `layout` parameter in the CSR tensor repr result. Answers: username_1: We would definitely accept PRs correcting examples to reflect the current version of PyTorch!
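For reference, a minimal sketch that reproduces this repr, assuming a reasonably recent PyTorch build where `torch.sparse_csr_tensor` is available:

```python
import torch

crow_indices = torch.tensor([0, 2, 4])
col_indices = torch.tensor([0, 1, 0, 1])
values = torch.tensor([1., 2., 3., 4.], dtype=torch.float64)

# Build the 2x2 CSR tensor from the compressed row pointers above.
csr = torch.sparse_csr_tensor(crow_indices, col_indices, values)
print(csr)  # newer builds also include layout=torch.sparse_csr in this repr
```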
Antsax/MinesweeperSolver
410938833
Title: TiraLabra Project review Question: username_0: Since everything is in English, let's continue on the same track.
### Documentation
Excellent. I found the docs to be of excellent quality. Well presented, clearly written, and they follow an easy track. Overall the docs are easy to read and to follow. I cannot find anything to add.
### Code
Test folder: Tests are missing from classes, and the few present lack comments and javadoc. It would be good to add the comments at the time the tests are created, which makes it easier to come back to the class and fill in the tests.
Main folder: You should add a comment to explain the single pipe character's meaning in the code, e.g. this block:
```
// The single '|' is Java's multi-catch: either exception type lands in this handler.
catch (NumberFormatException | StringIndexOutOfBoundsException e) {
    System.out.println("y not a number or poor input, try again.");
}
```
One-letter variable names are OK if you have good comments which explain them. Here that's not the case, and from experience I can say that these can be way too easily forgotten.
There are few comments describing the classes and some methods; overall, deeper documentation would be great.
The code is clean, runs, and mostly the method names are self-explanatory. A lot of code in the project, for which I have to raise my hat, since this isn't the easiest of projects. But you are on the right track and I'd say that with more comments this would be a 5-star project. Keep up the excellent work.
Grinnode-live/2020-grin-bug-bash-challenge
772922792
Title: test the no_payment_proof flag Question: username_0: **Prerequisites**: 1. Setup two GRIN-Wallets (1) + (2) 2. send funds from wallet (1) to wallet (2) via manual Slatepacks do not specify any extra flags 3. note down the length of slatepack message in number of characters 4. cancel the transaction 5. attempt to perform the transaction again from wallet (1) to wallet (2) via manual Slatepacks and set the `--no_payment_proof` flag 6. note down the length of slatepack message in number of characters 7. cancel the transaction The wallet (2) is only required to provide a receiver address. **Expected result:** Expect the slatepack message to be shorter when the `--no_payment_proof` flag is set. Document that. **Other info:** For both wallets include the output of command ``` grin-wallet -V ``` and your environment ``` uname -a ``` Answers: username_1: ### Prerequisites: * Version : grin-wallet 5.0.0 RC1 * Darwin MacBook-Pro-de-Workstation.local 19.5.0 Darwin Kernel Version 19.5.0: Tue May 26 20:41:44 PDT 2020; root:xnu-6153.121.2~2/RELEASE_X86_64 x86_64 * wallet 1 : grin12wktxlyfx62wx48ldn55katd8zm5d6qfa6mupt9r4uul8eqxagsqct3je5 * wallet 2 : grin1jdj2w0fh8haq9pfuvjmjrev5f4gs34n4n7fnsfmuwf7j09x4v9ws79nq8d ## Step 1 : send funds from wallet (1) to wallet (2) ``` grin-wallet -r "https://grinnode.live:3413" send -d grin1jdj2w0fh8haq9pfuvjmjrev5f4gs34n4n7fnsfmuwf7j09x4v9ws79nq8d 0.1 20201223 10:03:17.405 ERROR grin_wallet_impls::node_clients::http - Error calling get_version: ResponseError error: Cannot parse response 20201223 10:03:17.405 ERROR grin_wallet_impls::node_clients::http - Unable to contact Node to get version info: Client Callback Error: Error calling get_version: ResponseError error: Cannot parse response Password: 20201223 10:03:22.260 WARN grin_wallet_api::owner - Attempting to send transaction via TOR 20201223 10:03:30.862 WARN grin_wallet_api::owner - Unable to send transaction via TOR /Users/workstation/.grin/main/slatepack/97ebabda-347e-4dee-8c0d-97be52fbeb72.S1.slatepack Slatepack data follows. Please provide this output to the other party --- CUT BELOW THIS LINE --- BEGINSLATEPACK. 22WLRHEuTy1Jyvk QnRoKjD5iQNPTA9 Ngk1NMBR11w7yko AYH7AkPbxNNXegU zKRJUjgLeqa8Gc6 4NUbPV8csaNcwaw QScBeNzKKSSKrSq r2BAecdWf2abQhf HuYcEjjiXab9Qqc nPzj7vDZPigu9xi CLiqkiJPFJ6s87C Z6GyA6GowqAVVH1 42KsUfLMRzT53iS nqRshhQozhycHRd 3EXVviZNvzTBqaw mi8rJXbjsQ6m3AH pZyxjxVL5rFpVz9 3mD8inPyn4R6RH2 <KEY>. ENDSLATEPACK. 
--- CUT ABOVE THIS LINE --- Slatepack data was also output to /Users/workstation/.grin/main/slatepack/97ebabda-347e-4dee-8c0d-97be52fbeb72.S1.slatepack The slatepack data is encrypted for the recipient only Command 'send' completed successfully ``` Command 'send' completed successfully length of slatepack message : 993 ## Step 2 : cancel the transaction ``` grin-wallet -r "https://grinnode.live:3413" cancel -i 41 20201223 10:05:53.939 ERROR grin_wallet_impls::node_clients::http - Error calling get_version: ResponseError error: Cannot parse response 20201223 10:05:53.939 ERROR grin_wallet_impls::node_clients::http - Unable to contact Node to get version info: Client Callback Error: Error calling get_version: ResponseError error: Cannot parse response Password: 20201223 10:05:59.630 WARN grin_wallet_libwallet::api_impl::owner_updater - Scanning - 0% complete 20201223 10:06:01.949 WARN grin_wallet_libwallet::api_impl::owner_updater - Scanning - 99% complete 20201223 10:06:01.959 WARN grin_wallet_libwallet::api_impl::owner_updater - Scanning - 99% complete 20201223 10:06:01.960 WARN grin_wallet_libwallet::api_impl::owner_updater - Scanning Complete Command 'cancel' completed successfully ``` Command 'cancel' completed successfully ## Step 3 : send using --no_payment_proof flag ``` grin-wallet -r "https://grinnode.live:3413" send --no_payment_proof -d grin1jdj2w0fh8haq9pfuvjmjrev5f4gs34n4n7fnsfmuwf7j09x4v9ws79nq8d 0.1 [Truncated] Command 'send' completed successfully ``` length of slatepack message : 744 ## Step 4 : cancel the transaction ``` grin-wallet -r "https://grinnode.live:3413" cancel -i 42 20201223 10:09:14.949 ERROR grin_wallet_impls::node_clients::http - Error calling get_version: ResponseError error: Cannot parse response 20201223 10:09:14.949 ERROR grin_wallet_impls::node_clients::http - Unable to contact Node to get version info: Client Callback Error: Error calling get_version: ResponseError error: Cannot parse response Password: 20201223 10:09:20.827 WARN grin_wallet_libwallet::api_impl::owner_updater - Scanning - 0% complete 20201223 10:09:23.162 WARN grin_wallet_libwallet::api_impl::owner_updater - Scanning - 99% complete 20201223 10:09:23.171 WARN grin_wallet_libwallet::api_impl::owner_updater - Scanning - 99% complete 20201223 10:09:23.173 WARN grin_wallet_libwallet::api_impl::owner_updater - Scanning Complete Command 'cancel' completed successfully ``` ## Conclusion when using the --no_payment_proof flag, the slatepack was shorter (744 vs 993) username_0: Excellent! Status: Issue closed
naser44/1
128482486
Title: Saudi men's marriages to Moroccan women down 50% Question: username_0: <a href="http://ift.tt/1VjLldR">Saudi men's marriages to Moroccan women down 50%</a>
bpbm-vertnet/bpbm-verts
152798325
Title: Monthly VertNet data use report for 2016-4, resource bpbm_verts Question: username_0: Your monthly VertNet data use report is ready! You can see the HTML rendered version of the reports with this link: http://dev.tools-usagestats.vertnet-portal.appspot.com/reports/b929f23d-290f-4e85-8f17-764c55b3b284/201604/ Raw text and JSON-formatted versions of the report are also available for download from this link. In addition, a copy of the text version has been uploaded to your GitHub repository, under the "Reports" folder. Also, a full list of all reports can be accessed here: http://dev.tools-usagestats.vertnet-portal.appspot.com/reports/b929f23d-290f-4e85-8f17-764c55b3b284/ You can find more information on the reporting system, along with an explanation of each metric, here: http://www.vertnet.org/resources/usagereportingguide.html Please post any comments or questions to: http://www.vertnet.org/feedback/contact.html Thank you for being a part of VertNet.
RepairShopr/react-native-signature-capture
283074438
Title: Way to detect if signature is empty Question: username_0: I need to enforce a signature in my app. It doesn't seem like there is a way to test for an empty signature? I tried to use the onDrag event, but that has some type of threshold that has to be crossed before it fires. Answers: username_0: The iOS version does not raise an event when the signature is empty. Updated the Android version to have the same behavior. It would be better to always raise the event, maybe with null or something to indicate an empty signature. username_1: I agree with @username_0, iOS and Android don't have the same behavior and it would be better to always raise the event username_2: I imagine this is too late to be helpful, but I used state within the component using the sig capture and added unchanged: true; onDrag sets unchanged to false, and on reset it goes back to true. username_0: I submitted a pull request with a fix shortly after I posted this issue, but it does not seem like this project is really maintained https://github.com/RepairShopr/react-native-signature-capture/pull/106/commits/6ed42b7fba3600700b0e5a719d58db3d092bd7e5 username_3: Any updates? I need to detect if it is empty
Geonovum/imkl2015-review
368244086
Title: ExtraRegels: ExtraDetailinfo tab Question: username_0: In the document with extra rules (IMKL2015 v 1.2.1_object-attributen-ExtraRegels.xlsx), in the ExtraDetailinfo tab, some texts are incorrect. - the feature is officially called ExtraDetailinfo instead of ExtraDetailInfo (lowercase "i"). - attribute bestandLocatie: was "Required on delivery", becomes "Required on delivery; required when supplying administrator information; not allowed when supplying network information" - attribute bestandMediaType: was "Required on delivery", becomes "Required on delivery; required when supplying administrator information; not allowed when supplying network information" Answers: username_1: @username_2 This rule must then be included in the Extra rules of IMKL. username_1: The KLIC standards working group agrees; to be handled by BAO KLIC on 15 November 2018 username_2: Processed in IMKL2015 v 1.2.1.1io_object-attributen-ExtraRegels.xlsx Note: Not adjusted in the OCL constraints in the UML and IMKL2015_Objectcatalogus_1.2.1. The Excel document with extra rules in fact contains the same information as the OCL constraints, so these should also be adjusted. Reason: - Adjusting the OCL rules has a large impact on the documentation. The UML, the model document and the object catalogue would have to be adjusted. - The validation rules were based not on the OCL constraints but on the Excel document. - The adjustments concern extra rules. They do not contradict the rules already included in OCL. Status: Issue closed
sanjayV/ng-image-slider
643417746
Title: Start the control with some selected index Question: username_0: I want a behavior where I can specify an input property as the default selected image index. Answers: username_1: @username_0 You mean there should be some property by which you can provide a starting image index, and on page load ng-image-slider will show that image first rather than the image at index 0? username_2: @username_0 @username_1 this PR #96 may help you. username_1: @username_0 Using the latest version ([2.8.0](https://www.npmjs.com/package/ng-image-slider)), you can use the input **defaultActiveImage** to set the default selected image on load You can also change the image order by providing an order along with the image URLs; more details are available here: https://www.npmjs.com/package/ng-image-slider
tango-controls/cppTango
178285923
Title: cmake: Support cmake v2.8.9 Question: username_0: Hi, We have an issue compiling with CMake on our Debian 7 machines where CMake 2.8.9 is installed. We get the following error: ``` CMake Error at cpp_test_suite/CMakeLists.txt:10 (cmake_host_system_information): Unknown CMake command "cmake_host_system_information". ``` Answers: username_1: Since supporting CMake 2.8.9 requires more work, I would suggest postponing it till we fully migrate from svn (aka close #1 and #11) username_0: It is actually easy to install a more recent version of CMake manually (even the latest stable) on a Debian 7 computer, as demonstrated in #417, so this requirement to support CMake v2.8.9 is not considered critical. Closing this issue. Please reopen if it's a real problem for you. Status: Issue closed
wyohackathon/wyohackathon.github.io
490526640
Title: Speaker Dr. <NAME> Question: username_0: Jim is a Speaker (website, Sched, Devpost) and a Judge (Devpost) BIO <NAME> is a Professor in the Department of Computer Science at the University of Wyoming. He graduated with a PhD in Computer Science from Cornell in 1998 and has an MS and BS in Computer Science from SUNY at Albany. He has been on the faculty at UW since 1998. Previously, he worked at NASA Langley Research Center for 10 years, at GE Corporate R&D, and at a VLSI CAD tool start-up. Twitter none LinkedIn https://www.linkedin.com/in/james-caldwell-a4746725/<issue_closed> Status: Issue closed
schemaorg/schemaorg
147686217
Title: Use schema:Thing as domain/range of isPartOf Question: username_0: I just noticed that [`isPartOf`](https://schema.org/isPartOf) lists `Creative Work` as its expected values and as the types it is used on. The list of sub-properties includes [`isPartOfOrder`](https://schema.org/isPartOfOrder), though, which is expected to be used on resources of type [`ParcelDelivery`](https://schema.org/ParcelDelivery) with expected value [`Order`](https://schema.org/Order). Both are [`Intangibles`](https://schema.org/Intangible). Thus, I propose to change the domain and range of `isPartOf` to [`Thing`](https://schema.org/Thing). When doing this, the domain/range of [`hasPart`](https://schema.org/hasPart) should probably be adjusted as well. Answers: username_1: At first glance, the basic proposal of having Thing in the range of isPartOf gets a +1 from me. However, following the logic of the associated consequences, some of which you reference, brings you to the inevitable conclusion that we should change the domain and range for both isPartOf and hasPart to be Thing. From a usefulness point of view, that is something I am also inclined to support. However, in the spirit of the concerns about hanging too many properties on Thing, we should look for a broad consensus on it. ~Richard <NAME> Founder, Data Liberate http://dataliberate.com Linkedin: http://www.linkedin.com/in/richardwallis Twitter: @rjw username_2: Do we have a use case for this change? I hesitate to make such a large change without a compelling use case. username_3: We would like to use schema.org to define foods. Currently, we define a food using `Class` and `Thing`. We would like to say Food `hasPart` chemical (a `Thing`) and commodity (a `Thing`). Because these `Things` are not a `CreativeWork`, we are not able to specify the `hasPart` and `isPartOf` relationships for our defined `Things`. The @username_0 proposal would enable our use case. username_0: The use case I was working on when realizing this inconsistency is relating a record – which normally is not considered to be a creative work – to the dataset it comes from: https://github.com/hbz/lobid-organisations/issues/128#issuecomment-208822543. In this particular case it is quite probable, though, that we will end up describing the relation in some other way, e.g. using the PROV Ontology. However, this might not be the kind of use case a lot of schema.org users face... username_4: In prior discussion in #436 and elsewhere we have generally decided not to do this. There are so many different senses of "part" that can overlap and be confused if we broaden the property - e.g. see http://plato.stanford.edu/entries/mereology/ My proposal would be to define dedicated properties for specific use cases, e.g. for food we might have containsChemical, for places we already have containsPlace, for people who are "part of" organizations we have already got "affiliation", and for sub-events within a larger/longer containing event, we already have subEvent. As we move to having more domain-oriented extensions, e.g. medicine, finance, I think we'll see a desire to have more precise definitions. What are the "parts" of a medical Patient? (to a GP, to a neuroscientist, to an anesthetist, to a psychiatrist, to a roboticist fitting a prosthetic limb, ...) or of a bank account? of a book? TV Series?
I fear that overloading "isPartOf" for all of these will lead to a kind of grey goo :) It's hard enough to be clear even for CreativeWork, and the work around EventSeries #447 shows these things to be differently subtle and tricky in each new area. What we could do is create some navigational structure (e.g. supported at the schema level using subtypes of Property or a superproperty hierarchy) which allows these part-esque properties to be seen more clearly. username_0: The approach outlined by @username_4 sounds reasonable to me. I think the inconsistency should be resolved in another way then. In [this mail](https://lists.w3.org/Archives/Public/public-vocabs/2015Mar/0182.html) some part-whole relations in schema.org are listed that are not sub-properties of `isPartOf`, as they don't refer to part-whole relationships of creative works. For the sake of consistency, I suggest we stop listing [`partOfSystem`](https://schema.org/partOfSystem) and [`partOfOrder`](https://schema.org/partOfOrder) as subproperties of `isPartOf`, as they also don't apply to creative works. username_3: OK, we understand the @username_4 logic. Currently, we use relevant schema.org terms in one graph, and then use a second graph for domain-specific ontologies such as Plant Ontology. So, with schema.org, we can declare the basics (name, Class, Thing), and then declare complementary, domain-specific objects and relationships in a second graph. The "item" is then graph-1 and graph-2. The Google Structured Data Testing Tool validates graph-1 (the schema.org declarations) and then "acknowledges" the structure of graph-2. username_4: @username_3 interesting approach, and much as we've hoped we'd see. Do you have a writeup somewhere? username_3: will be back to you with an illustration username_5: I tend to agree with Vicki and Dan. I would really like to avoid going down the path of making Thing the domain and range of too many properties. username_6: +1 to @username_4's approach. But does this mean that partOfOrder needs to be moved out from under isPartOf? username_7: +1 to username_4's approach username_4: Sounds like rough agreement. And thanks @username_0 - "For the sake of consistency, I suggest we stop listing partOfSystem and partOfOrder as subproperties of isPartOf as they also don't apply to creative works." - I agree that seems something we should fix. username_4: Ok, I've gone ahead and made that change and pushed it to the staging site at http://webschemas.org/isPartOf username_8: An example along those lines would be great. Which is the 2nd ontology you're using in this way? Status: Issue closed username_4: Published via http://schema.org/docs/releases.html#v3.1 http://blog.schema.org/2016/08/schemaorg-update-hotels-datasets-health.html
ARMmbed/mbed-os-example-lorawan
815842062
Title: HardFault STM32L433RC + SX1272 Question: username_0: <!-- ************************************** WARNING ************************************** The ciarcom bot parses this header automatically. Any deviation from the template may cause the bot to automatically correct this header or may result in a warning message, requesting updates. PLEASE ENSURE ALL SECTIONS OF THIS TEMPLATE ARE FILLED IN AND THAT THERE ARE NO OTHER CHANGES TO THE TEMPLATE. Only bugs should be raised here as issues. Questions or enhancements should instead be raised on our forums: https://forums.mbed.com/ . ************************************************************************************* --> ### Description of defect Hello, I'm using a NUCLEO-L433RC-P + SX1272MB2xAS with the lorawan example code, after uploading the code to the board I get the following hard fault message: ++ MbedOS Fault Handler ++ FaultType: HardFault Context: R 0: 00000000 R 1: 20010000 R 2: 00000001 R 3: 20010000 R 4: 20003314 R 5: 20004170 R 6: 20000CB4 R 7: 20000610 R 8: 00000000 R 9: 00000000 R 10: 00000000 R 11: 00000000 R 12: 20010000 SP : 20002318 LR : 080108F5 PC : 08003354 xPSR : 01000000 PSP : 200022B0 MSP : 2000FFD0 CPUID: 410FC241 HFSR : 40000000 MMFSR: 00000082 BFSR : 00000000 UFSR : 00000000 DFSR : 00000008 AFSR : 00000000 MMFAR: 00000000 Mode : Thread Priv : Privileged Stack: PSP [Truncated] --> #### Target(s) affected by this defect ? NUCLEO L433RC-P #### Toolchain(s) (name and version) displaying this defect ? arm-gcc 10.2.1 #### What version of Mbed-os are you using (tag or sha) ? mbed-os-6.7.0 #### What version(s) of tools are you using. List all that apply (E.g. mbed-cli) mbed-cli, arm-gcc 10.2.1 and Mbed Studio: 1.3.1 #### How is this defect reproduced ? Read description. Answers: username_1: Thank you for raising this detailed GitHub issue. I am now notifying our internal issue triagers. *Internal Jira reference: https://jira.arm.com/browse/IOTOSM-3509* username_2: Few questions: - seems that other targets using sx1272 are setting DIO4 ? - did you try without mbed-trace ? - did you try with default "lora.phy" = EU868 ? - did you try other "main_stack_size" values ?
compnerd/swift-build
616003285
Title: How to install Question: username_0: I went here: https://github.com/username_1/swift-build/releases/tag/v5.2.1 and I see: ~~~ icu.msi installer.exe runtime.msi sdk.msi toolchain.msi ~~~ do I need all of these to get started? I am just trying to run a minimal install for Windows. I checked the Readme but didn't see instructions. Answers: username_1: Yes, you do need all of them for a minimal install - the toolchain is the toolchain, the SDK is the SDK. The runtime is the actual Swift runtime; ICU is a dependency for the runtime. You can use installer.exe to install the full set properly. The instructions are linked from the README that is displayed on GitHub: https://github.com/username_1/swift-build/blob/master/docs/GettingStartedWindows.md Status: Issue closed username_0: If I understand correctly, `installer.exe` is more or less a concatenation of the other 4 files. So a user could just get `installer.exe`, or could get the 4 MSI files. Is that correct? username_1: Yeah, exactly :) BTW, PRs to improve the documentation are welcome!
ProjectSidewalk/SidewalkWebpage
148229134
Title: Google Map Avatar Guy Disappeared Question: username_0: I was in the middle of auditing when suddenly my Google Map avatar guy disappeared. See screenshot: ![image](https://cloud.githubusercontent.com/assets/1621749/14514904/e7c84efa-01c2-11e6-9da8-ff7f3e86ce7d.png) Answers: username_1: This is a bug in the Google Maps and Street View APIs. I can't do anything about it. Status: Issue closed username_0: Maybe keep it open though so that we don't get duplicates? Sent from my iPhone
DefinitelyTyped/DefinitelyTyped
431033081
Title: [@types/react] latest 16.18.13 makes optional prop as required Question: username_0: as discussed in #34386 Here is the repro code. It works on 16.18.12 but fails on 16.18.13: ```tsx import * as React from 'react'; import * as PropTypes from 'prop-types'; export type Omit<T, K extends keyof T> = Pick<T, Exclude<keyof T, K>>; export const tuple = <T extends string[]>(...args: T) => args; const ButtonTypes = tuple('default', 'primary', 'ghost', 'dashed', 'danger'); export type ButtonType = (typeof ButtonTypes)[number]; const ButtonShapes = tuple('circle', 'circle-outline', 'round'); export type ButtonShape = (typeof ButtonShapes)[number]; const ButtonSizes = tuple('large', 'default', 'small'); export type ButtonSize = (typeof ButtonSizes)[number]; const ButtonHTMLTypes = tuple('submit', 'button', 'reset'); export type ButtonHTMLType = (typeof ButtonHTMLTypes)[number]; export type AnchorButtonProps = { href: string; target?: string; onClick?: React.MouseEventHandler<HTMLAnchorElement>; } & Omit<React.AnchorHTMLAttributes<HTMLAnchorElement>, 'type'>; export type NativeButtonProps = { htmlType?: ButtonHTMLType; onClick?: React.MouseEventHandler<HTMLButtonElement>; } & Omit<React.ButtonHTMLAttributes<HTMLButtonElement>, 'type'>; export type ButtonProps = AnchorButtonProps | NativeButtonProps; class Button extends React.Component<ButtonProps, any> { static propTypes = { type: PropTypes.string, shape: PropTypes.oneOf(ButtonShapes), size: PropTypes.oneOf(ButtonSizes), htmlType: PropTypes.oneOf(ButtonHTMLTypes), onClick: PropTypes.func, loading: PropTypes.oneOfType([PropTypes.bool, PropTypes.object]), className: PropTypes.string, icon: PropTypes.string, block: PropTypes.bool, }; render() { return null; } } // The following code gets a TS error demanding props that should be optional export default () => ( <Button>test</Button> ); ``` ref issue: ant-design/ant-design#15930 @username_1 @Jessidhia @username_2 Answers: username_1: Minimal repro: username_1: As mentioned by @username_0, this is a result of https://github.com/Microsoft/TypeScript/issues/20722 where `Pick` does not copy the property modifier from union types. I think we may be able to revert the `MergePropTypes` refactor and address #33742 by more surgically re-applying the optional modifier after the fact. I'm close to a solution doing it that way. I believe such a fix would also address #34588. username_2: @username_1 you might be able to work around this by distributing first before the Pick. ```ts type DistributedPick<T, K extends keyof T> = T extends any ? Pick<T, K> : never ``` I believe this will work for each individual union constituent type. username_1: Thanks, I'll give that a try! Status: Issue closed
zombodb/zombodb
381872817
Title: Attempted to read past total number of hits Question: username_0: ZomboDB version: 10-1.0.0b6 Postgres version: 10 Elasticsearch version: 6.5 Problem Description: I'm using the following query to get 50 documents from ES, using pagination: ```sql SELECT * FROM instagram.profiles WHERE xprof(profiles) ==> dsl.sort( $3, $4, dsl.limit( 50, dsl.offset( $2::bigint, $1::zdbquery ) ) ); ``` An example query (argument 1) is ```json { "query": { "bool": { "filter": [], "must": { "multi_match": { "fields": [ "biography", "biography.english", "full_name", "location_country", "location_region", "location_city", "categories^2" ], "query": "food", "type": "cross_fields" } } } } } ``` If I execute it directly in ES (by wrapping it in a query clause `{"query": query}`) I get 120409 hits. With ZomboDB instead it only works if I query the first page (argument 2 set to 0). If I query successive pages I always get: ``` Attempted to read past total number of hits: 50 ``` Aside from the bug, I think that in my case the simplest solution is to just query ES directly (I have also noticed that ZomboDB is slower than pure ES). Could you tell me how ZomboDB filters the results in order to remove dead documents? I read in the Postgres docs that xmax is supposed to contain the id of the command that deleted that row, but in my case all duplicated documents have xmax set. Answers: username_1: That's far outside the scope of a GitHub issue. I might should sit down some day and write a detailed design document about how ZomboDB maps Postgres row visibility information on top of Elasticearch, and how doing so allows for the complete resolution of visibility rules solely within Elasticsearch, but today isn't that day. If you'd like to create a new issue so we can discuss and review any specific performance issues you might be seeing, I'd be happy to help. Status: Issue closed username_1: Also, when you upgrade to v10-1.0.0b9+, your SELECT statement syntax will have changed. You'll want to read the new [CREATE-INDEX.md](https://github.com/zombodb/zombodb/blob/master/CREATE-INDEX.md) docs. It's not far off from what had to be done prior to b9.
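For anyone who, like the reporter, ends up paging through Elasticsearch directly, a hedged Python sketch using the official client; the host and index name are placeholders, and note that going straight to ES skips ZomboDB's row-visibility filtering:

```python
# Hedged sketch with elasticsearch-py; host and index name are placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

query = {"bool": {"must": {"multi_match": {
    "query": "food",
    "type": "cross_fields",
    "fields": ["biography", "full_name", "categories^2"],
}}}}

page, size = 2, 50
resp = es.search(
    index="instagram.profiles",  # placeholder: use the actual ES index name
    body={"query": query, "from": page * size, "size": size},
)
hits = resp["hits"]["hits"]
# Caveat: this bypasses ZomboDB's MVCC visibility filtering, so dead/updated
# Postgres rows may appear in the results.
```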
gatsbyjs/gatsby
469829807
Title: [gatsby-plugin-page-creator] (Suggestion) Use explicitly plugin page creator for new gatsby sites Question: username_0: ## Summary Hello 🙋‍♂️ I'm currently working on the migration of a gatsby site to a gatsby theme. One of the issues I faced is [explained here](https://github.com/gatsbyjs/gatsby/blob/69445cec9b93897a5819f344965da57bfddd5f22/docs/docs/themes/converting-a-starter.md#sourcing-pages). By default a gatsby site will load pages in the src/pages directory. A theme doesn't, and needs to have the gatsby-plugin-page-creator plugin installed and enabled:
```js
{
  resolve: `gatsby-plugin-page-creator`,
  options: {
    path: `${__dirname}/src/pages`,
  },
},
```
In order to have consistency and ease the migration of a site to a theme, could we explicitly add the plugin page creator in new sites created in Gatsby? Answers: username_0: @username_1 As this is related to themes, you may have an opinion on this; your feedback would be really helpful 💯 username_0: Hello @gatsbyjs/themes @gatsbyjs/core ✋ Does it make sense to you? I'm working to ease the migration of gatsby sites to gatsby themes and this change could save gatsby theme creators a bit of time. If I'm correct, Gatsby already uses gatsby-plugin-page-creator under the hood and it [seems to be loaded here](https://github.com/gatsbyjs/gatsby/blob/34e4add901738e2b15fd1b5875ccb76871ce57a3/packages/gatsby/src/bootstrap/load-plugins/load.js#L196) I think we could explicitly expose the configuration in gatsby-config.js to ease theme creation username_1: https://github.com/gatsbyjs/gatsby/issues/15873 username_2: Published in `[email protected]`
NodeFactoryIo/gitmythx
452451261
Title: Link MythX issue location with source code line in github repo Question: username_0: <a href="https://github.com/username_0"><img src="https://avatars0.githubusercontent.com/u/8836210?v=4" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [username_0](https://github.com/username_0)** _Thursday May 02, 2019 at 10:56 GMT_ _Originally opened as https://github.com/username_0/gitmythx/issues/4_ ---- Status: Issue closed
lmfit/lmfit-py
732350303
Title: Unconverged fits do not return the best sampled fit Question: username_0: If the sampling is aborted because `max_nfev` was reached, the returned MinimizerResult.params contains the latest sampled parameters instead of the best available fit. Here is the [gist](https://gist.github.com/username_0/871858b83399d66a73e236ddb4cf1e2c) with the sample code.

```python
import lmfit
import numpy as np

def residual(pars, x, data=None):
    """Model a decaying sine wave and subtract data."""
    vals = pars.valuesdict()
    amp = vals['amp']
    per = vals['period']
    shift = vals['shift']
    decay = vals['decay']

    if abs(shift) > np.pi/2:
        shift = shift - np.sign(shift)*np.pi
    model = amp * np.sin(shift + x/per) * np.exp(-x*x*decay*decay)
    if data is None:
        return model
    return model - data

def chi2r(pars, x, data):
    return np.square(residual(pars, x, data)).sum()/x.size

np.random.seed(1)
x = np.linspace(0.0, 250., 10000)
noise = np.random.normal(scale=0.7215, size=x.size)

params = lmfit.Parameters()
params.add('amp', value=13.0)
params.add('period', value=2)
params.add('shift', value=0.0)
params.add('decay', value=0.02)

data = residual(params, x) + noise

#scramble
for name in params:
    params[name].value *= np.random.uniform(0.1,10.0)

nSteps = 10000
bestChi2r = chi2r(params, x, data)

def progress(p, i, resi, *_):
    global bestChi2r
    bestChi2r = min(bestChi2r, resi)

out = lmfit.minimize(chi2r, params, args=(x, data), method='basinhopping',
                     max_nfev=nSteps, iter_cb=progress)

lastChi2 = chi2r(out.params, x, data)
print(f'Result chi2r = {lastChi2}')
print(f'True best chi2r = {bestChi2r}')
#print(lmfit.fit_report(out))
```

My result:
```
Result chi2r = 9.360
True best chi2r = 0.519
```
I would expect them to match. Answers: username_1: @username_0 Hm, this could be a duplicate of #665. Then again, if the fit aborts early because it reaches the maximum number of evaluations that you set, I don't see why you have any valid expectations of what the result should be. But far more importantly, go back and read the instructions about when and how to raise an issue. Do not ignore those instructions again. username_0: Which made me think, this is actually a bug. It would also be much more practical to report the best fit, since for many real-life applications these global minimizers will not fully converge in any reasonable time but still find the best or a very good solution relatively fast. Sorry if I violated any contribution guidelines, I tried to follow them. username_1: It's entirely possible that this problem was fixed already in #665, as I mentioned earlier. You did not give version information as instructed (with a warning of "DO NOT IGNORE") when creating an issue, so I don't know whether this applies or not. It is demotivating to continue saying this, but: follow the instructions. You didn't violate contributor guidelines, this is a basic "did not give a complete enough report to act on". Status: Issue closed
username_0: My bad, the lmfit version is 1.0.1, which is from May 7, and the fix is from Aug 24, so yes, it's probably the old version, sorry I bothered.
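Until such a fix lands, a hedged workaround sketch that reuses the names from the script above (`np`, `lmfit`, `chi2r`, `params`, `x`, `data`, `nSteps`): keep a deep copy of the best parameter set inside the iteration callback and fall back to it when the fit aborts early:

```python
# Workaround sketch: track the best-seen parameters ourselves.
import copy

best = {"chi2r": np.inf, "params": None}

def track_best(pars, itern, resid, *args, **kws):
    # resid is the scalar chi2r value for scalar minimizers, as in 'progress' above.
    if resid < best["chi2r"]:
        best["chi2r"] = resid
        best["params"] = copy.deepcopy(pars)

out = lmfit.minimize(chi2r, params, args=(x, data), method='basinhopping',
                     max_nfev=nSteps, iter_cb=track_best)

# Prefer the best sampled parameters over the last sampled ones.
best_params = best["params"] if best["params"] is not None else out.params
print(f'Best sampled chi2r = {chi2r(best_params, x, data)}')
```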
openPMD/openPMD-viewer
454268583
Title: Packages downgraded on `conda` install Question: username_0: `conda install -n myenv -c rlehe openpmd_viewer` ``` The following packages will be DOWNGRADED: cycler 0.10.0-py37_0 --> 0.10.0-py36_0 h5py 2.9.0-py37h7918eee_0 --> 2.9.0-py36h7918eee_0 jsmin 2.2.2-py37_1000 --> 2.2.2-py36_1000 kiwisolver 1.1.0-py37he6710b0_0 --> 1.1.0-py36he6710b0_0 libsass 0.19.1-py37he1b5a44_0 --> 0.19.1-py36he1b5a44_0 llvmlite 0.28.0-py37hd408876_0 --> 0.28.0-py36hd408876_0 matplotlib 3.1.0-py37h5429711_0 --> 3.1.0-py36h5429711_0 mkl_fft 1.0.12-py37ha843d7b_0 --> 1.0.12-py36ha843d7b_0 mkl_random 1.0.2-py37hd81dba3_0 --> 1.0.2-py36hd81dba3_0 numba 0.43.1-py37h962f231_0 --> 0.43.1-py36h962f231_0 numpy 1.16.4-py37h7e9f1db_0 --> 1.16.4-py36h7e9f1db_0 numpy-base 1.16.4-py37hde5b4d6_0 --> 1.16.4-py36hde5b4d6_0 pandas 0.24.2-py37he6710b0_0 --> 0.24.2-py36he6710b0_0 pip 19.1.1-py37_0 --> 19.1.1-py36_0 pyqt 5.9.2-py37h05f1152_2 --> 5.9.2-py36h05f1152_2 python 3.7.3-h0371630_0 --> 3.6.8-h0371630_0 python-dateutil 2.8.0-py37_0 --> 2.8.0-py36_0 scipy 1.2.1-py37h7c811a0_0 --> 1.2.1-py36h7c811a0_0 setuptools 41.0.1-py37_0 --> 41.0.1-py36_0 signac-flow 0.7.1-py37_0 --> 0.7.1-py36_0 sip 4.19.8-py37hf484d3e_0 --> 4.19.8-py36hf484d3e_0 six 1.12.0-py37_0 --> 1.12.0-py36_0 tornado 6.0.2-py37h7b6447c_0 --> 6.0.2-py36h7b6447c_0 wheel 0.33.4-py37_0 --> 0.33.4-py36_0 ``` Answers: username_1: That's in principle no problem. Anyway, with the next release to conda, if @username_2 updates his local environment beforehand, this will take again the latest anaconda libs as dependencies. username_2: Agreed: it would be great to add openPMD-viewer. I just pushed a new release to `pip` (but not yet to `conda`): I'll look into adding it to `conda-forge` in the next few days. username_1: feel free to add me as a co-maintainer when you add the staged recipe :) username_2: @username_0 This issue should now be solved by the new conda-forge package: https://anaconda.org/conda-forge/openpmd-viewer i.e. by doing: ``` conda install -c conda-forge openpmd-viewer ``` Could you confirm that this solves your issue? username_0: Yes, it seems to be working now :) Status: Issue closed username_2: Thanks for confirming!
fgrehm/vagrant-lxc
356085062
Title: Specified node.vm.network ip address doesn't correspond Question: username_0: The specified `node.vm.network.ip` doesn't correspond with the one finally assigned by Vagrant.
```ruby
# ...
if RUBY_PLATFORM.include? 'linux'
  node.vm.network :private_network,
    ip: '198.51.100.3',
    lxc__bridge_name: 'bridge-yeah01'
  node.vm.box = 'fgrehm/centos-6-64-lxc'
  node.vm.provider :lxc do |lxc|
    lxc.container_name = :node_profile
  end
else
# ...
```
And `vagrant up` gives this result:
```
==> YEAHMAN: Starting container...
==> YEAHMAN: Waiting for machine to boot. This may take a few minutes...
    YEAHMAN: SSH address: 10.0.3.233:22
    YEAHMAN: SSH username: vagrant
    YEAHMAN: SSH auth method: private key
```
I wonder why the IP in the Vagrantfile is `198.51.100.3` while `vagrant up` says `10.0.3.233`. And in `/etc/hosts`:
```
198.51.100.3 YEAHMAN
```
I can connect to the machine using `vagrant ssh`, but the Ansible provision fails:
```
TASK [Gathering Facts] *********************************************************
fatal: [YEAHMAN]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host YEAHMAN port 22: No route to host\r\n", "unreachable": true}
        to retry, use: --limit @/home/username_0/Projects/yeahman-project/install-server.retry

PLAY RECAP *********************************************************************
YEAHMAN                    : ok=0    changed=0    unreachable=1    failed=0
```
What am I doing wrong? Thank you very much and have a great day! Answers: username_0: @username_1 Hey sup Virgil! I will try to check the LXD networking. Thanks! username_1: I don't have a `node.vm.network` line in my vagrant configurations (I don't do fancy stuff network-wise with vagrant-lxc). My LXC default configuration already includes network config, so vagrant doesn't have to touch anything. Status: Issue closed
buildo/react-components
125595553
Title: add revenge components Question: username_0: I updated the PR to remove the separation between `revenge` and the others. Answers: username_1: Should we also rename `revenge` to something else, given https://github.com/buildo/react-components/issues/147? Is having a different folder a tech requirement? Status: Issue closed
uit-no/hpc-doc
120169056
Title: moving example scripts reference Question: username_0: I suggest including the notur example scripts in this overview as well. I will clean up that stuff, but the reference to /global/apps/examples should be edited since this folder does not exist. Answers: username_1: The CPE part is referenced in #6. Status: Issue closed
sdi-sweden/geodataportalen
1050696293
Title: (1589, 'Nytt lösen') Question: username_0: **2014-04-23T12:17:02.000+00:00** ****: <NAME>! According to my notes, my password for Geodataportalen expires sometime around this time. Could I get a new one? Best regards, Biljana <NAME>, Systems Developer, Operations Support, Länsstyrelsernas IT-enhet, 104 22 Stockholm, 010 - 224 44 34. Note: new phone number! 070 - 387 54 21 <EMAIL> www.lansstyrelsen.se
zephyrproject-rtos/west
444035348
Title: west's build/run support is not adequate for real usage Question: username_0: Let's run `west build --help` and read part of it:
~~~
  -b BOARD, --board BOARD
                        Board to build for
  -d BUILD_DIR, --build-dir BUILD_DIR
                        Build directory. If missing and run in a Zephyr build
                        directory, it is used; otherwise, it's "build".
                        Always created if it doesn't exist.
~~~
What it says there is that regardless of the BOARD specified, it will always use one and the same build dir; in other words, building for only one board at a time is supported! It's actually frightening to think that west promotes such a workflow. I can't call myself too advanced a Zephyr developer, but I build for a few boards all the time, like qemu_x86, qemu_cortex_m3, native_posix, and a couple of real boards, in parallel. Someone who doesn't test their code during development with multiple boards clearly isn't doing even the bare minimum of caring about portable code, and again, it's frightening to see the official Zephyr tool promoting such a crippled workflow.

The second issue is that there's no "west run". So, after an app is built, a user is left alone with CMake funkiness, like unknown/unexpected build systems being used, etc. The whole idea of west should be shielding a user from ad hoc build systems like CMake, so all the common actions (build/run in emu/deploy/run in debugger) should be handled by it; otherwise it's unclear why there's an extra tool at all.

All in all, I find that the current west is no replacement for simple wrappers like zephyr-make: https://github.com/zephyrproject-rtos/zephyr/pull/5201, which people with basic needs, like building for a number of boards in parallel, still have to use.
Answers: username_1: Great! So since the concerns you've raised here are already covered by other issues, and you seem to have a workflow in place you are happier with anyway, I'm going to close this.
Status: Issue closed
username_1: BTW: `west build -t run`. This is in the Zephyr documentation.
username_0: Well, that's not riddle enough ;-). Thanks for the response anyway!
samuelgozi/firebase-auth-lite
618282570
Title: Don't sign-in automatically on sign-up Question: username_0: It's better to let the developer customise that, just like Google's API. With this design, you are taking my choice away and I have to manually sign the user out (I want e-mail confirmation before doing anything).
Answers: username_1: Hi, and thanks for opening a new issue. There are multiple reasons for this, and it's not only in this library; [google does this too](https://firebase.google.com/docs/auth/web/password-auth?authuser=0#create_a_password-based_account).
The first reason is that when creating an anonymous account, if the user is not signed in immediately, then the account is lost and cannot be recovered (because it's anonymous).
Another reason is that Google internally uses the same endpoint for both anonymous and email accounts (the latter are called "password" accounts internally), so when using either, Google will create an account and send us back the credentials in the response.
Now, even if the user didn't validate their email, the account is still created. So I recommend showing a UI element telling the user that they must validate the email before being able to use the full functionality of the app. You can check that by looking at the `user.emailVerified` property.
However, I do believe that we should provide a way to override this, so I'm leaving this issue open until I implement a solution.
username_0: That makes perfect sense. If the original API does this, then perhaps you should indeed close this issue. Thanks a ton for responding thoroughly.
username_1: I like to take time to decide such things. That's why I'm leaving this open. I do agree that we should at least give developers the option, so I'm pretty sure I'll implement this, even if it's just a third argument that could be a boolean.
I also try never to let the fact that Firebase does something a certain way dictate whether it's a good idea or not. In my opinion they made a lot of mistakes, so it's okay, and even welcome, to question everything. And thank you for taking the time to look into this and open an issue. Feedback is what makes the libs better.
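Until such an option lands, a minimal client-side workaround is to undo the automatic session right after sign-up. The sketch below assumes `signUp()`/`signOut()` method names and the `auth.user.emailVerified` flag mentioned above; treat the exact API surface as an assumption and check it against the library's README:

```js
import Auth from 'firebase-auth-lite';

// Hypothetical setup; the option name is an assumption.
const auth = new Auth({ apiKey: '<your-api-key>' });

// Create the account, then immediately drop the session that
// sign-up opened as a side effect.
async function signUpWithoutSession(email, password) {
  await auth.signUp(email, password); // signs the user in automatically
  await auth.signOut();               // undo the automatic sign-in
}

// Later, gate full functionality on the verification flag.
function canUseApp(user) {
  return Boolean(user && user.emailVerified);
}
```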
botman/botman
576198279
Title: Support for symfony/http-foundation v5.0.5 Question: username_0: Hi, Laravel 7 requires symfony/http-foundation version 5.0. Could that be supported in BotMan? Thank you very much :) Answers: username_1: There is an active PR for this (#1117); now we only need to wait for the merge and release :) Status: Issue closed
gisaia/ARLAS-wui
751439262
Title: Support shapefile download Question: username_0: Depends on https://github.com/gisaia/ARLAS-server/issues/680 As an arlas.city user, I want to be able to download data in shapefile format from ARLAS-wui (or at least from ARLAS-wui-4city) so that I can import this file into my GIS tools. The download flow for this file must be equivalent to the existing geojson file download. Status: Issue closed
cleebp/csc-510-group-g
144041257
Title: pyobjc on mac Question: username_0: halp Answers: username_1: If you can't get it to work you can try - http://docs.python-guide.org/en/latest/dev/virtualenvs/
username_0: Looks like some stuff is really bonkers on my system: pip won't work, and the solution online is to use Homebrew to install python, but it turns out my Homebrew is broken too - weird stuff
username_1: Oh, that's weird. That makes sense though, because I used Homebrew to install Python. Didn't realize it actually made a difference though. Apple :confounded:
username_0: It looks like a lot of these problems are common with the El Capitan update, which changed a lot of permissions and such; still digging
username_1: You could try manually installing it. https://pythonhosted.org/pyobjc/install.html It seems like you just have to run a bunch of setup.py files.
username_1: Although they might throw the same errors at you. :unamused:
username_0: I was able to get virtualenv installed through easy_install and am going to try to get that to work for this. Which version of python are you using?
username_1: 2.7 for sure. Not sure about the minor version, but I don't think there's much difference between them in terms of what we're going to be needing. I'm at work now so I don't have access to my computer.
username_0: So after activating the virtualenv with 2.7 I'm getting literally all of the same errors >.<
username_0: Idea: I let them work through the tasks and keep a stopwatch with me, recording the start, stop, and total times of each task manually...
username_1: It might work if you can manage to fix Homebrew and install Python through that. If you can't / don't have time, using a stopwatch should be fine. You only need to log copies though (pastes are done through Atom, except when there is no plugin being used).
username_0: Alright, sounds good. I'll keep trying to get it to work but will fall back on that as a last resort.
Status: Issue closed
shinta0806/NicoKaraMaker
213570069
Title: Allow a full-screen title at the beginning of the video Question: username_0: I would like to be able to insert title credits, such as the album name, song title, lyricist/composer, and release year, as a full-screen display near the beginning of the video.
[Status] Under consideration
[Reported via] Contact
Answers: username_0: Design notes
On a Title tab, would it be realistic to let the user control, for each of:
- the headline title at the top left (multiple lines allowed)
- the song title in the middle (multiple lines allowed)
- the other information at the bottom right (multiple lines allowed)
the following properties:
- the text string
- the font to apply
- the position
- the display time?
username_0: Implemented in Ver 7.40.
Status: Issue closed
tree-sitter/tree-sitter
823530176
Title: Incompatible language version 0. Compatibility range 13 through 13. Question: username_0: Tried with Firefox and Chromium, same result. With ts-cli <=0.17.x it works. I have another similar grammar where the web interface still works with >=0.18.x, so I'm not sure whether there is something wrong with my grammar or a problem with tree-sitter. Note that the grammar generated with 0.19.x works great in Neovim; it's just the web interface that doesn't work.
Answers: username_1: I am getting a similar error:
```
Incompatible language version 10. Expected minimum 13, maximum 13
```
with `tree-sitter-verilog`
username_2: I think both of these grammars need to be regenerated with `tree-sitter` CLI version >= 0.19 in order to work with version 0.19+ of the Tree-sitter library.
username_1: I don't see `tree-sitter` 0.19 in NPM https://www.npmjs.com/package/tree-sitter
username_2: The parser generation is done with the `tree-sitter-cli` crate.
username_0: The linked grammar is already being re-generated with 0.19.x https://github.com/username_0/tree-sitter-comment/pull/2
username_1: Yes, correct. For generation I am using the NPM package `"tree-sitter-cli": "^0.19.2"`, and that is where I am getting the error:
```
Incompatible language version 10. Expected minimum 13, maximum 13
```
username_2: It looks like you're still loading the old (not regenerated) version of the parser. After regenerating and compiling, the parser would have ABI version 13.
username_1: Ah. Yes. My grammar does not work with v0.19 anymore. https://github.com/tree-sitter/tree-sitter-verilog/issues/47 Do you have any grammar migration guidance?
username_2: Hmm, there is no known migration required, but there may have been some bugfixes at some point a long time ago which caused conflicts to be detected that were not detected before. I would just debug the conflict the same way as usual.
username_1: Thank you @username_2. I fixed the grammar.
username_3: I have the same problem trying to build `tree-sitter-python` and using it in [`language-python`](https://github.com/atom/language-python). Atom throws an error as I open a Python file. If I generate the file using `--prev-abi`, Atom crashes right away.
username_3: I fixed the error I mentioned above by updating the tree-sitter dependency in Atom. I made a PR here: https://github.com/atom/atom/pull/22130
username_0: I still get the error that I originally reported (incompatible language version 0, which is different from building with an old version of tree-sitter). Doing some more tests, I'm able to launch the playground if I delete any of these definitions https://github.com/username_0/tree-sitter-comment/blob/894b61d68a31d93c33ed48dcc7f427174b440abe/src/scanner.c#L6-L9 from my custom scanner, but then I get an error that `tree_sitter...._create` isn't defined. I don't make use of serialization/deserialization in my custom scanner, so I'm leaving the definitions empty (returning 0 when required). Am I maybe declaring those in the wrong way?
username_0: So, I was able to find the problem. Looks like this will happen if you have a function that starts with `tree_sitter_{lang}`; I fixed it by renaming the function https://github.com/username_0/tree-sitter-comment/commit/d06d2a72e60dc7939aa43d552f6444a91cf0c488.
Status: Issue closed
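For anyone checking from JavaScript which ABI version a compiled grammar actually carries, here is a minimal sketch using web-tree-sitter. The `.wasm` file name is a placeholder, and the `version` getter on the loaded language is an assumption to verify against the web-tree-sitter release you use:

```js
const Parser = require('web-tree-sitter');

(async () => {
  await Parser.init();
  // Language.load() is where "Incompatible language version" surfaces
  // when the grammar was generated by a mismatched tree-sitter-cli.
  const lang = await Parser.Language.load('tree-sitter-comment.wasm');
  console.log('grammar ABI version:', lang.version); // e.g. 13 for cli >= 0.19
})();
```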
CrunchyData/postgres-operator
913723222
Title: pgo 4.5.0 scheduled backup failing with [UnknownError] remote-0 process on '10.244.140.50' terminated unexpectedly [255]: kex_exchange_identification: Connection closed by remote host\nERROR: [056]: unable to find primary cluster - cannot proceed\n] Question: username_0: pgo 4.5.0 We are getting an intermittent error on backups as follows: ``` time="2021-06-07T06:00:15Z" level=info msg="pgo-backrest starts" time="2021-06-07T06:00:15Z" level=info msg="debug flag set to false" time="2021-06-07T06:00:15Z" level=info msg="backrest backup command requested" time="2021-06-07T06:00:15Z" level=info msg="backrest command will be executed for both local and s3 storage" time="2021-06-07T06:00:15Z" level=info msg="command to execute is [pgbackrest backup --stanza=db --type=full --repo1-retention-full=10 --db-host=10.244.140.50 --db-path=/pgdata/retroelk-prod-kdca && pgbackrest backup --stanza=db --type=full --repo1-retention-full=10 --db-host=10.244.140.50 --db-path=/pgdata/retroelk-prod-kdca --repo1-type=s3 --no-repo1-s3-verify-tls]" time="2021-06-07T06:00:15Z" level=info msg="command is pgbackrest backup --stanza=db --type=full --repo1-retention-full=10 --db-host=10.244.140.50 --db-path=/pgdata/retroelk-prod-kdca && pgbackrest backup --stanza=db --type=full --repo1-retention-full=10 --db-host=10.244.140.50 --db-path=/pgdata/retroelk-prod-kdca --repo1-type=s3 --no-repo1-s3-verify-tls " time="2021-06-07T06:22:50Z" level=error msg="command terminated with exit code 56" time="2021-06-07T06:22:50Z" level=info msg="output=[]" time="2021-06-07T06:22:50Z" level=info msg="stderr=[WARN: unable to check pg-1: [UnknownError] remote-0 process on '10.244.140.50' terminated unexpectedly [255]: kex_exchange_identification: Connection closed by remote host\nERROR: [056]: unable to find primary cluster - cannot proceed\n]" time="2021-06-07T06:22:50Z" level=error msg="command terminated with exit code 56" ``` This is from a scheduled backup job ^ When we run the backup manually it works. Answers: username_0: Seems not intermittent. All scheduled backups are failing. Manual backups work fine. username_1: Same thing here on V5 ``` kex_exchange_identification: Connection closed by remote host ``` I have tried to reinitialise everything, but nothing worked. My current situation is that I was testing loss of backups. I cannot get pgbackrest to re-establish connection to the instances to create a new backup.
radiasoft/sirepo
455400025
Title: warpvnd: red: Invalid color string Question: username_0: The Two Poles example is now raising a javascript error on the Source tab: ```Possibly unhandled rejection: red: Invalid color string``` This is related to the new plotting modulateRGBA code. In cases like this where there is no color modulation, the text color name should be accepted. Answers: username_1: Since this affects three strings in one line of code for one app, should we just use hex like everywhere else? username_0: Whatever is easiest to fix, I think. Status: Issue closed
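As a sketch of one possible direction for the fix, the plotting code could normalize any CSS color string (names included) to hex before the modulateRGBA code sees it. This assumes d3-color (v1.4+, for `formatHex()`) is available to the client, which may not match how the plotting module is actually structured:

```js
// Normalize a CSS color (name or hex) to '#rrggbb' before modulation.
// d3.color() returns null for strings it cannot parse.
function normalizeColor(colorString) {
  const parsed = d3.color(colorString);
  if (parsed === null) {
    throw new Error(colorString + ': Invalid color string');
  }
  return parsed.formatHex(); // 'red' -> '#ff0000'
}
```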
AustinMarler/punchtalk
439360112
Title: Snapchat delete timer Question: username_0: As a general user I want a "snapchat delete timer" feature that will allow me to optionally have a message delete itself after a set amount of time so that sensitive or private information/messages will have less chance of leaking.
mage2pro/square
203807594
Title: «Invalid value for 'capabilities', must be one of 'CREDIT_CARD_PROCESSING'» Question: username_0: #0 vendor/square/connect/lib/ObjectSerializer.php(273): SquareConnect\Model\Location->setCapabilities(Array) #1 vendor/square/connect/lib/ObjectSerializer.php(241): SquareConnect\ObjectSerializer::deserialize(Object(stdClass), '\\SquareConnect\\...') #2 vendor/square/connect/lib/ObjectSerializer.php(273): SquareConnect\ObjectSerializer::deserialize(Array, '\\SquareConnect\\...') #3 vendor/square/connect/lib/Api/LocationApi.php(167): SquareConnect\ObjectSerializer::deserialize(Object(stdClass), '\\SquareConnect\\...', Array) #4 vendor/square/connect/lib/Api/LocationApi.php(104): SquareConnect\Api\LocationApi->listLocationsWithHttpInfo('sandbox-sq0atb-...') #5 vendor/mage2pro/square/Source/Location.php(23): SquareConnect\Api\LocationApi->listLocations('sandbox-sq0atb-...') #6 vendor/mage2pro/core/Config/SourceT.php(22): Dfe\Square\Source\Location->map() Answers: username_0: The Square team breaks its own library (PHP SDK). I have forked it and fixed the issue. Status: Issue closed
whatwg/encoding
261573776
Title: Missing legacy encodings Question: username_0: A few encodings are missing; I'm mostly interested in adding Amiga-1251 to the list of supported ones. A full list of registered encodings and tables is available at iana.org. Answers: username_1: I recommend reading through https://encoding.spec.whatwg.org/#preface. It's not a goal to cover all known encodings. Status: Issue closed
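For anyone checking label support in practice, the set of labels a browser accepts is exactly the Encoding Standard's table, and anything outside it throws a `RangeError`. A small feature-detection sketch using only the standard `TextDecoder` API:

```js
// Returns true when the user agent knows the label from the Encoding Standard.
function supportsEncoding(label) {
  try {
    new TextDecoder(label);
    return true;
  } catch (e) {
    return false; // unknown labels throw a RangeError
  }
}

console.log(supportsEncoding('windows-1251')); // true: part of the Encoding Standard
console.log(supportsEncoding('amiga-1251'));   // false: IANA-registered, but not in the spec
```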
coopernurse/node-pool
57607742
Title: Shutting down the whole pool Question: username_0: Hi, Is there a way to completely shut down the whole pool? I'm looking for something along the lines of:
```javascript
var poolModule = require('generic-pool');
var pool = poolModule.Pool({
    name: 'vertica',
    create: function(callback) {
        callback(null, {test: 'test'});
    },
    destroy: function(client) {
        console.log('destroying');
    },
    max: 10,
    min: 2,
    idleTimeoutMillis: 10000,
    log: false
});

process.on('SIGINT', function() {
    pool.shutdown(function(err) {
        process.exit();
    });
});
```
I've not seen anything that clearly indicates how one would go about shutting down the pool. Thanks.
Answers: username_1: Do these help?
https://github.com/coopernurse/node-pool#step-3---drain-pool-during-shutdown-optional
https://github.com/coopernurse/node-pool#draining
The best way to do this depends slightly on your circumstances and use, and on whether you can afford to trash pooled resources while they are being loaned.
Status: Issue closed
username_1: Going to close this; feel free to re-open if the above solution doesn't help you and you need something else.
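For reference, here is a minimal shutdown sketch following the drain pattern from the README links above. The `drain`/`destroyAllNow` names match the generic-pool 2.x API that was current for this issue, so verify them against the version you actually use:

```js
// Stop handing out resources, wait for outstanding loans to come back,
// then destroy everything and exit.
process.on('SIGINT', function() {
    pool.drain(function() {
        pool.destroyAllNow(function() {
            process.exit();
        });
    });
});
```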
jinshuju/tech-blog
941456334
Title: Reflections on babel from a polyfill exercise 🤔 Question: username_0: Then modify our babel config file:

```json
{
  "presets": [
    [
      "@babel/preset-env"
    ]
  ],
  "plugins": [
    [
      "@babel/plugin-transform-runtime",
      {
        "corejs": {
          "version": 3,
          "proposals": true
        }
      }
    ]
  ]
}
```

Compile and run:

![](https://img.zhouzh.tech/images/image-1626005918129.png)

Comparing this with the first polyfill approach, we can see that runtime renames the new APIs; it does not rewrite or override the native methods (a sketch of what this compiled output looks like is appended at the end of this post).

Note that in its default configuration, `@babel/plugin-transform-runtime` does not inject polyfill code for proposals. If you want to support proposal-stage APIs, just add a configuration item similar to @babel/preset-env's:

```json
"corejs": {
  "version": 3,
  "proposals": true
}
```

# 7. Summary

Some readers may wonder what happens if we enable corejs in `@babel/plugin-transform-runtime` and at the same time enable `useBuiltIns` in `@babel/preset-env`.

The conclusion: APIs that are actually used will be polyfilled with runtime's non-global-polluting approach (note: the @babel/preset-env targets setting then no longer applies), while APIs that are not used fall back to the global-polluting polyfill.

For `@babel/preset-env` we can set the `debug` option to see which APIs get polyfilled; the configuration is as follows:

```json
{
  "presets": [
    [
      "@babel/env",
      {
        "debug": true,
        "useBuiltIns": "usage",
        "corejs": 3
      }
    ]
  ]
}
```
[Truncated]
          "version": 3,
          "proposals": true
        }
      }
    ]
  ]
}
```

This may raise new points of confusion:

Why can't `@babel/preset-env` produce non-global-polluting polyfills? (Non-polluting polyfills must be introduced by `@babel/plugin-transform-runtime`.)

Why does using non-polluting polyfills require `@babel/plugin-transform-runtime`, while at the same time forcing us to give up the bundle-size advantage that `@babel/preset-env`'s `targets` brings?

Is there no solution?

Unfortunately, within the current babel architecture there really is no good way to solve this. Of course, babel is aware of the problem too, which is why [babel-polyfills](https://github.com/babel/babel-polyfills) (not @babel/polyfills) exists. I won't expand on it here; interested readers can look into it themselves.

Oh, and that's right: we still haven't covered how to polyfill in a react project. 🙂 Answers: username_1: Headers in the content should not be Header 1 (header 1 should be used only for the issue header)
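To make the renaming behavior concrete (the screenshots do not survive in text form), here is roughly the shape of the compiled output for an instance method. The exact helper names come from @babel/runtime-corejs3 and may differ between versions, so treat this as an illustration rather than exact compiler output:

```js
// Source: [1, 2, 3].includes(2);
// Compiled (shape only): transform-runtime swaps the call for an imported,
// renamed helper instead of patching Array.prototype.
import _includesInstanceProperty from "@babel/runtime-corejs3/core-js-stable/instance/includes";

var _context;
_includesInstanceProperty(_context = [1, 2, 3]).call(_context, 2);
```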
umijs/umi
598707213
Title: Top-level await support Question: username_0: Regarding the babel plugin `@babel/plugin-syntax-top-level-await`: the plugin's official docs say it is supported by [webpack@5's experiments.topLevelAwait](https://babeljs.io/docs/en/babel-plugin-syntax-top-level-await). Could umi consider building this in? Answers: username_1: What's built in is webpack 4, so enabling the babel syntax alone won't help. username_0: @username_1 (云谦) Would you consider adding support in the `@umijs/plugin-webpack-5` plugin? 😄 Status: Issue closed
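For reference, on the webpack 5 side the feature is a single flag: the babel plugin only lets the parser accept the syntax, while the bundler has to emit async modules. A minimal webpack config sketch (this does not by itself make umi's bundled webpack 4 support it):

```js
// webpack.config.js (webpack 5 only)
module.exports = {
  experiments: {
    // Emit async modules so `await` can appear at module top level.
    topLevelAwait: true,
  },
};
```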
vaadin/flow
776315264
Title: WebComponentIT fails on Windows Question: username_0: https://bender.vaadin.com/viewLog.html?buildId=207085&buildTypeId=Flow_FlowMaster_Nightly_ValidationWithWindows Answers: username_0: Should be fixed by https://github.com/vaadin/flow/pull/10365 This test has been enabled again in the build steps configuration for Flow 6.0 and 7.0.
PurpleI2P/i2pd
199202797
Title: Local i2pd stops proxying after a few minutes Question: username_0: My local i2pd daemon works perfectly fine for the first several minutes (as an HTTP proxy for browsing .i2p sites in Firefox). After that it continues working as an i2p node (showing considerable transit traffic), but stops proxying my requests for .i2p sites. It may be associated with the number of connections or tunnels (I don't know). But even if that is the case, why don't my proxy requests to my local daemon have higher priority? Or at least some nonzero priority? Thanks!
Answers: username_1: What does "stops" mean? Connection refused? Reset? No response?
username_2: I get this too; I suspect it's related to hitting the open file limit.
username_0: @username_1 I wish I could answer anything smart to this question... In fact, a few minutes after start, if I try to open some .i2p site, the browser tries to connect for a very, very long time and does not succeed. And there's no message. So I am not sure what kind of error it is.
username_1: That's another issue I need to take a look at. Looks like the LeaseSet request never finishes.
username_3: Having the same issue as username_0. Infinite page loading after a few minutes/hours.
username_1: Infinite or just longer? Wait a few minutes and see the result.
username_2: it's because of hitting the open file limit
username_1: @username_3 @username_0 have you tried ulimit -n 4096 before start?
username_3: So how do I do that? I'm using i2pd_2.11.0_win64_mingw on Win7
username_1: Then never mind. Try win32 just in case.
username_3: Nothing changed :(
username_1: Go to http://127.0.0.1:7070/?page=i2p_tunnels, find the one called "HTTP Proxy" and see what's going on. Tunnels? LeaseSets? Number of tags?
username_3: ![](http://storage7.static.itmages.ru/i/17/0113/h_1484350475_5794611_15079204c3.png) A site has been loading for about 15 minutes and still hasn't loaded.
username_3: So the 32-bit version of i2pd_2.11.0_win64_mingw force-closes after launch. But the pre-release 64-bit ver 2.10.2 still works.
username_3: And i2pd_2.11.0_win64_mingw_no-avx works fine.
username_1: Do you see the .b32 address of the site you are connecting to among the LeaseSets? How about the streaming section? Do you see that connection there?
username_3: May I write in Russian while username_0 is silent? Yes, my site's address is present both in LeaseSets and in Tags Outgoing, but I have just restarted i2pd, so for now everything works fine; based on my observations, I need to wait about an hour.
username_1: Of course you may :) When the problem occurs, you need to check: 1. Whether there is a LeaseSet for the requested address 2. If there is, whether there is a stream to it and what the values there are
username_3: OK, everything has dropped off again. And yes, the requested sites are present in LeaseSets and Streams: ![](http://storage1.static.itmages.ru/i/17/0114/h_1484355974_1381588_dd66cca649.png)
username_1: What is the status of the streams?
username_3: Where can I see that?
username_1: Scroll further right.
username_3: Yes, sorry, I didn't notice the scrollbar: ![](http://storage7.static.itmages.ru/i/17/0114/h_1484357054_3853534_cefa1fb67f.png)
username_1: What if you try another browser? I don't see any problems. It feels like the browser failed to render it for some reason. Do you have curl? It would be good to see with it what's actually there.
username_4: And what browser is it, by the way? What if you try launching the browser from the [bundle](https://github.com/PurpleI2P/i2pdbrowser/releases/latest)? (Just the browser; run whichever i2pd you like separately.)
username_3: OK, I'll now check with the browser from the bundle. I use Firefox 50.1.0 myself. Well, this is strange, but yes, the problem is in the browser, and in the site I was trying to visit.
username_4: @username_3 By the way, I decided to drop version 50 from the bundle precisely because something strange was happening with browsing there. It currently uses 45.6.0 ESR.
username_3: In that case, thank you. There is a problem, but as it turns out, it is not in i2pd.
username_1: The only strange thing is that no such problems are observed with version 50 on Linux.
username_5: I've been experiencing the same problems as @username_0; the problem was the open file descriptor limit. It's always very frustrating and not obvious how to fix. I suggest running i2pd with a script like this:

    #!/bin/bash
    ulimit -n 4096
    /path/to/i2pd [your options here]
opensource-workshop/connect-cms-ideas
912569806
Title: Specifying font size in the WYSIWYG editor Question: username_0: Currently, the font size cannot be specified in the WYSIWYG editor.
- As a feature request, font size should be specifiable.
- We also want to support the case where, as a matter of site policy, font size changes should not be allowed.
As a way to satisfy both, make it configurable in Site Management whether the font size control is shown.
Answers: username_1: - Show/hide the font size icon. Only handle this.
username_1: Supported small text. https://github.com/opensource-workshop/connect-cms/commit/6878fc194884646d423b15ae786adffb14cd04c3
username_1: Done. The font sizes were configured as follows:
```js
fontsize_formats: '0.65rem 0.85rem 1rem 1.15rem 1.3rem 1.5rem 2rem 3rem',
```
## Screens after the fix
### Site Management > WYSIWYG settings (newly added)
![image](https://user-images.githubusercontent.com/2756509/121659009-d8921400-cadc-11eb-8cb9-a10d10b38821.png)
### WYSIWYG (font size ON)
![image](https://user-images.githubusercontent.com/2756509/121659270-18f19200-cadd-11eb-8dbc-a6aeecdfe4b3.png)
## Fix commit
https://github.com/opensource-workshop/connect-cms/commit/820984d8a22540d59d7ab1bb77d7acd4253bfbbe
username_1: Updated the online manual.
Site Management: https://connect-cms.jp/manual/manager/site#frame-381
Status: Issue closed
CS130-W20/team-B8
557325672
Title: Creating/deleting an event UI Question: username_0: ## User Story As a User, I would like to create, modify, and delete events so that other users can see events that the user is holding and can see details about the event. ## Detailed Description The requirement consists of one page for entering event details, as well as buttons for creating, editing, and deleting events. The page should contain fields for all event attributes (name, date, status, tags, etc.) and have a button to finish creation or editing of an event. Changes to the event should be stored in the service database once they are confirmed. ## Acceptance Criteria
- [ ] Given a click on the button to create an event, the user is taken to an event details page to fill in details
- [ ] Given a click on the button to finish creating the event on the event details page, the event is saved in the system and can be seen by other users
- [ ] Given a click on the button to delete an event, the event should be deleted by the system after a warning
Answers: username_1: Have added buttons and UI elements for creating, editing, deleting, and rating events. Need to work with the backend/database team to test and visualize these features in the UI.
App-vNext/Polly
963922405
Title: Best practices for using Polly with Azure durable functions to rate limit HTTP calls Question: username_0: I've been tasked with building an Azure durable function app that is aware of rate limits on HTTP endpoints. The code from #666 appears to be the solution when coupled with typed HttpClients, but not having a lot of experience with either durable functions or Polly, I'm wondering how best to implement this (or if it's even a supported scenario).

Essentially I'm going to have a timer trigger that fires an orchestrator, which fires off an activity that gets a list of data. Using that list, the orchestrator will then fire off a number of calls to activities, each of which performs a single HTTP request. The catch here is that the HTTP endpoints are rate-limited but don't support status code 429 or the Retry-After header, so I need to be the one limiting my requests to them. Hence the (potential) need for the code from the aforementioned PR.

The naïve solution would be to fire off `list.Count` durable function activities from the orchestrator and let Polly handle rate-limiting of the actual calls. Then each activity attempts to make its HTTP request, and if the request fails due to exceeding the rate limit, the activity catches that and returns a "please requeue me" status to the orchestrator. The problem with this solution, of course, is that most of those activities will initially fail and need to be re-queued, which is a waste of compute power and therefore money, so I don't want to do that.

A better option is to batch the activities into sets of N items, where N is the maximum number of requests the endpoint allows, and run each batch. After a batch has completed, review the total time it took to complete said batch:
* If that duration is less than the rate limit duration, have the orchestrator sleep for the remainder of the rate limit duration
* Otherwise, run the next batch

With the above solution, I technically don't even need Polly, but it feels a little... *coarse*, and I'm worried there are pitfalls I'm not aware of. Am I overthinking this, or is there a simpler/more effective solution that I'm missing?
Answers: username_1: If you're limiting yourself at the client (rather than the server doing it), would maybe a [bulkhead policy](https://github.com/App-vNext/Polly#bulkhead) work for you? With an appropriate limit on the number of actions that are allowed to wait to proceed when the bulkhead is "full", it would self-limit. But otherwise, if that doesn't work, without the server being co-operative and giving you some sort of feedback to know when to re-submit (like 429 and `Retry-After`), maybe Polly isn't the right off-the-shelf solution for you?
username_0: @username_1 thanks for the response! Bulkhead could be useful, except for the fact that it doesn't do (requests/duration) limiting, but perhaps I could combine it with the rate-limiting code from #666 using PolicyWrap? As for the "right solution", I'm not sure what that is at the moment, hence this question (which may be better directed at the durable functions guys). All I know is that I'd much rather use something like Polly, which has been written by experts in the field and is backed by extensive testing (both unit tests and in the field), than hand-roll something kludgy. Ideally this question would be best asked on Stack Overflow, except that getting a decent answer there nowadays on a technical question is generally a lost cause.
username_1: Certainly worth a try to see if you can get any mileage out of it. I've not used Azure Durable Functions myself, so it might be that people who have experience with it can give you better guidance, more specific to your use case.
geofffranks/spruce
663665397
Title: spruce merge with json pipe Question: username_0: $ spruce merge aab aaa
aaa: unmarshal []byte to yaml failed: yaml: line 31: did not find expected ',' or ']'
$ echo $?
2
and when I pipe through spruce json with spruce **1.16.2**, it returns exit 2
$ spruce merge aab aaa**|spruce json**
aaa: unmarshal []byte to yaml failed: yaml: line 31: did not find expected ',' or ']'
$ echo $?
2
but if I use **1.25.2 or 1.25.3**, it returns exit 0
$ spruce merge aab aaa**|spruce json**
aaa: unmarshal []byte to yaml failed: yaml: line 31: did not find expected ',' or ']'
$ echo $?
0
Could you check this?
Answers: username_1: This is a side effect of 1.25.3 allowing null input to output an empty yaml document. Previously, the failure to merge passed empty data into `spruce json`, which then exited 2. Now this failure passes empty data to `spruce json`, which is allowable, and that exits 0. To catch the merge failure, you will want to `set -o pipefail` before running `spruce merge | spruce json`
Status: Issue closed
Magikcraft/product-board
255823119
Title: Add Plots again Question: username_0: # User Story As a user I can ... # Background _What happens now, what's been tried, why is this useful, etc..._ # Feature Description _Detailed description of the feature, including feature list._ # Out of Scope Great things that can be done in version 2: # Acceptance Criteria The following need to function to consider this feature complete: # User Acceptance Test Plan Here is the process for testing this feature: # End-User Documentation _[Docs that can be copypasta to the user docs]_
Answers: username_1: Hi @username_0, thanks for opening this issue. I have not forgotten about it. @username_2 - what will it take to add plots back in to Magikcraft again?
username_2: The plot ability has been in the game since day one. Just give me a server and world and it can be done instantly. In those requirements, there are two things that are currently very difficult. 1. Disallowing magic in an area is impossible. I've been asking for it for ages but I've never seen it implemented. 2. The monetary aspect. We have no way currently for anyone to make money; to that end, there is nothing set up to handle it. I can enable it, but I can't give any projection or timeline for it.
username_0: Hi @username_1 and @username_2. We could combine the Magikcraft lobby idea and the Plot idea to create one world. We could do this by putting the lobby in the middle of the world, with all the plots around it, and adding an area where you can get items for free from a command block. To do this I will be happy to build it, but I will need permissions to build and put down command blocks. The command blocks will not be able to be tampered with.
username_0: @username_2 couldn't we just not have the Magikcraft mod in that world?
username_1: @username_2 what's involved in it? For example: on the current production server at play.magikcraft.io, does anything need to be done to the configuration in the image, or is it something that is done at runtime?
username_2: Plots have been added in, with the understanding that @username_1 is to block magik on the plots or persistent/plots world
Status: Issue closed
username_1: # User Story As a user I can ... Build whatever I want to without having to worry about it being destroyed or deleted # Background _What happens now, what's been tried, why is this useful, etc..._ # Feature Description A place where anyone can buy a plot for free, then expand it for some money, where they can build anything they want without it getting deleted or tampered with by other players. # Out of Scope Great things that can be done in version 2: # Acceptance Criteria There is to be NO MAGIC in the plots so players can't blow stuff up The following need to function to consider this feature complete: # User Acceptance Test Plan Here is the process for testing this feature: # End-User Documentation _[Docs that can be copypasta to the user docs]_
username_1: * How do users get to the plots world?
 * When they join for the first time.
 * Any time they join.
* Currently users can claim a plot, but cannot build / mine in their plot. See: https://www.youtube.com/watch?v=x_MM-p4G__M
username_2: To access the plots world, the users need to go to the lobby world, which is currently the world 'world'. There are two portals there, one for mining and another for the plots. Both are marked. And since players are spawned in the plots world on a new join, they have instant access.
The problem with users not being able to build in the plots is due to the permissions, i.e. players are unable to build anywhere. I did update the permissions to allow the exception on the plots and mining worlds, but it must not be working. I'll look at it again when I get an opportunity.
username_1: On a server update, I spawn in ljblockbuster's lounge world `world-blockbuster`.
username_0: Hey @username_2, I can't find the lobby or the plots whenever I join. Is it supposed to be like that or are you still working on it?
Status: Issue closed
gruntjs/grunt
155611097
Title: Cannot find module 'brace-expansion' Question: username_0: 
```
E:\Dropbox\flat.fm>grunt
module.js:442
throw err;
^

Error: Cannot find module 'brace-expansion'
at Function.Module._resolveFilename (module.js:440:15)
at Function.Module._load (module.js:388:25)
at Module.require (module.js:468:17)
at require (internal/module.js:20:19)
at Object.<anonymous> (E:\node_modules\grunt-cli\node_modules\minimatch\minimatch.js:10:14)
at Module._compile (module.js:541:32)
at Object.Module._extensions..js (module.js:550:10)
at Module.load (module.js:458:32)
at tryModuleLoad (module.js:417:12)
at Function.Module._load (module.js:409:3)
```
I have `brace-expansion` in my `E:\node_modules` folder.
```
[email protected]
[email protected]
```
Answers: username_1: It sounds like `grunt-cli` didn't get installed properly (could be related to having Dropbox manage that folder). Try `npm i grunt-cli -g` again.
username_0: @username_1 Thank you! (No, no, Dropbox is not related to this issue at all :)
Status: Issue closed
username_2: How did this get fixed? I am facing the same issue.
hvac/hvac
355262503
Title: Client.enable_auth_backend() does not support config Question: username_0: It looks like the [API](https://www.vaultproject.io/api/system/auth.html#enable-auth-method) accepts a config map (similar to enable_secrets) for enabling an auth method, but the `enable_auth_backend()` method does not:
```TypeError: enable_auth_backend() got an unexpected keyword argument 'config'```
Also, it looks like the API accepts `plugin_name`, but the client does not.
Answers: username_1: Ideally closed with #253 merged. Please reopen if that is not the case 😉.
Status: Issue closed
username_0: thanks!