Dataset columns: `repo_name` (string, 4-136 characters), `issue_id` (string, 5-10 characters), `text` (string, 37 characters to 4.84M characters).
frictionlessdata/goodtables-py
674877849
Title: Mistake in example used in documentation

Question: username_0: --- Please preserve this line to notify @username_1 (lead of this repository)

Answers:

username_1: Hi, thanks. It's actually correct in goodtables terms (row at line 4). The terminology is quite confusing, so the new frictionless-py library uses the word "position" instead.

Status: Issue closed
department-of-veterans-affairs/va.gov-team
586270352
Title: [Backend] Populate test user data for participants

Question: username_0:

## User Story or Problem Statement

As a research participant testing the medical device reordering system, I would like to see my own data populated; however, to protect PII I would like to log in with test user credentials.

## Goal

A test user account populated with data from research participants.

## Tasks

- [ ] Get participants' names and unique identifiers from the research team
- [ ] Tie participant data to a test user and populate the data

## Acceptance Criteria

- [ ] One test user tied to one participant with real data on log-in

---

## How to configure this issue

- [ ] **Attached to a Milestone** (when will this be completed?)
- [ ] **Attached to an Epic** (what body of work is this a part of?)
- [ ] **Labeled with Team** (`product support`, `analytics-insights`, `operations`, `service-design`, `tools-be`, `tools-fe`)
- [ ] **Labeled with Practice Area** (`backend`, `frontend`, `devops`, `design`, `research`, `product`, `ia`, `qa`, `analytics`, `contact center`, `accessibility`, `content`)
- [ ] **Labeled with Type** (`bug`, `request`, `discovery`, `documentation`, etc.)

Status: Issue closed
gradle/gradle
1147481273
Title: Make it possible to disable writing to the execution history Question: username_0: First: This is a niche request, and it may not impact a lot of other users. Nonetheless, it would have a significant impact on the performance of our CI builds, as explained below, so we’re hoping you’ll consider merging some version of our PR (incoming). **Background:** For our pre-merge CI builds, we run a very large number of tests in parallel by checking out VMs from a cloud service, making several containers per VM using Docker, and running tests in each of those containers. Before this happens, we run compilation (and some other tasks) using Gradle, and include the resulting files in the docker image we send to VMs. Each container process then runs a certain set of tests using Gradle. One issue we’ve run into is that executionHistory.bin becomes huge in our builds -- currently 5.9 GB after running compilation tasks plus a second set of tasks with similar classpath inputs. This is an issue because of the copy-on-write behavior from using Docker and overlayfs. When Gradle runs, it writes to this file, which prompts Docker to make a copy of the entire file. Even though we’re using an in-memory filesystem, which reduces the runtime impact of this copy, the 6 GB added to memory usage from each container is large enough to significantly reduce the amount of parallelism we can do -- it’s about 25% of our memory used. (Why is it 5.9 GB? I’ve poked around in the related code. This is mostly an issue of how it scales with both number of projects and number of source files. For a single task, the stored information includes an entry for each .class file elsewhere in the repo that is included in the task’s dependencies. There’s no sharing of this information among tasks. Our largest tasks store 14MB each, with 11000 tasks run from 770 projects.) **The request:** We’d like to be able to disable writing to the executionHistory.bin file during a given Gradle invocation. 
I expect this would happen via a system property or other suitably obscure method, and it would be undocumented and clearly unstable/unsafe (“this will void your warranty”). We aren’t asking for future support, beyond “please don’t remove this for no reason”. We have tested this approach, and it works for us (with one caveat: we also have to set the file to read-only in the filesystem, or else it will get copied when it is opened anyway). If the executionHistory.bin could be made smaller, that would also serve our needs, but I expect that would be a much more difficult undertaking. Answers: username_1: Duplicates #19978 username_1: Could you pls. let us know the Gradle version you are using? username_0: This is with Gradle 7.3.3. username_2: IIUC, there are some tasks in your build that work with a large number of inputs that result in this behavior. One thing you can try is to disable up-to-date checks for just those tasks: https://docs.gradle.org/current/userguide/more_about_tasks.html#sec:disable-state-tracking username_0: Are you asking me to disable up-to-date checks for all my `compileJava`/`compileTestJava` tasks in my downstream projects? 😕 username_2: I'm not sure about the "downstream projects" part, but I believe yes. I also don't understand the disappointed face. If you disabled writing to the execution history, you'd get no up-to-date checks or incremental execution for any of your tasks. How is my recommendation worse than that? username_0: We write to the execution history in a build that compiles everything and runs other preparatory work. Then we run a second build (in the containers) that runs some tests, but reuses the compilation, etc. from the first run. Only the second run would use a read-only execution history. username_2: I'm lacking some serious context here, but you could also use the local cache to keep your downstream builds from repeating compilation. 
Less efficient, but avoids writing to the execution history, which was the stated goal. To take a step back, I don't see us implementing this functionality in this way as the resulting complexity would not be worth it. If we invested here, it would be more useful to de-dupe the execution history. It would decrease the size of the history file, and would increase performance for other builds, too. We have some plans to do that, but unfortunately it's not high on the priority list at the moment. username_3: @username_0 One more question: Which `executionHistory.bin` is using 6GB? Is it the one in the Gradle user home or the one in the project directory? username_0: @username_3: This is the one in the project directory. @username_2: I agree that read-only execution history shouldn't be a supported feature or an area that Gradle devs put time into. The most I'm hoping for is if https://github.com/gradle/gradle/pull/20006 could be considered, to spare us the pain of maintaining a fork or equivalent hacks. Status: Issue closed username_2: With #20006 we'd need to add a lot more testing, make sure it's properly documented and available as a public feature. I once again don't think it's something we'll have bandwidth for, and even if we had it, there are better ways to invest here. Sorry about that. We have decreased memory consumption in Gradle in #20002. Maybe this will help your use case already; we've seen it save a lot of heap for one of our customers with a large build.
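The caveat in the request above -- the history file must also be made read-only on disk, or opening it for write still triggers the copy -- can be sketched in Python. This is an illustration of the reporter's workaround, not a supported Gradle mechanism; the path shown in the comment is hypothetical.

```python
import os
import stat

def make_read_only(path: str) -> None:
    """Strip all write bits so a later open-for-write fails
    (and a copy-on-write filesystem never duplicates the file)."""
    mode = os.stat(path).st_mode
    os.chmod(path, mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))

# Hypothetical location of the per-project execution history:
# make_read_only(".gradle/7.3.3/executionHistory/executionHistory.bin")
```

Opening a file for writing is what triggers the overlayfs copy-up, so stripping the write bits before the second build would keep the large file shared between containers.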
moznion/p6-Backtrace-AsHTML
114495476
Title: Provide an option to include the actual error message in the generated page

Question: username_0: The generated HTML page shows *where* the exception happened, but not *what* happened. This would be much more useful if it had an option to display $exception.message at least (it looks like the P5 inspiration for this module includes the error message?).

Answers:

username_1: Thanks for your suggestion. Devel::StackTrace::AsHTML (the p5 module) doesn't include the exception message. I have no objection to adding that message.
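The module here is Raku, but the request -- embed `$exception.message` in the generated HTML -- raises the same escaping concern in any language: the message may itself contain markup. A Python sketch of the idea; the function name and markup are illustrative, not this module's API.

```python
import html

def render_error_header(message: str) -> str:
    """Return a page header that includes the exception message,
    escaping it so markup in the message can't inject HTML."""
    return "<h1>Error: %s</h1>" % html.escape(message)

print(render_error_header('died at <main> with "oops"'))
# -> <h1>Error: died at &lt;main&gt; with &quot;oops&quot;</h1>
```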
vim-airline/vim-airline
50450199
Title: Add an option for customizing trailing-whitespace criteria

Question: username_0: This is a feature request. In autoload/airline/extensions/whitespace.vim, line 49, trailing spaces are identified using the regular expression `'\s$'`. However, I'd like to use something different like `'[^,]\s%\|^\s$'` to allow a trailing ', '. Is it possible to add such an option for end users?

Status: Issue closed
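The Vim pattern `\s$` from whitespace.vim translates directly into Python's `re`, which makes it easy to sketch what a user-configurable pattern could allow. The translated alternative below assumes the `%` in the issue's example was meant to be `$`:

```python
import re

# Default airline behavior: flag any trailing whitespace.
DEFAULT = re.compile(r"\s$")

# Custom criteria: permit a trailing ', ' (whitespace preceded by a
# comma), but still flag other trailing whitespace and all-blank lines.
ALLOW_TRAILING_COMMA_SPACE = re.compile(r"[^,]\s$|^\s$")

for line in ["foo", "foo ", "foo, "]:
    print(repr(line),
          bool(DEFAULT.search(line)),
          bool(ALLOW_TRAILING_COMMA_SPACE.search(line)))
```

With the custom pattern, `"foo, "` is no longer flagged while `"foo "` still is.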
CocoaPods/CocoaPods
665533535
Title: Get an error when using the pod command

Question: username_0:

# Report

I installed CocoaPods with `sudo gem install cocoapods` in my react-native project; however, I get this error when I run `pod install` or any command that starts with `pod`:

```
/Applications/MAMP/Library/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require': dlopen(/Applications/MAMP/Library/lib/ruby/gems/2.3.0/gems/ffi-1.13.1/lib/ffi_c.bundle, 9): no suitable image found. Did find: (LoadError)
	/Applications/MAMP/Library/lib/ruby/gems/2.3.0/gems/ffi-1.13.1/lib/ffi_c.bundle: code signature in (/Applications/MAMP/Library/lib/ruby/gems/2.3.0/gems/ffi-1.13.1/lib/ffi_c.bundle) not valid for use in process using Library Validation: mapped file has no cdhash, completely unsigned? Code has to be at least ad-hoc signed.
	- /Applications/MAMP/Library/lib/ruby/gems/2.3.0/gems/ffi-1.13.1/lib/ffi_c.bundle
	from /Applications/MAMP/Library/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require'
	from /Applications/MAMP/Library/lib/ruby/gems/2.3.0/gems/ffi-1.13.1/lib/ffi.rb:6:in `rescue in <top (required)>'
	from /Applications/MAMP/Library/lib/ruby/gems/2.3.0/gems/ffi-1.13.1/lib/ffi.rb:3:in `<top (required)>'
	from /Applications/MAMP/Library/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require'
	from /Applications/MAMP/Library/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require'
	from /Applications/MAMP/Library/lib/ruby/gems/2.3.0/gems/ethon-0.12.0/lib/ethon.rb:2:in `<top (required)>'
	from /Applications/MAMP/Library/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require'
	from /Applications/MAMP/Library/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require'
	from /Applications/MAMP/Library/lib/ruby/gems/2.3.0/gems/typhoeus-1.4.0/lib/typhoeus.rb:2:in `<top (required)>'
	from /Applications/MAMP/Library/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require'
	from /Applications/MAMP/Library/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require'
	from /Applications/MAMP/Library/lib/ruby/gems/2.3.0/gems/cocoapods-1.9.3/lib/cocoapods/sources_manager.rb:5:in `<top (required)>'
	from /Applications/MAMP/Library/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require'
	from /Applications/MAMP/Library/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require'
	from /Applications/MAMP/Library/lib/ruby/gems/2.3.0/gems/cocoapods-1.9.3/lib/cocoapods/core_overrides.rb:1:in `<top (required)>'
	from /Applications/MAMP/Library/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require'
	from /Applications/MAMP/Library/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require'
	from /Applications/MAMP/Library/lib/ruby/gems/2.3.0/gems/cocoapods-1.9.3/lib/cocoapods.rb:75:in `<module:Pod>'
	from /Applications/MAMP/Library/lib/ruby/gems/2.3.0/gems/cocoapods-1.9.3/lib/cocoapods.rb:17:in `<top (required)>'
	from /Applications/MAMP/Library/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:127:in `require'
	from /Applications/MAMP/Library/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:127:in `rescue in require'
	from /Applications/MAMP/Library/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:39:in `require'
	from /Applications/MAMP/Library/lib/ruby/gems/2.3.0/gems/cocoapods-1.9.3/bin/pod:36:in `<top (required)>'
	from /Applications/MAMP/Library/bin/pod:22:in `load'
	from /Applications/MAMP/Library/bin/pod:22:in `<main>'
```

## What did you do?

ℹ `sudo gem install cocoapods` & `pod install` << got the error here

## What did you expect to happen?

ℹ To be able to use the `pod` command.

## What happened instead?

ℹ I get the above error whenever I use `pod` commands...

## CocoaPods Environment

ℹ I get the same error when running `pod env` 😢

Answers:

username_1: This seems to be an issue with installing the `ffi` gem that CocoaPods depends upon. I would file an issue with them instead.

Status: Issue closed

username_2: @username_1 Hi, I'm having the exact same issue currently. I've raised the following issue with FFI: https://github.com/ffi/ffi/issues/836 However, the owner of that repo apparently doesn't own a Mac and knows little about the build process. If you look in that issue you should see all the info I've tried to give him, from myself directly and from Apple, but if you're able to assist by adding anything he might need to know (or where to look in the resources I linked) I would _hugely_ appreciate it. I appreciate you're probably busy, and being a dev too I understand how vague issues can be when they're in third-party packages, but in building with CocoaPods everything's third-party to us :-) If there's any more info I can contribute to help, please let me know. Thanks, James
tox-dev/tox
787599729
Title: Tox is too cowardly to handle PyPy on Windows

Question: username_0:

While working on https://github.com/twisted/towncrier/pull/314 I started getting the ```cowardly refusing to delete `envdir` ``` error: https://github.com/twisted/towncrier/pull/314/checks?check_run_id=1715424156 I just recreated it locally as well. Note that the `pep517` branch is over at https://github.com/username_0/towncrier/tree/ba56cc52d99810f93cfa3a2667025a06e7eda7e8.

In GitHub Actions:

```
ERROR: cowardly refusing to delete `envdir` (it does not look like a virtualenv): D:\a\towncrier\towncrier\.tox\pypy37-tests
```

Locally:

```
ERROR: cowardly refusing to delete `envdir` (it does not look like a virtualenv): C:\epc\towncrier\.tox\pypy37
.tox finish: provision after 1.49 seconds
```

<details>
<summary>tox run</summary>

```
PS C:\epc\towncrier> venv/scripts/tox -rvve pypy37
using tox.ini: C:\epc\towncrier\tox.ini (pid 1876)
removing C:\epc\towncrier\.tox\log
could not satisfy requires PackageNotFoundError('tox-wheel')
c:\epc\towncrier\venv\scripts\python.exe (c:\epc\towncrier\venv\scripts\python.exe) is {'executable': 'c:\\epc\\towncrier\\venv\\scripts\\python.exe', 'implementation': 'CPython', 'version_info': [3, 8, 3, 'final', 0], 'version': '3.8.3 (tags/v3.8.3:6f8c832, May 13 2020, 22:37:02) [MSC v.1924 64 bit (AMD64)]', 'is_64': True, 'sysplatform': 'win32', 'extra_version_info': None}
.tox uses c:\epc\towncrier\venv\scripts\python.exe
using tox-3.21.1 from c:\epc\towncrier\venv\lib\site-packages\tox\__init__.py (pid 1876)
.tox start: getenv C:\epc\towncrier\.tox\.tox
.tox cannot reuse: -r flag
.tox recreate: C:\epc\towncrier\.tox\.tox
removing C:\epc\towncrier\.tox\.tox
setting PATH=C:\epc\towncrier\.tox\.tox\Scripts;C:\ProgramData\Oracle\Java\javapath;C:\Program Files (x86)\Python36-32\Scripts\;C:\Program Files (x86)\Python36-32\;C:\Program Files (x86)\Python37-32\Scripts\;C:\Program Files (x86)\Python37-32\;bC:\Program Files (x86)\Python36-32\Scripts\;bC:\Program Files (x86)\Python36-32\;bC:\Program Files (x86)\Python35-32\Scripts\;bC:\Program Files (x86)\Python35-32\;bC:\Python27\;bC:\Python27\Scripts;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\Program Files (x86)\Windows Kits\8.1\Windows Performance Toolkit\;C:\Program Files\TortoiseGit\bin;C:\ProgramData\chocolatey\bin;C:\WINDOWS\System32\OpenSSH\;C:\Program Files\PuTTY\;C:\Program Files\Git\cmd;C:\epc\pypy\pypy2.7-v7.3.3-win32;C:\epc\pypy\pypy3.6-v7.3.3-win32;C:\epc\pypy\pypy3.7-v7.3.3-win32;C:\Users\sda\AppData\Local\Microsoft\WindowsApps;C:\Program Files\Mercurial;C:\Program Files (x86)\Graphviz2.38\bin;C:\Program Files\Python36\Scripts;C:\Program Files\Python36;C:/msys2;;C:\Program Files\JetBrains\PyCharm Community Edition 2019.3.3\bin;
[5600] C:\epc\towncrier\.tox$ 'c:\epc\towncrier\venv\scripts\python.exe' -m virtualenv --no-download --python 'c:\epc\towncrier\venv\scripts\python.exe' .tox
created virtual environment CPython3.8.3.final.0-64 in 699ms
creator CPython3Windows(dest=C:\epc\towncrier\.tox\.tox, clear=False, no_vcs_ignore=False, global=False)
seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=C:\Users\sda\AppData\Local\pypa\virtualenv)
added seed packages: pip==20.3.3, setuptools==51.1.2, wheel==0.36.2
activators BashActivator,BatchActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator
.tox installdeps: virtualenv>=20.0.35, tox-wheel>=0.5.0, tox >= 3.21.1
setting PATH=C:\epc\towncrier\.tox\.tox\Scripts;C:\ProgramData\Oracle\Java\javapath;C:\Program Files (x86)\Python36-32\Scripts\;C:\Program Files (x86)\Python36-32\;C:\Program Files (x86)\Python37-32\Scripts\;C:\Program Files (x86)\Python37-32\;bC:\Program Files (x86)\Python36-32\Scripts\;bC:\Program Files (x86)\Python36-32\;bC:\Program Files (x86)\Python35-32\Scripts\;bC:\Program Files (x86)\Python35-32\;bC:\Python27\;bC:\Python27\Scripts;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\Program Files (x86)\Windows Kits\8.1\Windows Performance Toolkit\;C:\Program Files\TortoiseGit\bin;C:\ProgramData\chocolatey\bin;C:\WINDOWS\System32\OpenSSH\;C:\Program Files\PuTTY\;C:\Program Files\Git\cmd;C:\epc\pypy\pypy2.7-v7.3.3-win32;C:\epc\pypy\pypy3.6-v7.3.3-win32;C:\epc\pypy\pypy3.7-v7.3.3-win32;C:\Users\sda\AppData\Local\Microsoft\WindowsApps;C:\Program Files\Mercurial;C:\Program Files (x86)\Graphviz2.38\bin;C:\Program Files\Python36\Scripts;C:\Program Files\Python36;C:/msys2;;C:\Program Files\JetBrains\PyCharm Community Edition 2019.3.3\bin;
[9208] C:\epc\towncrier$ 'C:\epc\towncrier\.tox\.tox\Scripts\python.EXE' -m pip install 'virtualenv>=20.0.35' 'tox-wheel>=0.5.0' 'tox >= 3.21.1'
Collecting tox>=3.21.1
  Using cached tox-3.21.1-py2.py3-none-any.whl (84 kB)
Collecting tox-wheel>=0.5.0
  Using cached tox_wheel-0.6.0-py2.py3-none-any.whl (6.2 kB)
Requirement already satisfied: wheel>=0.33.1 in c:\epc\towncrier\.tox\.tox\lib\site-packages (from tox-wheel>=0.5.0) (0.36.2)
[Truncated]
│ │ │ └───__pycache__
│ │ └───__pycache__
│ └───__pycache__
├───wheel-0.36.0.dist-info
├───zope
│ └───interface
│   ├───common
│   │ ├───tests
│   │ │ └───__pycache__
│   │ └───__pycache__
│   ├───tests
│   │ └───__pycache__
│   └───__pycache__
├───zope.interface-5.2.0.dist-info
├───_distutils_hack
│ └───__pycache__
└───__pycache__
```

</details>

Answers:

username_1: looks like pypy chose `lib` and `Scripts` instead of `Lib` and `Scripts` for some reason -- I wonder if this is something pypy should change to be more like cpython: https://github.com/tox-dev/tox/blob/4061f56b91e01e4b8b36043eb67a80115a39d9be/src/tox/venv.py#L752

username_2: I disagree. IMHO the issue here is that we're not interrogating the target environment about its sysconfig, but assuming hardcoded paths.
🤷🏻 Python distributions are free to change their directory layout as they wish, and there's no PEP to restrict them in any way...

username_1: @username_2 why didn't tox write its `.tox-config1` into this environment?

username_2: I'm not sure off the top of my head 🤔

username_1: the directory interrogation is really only meant for the situation where tox is replacing a `--devenv` -- so something must've changed that prevented it from writing the `.tox-config` correctly

username_2: Closing this, as it seems to be a transient state.

Status: Issue closed

username_0: Transient? It happens every time I run the CI on that PR.

username_0: I also didn't add PyPy testing on Windows for Twisted due to this. https://github.com/twisted/twisted/pull/1516#issuecomment-779449653

username_1: I can't reproduce, so it's something you're not showing, I assume:

```
(venv) C:\Users\asott\AppData\Local\Temp\x\astpretty>pip freeze --all
appdirs==1.4.4
cffi==1.14.3
colorama==0.4.4
distlib==0.3.1
filelock==3.0.12
greenlet==0.4.13
importlib-metadata==3.7.0
packaging==20.9
pip==20.1.1
pluggy==0.13.1
py==1.10.0
pyparsing==2.4.7
readline==6.2.4.1
setuptools==47.1.0
six==1.15.0
toml==0.10.2
tox==3.22.0
typing-extensions==3.7.4.3
virtualenv==20.4.2
zipp==3.4.0

(venv) C:\Users\asott\AppData\Local\Temp\x\astpretty>tox -e pypy37
GLOB sdist-make: C:\Users\asott\AppData\Local\Temp\x\astpretty\setup.py
pypy37 create: C:\Users\asott\AppData\Local\Temp\x\astpretty\.tox\pypy37
pypy37 installdeps: -rrequirements-dev.txt
pypy37 inst: C:\Users\asott\AppData\Local\Temp\x\astpretty\.tox\.tmp\package\1\astpretty-2.1.0.zip
pypy37 installed: astpretty @ file:///C:/Users/asott/AppData/Local/Temp/x/astpretty/.tox/.tmp/package/1/astpretty-2.1.0.zip,atomicwrites==1.4.0,attrs==20.3.0,cffi==1.14.3,colorama==0.4.4,covdefaults==1.2.0,coverage==5.4,greenlet==0.4.13,importlib-metadata==3.7.0,iniconfig==1.1.1,packaging==20.9,pluggy==0.13.1,py==1.10.0,pyparsing==2.4.7,pytest==6.2.2,readline==6.2.4.1,toml==0.10.2,typing-extensions==3.7.4.3,zipp==3.4.0
pypy37 run-test-pre: PYTHONHASHSEED='198'
pypy37 run-test: commands[0] | coverage erase
pypy37 run-test: commands[1] | coverage run -m pytest tests
================================================= test session starts =================================================
platform win32 -- Python 3.7.9[pypy-7.3.3-beta], pytest-6.2.2, py-1.10.0, pluggy-0.13.1
cachedir: .tox\pypy37\.pytest_cache
rootdir: C:\Users\asott\AppData\Local\Temp\x\astpretty
collected 28 items

tests\astpretty_test.py ......................xxxxxx [100%]

============================================ 22 passed, 6 xfailed in 3.34s ============================================
pypy37 run-test: commands[2] | coverage report --fail-under 100
Name    Stmts   Miss Branch BrPart  Cover   Missing
---------------------------------------------------
---------------------------------------------------
TOTAL     193      0     36      0   100%

3 files skipped due to complete coverage.
summary
  pypy37: commands succeeded
  congratulations :)
```

username_0: I'm not sure what I could be not showing in my public repositories and CI runs... `:]` With a tool that tox uses, no less. Did you try reproducing using the PR that was shared as exposing the issue? I'm not surprised that an unrelated project doesn't have a problem. But sure, it doesn't happen everywhere. Anyways, I'll try to break down the change to get to something more specific, since complete build records apparently aren't sufficient. Sorry there's some snark here, but suggesting I am keeping secrets doesn't really seem fair given everything noted above.
username_1: my guess is you're using some plugin which is monkeypatching tox internals in a broken way, and that's what's causing your problem. I've demonstrated it's not an issue with tox. Note also that we're doing this for free and we're not your personal debugging service, so please have some empathy.

username_0: I'm trying to get the next version of a tool you use released... so likewise. I didn't complain about it not getting fixed quickly. I didn't complain about the closure, just tried to clarify. Being told I'm not sharing things when the complete code and build log are available is not reasonable. And you know perfectly well that a single case working is not remotely close to evidence it isn't an issue with tox. But sure, there is a whole project involved here.

username_0: It seems that PyPy 3.7 itself is unable to create a venv. Further, it fails without a failure exit code from what I can tell. I noticed this locally first but recreated it in CI for sharing. I'll try to dig into this in the PyPy direction. PyPy 3.6 does successfully create an env directly in CI and exhibits the same issue reported above when used via tox. The fact that 3.6 and 3.7 both have the same issue via tox but different issues when calling venv directly leaves me thinking there is a PyPy issue and also another issue still that may be in tox. But sure, when there's an issue this flagrant in a relevant bit of the underlying tooling (PyPy venv), I understand that being a thing that should perhaps be explored first.

PyPy 3.6: https://github.com/twisted/towncrier/pull/321/checks?check_run_id=1995458131

I noticed that the tree I provided in the initial report lacks the actual files, instead only listing the directories. Apparently `tree` in Windows requires `/f` to show files...
My apologies for that error. Note that this exploration was added in the separate PR https://github.com/twisted/towncrier/pull/321/files: https://github.com/twisted/towncrier/pull/321/checks?check_run_id=1995458141

```yaml
- name: Try creating an env
  if: matrix.python.major == 3
  run: |
    python -m venv a_unique_env_directory
    tree /f a_unique_env_directory
```

```
2021-02-27T21:01:27.3134872Z python -m venv a_unique_env_directory
2021-02-27T21:01:27.3135441Z tree /f a_unique_env_directory
2021-02-27T21:01:27.3200612Z shell: C:\Program Files\PowerShell\7\pwsh.EXE -command ". '{0}'"
2021-02-27T21:01:27.3201167Z env:
2021-02-27T21:01:27.3201713Z   pythonLocation: C:\hostedtoolcache\windows\PyPy\3.7.9\x86
2021-02-27T21:01:27.3202321Z ##[endgroup]
2021-02-27T21:01:28.0784308Z Unable to copy 'C:\\hostedtoolcache\\windows\\PyPy\\3.7.9\\x86\\venvlauncher.exe'
2021-02-27T21:01:28.2212747Z Error: [WinError 2] The system cannot find the file specified
2021-02-27T21:01:28.2590284Z Folder PATH listing for volume Temporary Storage
2021-02-27T21:01:28.2591041Z Volume serial number is 3672-F364
2021-02-27T21:01:28.2591678Z D:\A\TOWNCRIER\TOWNCRIER\A_UNIQUE_ENV_DIRECTORY
2021-02-27T21:01:28.2594289Z │   pyvenv.cfg
2021-02-27T21:01:28.2594893Z │
2021-02-27T21:01:28.2595728Z ├───Include
2021-02-27T21:01:28.2596391Z ├───Lib
2021-02-27T21:01:28.2596961Z │   └───site-packages
2021-02-27T21:01:28.2598233Z └───Scripts
2021-02-27T21:01:28.2599258Z         libpypy3-c.dll
2021-02-27T21:01:28.2600784Z         pypy3.exe
```

![image](https://user-images.githubusercontent.com/543719/109400229-79393b00-7915-11eb-95dd-78736a9730b9.png)

username_2: I'm 99% sure this will end up being a pypy bug and we can't do much here.
username_2: PyPy3 on Windows uses lowercase ``lib`` instead of uppercase ``Lib``, and this breaks our assumptions in tox 3 -- see https://github.com/tox-dev/tox/blob/4061f56b91e01e4b8b36043eb67a80115a39d9be/src/tox/venv.py#L752

username_2: The underlying issue here is that newer pypy patches the path for venv at https://foss.heptapod.net/pypy/pypy/-/blob/branch/py3.7/lib-python/3/venv/__init__.py#L128-137, rather than using ``sysconfig.purelib``/``sysconfig.platlib`` as one would expect 🤔 and virtualenv patches for an old behaviour of pypy...

username_2: The issue here is https://github.com/pypa/virtualenv/issues/2071

Status: Issue closed

username_0: @username_2, thank you for your time digging into this. You are certainly more familiar with these mechanisms, including how they should be the same and who should accommodate which variations.

username_1: @username_2 actually, tox-wheel is what's breaking this -- they're monkeypatching tox's venv setup so it doesn't write the config out

username_2: That's not a problem (though admittedly unfortunate behavior). Were you to remove ``tox-wheel``, it would still fail due to the above. I've tested it.

username_1: it doesn't for me, though one run of `tox-wheel` poisons the `.tox` cache

username_2: Do you have pypy3.7-7.3.3 on Windows? For me, that does replicate it (with tox-wheel removed).

username_1: yep
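username_2's point -- interrogate the target environment's `sysconfig` instead of assuming hardcoded `Lib`/`Scripts` names -- can be sketched as follows. This is a simplified illustration of the approach, not tox's actual implementation:

```python
import json
import subprocess
import sys

def env_script_and_lib_dirs(python_exe: str):
    """Ask the target interpreter itself where its scripts and
    site-packages directories are, instead of assuming that a
    virtualenv contains 'Scripts' and 'Lib' (PyPy3 on Windows
    uses lowercase 'lib', breaking that assumption)."""
    code = (
        "import json, sysconfig; "
        "print(json.dumps([sysconfig.get_path('scripts'), "
        "sysconfig.get_path('purelib')]))"
    )
    out = subprocess.check_output([python_exe, "-c", code], text=True)
    scripts, purelib = json.loads(out)
    return scripts, purelib

# e.g. for the running interpreter:
scripts, purelib = env_script_and_lib_dirs(sys.executable)
```

Because the paths come from the environment's own `sysconfig`, a layout change by a Python distribution would be picked up automatically rather than tripping the "does not look like a virtualenv" check.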
kelleyma49/PSFzf
548320681
Title: Support --expect and --execute Question: username_0: It looks like some fzf options like --expect and --execute work OK on Windows. Could they be added to the Invoke-Fzf wrapper? Answers: username_1: I'm working on this. Having some issues with my github actions - please bear with me. :) username_1: Here's a prerelease version with expect support: https://www.powershellgallery.com/packages/PSFzf/1.1.35-alpha. I haven't had time to test if it really works yet, but feel free to try it out. username_1: @username_0, did this work for you? username_0: Yes, this works! Sorry, came back to check something else and realized this was still open. Status: Issue closed
BryanWilhite/SonghayCore
654420424
Title: add `GetRelativePath` based on `GetPathFromAssembly` Question: username_0: take these lines for reuse: ```csharp if (string.IsNullOrWhiteSpace(fileSegment)) throw new ArgumentNullException("fileSegment", "The expected file segment is not here."); fileSegment = FrameworkFileUtility.TrimLeadingDirectorySeparatorChars(fileSegment); if (Path.IsPathRooted(fileSegment)) throw new FormatException("The expected relative path is not here."); fileSegment = FrameworkFileUtility.NormalizePath(fileSegment); ``` https://github.com/username_0/SonghayCore/blob/f10086b6e6f6abe3fb5328cf58f85f97151611ac/SonghayCore/FrameworkAssemblyUtility.cs#L84 Answers: username_0: throw this in the hopper as well: ```csharp /// <summary> /// Gets the absolute or relative path with the specified file segment. /// </summary> /// <param name="baseInfo">The base information.</param> /// <param name="fileSegment">The file segment.</param> /// <returns></returns> /// <exception cref="ArgumentNullException"> /// baseInfo /// or /// fileSegment /// </exception> public static string GetAbsoluteOrRelativePath(DirectoryInfo baseInfo, string fileSegment) { if (baseInfo == null) throw new ArgumentNullException(nameof(baseInfo)); if (string.IsNullOrEmpty(fileSegment)) throw new ArgumentNullException(nameof(fileSegment)); fileSegment = FrameworkFileUtility.TrimLeadingDirectorySeparatorChars(fileSegment); fileSegment = FrameworkFileUtility.NormalizePath(fileSegment); return Path.IsPathRooted(fileSegment) ? fileSegment : baseInfo.ToCombinedPath(fileSegment); } ``` username_0: `GetRelativePath` will be the equivalent of calling: - `TrimLeadingDirectorySeparatorChars(string)` - `NormalizePath(string)` - `RemoveBackslashPrefixes(string)` - `RemoveForwardslashPrefixes(string)` This will allow a re-factoring of `GetCombinedPath()` to pass through a rooted `path` as well as use `Path.Combine()` as usual. Status: Issue closed
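For illustration, here is a rough Python sketch of the combined behaviour described above (trim leading separator characters, normalize to forward slashes, then pass rooted segments through or combine with the base). The function names mirror the C# utilities, but this is an assumption-laden sketch, not a translation of the actual SonghayCore code:

```python
import ntpath
import posixpath

def get_relative_path(file_segment: str) -> str:
    """Trim leading '/' or '\\' characters and normalize to forward slashes."""
    if not file_segment:
        raise ValueError("The expected file segment is not here.")
    trimmed = file_segment.lstrip("/\\")
    return trimmed.replace("\\", "/")

def get_absolute_or_relative_path(base: str, file_segment: str) -> str:
    """Pass drive-rooted segments through unchanged; otherwise combine with base."""
    if not base:
        raise ValueError("The expected base path is not here.")
    segment = get_relative_path(file_segment)
    # after trimming leading separators, only drive-rooted paths (C:/...)
    # remain rooted, so a drive check stands in for Path.IsPathRooted
    if ntpath.splitdrive(segment)[0]:
        return segment
    return posixpath.join(base, segment)
```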
alphagov/govuk-design-system
852440076
Title: Scope options for mailing list platform
Question: username_0: ## What
We want to create a release/event-based mailing list. Before we do that, we need to scope which platform will be appropriate in terms of functionality and metrics.
## Why
There has been demand for more information on release notes/design decisions. These go into GitHub, but clearly people are not finding them easily. A mailing list will surface important information and send it directly to their inboxes.
## Who
Community manager
Content designer
Answers: username_1: So far options I'm aware of are:
- Mailchimp - used by cross-gov UCD (send only) and content communities (capture and send)
- Smartsurvey - capture only - used by cross-gov UCD (capture)
- GOV.UK Notify - send only - not sure anyone uses for newsletters
username_2: Agreed to use Mailchimp as it meets our needs.
Status: Issue closed
openshift/origin
294660569
Title: Routes should support Generation and ObservedGeneration
Question: username_0: https://github.com/kubernetes/community/blob/master/contributors/devel/controllers.md
It's hard to build controllers on top of it otherwise because they have no way of reporting what generation they have already reconciled. And you end up with flaky tests full of sleep(magic_number).
Should be an easy change:
1. In admission set `generation==1` on Create.
1. In admission raise `generation` for every Spec change.
1. Set `status.observedGeneration` to `generation` at the end of the sync loop if they don't already match.
@openshift/sig-networking @username_2
Answers: username_1: /kind feature
username_1: /assign @knobunc
username_2: For routes it's a bit more complex. Routes can have multiple controllers. The status field is subdivided. We also are very judicious about routers setting status because it can generate write storms. We should add generation for sure. We should probably output generation into the template for debugging. Not sure how much further we should go.
username_0: I am fine with having at least Generation, as that will provide a base for higher-level controllers to report their own observedGeneration (using an annotation). I guess observedGeneration wouldn't really mean that much for Routes, as the controller doesn't do a lot that's externally visible and the admission is set by the appropriate router.
On a side note, if I recall correctly, I couldn't figure out a way to find out if a Route is exposed. I could find out that it's admitted, but that doesn't mean it's exposed. Am I missing something or do we need to report such status better?
username_0: /lifecycle froze
username_1: /lifecycle frozen
username_3: /remove-sig networking
/sig network-edge
Status: Issue closed
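The generation/observedGeneration contract in the three numbered steps above can be sketched in a few lines. This is a toy illustration of the pattern, not OpenShift or Kubernetes code:

```python
class FakeRoute:
    """Toy object demonstrating the generation/observedGeneration contract."""

    def __init__(self, spec):
        self.spec = spec
        self.generation = 1            # admission sets generation == 1 on Create
        self.observed_generation = 0   # status.observedGeneration, nothing synced yet

    def update_spec(self, new_spec):
        # admission raises generation for every Spec change
        if new_spec != self.spec:
            self.spec = new_spec
            self.generation += 1

    def sync(self):
        # ... reconcile the object here ...
        # at the end of the sync loop, record the generation we acted on
        if self.observed_generation != self.generation:
            self.observed_generation = self.generation

    def is_reconciled(self):
        # what a higher-level controller or test polls instead of sleep(magic_number)
        return self.observed_generation == self.generation
```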
ernbrn/quiet-comment
500332471
Title: PRs with long title / PR number push elements to the right down and break the sticky navbar Question: username_0: <img width="1060" alt="Screen Shot 2019-09-30 at 10 50 48 AM" src="https://user-images.githubusercontent.com/7559041/65890399-d28fea00-e370-11e9-9b3a-42a7887e0cb1.png"> The "Quiet Comment" button and others are pushed out of the sticky nav container 😳
Atlantiss/NetherwingBugtracker
365223006
Title: [Quest][Wetlands] Young Crocolisk Skins
Question: username_0: **Description**: Quest is not available for my level 22 Draenei hunter, even after completing breadcrumb quest 469 Daily Delivery.
**Current behaviour**: <NAME> does not offer quest 484 Young Crocolisk Skins, tested both before and after completing 469 Daily Delivery. Maybe he doesn't like Draenei?
**Expected behaviour**: The quest should be available; another bug report suggests others have been able to take it previously.
**Server Revision**: 2086
Answers: username_0: I just got the quest, seems to be working as of rev 2094 :+1: Acquired at level 23, but should be available from level 18 as per https://web.archive.org/web/20071111191900/http://thottbot.com:80/q484.
username_1: I think the problem is that it requires honored Ironforge reputation, so maybe you got that eventually and could take it. I don't think it should require that though! I can't find any place that mentions it besides other private servers. It also makes no sense because the quest gives Stormwind reputation. There was a comment where someone assumed Daily Delivery needed honored reputation, but someone corrected them - it's just a guess, but maybe someone set them both to that long ago based on the probably-bad comment, then later the first one was fixed while this one was forgotten about. It will be removed soon unless someone finds evidence that that's how it should be.
username_1: That requirement is removed in revision 2152.
Status: Issue closed
evennia/evennia
324132090
Title: Compile/Run error when loading cmdset
Question: username_0:
```
(Unsuccessfully tried 'pegasus_cmdsets.TableTestCmd').
Traceback (most recent call last):
  File "/home/swift/muddev/evennia/evennia/commands/cmdsethandler.py", line 198, in import_cmdset
    cmdsetclass = cmdsetclass(cmdsetobj)
TypeError: __init__() takes exactly 1 argument (2 given)
```

#### Extra information, such as Evennia revision/repo/branch, operating system and ideas for how to solve / implement:

Version output:
```
Evennia 0.7.0 (rev 5335f217)
MU* development system
Licence https://opensource.org/licenses/BSD-3-Clause
Web http://www.evennia.com
Irc #evennia on irc.freenode.net:6667
Forum http://www.evennia.com/discussions
Maintainer (2010-) username_1 (griatch AT gmail DOT com)
Maintainer (2006-10) <NAME>
OS posix
Python 2.7.14
Twisted 16.0.0
Django 1.11.10
```
OS Info:
```
Distributor ID: Ubuntu
Description: Ubuntu 17.10
Release: 17.10
Codename: artful
```
Status: Issue closed
Answers:
username_1: As Tehom (and the wiki) says, you should add a cmdset, not a Command. Closing.
Status: Issue closed
username_0: Ah! that makes sense. Sorry for the false alarm, thought I'd found a bug. thanks Tehom & username_1
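The error itself is easy to reproduce outside Evennia: the handler instantiates whatever class the path points at, passing it the object the cmdset is attached to, and a Command-style class has no parameter for that argument. A stand-alone illustration (the class names are stand-ins, not real Evennia classes):

```python
class TableTestCmd:
    """Stand-in for a Command subclass: __init__ accepts no extra argument."""
    def __init__(self):
        pass

class TableTestCmdSet:
    """Stand-in for a CmdSet subclass: accepts the owning object."""
    def __init__(self, cmdsetobj=None):
        self.cmdsetobj = cmdsetobj

def import_cmdset(cmdsetclass, cmdsetobj):
    # mirrors the failing line in cmdsethandler.py: cmdsetclass(cmdsetobj)
    return cmdsetclass(cmdsetobj)

try:
    import_cmdset(TableTestCmd, object())
except TypeError as exc:
    # Python 2 words this as "__init__() takes exactly 1 argument (2 given)"
    print("reproduced:", exc)

cmdset = import_cmdset(TableTestCmdSet, object())  # a cmdset-shaped class works fine
```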
NEU-Libraries/cerberus
164604255
Title: A/V out of sync
Question: username_0: I have a small set of videos that meet the requirements for Wowza streaming (MP4, H.264, AAC), but the audio and video are out of sync when uploaded to the DRS and streamed. Joey and I have tweaked the videos in a few different ways (changed to AAC passthru, did a basic transform using ffmpeg, changed the frame rate from 30 to 29.97), but the audio and video stay out of sync when uploaded. Are there clues in our Wowza or JWPlayer configurations that might help us investigate this issue?
Answers: username_1: I'm getting a different issue looking at https://repository.library.northeastern.edu/files/neu:cj82n908x
```There was an error playing this file. Please try on a different device, verify Flash is enabled, or download the file.```
combined with
```[.PPAPIContext]GL ERROR :GL_INVALID_ENUM : glTexImage2D: target was GL_TEXTURE_RECTANGLE_ARB```
in the JavaScript console. Will look into it.
username_1: Ok, fixed my own problem (related to my specific Chrome settings it seems). Trying to reproduce the sync issue.
username_0: Let me know if you need more sample videos!
username_1: Hm, not sure how much we can do about this. General search queries for this kind of issue with Wowza tend to turn up posts suggesting to look at the encoding (which I know you and Joey have done).
There is this - https://www.wowza.com/forums/content.php?356-How-to-debug-AAC-or-MP3-timecode-issues-with-Apple-HLS-packetization - which we can try sometime, but all it might do is highlight what we can see/hear, without giving a cause
username_1: I'm seeing artifacts and delays offline, as well?
username_0: The videos aren't high quality to begin with, but none of the original videos I was given seem to be out of sync
username_1: Hard to tell with the quality. I'll try and get some logging occurring, either with the master file offline or with Wowza
Status: Issue closed
username_0: I'm pretty sure I've fixed the videos.
ant-design/ant-design
515638846
Title: DatePicker[onChange] receiving mismatched values for callback Question: username_0: - [ ] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate. ### Reproduction link [![Edit on CodeSandbox](https://codesandbox.io/static/img/play-codesandbox.svg)](https://codesandbox.io/s/antd-reproduction-template-3ey68?fontsize=14) ### Steps to reproduce Change the date of each of the two DatePickers to any day, let's say 2019-10-31. Observe the console log. Note that the first DatePicker with no value prop shows the date and dateString as having matching date portions (2019-10-31). The date is a moment, and thus also includes a time component. Note that the second DatePicker with a value prop defined shows the date and dateString as having different date portions. The date is a moment with a date portion still at the date of the value passed in (2019-01-01), but the dateString has a date matching the newly selected date (2019-10-31) ### What is expected? First and foremost, I believe the date and dateString should match. It does not make sense to have a callback where the two values diverge. This is where you would handle what you want to do with the new date, but it is only passing me the previous date. A common use case here would be to have redux update the store with the new `date` value in the callback. ### What is actually happening? When the value prop is passed to DatePicker, it does not update the `date` argument passed to the onChange callback, but it does for the `dateString` argument. | Environment | Info | |---|---| | antd | 3.24.3 | | React | [email protected] | | System | Mac OS Mojave (10.14.6) | | Browser | Google Chrome Version 77.0.3865.120 (Official Build) (64-bit) | --- I ran into this bug because my redux store update would not change the date as I had anticipated. Diving in, I noticed that the date and dateString diverged when using the value prop. 
Note that I encounter the same issue if I use defaultValue instead of value for the prop.

<!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
Answers: username_0: moving to another issue.
Status: Issue closed
nodejs/nodejs.org
252703775
Title: Wikimedia logo marked for deletion because of licensing issue
Question: username_0: See https://commons.wikimedia.org/wiki/Commons:Deletion_requests/File:Node.js_logo.svg
Apparently the SVG logo approximation I made based on the AI file was marked for deletion because the linked source page (https://nodejs.org/en/about/resources/) does not make that logo available under a free licence.
I'm not sure how to proceed. Shall we add the SVG logo to that page and note it as freely available as opposed to the official AI version?
Answers: username_1: Hmmm, not really sure either. :/
username_2: I will bring this up in our internal team meeting. This feels silly. More like a relic of the group who originally worked on the files vs. what's actually allowed.
username_2:
> You may use Node.js or the Node.js logo on your website as a hyperlink to the home page of the Node.js Foundation. Please make sure any use of the marks follow the Visual Identity Guidelines. For example, the marks may be resized, but not modified in any other way. And remember that you must avoid any use that implies any endorsement of you or your site or otherwise by the Foundation. The Foundation may in the future define specific image files to use for these use cases.

Maybe I'm misunderstanding the problem.
username_3: The request was made by an anonymous user, that's a red flag. I'm referencing the policy in a reply to that request.
username_3: I opened a deletion request for an old duplicate that might have been scraped by a bot 3 years ago. https://commons.wikimedia.org/wiki/File:Node.js_logo_2015.svg The references on that page show it's not used in any wiki entry.
username_4: Deletion debate on Wikimedia is flagged as closed, closing this issue as well.
Status: Issue closed
jsonata-js/jsonata
303060229
Title: Alternative to require('jsonata') to be used on the server side
Question: username_0: I am writing code for the server side using Python and need jsonata to parse the JSON data. For interfacing between Python and JavaScript, I am using the js2py library. It works well for any given JavaScript code and processes the result. However, when I run the code with the given JavaScript code for jsonata, it throws an error
```
Traceback (most recent call last):
  File "/home/souvik/PycharmProjects/ServiceHandler/Testjs.py", line 67, in <module>
    data = js2py.eval_js(data)
  File "/home/souvik/utorapp/lib/python3.5/site-packages/js2py/evaljs.py", line 113, in eval_js
    return e.eval(js)
  File "/home/souvik/utorapp/lib/python3.5/site-packages/js2py/evaljs.py", line 182, in eval
    self.execute(code, use_compilation_plan=use_compilation_plan)
  File "/home/souvik/utorapp/lib/python3.5/site-packages/js2py/evaljs.py", line 177, in execute
    exec(compiled, self._context)
  File "<EvalJS snippet>", line 2, in <module>
  File "/home/souvik/utorapp/lib/python3.5/site-packages/js2py/base.py", line 899, in __call__
    return self.call(self.GlobalObject, args)
  File "/home/souvik/utorapp/lib/python3.5/site-packages/js2py/base.py", line 1344, in call
    return Js(self.code(*args))
  File "/home/souvik/utorapp/lib/python3.5/site-packages/js2py/host/jseval.py", line 42, in Eval
    executor(py_code)
  File "/home/souvik/utorapp/lib/python3.5/site-packages/js2py/host/jseval.py", line 49, in executor
    exec(code, globals())
  File "<string>", line 2, in <module>
  File "/home/souvik/utorapp/lib/python3.5/site-packages/js2py/base.py", line 1079, in get
    raise MakeError('ReferenceError', '%s is not defined' % prop)
js2py.internals.simplex.JsException: ReferenceError: require is not defined
```
It seems the error is generated by `require('jsonata')`. Since I want jsonata to work on the server side, is there any way to do it?
Answers: username_1: @username_0 Can you show the original Python code you're using, and can you show the JavaScript where the error occurred? I'm looking through the `jsonata` source and distribution but I don't see a place where we do `require(jsonata)` or `require('jsonata')` or anything like that... I think a line of code like that is more likely to appear in your code than in ours.
pyvista/pyvista
1110620794
Title: Fail to read mesh from obj in CentOS
Question: username_0: **Describe the bug, what's wrong, and what you expect:**
On Linux it fails to load the obj file, while the same file loads on Windows. The model is a simple box exported by an Ansys tool. If needed I can send the file.

-----

**To Reproduce**

```py
import pyvista as pv
data = pv.read(file)
```

**Screenshots**

-----

![pyvisa_screenshot](https://user-images.githubusercontent.com/77293250/150556547-a470c3b5-999e-4fb6-bb27-f77ccd89640a.png)

**System Information:**
Linux CentOS
Answers: username_1: Can you share the mesh file? I suspect this is an issue with the data and not CentOS or PyVista
username_1: @username_0, I don't think attaching files works when replying to the comment via email (at least not for this format). Would you share via a Dropbox or Google Drive link (or something similar)
username_2: No issue with:
```py
import pyvista as pv
data = pv.read('./Model_AllObjs_AllMats.obj')
```

### `pyvista.Report`:

```
--------------------------------------------------------------------------------
Date: Sat Jan 22 09:44:05 2022 MST

OS : Linux
CPU(s) : 8
Machine : x86_64
Architecture : 64bit
RAM : 38.8 GiB
Environment : IPython
File system : ext4
GPU Vendor : Intel
GPU Renderer : Mesa Intel(R) UHD Graphics 620 (WHL GT2)
GPU Version : 4.6 (Core Profile) Mesa 21.0.3

Python 3.8.10 (default, Nov 26 2021, 20:14:08) [GCC 9.3.0]

pyvista : 0.34.dev0
vtk : 9.1.0
numpy : 1.21.4
imageio : 2.14.0
appdirs : 1.4.4
scooby : 0.5.9
matplotlib : 3.5.1
pyvistaqt : 0.6.0
PyQt5 : 5.11.3
IPython : 8.0.0
colorcet : 3.0.0
cmocean : 2.0
ipyvtklink : 0.2.2
scipy : 1.7.3
itkwidgets : 0.32.1
tqdm : 4.62.3
meshio : 5.2.5
--------------------------------------------------------------------------------
```
username_1: @username_0, I don't think the contents of your email reply are coming through. I'd recommend using the GitHub web interface for this issue: https://github.com/pyvista/pyvista/issues/2060
username_0: Hello, sorry, I was sure email image attachments would have worked. I attach the picture from the GitHub site.
![image](https://user-images.githubusercontent.com/77293250/150824567-4eeaed59-5800-4b26-9872-88f7b0ac4346.png)
username_1: My recommendation is to upgrade your Python environment (3.6 has passed its end of life and we only half-heartedly support 3.6 at this point). After upgrading Python to, say, 3.8 or 3.9, then also try upgrading VTK to v9
aymericbeaumet/metalsmith-concat
145673709
Title: Simplify paths Question: username_0: Make sure complex paths like `a///b////c` are properly simplified to `a/b/c` Answers: username_0: also handle `a\\\\b\\\\c` username_0: closed by https://github.com/username_0/metalsmith-concat/commit/be0c30053de262f1881ad67778590a5157c1841e Status: Issue closed
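One straightforward way to express the requested simplification is a single regex that collapses runs of either separator. This is a sketch, not necessarily how metalsmith-concat implements it, and note it would also flatten Windows UNC prefixes like ``\\server``:

```python
import re

def simplify_path(path: str) -> str:
    """Collapse any run of '/' or '\\' separators into a single '/'."""
    return re.sub(r"[\\/]+", "/", path)
```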
dapr/components-contrib
906409682
Title: Improve error handling for AKV secret store component with Managed Identity Question: username_0: I would like to suggest improving the error handling for the AKV (Azure Key Vault) secret store component when using Managed Identity. Managed Identity uses [AAD Pod Identity](https://github.com/Azure/aad-pod-identity) to authenticate against the AKV. To do so you will need to map the Selector from the AzureIdentityBindings resources within the Deployment/Pod you would like to use it. ``` yaml apiVersion: aadpodidentity.k8s.io/v1 kind: AzureIdentityBinding metadata: name: test-identity-binding spec: azureIdentity: test-identity selector: test-identity ``` ``` yaml apiVersion: apps/v1 kind: Deployment metadata: labels: app: test-app name: test-app spec: replicas: 1 selector: matchLabels: app: test-app template: metadata: annotations: dapr.io/app-id: test-app dapr.io/enabled: "true" creationTimestamp: null labels: aadpodidbinding: test-identity app: test-app spec: containers: .... ``` If the selector "test-identity" from the Binding does not match the "aadpodidbinding" label in the deployment AAD Pod Identity will not work. In case of wrong mapping, the Dapr sidecar will successfully start, the component will be loaded and no errors are displayed within the logs (with debugging enabled). Of course, you will be unable to retrieve any secrets. The request gets no response and runs into a timeout. I tested this via curl: `curl http://localhost:3500/v1.0/secrets/<akv>/bulk` with Dapr 1.2 I suggest implementing better error handling and output to allow easier debugging. Answers: username_1: @username_0 do you have any suggestions how this should be done? I want to defer the MSI logic to the standard Azure Go SDK and there simply would be no way to check your intention to use pod identity. 
The Azure Go SDK (which we don't use for KV auth right now) does this: ``` // NewAuthorizerFromEnvironment creates a keyvault dataplane Authorizer configured from environment variables in the order: // 1. Client credentials // 2. Client certificate // 3. Username password // 4. MSI ``` For 1, 2 and 3 I believe it examines environment variables. I would argue that pod identity deployment and labeling should be handled in CI/CD. But either way, perhaps you could raise this issue on the AAD pod identity repo https://github.com/Azure/aad-pod-identity? This to me sounds like something that should live within the pod identity logs. username_0: @username_1 you’re right. I guess it is something AAD Pod Identity should fix. But maybe (and if it’s possible at all) some debugging output might be helpful for easier debugging. Not sure whether users will review AAD Pod Identity logs to analyse such issues. username_1: @username_0 maybe some sort of global debug / print option for the sidecar that could check for a pod identity and print the ID of the identity if it found one into the Dapr logs. This wouldn't be Key Vault specific as every service using MSI could benefit from this. Could you update this issue to something like the following? "Enable logging of AAD pod identity for Azure component authentication" username_0: @username_1 Yes, this would be great. I just updated the title to "Enable logging of AAD pod identity for Azure component authentication".
ovnisoftware/ASP.NET-Youtube-Downloader
963514218
Title: Could not parse the Youtube page Question: username_0: Error: "Could not parse the Youtube page for URL http://youtube.com/watch?v=XXXXXXXX This may be due to a change of the Youtube page structure. Please report this bug at www.github.com/flagbug/YoutubeExtractor/issues"
phetsims/chipper
1075933999
Title: Add custom locale: Karakalpak
Question: username_0: [Karakalpak Wiki page](https://en.wikipedia.org/wiki/Karakalpak_language)
I've written back to the user to inform them that adding a custom locale is not trivial and it may be a while until we are able to fulfill this request. Assigning to @username_1 since he is familiar with #1111.
Answers: username_1: Assigning the locale subgroup. This would definitely be ideal to work on with #1111.
username_2: For reference, the ISO 639-3 language code for this language is `kaa`, and in the reference page at https://en.wikipedia.org/wiki/Wikipedia:WikiProject_Languages/List_of_ISO_639-3_language_codes_(2019) the language is listed as Kara-kalpak.
username_2: In 639-1, Karakalpak is not listed, and the `ka` locale is already in use by Kartvelian, also known as Georgian. `kk` and `kr` are also used. The code `kp` does _not_ appear to be used, so perhaps we could use that one.
username_2: @username_0 and @username_3 - This issue was discussed with @kathy-phet at today's development planning meeting, and @kathy-phet recommended that we assign it to you to do a bit of investigation and make sure that people would really use this custom locale if we create it. We've done some of these before, such as in https://github.com/phetsims/rosetta/issues/158, where no one ended up using it, so we want to make sure it's worth the effort.
username_2: @username_0 - Once this is vetted, please assign it back to me.
username_3: PhET Global doesn't have any specific initiatives in this language's region, so it would not be as high on Global's priority list as, say, some African languages with similar numbers of speakers. 
The real question about utility comes down to whether this person is seriously willing to do the translations and intends to integrate them into their teacher ed programs--something that is clearly more systemically impactful than doing it for a single classroom.
I'm happy to get in touch with this person if you provide the e-mail contact so I can learn more about their level of commitment.
username_0: I sent a follow-up email to the user, will update when we receive a response.
username_0: I haven't heard back from the requesting user in over 2 weeks. I think we should close this issue and wait on supporting any new custom locales until we have another serious request.
Status: Issue closed
googleapis/google-cloud-python
761487293
Title: Always use GAPICBazel() to generate GAPIC libraries Question: username_0: Clean up `synth.py` files to always use `GAPICBazel()` rather than `GAPICMicrogenerator()`. See https://github.com/search?q=org%3Agoogleapis+GAPICMicrogenerator+python&type=Code - [ ] [python-texttospeech](https://github.com/googleapis/python-texttospeech/blob/48d540e89522bf1ca9fccc4e21fb3ef43d76196a/synth.py) - [ ] [python-memcache](https://github.com/googleapis/python-memcache/blob/63d578c2938fcb8dc1442eca2d33129b5d3c237d/synth.py) - [ ] [python-media-translation](https://github.com/googleapis/python-media-translation/blob/b0f04ad867e44ce270a3c873df9652b85776b97a/synth.py) - [ ] [python-recommendations-ai](https://github.com/googleapis/python-recommendations-ai/blob/fe7e6edae9cf2887b727a97227398b027372525a/synth.py) - [ ] [python-recaptcha-enterprise](https://github.com/googleapis/python-recaptcha-enterprise/blob/4ecda99c87ccbd7edacef278a7dfa51a5f04ad5d/synth.py) - [ ] [python-os-config](https://github.com/googleapis/python-os-config/blob/4d8605e2d92af271b2c363490926689266c1d4b6/synth.py) CC @hkdevandla @danoscarmike
DonJayamanne/pythonVSCode
223221879
Title: Refactor fails on code with type annotations Question: username_0: [PEP 3107](https://www.python.org/dev/peps/pep-3107/) introduced type annotations to python syntax, but the VS Code python extension's refactoring mechanism can't handle them. ## Environment data VS Code version: 1.11.2 Python Extension version: 0.6.3 Python Version: 3.6 OS and version: OSX ## Steps to reproduce: - Take some code with type annotations, e.g. ```python def increment(x: int) -> int: return x + 1 ``` - Attempt to rename the variable `x` to `y`. This produces an error: <img width="997" alt="screen shot 2017-04-20 at 3 37 52 pm" src="https://cloud.githubusercontent.com/assets/128019/25255430/5ef1472a-25df-11e7-8b01-8f980fe8eee3.png"> Answers: username_1: Closing in favor of https://github.com/username_1/pythonVSCode/issues/824 Status: Issue closed
fedora-python/pyp2rpm
291357620
Title: option to omit %check (and all test-related BuildRequires)
Question: username_0: Sometimes packaging a Python module in and of itself is simple, but the tests it has bundled introduce a deep stack of required modules. I.e.:
```
'tests_require': [
    'coverage!=4.4,>=4.0',
    'fixtures>=3.0.0',
    'hacking!=0.13.0,<0.14,>=0.12.0',
    'mock>=2.0',
    'python-subunit>=0.0.18',
    'sphinx!=1.6.1,>=1.5.1',
    'oslosphinx>=4.7.0',
    'six>=1.9.0',
    'testrepository>=0.0.18',
    'testresources>=0.2.4',
    'testscenarios>=0.4',
    'testtools>=1.4.0',
    'virtualenv>=13.1.0'
],
```
Sometimes I just need to get the module packaged without diving deeply into a rabbit-hole of building a deep stack of RPMs just to satisfy testing. I am happy to trust that upstream has tested sufficiently and don't need my RPM packaging to do it also. Yes, I realise that I am risking building a broken module if I don't run tests. I'm OK to take on that risk for the short-term gain. An option to tell `pyp2rpm` that this is what I want would be nice for such Q&D builds.
Ideally all PyPI-RPM modules would have working tests, and perhaps once I have provided a working RPM I will go back and build the test stack also, but I need to avoid having this stack gate other tasks that depend on the RPM.
Thoughts?
Answers: username_1: I think we can add a non-default `--skip-check` flag or smth like that to omit the %check section and its required build dependencies. What do you think @mcyprian ?
username_0: @username_1, @mcyprian That would be super!
username_2: I would also be interested in this option!
username_3: pyp2rpm is on life support. Neither Iryna nor Michal works on the project any longer and I'm only fixing critical bugs.
username_2: @username_3 ah I didn't realize that :/ Too bad then.
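As an illustration of what a ``--skip-check`` flag would omit, here is a toy sketch of dropping a ``%check`` section from a rendered spec. This is not how pyp2rpm would actually implement it (it renders specs from templates, so a real flag would gate the template blocks and the test BuildRequires), and the toy spec below is invented for the example:

```python
import re

def strip_check_section(spec_text: str) -> str:
    """Remove the %check section, up to the next spec section or end of file."""
    return re.sub(r"(?ms)^%check\b.*?(?=^%(?:prep|build|install|files|changelog)\b|\Z)",
                  "", spec_text)

spec = """%prep
%autosetup

%check
%{__python3} setup.py test

%files
%{python3_sitelib}/*
"""
print(strip_check_section(spec))
```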
Azure/azure-cli
498976241
Title: Parameters being ignored when installing artifacts with vm create and inline JSON.
Question: username_0: ## To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
```
$artifacts = '[{"""artifactId""": """/subscriptions/*******/resourceGroups/safetestrg/providers/Microsoft.DevTestLab/labs/testlab/artifactSources/public%20repo/artifacts/windows-chocolatey""", """parameters""": [{"""packages""": """firefox"""}]}]'
az lab vm create --resource-group TestRG --lab-name TestLab --name 'ScriptVM5' --image "Windows 10 Pro, Version 1809" --image-type gallery --size 'Standard_B2s' --admin-username '****' --admin-password '****' --artifacts $artifacts
```
## Expected Behavior
The VM is created and Firefox is installed via Chocolatey
## Environment Summary
```
Windows-7-6.1.7601-SP1
Python 3.6.6
Shell: powershell.exe
azure-cli 2.0.74
```
## Additional Context
I know there is already an artifact for Firefox. This is just a proof of concept around deploying other Chocolatey packages and seems to highlight a bigger issue.
<!--Please don't remove this:-->
<!--auto-generated-->
Answers: username_1: @username_0 Apologies for the delayed response. This GitHub issue has been open for quite some time. Could you please let us know if you are still facing this issue and need assistance? Awaiting your reply.
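One way to sidestep the PowerShell triple-quote escaping is to build the JSON with a tool and let it handle quoting. A separate point worth checking: in the DevTest Labs ARM schema, artifact ``parameters`` are (if memory serves) a list of ``{"name": ..., "value": ...}`` objects rather than ``{"packages": "firefox"}``, which could explain a silently ignored parameter. A hedged sketch, with the subscription path as a placeholder:

```python
import json

# Assumed parameter shape per the DevTest Labs ARM schema: name/value pairs.
artifact = {
    "artifactId": (
        "/subscriptions/<subscription-id>/resourceGroups/safetestrg/providers/"
        "Microsoft.DevTestLab/labs/testlab/artifactSources/"
        "public repo/artifacts/windows-chocolatey"
    ),
    "parameters": [{"name": "packages", "value": "firefox"}],
}
artifacts_json = json.dumps([artifact])
print(artifacts_json)  # pass this string to: az lab vm create ... --artifacts ...
```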
PARINetwork/pari
228663224
Title: Spike for CMS structure
Question: username_0: Admin console to create an article with order-able modules/gadgets. Create 4 modules to start with. Each module should be a first-class entity and should capture only the data of the portion of the article it represents. The article will be visualized by arranging such modules top-down. Responsive public-facing article.
Status: Issue closed
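A minimal sketch of the data model implied above, where each module is a first-class unit holding only its own slice of data and the article is rendered top-down. The field names here are assumptions chosen for illustration, not the project's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Module:
    """A first-class content unit holding only its own slice of article data."""
    kind: str        # e.g. "text", "image", "video", "pullquote" (assumed kinds)
    position: int    # top-down ordering within the article
    data: dict = field(default_factory=dict)

@dataclass
class Article:
    title: str
    modules: list = field(default_factory=list)

    def ordered_modules(self):
        # the article is visualized by arranging its modules top-down
        return sorted(self.modules, key=lambda m: m.position)
```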
VioletGiraffe/cppcheck-vs-addin
258799038
Title: Release 1.3.4 not loading with Visual Studio 2012 Question: username_0: I'm loading simple C++ project with single cpp file and VS2012 is prompting this ![cppcheck-addin-with-vs2012](https://user-images.githubusercontent.com/80741/30591802-944ef1f6-9d44-11e7-9acc-ec4971a19989.png) Here is relevant part of the log: ``` <?xml version="1.0" encoding="utf-16"?> <?xml-stylesheet type="text/xsl" href="ActivityLog.xsl"?> <activity> <entry> <record>1</record> <time>2017/09/19 12:11:21.155</time> <type>Information</type> <source>VisualStudio</source> <description>Microsoft Visual Studio 2012 version: 11.0.61030.0</description> </entry> ... <entry> <record>289</record> <time>2017/09/19 12:11:43.667</time> <type>Information</type> <source>VisualStudio</source> <description>Begin package load [CPPCheckPluginPackage]</description> <guid>{127D8BD3-8CD7-491A-9A63-9B4E89118DA9}</guid> </entry> <entry> <record>290</record> <time>2017/09/19 12:11:43.673</time> <type>Error</type> <source>VisualStudio</source> <description>CreateInstance failed for package [CPPCheckPluginPackage]</description> <guid>{127D8BD3-8CD7-491A-9A63-9B4E89118DA9}</guid> <hr>80070002</hr> <errorinfo>Could not load file or assembly 'Microsoft.VisualStudio.Shell.15.0, Version=15.0.0.0, Culture=neutral, PublicKeyToken=<KEY>' or one of its dependencies. The system cannot find the file specified.</errorinfo> </entry> <entry> <record>291</record> <time>2017/09/19 12:11:43.673</time> <type>Error</type> <source>VisualStudio</source> <description>End package load [CPPCheckPluginPackage]</description> <guid>{127D8BD3-8CD7-491A-9A63-9B4E89118DA9}</guid> <hr>80004005 - E_FAIL</hr> <errorinfo>Could not load file or assembly 'Microsoft.VisualStudio.Shell.15.0, Version=15.0.0.0, Culture=neutral, PublicKeyToken=<KEY>' or one of its dependencies. 
The system cannot find the file specified.</errorinfo>
</entry>
</activity>
```
Full log is attached in [ActivityLog.zip](https://github.com/username_1/cppcheck-vs-addin/files/1314156/ActivityLog.zip) Answers: username_1: I'm pretty sure I have removed support for VS 2012 and 2013 in d591f1268806508e1d6094b62651bf626472ce97. username_0: So, I decided to try it out, also for comparison with my previous tests in VS2017. username_1: Good catch, I'll edit that. Status: Issue closed username_0: Thanks. Since I can't help with porting to/testing VS2012 myself, I'm closing the issue.
WoWManiaUK/Blackwing-Lair
527626358
Title: [Talent] Protector of the Innocent - Paladin Question: username_0: Spell ID: 20140
![image](https://user-images.githubusercontent.com/56992013/69487089-d2860500-0e5c-11ea-9e0b-3c313477fce5.png)
**What is happening:** Healing any target other than yourself with a spell heals you. This affects all healing, except healing over time. This means that healing 6 other people with Light of Dawn (85222) will cause this talent to heal you 6 times. Even worse, using Holy Radiance can proc this talent one time for every friendly character that is in range, potentially allowing you to pull some incredibly high self-healing numbers in situations where raid members stack on top of each other.
**What should happen:** Successfully casting any targeted paladin healing spell should proc the talent once, regardless of how many characters the spell heals. Below is a list of targeted paladin spells in case it is needed:
Holy Light 635
Holy Shock 20473
Flash of Light 19750
Word of Glory 85673
Divine Light 82326
Holy Radiance 82327
Lay on Hands 633
Answers: username_1: Fixed, will proc only once now username_2: @username_1 Multiple procs confirmed fixed on PTR. Doesn't proc on Holy Shock though. Not sure if it should - gotta research that one. Status: Issue closed
gatsbyjs/gatsby
453425282
Title: Gatsby CLI Commands not working for a GitHub Repo Question: username_0: <!--
Please fill out each section below, otherwise your issue will be closed. This info allows Gatsby maintainers to diagnose (and fix!) your issue as quickly as possible.
Useful Links:
- Documentation: https://www.gatsbyjs.org/docs/
- How to File an Issue: https://www.gatsbyjs.org/contributing/how-to-file-an-issue/
Before opening a new issue, please search existing issues: https://github.com/gatsbyjs/gatsby/issues
-->
## Description
I am trying to clone a web service built with Gatsby and run it locally on my laptop, but typing any gatsby command gives the following error in the command prompt:
`-bash: gatsby: command not found`
I found [this closed issue](https://github.com/gatsbyjs/gatsby/issues/12352) in the Gatsby repository and tried uninstalling and reinstalling gatsby manually, outside of the GitHub repository, but it still did not work.
<img width="934" alt="Screenshot 2019-06-07 at 15 13 33" src="https://user-images.githubusercontent.com/51476428/59093933-dff31580-8936-11e9-9481-029b991df5b9.png">
I suspect something might be missing in the .npm-global or PATH variables, but I might be wrong. Please help.
### Steps to reproduce
1. Go to the ReadMe section of [this GitHub Repo](https://github.com/PlatformOfTrust/pot-websites#frontends-with-gatsbygraphql-react) to get instructions
2. Open a terminal on Mac and navigate inside the GitHub folder
3. run `git clone [email protected]:PlatformOfTrust/pot-websites.git`
4. run `cd pot-website/developer-site`
5. run `npm install`
6. run `npm install --global gatsby-cli`
7.
run `gatsby develop -o -p 8000`
### Expected result
The site should open in the browser at localhost:8000
### Actual result
On running `gatsby develop -o -p 8000`, the terminal shows the following:
**-bash: gatsby: command not found**
The site can't be run on localhost:8000
Here is a screenshot of all the executed commands:
<img width="1339" alt="Screenshot 2019-06-07 at 15 32 51" src="https://user-images.githubusercontent.com/51476428/59095226-c30c1180-8939-11e9-899b-e1e07998d0af.png">
### Environment
Since no gatsby commands work at all, I am providing information about my operating system instead.
OS: macOS Mojave (version 10.14.4)
npm version: 6.4.1
Answers: username_1: I believe you'll need to set up your PATH to account for global npm packages. We recommend using [Homebrew to install](https://www.gatsbyjs.org/tutorial/part-zero/#install-homebrew-for-nodejs), but you can also go without it with a guide like [this](https://coolestguidesontheplanet.com/installing-node-js-on-macos/). I'm going to close this out, but please re-open or reply if we can help further! Status: Issue closed
dotnet/runtime
558011948
Title: Add ImmutableArray.Builder.AddRange, MoveToArray methods Question: username_0: The builder currently has APIs: ``` C# public void AddRange(T[] items, int length); public ImmutableArray<T> MoveToImmutable(); ``` but is missing the following: ``` C# public void AddRange(T[] items, int start, int length); public T[] MoveToArray(); ``` Status: Issue closed Answers: username_1: Triage: given that the issue has not seen a lot of activity recently, I'm going to close this. We can maybe reopen in the future provided we have a more concrete api proposal.
material-components/material-components-ios
412104965
Title: [Dialogs] Internal issue: b/124770786 Question: username_0: This was filed as an internal issue. If you are a Googler, please visit [b/124770786](http://b/124770786) for more details. <!-- Auto-generated content below, do not modify --> --- #### Internal data - Associated internal bug: [b/124770786](http://b/124770786) Answers: username_0: The internal issue [b/124770786](http://b/124770786) is now closed. This issue is being closed as a result. Status: Issue closed
isi-vista/adam
619274162
Title: Learning Dynamic Prepositions Question: username_0: A selection of our prepositions are truly dynamic, meaning that it is difficult to express them outside of the context of an action (e.g. 'toward').
CURRENT LIST TO LEARN:
- [ ] Toward
- [ ] Away_From
- [ ] Out_Of
We will test that our current sub-set verb learner is capable of learning the individual prepositions with the verbs they appear with. When we have an integrated learner we need to be able to isolate which objects in our scene are the two relevant to the relationship being expressed by the preposition, or the theme. For the sentence "a bird moves a box towards a door", the learner needs to be able to identify whether the theme is the bird, box, or door. While English word order from the non-dynamic prepositions might help us identify that door is a part of the preposition, we might struggle to identify whether 'box' or 'bird' should be selected as the theme, as *both* follow the same SpatialPath of the action.
If the above issue can be resolved then we believe our `relation` learner should be capable of making a good attempt at the pattern for the preposition independent of the verb. (For reference see my comments on #748 )
Answers: username_0: If we try to run our current verb subset learner over the entire dynamic preposition curriculum, knowing that we won't learn the individual relationships but rather a variant of each verb with the attached prepositional information, we are unable to learn at all. Why? In trying to do pattern relaxation for `beside`, we at some point in pattern reduction eliminate a `variable_object_slots` in the pattern, causing `slot_3` to no longer exist.
Here's the console log dump from the failure.
``` [2020-05-18 11:41:00] INFO:root:Observation 15: a dog pushes a box beside a table [2020-05-18 11:41:00] INFO:root:Object recognizer recognized: [('ground',), ('dog',), ('box',), ('table',), ('car',)] [2020-05-18 11:41:00] INFO:root:object matching: ms in success: 4831.08033426106, ms in failed: 396.00267447531223 [2020-05-18 11:41:00] INFO:root:Learner observing LanguageAlignedPerception(language=[(a/det, dog/noun, pushes/verb, a/det, box/noun, beside/adp, a/det, table/noun), tree=DependencyTree(_graph=<networkx.classes.digraph.DiGraph object at 0x7f212269af90>, root=pushes/verb, tokens=i{dog/noun, a/det, box/noun, a/det, table/noun, a/det, beside/adp, pushes/verb})], perception_graph=PerceptionGraph(nodes=[gravitational-up[-curved, +directed, +aligned_to_gravity], south-to-north[-curved, +directed, -aligned_to_gravity], west-to-east[-curved, +directed, -aligned_to_gravity], learner, learner-vertical[-curved, +directed, +aligned_to_gravity], learner-left-to-right[-curved, +directed, -aligned_to_gravity], learner-back-to-front[-curved, +directed, -aligned_to_gravity], (Region(the ground,distance=exterior-but-in-contact,direction=+_GravitationalAxis()), 139780278917904), (Region(the ground,distance=exterior-but-in-contact,direction=+_GravitationalAxis()), 139780278918800), (Region(the ground,distance=exterior-but-in-contact,direction=+_GravitationalAxis()), 139780278917776) and 40 more], edges=[(learner, learner-vertical[-curved, +directed, +aligned_to_gravity], primary-axis), (learner, learner-left-to-right[-curved, +directed, -aligned_to_gravity], has-axis), (learner, learner-back-to-front[-curved, +directed, -aligned_to_gravity], has-axis), (learner, (Region(the ground,distance=exterior-but-in-contact,direction=+_GravitationalAxis()), 139780278916624), in-region), (learner, (is-addressee[binary,perceivable], learner), has-property), (learner, (is-learner, learner), has-property), (learner, (animate[binary], learner), has-property), (learner, 
(self-moving[binary], learner), has-property), (learner, (aboutSameSizeAsLearner, learner), has-property), (learner-back-to-front[-curved, +directed, -aligned_to_gravity], learner, facing-axis), ((Region(the ground,distance=exterior-but-in-contact,direction=+_GravitationalAxis()), 139780278917904), gravitational-up[-curved, +directed, +aligned_to_gravity], +_GravitationalAxis()), ((Region(the ground,distance=exterior-but-in-contact,direction=+_GravitationalAxis()), 139780278917904), MatchedObjectNode(name=('ground',)), reference-object), ((Region(the ground,distance=exterior-but-in-contact,direction=+_GravitationalAxis()), 139780278918800), gravitational-up[-curved, +directed, +aligned_to_gravity], +_GravitationalAxis()), ((Region(the ground,distance=exterior-but-in-contact,direction=+_GravitationalAxis()), 139780278918800), MatchedObjectNode(name=('ground',)), reference-object), ((Region(the ground,distance=exterior-but-in-contact,direction=+_GravitationalAxis()), 139780278917776), gravitational-up[-curved, +directed, +aligned_to_gravity], +_GravitationalAxis()) and 54 more]), node_to_language_span=i{MatchedObjectNode(name=('dog',)): [0:2), MatchedObjectNode(name=('box',)): [3:5), MatchedObjectNode(name=('table',)): [6:8)}, language_span_to_node=i{[0:2): MatchedObjectNode(name=('dog',)), [3:5): MatchedObjectNode(name=('box',)), [6:8): MatchedObjectNode(name=('table',))}, aligned_nodes=i{MatchedObjectNode(name=('dog',)), MatchedObjectNode(name=('box',)), MatchedObjectNode(name=('table',))}) [2020-05-18 11:41:00] INFO:root:Relaxation: last failed pattern node is IsOntologyNodePredicate(property_value=hollow[binary]) [2020-05-18 11:41:00] INFO:root:Nodes to delete directly: [IsOntologyNodePredicate(property_value=hollow[binary])] [2020-05-18 11:41:00] INFO:root:Relaxation: last failed pattern node is IsOntologyNodePredicate(property_value=self-moving[binary]) [2020-05-18 11:41:00] INFO:root:Nodes to delete directly: 
[IsOntologyNodePredicate(property_value=self-moving[binary])] [2020-05-18 11:41:00] INFO:root:Relaxation: last failed pattern node is IsOntologyNodePredicate(property_value=biggerThan) [2020-05-18 11:41:00] INFO:root:Nodes to delete directly: [IsOntologyNodePredicate(property_value=biggerThan)] [2020-05-18 11:41:00] INFO:root:Relaxation: last failed pattern node is IsOntologyNodePredicate(property_value=biggerThan) [2020-05-18 11:41:00] INFO:root:Nodes to delete directly: [IsOntologyNodePredicate(property_value=biggerThan)] [2020-05-18 11:41:00] INFO:root:Relaxation: last failed pattern node is IsColorNodePredicate(color=#000000) [2020-05-18 11:41:00] INFO:root:Deleting extra color nodes: [IsColorNodePredicate(color=#000000), IsColorNodePredicate(color=#000000), IsColorNodePredicate(color=#000000)] [2020-05-18 11:41:00] INFO:root:Nodes to delete directly: [IsColorNodePredicate(color=#000000), IsColorNodePredicate(color=#000000), IsColorNodePredicate(color=#000000), IsColorNodePredicate(color=#000000)] [2020-05-18 11:41:00] INFO:root:Relaxation: last failed pattern node is IsColorNodePredicate(color=#dbbf21) [2020-05-18 11:41:00] INFO:root:Deleting extra color nodes: [IsColorNodePredicate(color=#dbbf21)] [2020-05-18 11:41:00] INFO:root:Nodes to delete directly: [IsColorNodePredicate(color=#dbbf21), IsColorNodePredicate(color=#dbbf21)] [2020-05-18 11:41:00] INFO:root:Relaxation: last failed pattern node is IsColorNodePredicate(color=#0000ff) [2020-05-18 11:41:00] INFO:root:Deleting extra color nodes: [IsColorNodePredicate(color=#0000ff)] [2020-05-18 11:41:00] INFO:root:Nodes to delete directly: [IsColorNodePredicate(color=#0000ff), IsColorNodePredicate(color=#0000ff)] [2020-05-18 11:41:00] INFO:root:Relaxation: last failed pattern node is MatchedObjectPerceptionPredicate() [2020-05-18 11:41:00] INFO:root:Nodes to delete directly: [MatchedObjectPerceptionPredicate()] [2020-05-18 11:41:00] INFO:root:Relaxation: deleted due to disconnection: 
[IsOntologyNodePredicate(property_value=inanimate[binary]), IsColorNodePredicate(color=#6e5f13), IsOntologyNodePredicate(property_value=biggerThan)] ``` The second to last line is an invalid removal because removing the `MatchedObjectPerceptionPredicate` node means we won't be able to realign the English. username_1: @username_0 : Is this problem still extant or do we expect it to be improved by any of our recent changes? username_0: @username_1 This should be improved by our recent changes. I haven't tried to run the Integrated Relation Learner over any curriculum other than the same curriculum the original learners were run over. username_0: Crashes at observation 140 ``` [2020-07-13 12:45:23] INFO:root:Observation 140: a dog goes to a cookie [2020-07-13 12:45:23] INFO:root:Object recognizer recognized: [('ground',), ('dog',), ('cookie',), ('table',)] [2020-07-13 12:45:23] INFO:root:object matching: ms in success: 77603.38945500553, ms in failed: 3089.8627983406186 [2020-07-13 12:45:23] INFO:root:Observation 140: a dog goes to a cookie Traceback (most recent call last): File "/nas/home/jacobl/miniconda3/envs/adam/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/nas/home/jacobl/miniconda3/envs/adam/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/nas/home/jacobl/projects/adam-root/adam/adam/experiment/run_m13.py", line 73, in <module> parameters_only_entry_point(main) File "/nas/home/jacobl/projects/coref-alternatives-root/repos/vistautils/vistautils/parameters_only_entrypoint.py", line 43, in parameters_only_entry_point args=sys.argv[1:], File "/nas/home/jacobl/projects/coref-alternatives-root/repos/vistautils/vistautils/parameters_only_entrypoint.py", line 73, in _real_parameters_only_entry_point main_method(params) File "/nas/home/jacobl/projects/adam-root/adam/adam/experiment/run_m13.py", line 69, in main log_experiment_entry_point(experiment_params) File 
"/nas/home/jacobl/projects/adam-root/adam/adam/experiment/log_experiment.py", line 115, in log_experiment_entry_point "log_hypothesis_every_n_steps", default=250 File "/nas/home/jacobl/projects/adam-root/adam/adam/experiment/__init__.py", line 169, in execute_experiment LearningExample(perceptual_representation, linguistic_description) File "/nas/home/jacobl/projects/adam-root/adam/adam/learner/integrated_learner.py", line 132, in observe self.action_learner.learn_from(current_learner_state) File "/nas/home/jacobl/projects/adam-root/adam/adam/learner/template_learner.py", line 286, in learn_from self._learning_step(preprocessed_input, thing_whose_meaning_to_learn) File "/nas/home/jacobl/projects/adam-root/adam/adam/learner/subset.py", line 208, in _learning_step for previous_pattern_hypothesis in previous_pattern_hypotheses File "/nas/home/jacobl/projects/adam-root/adam/adam/learner/subset.py", line 209, in <listcomp> for hypothesis_from_current_perception in hypotheses_from_current_perception File "/nas/home/jacobl/projects/adam-root/adam/adam/learner/verbs.py", line 265, in _update_hypothesis for previous_slot, node1 in previous_pattern_hypothesis.template_variable_to_pattern_node.items() File "/nas/home/jacobl/projects/adam-root/adam/adam/learner/perception_graph_template.py", line 148, in intersection trim_after_match=trim_after_match, File "/nas/home/jacobl/projects/adam-root/adam/adam/perception/perception_graph.py", line 1001, in intersection trim_after_match=trim_after_match, File "/nas/home/jacobl/projects/adam-root/adam/adam/perception/perception_graph.py", line 1284, in relax_pattern_until_it_matches initial_partial_match=partial_match, File "/nas/home/jacobl/miniconda3/envs/adam/lib/python3.7/site-packages/more_itertools/more.py", line 138, in first return next(iter(iterable)) File "/nas/home/jacobl/projects/adam-root/adam/adam/perception/perception_graph.py", line 1393, in _internal_matches initial_partial_match=merged_initial_partial_match, File 
"/nas/home/jacobl/projects/adam-root/adam/adam/perception/_matcher.py", line 552, in subgraph_isomorphisms_iter self.initialize(initial_partial_match=initial_partial_match) File "/nas/home/jacobl/projects/adam-root/adam/adam/perception/_matcher.py", line 204, in initialize self._jump_to_partial_match(initial_partial_match) File "/nas/home/jacobl/projects/adam-root/adam/adam/perception/_matcher.py", line 266, in _jump_to_partial_match f"Requested to begin matching from an alignment which aligns " RuntimeError: Requested to begin matching from an alignment which aligns semantically infeasible nodes: ObjectSemanticNodePerceptionPredicate() to ObjectSemanticNodePerceptionPredicate() ``` This error looks like it's coming from the `slot` alignment choice for jumping to a match. username_0: After fixing the curriculum we can learn these just fine. [m13-verbs-with-dynamic-prepositions.zip](https://github.com/isi-vista/adam/files/4933347/m13-verbs-with-dynamic-prepositions.zip) Status: Issue closed
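The `RuntimeError` above indicates the matcher was seeded with a partial alignment containing node pairs it considers semantically infeasible. A generic guard for that situation is to filter the candidate alignment before handing it to the matcher. This is an illustrative sketch with invented names, not code from the adam repository:

```python
def feasible_partial_match(candidate_alignment, is_semantically_feasible):
    """Drop pattern-node -> graph-node pairs that the matcher would
    reject, so initialization never starts from an infeasible alignment."""
    return {
        pattern_node: graph_node
        for pattern_node, graph_node in candidate_alignment.items()
        if is_semantically_feasible(pattern_node, graph_node)
    }

# Toy feasibility check: nodes are feasible when their "kind" tags agree.
def same_kind(pattern_node, graph_node):
    return pattern_node[0] == graph_node[0]
```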
stripe/stripe-java
59022035
Title: Charge.create ambiguity Question: username_0: The return of Charge.create() is ambiguous in the doc and could be made clearer. From doc page: https://stripe.com/docs/api/java#charge_object
--
--
Returns
Returns a charge object if the charge succeeded. Throws an error if something goes wrong.
--
--
1. The returned charge object contains FailureCode and FailureMessage ... it is unclear if these are guaranteed to be null when returning from Charge.create() - the doc should specify
2. For record keeping, in the case that the Charge.create() call throws a CardException the charge.id is not captured, so we cannot record the id of the failed charge and correlate charges to our system events (CardException only contains code/param)
3. The Status field of the Charge object is missing ... however it isn't clear what that field indicates
Answers: username_0: Another ambiguity:
* It isn't clear when a CardException may be thrown, and by which methods and with which codes ... does Customer.createCard() throw CardException? ... maybe, but since every method is marked as throwing all exceptions, who knows which error cases need to be handled by which calls.
* What are the possible CardException codes that can be thrown by Customer.createCard()? ... in general there is near zero information on the potential error behavior of these calls - there are only 'catch-em-all' style exceptions that everything throws, and they contain a universal list of codes even though I'm sure it isn't possible to get certain codes as a response from certain calls
username_1: Ugh, Chrome crashes right as I finish. If a charge fails when you use Charge.create(), no charge object is created, and an exception is thrown. A status of "failed", along with FailureCode and FailureMessage, would only be set if a charge fails during some automatic billing (like that of an invoice). Since no charge object was created, no charge id exists. I will give you that status does appear to be missing from the Java library.
For the most part, pretty much every exception but CardException could happen on any call; a card error would only be thrown if for some reason the card couldn't be charged, and the [docs](https://stripe.com/docs/api/java#errors) give a short description of the other errors: incorrect parameters, connection issues, Stripe issues, authentication issues. Status: Issue closed username_0: This makes things even more confusing ... (how does an invoice come into the picture?) Points:
* The com.stripe.model.Charge class contains failureCode and failureMessage fields.
* I've dug into the source of the client and you guys are doing a GSON.fromJson to parse the response body in the case a non-200 is returned by the API.
So the question is: Can the API return an HTTP 200 in response to a Charge.create() call and have failureCode populated? And, if so: what are the possible failure codes? username_1: Charge.create() should never return a failed charge with a failureCode set; that would be a 402 (payment required). Charge.retrieve() can return a charge with a failureCode set (that would be a 200) username_0: awesome! https://stripe.com/docs/api#create_charge is unclear on this point ... it would be nice for you guys to ping a doc writer to see if this can be made clearer in the API docs. Cheers username_3: I'm here after getting very confused about this as well - not using Java, but PHP. I know the docs said that a charge object is created if it succeeded and if not an error is thrown, but then I started looking at the response and saw the `failure_code` and `failure_message`, so I figured I needed to account for these - then couldn't work out why the object was always empty on failures. Docs could use some clearer explanation of this. username_4: @username_3 When a charge succeeds, a charge object is returned. If it failed, an error is returned instead, but we also return the corresponding charge id in the `charge` property on the error.
This lets you retrieve the charge afterwards, and you can look at the `failure_code` and `failure_message` on that failed charge to understand what blocked it. username_3: @username_4 Thanks for that. I just think the docs could be clearer, as it's initially very confusing why the object contains those properties when it's stated that a charge object is only returned on success.
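username_4's point — that for a failed charge the id travels on the error rather than on a returned charge object — can be sketched as a small helper over the error body shape discussed above. The field layout mirrors Stripe's documented error format, but the helper itself and the sample values are illustrative, not library code:

```python
def failed_charge_id(error_body):
    """Return the id of the failed charge carried in a Stripe error body,
    or None if the error is not charge-related. That id can then be
    passed to a Charge.retrieve() call to inspect the failure_code and
    failure_message on the failed charge."""
    return error_body.get("error", {}).get("charge")

# Example error body in the shape this thread describes (values invented):
declined = {
    "error": {
        "type": "card_error",
        "code": "card_declined",
        "message": "Your card was declined.",
        "charge": "ch_example_123",
    }
}
```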
a3rk/remix
375684826
Title: v0.1.1.2 Question: username_0: - [ ] builds for v0.1.1.2 core given latest PR fixing seg fault - [ ] verification & validation of neighboring software using core to also be released as v0.1.1.2 variants after whatever housekeeping, clean up, parity work is completed - [ ] close out tickets, issues, tie up loose ends - [ ] release announcement to all communities, on all fronts, for v0.1.1.2 release Answers: username_0: Draft for the v0.1.1.2 release is in front of us, FYI. Just drop in packages, hashes, descriptions as you move on the work, gentlemen. username_0: - [x] builds for v0.1.1.2 core given latest PR fixing seg fault - [x] verification & validation of neighboring software using core to also be released as v0.1.1.2 variants after whatever housekeeping, clean up, parity work is completed - [x] close out tickets, issues, tie up loose ends - [x] release announcement to all communities, on all fronts, for v0.1.1.2 release Status: Issue closed
maxfridbe/websocketextensions
1052539769
Title: Server can't run in Unix through libmono (HTTP.sys) Question: username_0: I am trying to run a websocket server using websocketextensions on Unix via libmono. However, it seems it's using HTTP.sys, which is not compatible with Unix (https://docs.microsoft.com/en-us/aspnet/core/fundamentals/servers/httpsys?view=aspnetcore-6.0)
```
HTTP.sys is a web server for ASP.NET Core that only runs on Windows. HTTP.sys is an alternative to Kestrel server and offers some features that Kestrel doesn't provide.
```
![image](https://user-images.githubusercontent.com/1508444/141605400-a2af065d-e95a-4974-8327-f5ea5b6fb611.png)
https://stackoverflow.com/questions/54234872/how-to-fix-unable-to-load-shared-library-httpapi-dll-or-one-of-its-dependenci
Is it possible to make it compatible with Unix somehow?
dcwatson/bbcode
548583112
Title: replace_links=False doesn't work on video tag/mp4 Question: username_0:
```
def render_video(name, value, options, parent, context):
    return '<video width="100%" controls><source src="' + format(value) + '" type="video/mp4"></video>'

parser.add_formatter('video', render_video, replace_links=False)
```
output: `<video width="100%" controls="" flashstopped="true" id="dummyid75" preload="metadata"><source src="<a rel=" nofollow"="" href="https://domain.com/media/ts/2020/01/11/09/27/f6e662d8-9d69-4512-8c8c-27284c09ce39.mp4">https://domain.com/media/ts/2020/01/11/09/27/f6e662d8-9d69-4512-8c8c-27284c09ce39.mp4" type="video/mp4"&gt;</video>`<issue_closed> Status: Issue closed
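Until the `replace_links=False` behavior is fixed, one defensive workaround is to unwrap any anchor tag the parser may already have injected before building the `<source>` element. The `extract_url` helper below is a hypothetical addition, not part of the bbcode library:

```python
import re

# First http(s) URL in a string, stopping at quotes, angle brackets,
# or whitespace.
_URL_RE = re.compile(r'https?://[^\s"\'<>]+')

def extract_url(value):
    """Return the first bare URL found in `value`, even if the parser
    already wrapped it in an <a> tag; None when no URL is present."""
    match = _URL_RE.search(value)
    return match.group(0) if match else None

def render_video(name, value, options, parent, context):
    src = extract_url(value) or ''
    return ('<video width="100%" controls>'
            '<source src="' + src + '" type="video/mp4"></video>')
```

Registering this version with `parser.add_formatter('video', render_video, replace_links=False)` keeps the embed usable even if linkification leaks through.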
SmileyChris/easy-thumbnails
227000352
Title: Looks like a bug or version incompatibility in easy-thumbnails. It calls the base class without the required argument settings_module Question: username_0:
```
from ..models import Image
  File "C:\Python35\lib\site-packages\filer\models\__init__.py", line 3, in <module>
    from .clipboardmodels import *  # flake8: noqa
  File "C:\Python35\lib\site-packages\filer\models\clipboardmodels.py", line 9, in <module>
    from . import filemodels
  File "C:\Python35\lib\site-packages\filer\models\filemodels.py", line 18, in <module>
    from ..fields.multistorage_file import MultiStorageFileField
  File "C:\Python35\lib\site-packages\filer\fields\multistorage_file.py", line 12, in <module>
    from easy_thumbnails import fields as easy_thumbnails_fields
  File "C:\Python35\lib\site-packages\easy_thumbnails\fields.py", line 2, in <module>
    from easy_thumbnails import files
  File "C:\Python35\lib\site-packages\easy_thumbnails\files.py", line 14, in <module>
    from easy_thumbnails import engine, exceptions, models, utils, signals, storage
  File "C:\Python35\lib\site-packages\easy_thumbnails\engine.py", line 12, in <module>
    from easy_thumbnails import utils
  File "C:\Python35\lib\site-packages\easy_thumbnails\utils.py", line 15, in <module>
    from easy_thumbnails.conf import settings
  File "C:\Python35\lib\site-packages\easy_thumbnails\conf.py", line 334, in <module>
    settings = Settings()
  File "C:\Python35\lib\site-packages\easy_thumbnails\conf.py", line 21, in __init__
    super(AppSettings, self).__init__(*args, **kwargs)
TypeError: __init__() missing 1 required positional argument: 'settings_module'
```
Answers: username_1: please learn how to issue proper error reports. Status: Issue closed
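The `TypeError` here has the classic shape of a dependency-version mismatch: the installed base `Settings` class requires a positional `settings_module` argument, while the `AppSettings` subclass was written against an older signature and forwards only what it was given — nothing, in `settings = Settings()`. A minimal, library-free sketch of that mechanism (the class bodies are invented stand-ins, not the actual easy-thumbnails or its dependency's code):

```python
# Stand-in for the newer base class, which grew a required positional
# argument.
class Settings(object):
    def __init__(self, settings_module):
        self.settings_module = settings_module

# Stand-in for the subclass written against the older signature; it
# forwards whatever it received, which here is nothing.
class AppSettings(Settings):
    def __init__(self, *args, **kwargs):
        super(AppSettings, self).__init__(*args, **kwargs)

def reproduce():
    try:
        AppSettings()  # mirrors the bare `Settings()` call in conf.py
    except TypeError as exc:
        return str(exc)
    return None
```

The practical fix for this class of error is pinning mutually compatible versions of the two packages (or upgrading both together), rather than patching the call site.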
hyphacoop/organizing
580677195
Title: Discuss our response to COVID-19 Question: username_0: <sup>_This initial comment is collaborative and open to modification by all._</sup> ## Task Summary 🎟️ **Re-ticketed from:** # 🗣 **Loomio:** N/A 📅 **Due date:** N/A 🎯 **Success criteria:** ... Per request from @username_2 we should discuss COVID-19 & hypha. ## To Do - [ ] Take time at an all hands to discuss our approach to COVID-19 for us as workers, our clients & projects Answers: username_0: https://www.theatlantic.com/health/archive/2020/02/covid-vaccine/607000/ username_0: https://nationalpost.com/news/canada/coronavirus-could-infect-35-to-70-per-cent-of-canadian-population-experts-say username_0: My notes from NCBA webinar 2020-03-13: "COVID-19 and the Co-op Business Community": - **TOP 5** things: 1. Come up with a scheduling/remote work/cross-training plan NOW - Stress test! - Consider data security - Lots of phishing related to COVID-19 mailouts 2. Tell your employees the plan - Communicate early and often 3. Monitor closings/things affecting employees so you can respond - e.g., not a lot of advance in some jurisdictions with school closures 4. Inform/re-assure employees options for sick leave - Consider unlimited sick leave plan for duration ("legally permitted" in US) - Consider loss of earnings/wages - Can **require** folks not to work if exhibiting symptoms 5. Have a plan for if an employee develops symptoms for is diagnosed with COVID-19 - Have a PLAN for how to handle people testing positive - Do more than required about notifying customers, being proactive, etc... - Planning for preparedness 1. Document review -- what do we have, when was is updated? 2. Facilitated planning -- generally good to incl. outsider to bring org together. "Risk management" included biz dev (procurement), ops, finance 3. Plan development -- what are the risks? Blue sky unknown unknown plans 4. 
Plan rollout -- educate <> exercise ("train animals, educate people) - Key components of a plan - Communications / miscommunications - Policies and Processes - Specific to COVID-19: PPE, self-quarantine - HR, OHSA, what are jurisdiction requirements implemented - Mitigation - Critical Processes - Loss of people, sites - Alignment - Educating and exercising - "Pain of discipline or pain of regret" - For risk mitigation, and to understand impact: Documenting clearly! - Impacts: - services unavailable? worse than normal? - NCBA Making a co-operative focused resource on this (tho US-focused), will share username_0: Implications for **travel**: the [Canadian Federal Government] announced a series of other new measures to limit the spread of COVID-19: - International flights will only be permitted to land at a smaller number of airports. Those locations have not yet been announced. - Boats and cruise ships carrying more than 500 people will be banned from docking at Canadian ports until July. - All travellers arriving in Canada from international points are being asked to self-isolate for 14 days as a precaution. People arriving from Hubei, China, Iran and Italy already have been asked to self-isolate. - Enhanced screening measures at airports as well as marine, land and rail points of entry. 
https://www.cbc.ca/news/politics/trudeau-covid-19-1.5496367 username_0: (Hopefully) final dump, ways to think about supporting folks, and examples of existing support that has been pulled together and shared across my networks: - Accessible Teaching in the Time of COVID-19: https://www.mapping-access.com/blog-1/2020/3/10/accessible-teaching-in-the-time-of-covid-19 - Team-building on remote calls: https://twitter.com/flohdot/status/1238458234815614977 ; https://twitter.com/pzlr/status/1238489261248217089 - Refunding copies of book "REMOTE" [ENDED]: https://twitter.com/jasonfried/status/1237773562322259970 ; https://twitter.com/jasonfried/status/1238134519737319425 - Online conferences: https://twitter.com/tinysubversions/status/1233107955584626688?s=20; https://twitter.com/The_Maintainers/status/1237393550980997120?s=20 - "our internet capabilities: https://twitter.com/People4Bernie/status/1238672035066204162 - Zoom "cheat sheet" / guide: https://twitter.com/dannagal/status/1237474786844041217 - Humorous/opinionated takes on remote work: https://apenwarr.ca/log/20200309 username_1: Pad shared from dc: https://hackmd.io/D_EFCPkpTeKyAuYJ_C-3Cw?both username_2: - I made some edits to pad, and there was some name out of sync in diff places for our WhatsApp group, I have named everything to same as the actual name in WhatsApp - I also moved trello up above github, we should update text to emphasize trello waaaay better if ppl aren't alrdy on gh - I would recommend that we add two additional resource links: - meetings site to show archive as example of how we use templates - service inventory as format for ppl to keep track of the tools that ppl are about to register for As I brought up before, I think showing examples of what we've been going, over time, will help inform many of the choices, and it is a good addition to the comparison tables. I propose we also use an emoji to denote where we make references of our own practices, maybe 🌱 ? 
@username_0 @udit and others are +1 on these I will go ahead and add these sections - I am now convinced we need a nav bar to all the subsections! username_3: @username_2 all these sound good. As for using emoji's can we just stick to the 🍄 where it's required, just to avoid confusion? username_2: Here are some of my thoughts about goals: - **Short-term**: help orgs to quickly become minimally functional as a remote org, so small businesses aren't having to choose to work / force workers to work during the pandemic, or possibly to close shop / layoff workers. Keeping orgs functional is critical to prevent societal collapse. - **Medium-term**: as community mutual aid efforts sprung up, we may have opportunities to help amplify their impact and contribute to their infrastructural resiliency. Looking at some links that @username_0 @username_3 shared in chat, like https://covidmutualaid.org and https://docs.google.com/spreadsheets/d/1r-8rr27mPHO3-5M4Rs_oiQFacgZh_lMvwlkVv7bxQDA/edit#gid=0 the heavy reliance on facebook and google infrastructure creates a huge risk that if any major outage occurs, it'd be catastrophic to local community efforts globally. - **Long-term**: as covid may become a long-term reality, we need to help many orgs transition to remote-first work. This may present additional challenge for resource-constraint orgs that have little budget investing in tech support, advice, and infrastructure. It's also a huge opportunity for FANG and silicon valley to expand their ecosystem capture, so we must represent alternatives that are more resilient and sustainable, and compatible with the values of those we serve. username_4: I am translating your text about [Solidarity for Workers during the COVID-19 Pandemic ](https://hackmd.io/D_EFCPkpTeKyAuYJ_C-3Cw?both) to Spanish. Let me know how you would like to handle the translated copy. This is the draft of the first paragraphs. 
--- ### Solidaridad a lxs trabajadorxs durante la Pandemia COVID-19 Como otra gente de la región de Great Lakes y Toronto, hemos estado siguiendo de cerca la situación del Nuevo Coronavirus a través de la [Salud Pública de Toronto](https://www.toronto.ca/community-people/health-wellness-care/diseases-medications-vaccines/coronavirus/) y del [Ministerio de Sanidad de Ontario](https://www.ontario.ca/page/2019-novel-coronavirus). Con el ritmo con el que la situación ha evolucionado desde el 11 de Marzo, cuando la Organización Mundial de la Salud [declaró el COVID-19 como una pandemia](https://www.who.int/dg/speeches/detail/who-director-general-s-opening-remarks-at-the-media-briefing-on-covid-19---11-march-2020), podemos asumir que continuará cambiando rápido durante las próximas semanas. Como cooperativa de trabajadorxs que funciona por defecto en remoto, Hypha ha estado debatiendo cómo dirigir esta situación internamente, como miembros co-propietarios, y externamente, como miembros de la comunidad, cuando las expresiones de solidaridad y apoyo mutuo son más necesarias que nunca. Ahora estamos haciendo caso de los consejos de [distancia social](https://hub.jhu.edu/2020/03/13/what-is-social-distancing/) y [autoaislamiento](https://www.canada.ca/en/public-health/services/diseases/2019-novel-coronavirus-infection/being-prepared.html) de los funcionarios de la salud pública y los profesionales médicos. Sin embargo, los consejos no incluyen cómo afrontar la enfermedad a nivel de comunidad. Como gesto inicial, aunque pequeño, hemos preparado esta guía con ofertas de ayuda para compartir nuestra experiencia trabajando en remoto. 
Reconocemos que esto no aborda los asuntos más generales sobre el trabajo de quiénes puede convertirse en remoto, a qué inseguridad económica se están enfrentando muchxs trabajadorxs canadienses, o cómo lxs trabajadorxs por obra y servicio están ahora siendo forzados a cambiar a puestos de salud en primera línea por encima de sus reponsabilidades actuales. En las próximas semanas esperamos pensar colectivamente y más en profundidad cómo podemos afrontar estos y otros asuntos urgentes. username_2: Status update: @garrying @ASoTNetworks and I did the following: - Ported hackmd to website, covid19.hypha.coop is the official domain - Copy edit of final copy, moved things around and added some 🍄 (copy now out of sync with hackmd) - Hypha theming applied, prepared to add translations and @garrying started Navigation work - Added ics calendar with all dates - Added jitsi link for calls, based on WhatsApp group name - Pulled contributing guidelines from edgi and added license Things left to do before launch: - [ ] Contributing guidelines copy still links to edgi, and that section is generally not fully implemented (@ASoTNetworks working on) - [ ] Navigation is not showing up, @garrying to follow up username_0: Updated top! I'm going to spin out the developing internal policy in another issue username_2: - Tracking `es` translation on new issue https://github.com/hyphacoop/remote/issues/16 - Tracking updates to infrastructure and data policy on #246 - @hyphacoop/business-planning-wg responsible for stewarding next steps discussion This is done, thanks all! Status: Issue closed
michaelalvin/tip-calculator
198788266
Title: Project Feedback! Question: username_0: Looks good, this exercise is intended in part to give you an introduction to the general rhythm of this course. The course is entirely project-based with an app being assigned each week and then due the following week. Each project builds on the last to help each engineer learn the practical elements of Web Security development and best practices as quickly as possible. We also do a code review for each submitted project once the program begins. Great to see you were able to complete some optional features to your app already. The optional tasks available on each project are often the most valuable learnings since they dive deeper into common real-world use cases. We encourage you to continue working on [extensions to your tip calculator](https://courses.codepath.com/snippets/web_security_university/prework#heading-4-optional-add-support-for-custom-tip-percentage) as a way to further explore development in PHP. See if you can expand the functionality of the app or instead work to improve the user interface by experimenting with colors, spacing, styling, icons, etc. You can update your submission at any time [here](https://apply.codepath.com/dashboard/), and it will notify us to review again. We'll be following up with you again shortly to outline the next steps in the admissions process.
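For anyone extending the tip calculator with the optional custom tip percentage, the core calculation is tiny. A sketch in TypeScript (the PHP version is analogous; `calculateTip` is an illustrative name, not part of the submitted project):

```typescript
// Bill amount plus a user-supplied tip percentage, rounded to cents.
// `calculateTip` is a hypothetical helper, shown only to illustrate the math.
function calculateTip(bill: number, tipPercent: number): { tip: number; total: number } {
  const tip = Math.round(bill * tipPercent) / 100; // tipPercent given as e.g. 18
  const total = Math.round((bill + tip) * 100) / 100;
  return { tip, total };
}

console.log(calculateTip(50, 18)); // { tip: 9, total: 59 }
```

From there, splitting the total across diners or clamping the percentage to a sane range are natural follow-up extensions.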
v-adhithyan/aumba16
541605663
Title: test Question: username_0: test 123 Ticket Custom Fields: Multi CF: Geeky ranjit: Handson Pt: 2 another test: 3 Answers: username_0: Ticket update by <NAME> (<EMAIL>) From HF This will be pushed username_0: From github, this will not be be pushed username_0: push 2 pushing to hf
DefinitelyTyped/DefinitelyTyped
512398611
Title: @types/analytics-node - export whole AnalyticsNode namespace? Question: username_0: @fongandrew @thomasthiebaud need access to the interfaces in the AnalyticsNode namespace - how can they be exported/made available?

I think the problem is that segment only exports the one class, so we can't export the namespace; we have to export the class as the module's export. Not sure there is a way to do this...

Status: Issue closed
Answers:
username_1: Hi thread, we're moving DefinitelyTyped to use [GitHub Discussions](https://github.com/DefinitelyTyped/DefinitelyTyped/issues/53377) for conversations about the `@types` modules in DefinitelyTyped.

To help with the transition, we're closing all issues which haven't had activity in the last 6 months, which includes this issue. If you think closing this issue is a mistake, please pop into the [TypeScript Community Discord](https://discord.gg/typescript) and mention the issue in the `definitely-typed` channel.
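For reference, the usual DefinitelyTyped pattern for a package whose runtime export is a single class is `export =` combined with class/namespace declaration merging: a namespace with the same name as the class carries the companion interfaces. A sketch only, with illustrative member names rather than the real `analytics-node` surface:

```typescript
// index.d.ts sketch: the class is the module's sole runtime export, and a
// merged namespace of the SAME name exposes the interfaces as types.
export = Analytics;

declare class Analytics {
  constructor(writeKey: string);
  track(message: Analytics.TrackMessage): Analytics;
}

declare namespace Analytics {
  // Illustrative shape, not the actual analytics-node typings.
  interface TrackMessage {
    userId: string;
    event: string;
  }
}
```

Consumers can then write `import Analytics = require('analytics-node')` and use `Analytics.TrackMessage` as a type while still calling `new Analytics(...)` as a value.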
Azure/azure-functions-pack
361230117
Title: Getting errors in function after enabling function pack Question: username_0: My Azure function was working fine. I followed the instructions and installed azure-functions-pack and now i get the following error in the function log when i access it: `2018-09-18T09:56:53 Welcome, you are now connected to log-streaming service. 2018-09-18T09:56:57.156 [Info] Function started (Id=845c970a-f992-4000-8f9d-76ab41b5f670) 2018-09-18T09:56:57.265 [Error] Exception while executing function: Functions.GetImages. mscorlib: One or more errors occurred. TypeError: Object.entries is not a function at addConstants (D:\home\site\wwwroot\.funcpack\index.js:76905:10) at Object.<anonymous> (D:\home\site\wwwroot\.funcpack\index.js:76923:1) at __webpack_require__ (D:\home\site\wwwroot\.funcpack\index.js:21:30) at Object.module.exports.__webpack_exports__.a (D:\home\site\wwwroot\.funcpack\index.js:75988:69) at __webpack_require__ (D:\home\site\wwwroot\.funcpack\index.js:21:30) at Object.module.exports.__webpack_exports__.a (D:\home\site\wwwroot\.funcpack\index.js:75970:71) at __webpack_require__ (D:\home\site\wwwroot\.funcpack\index.js:21:30) at Object.module.exports.Object.defineProperty.value (D:\home\site\wwwroot\.funcpack\index.js:75925:14) at __webpack_require__ (D:\home\site\wwwroot\.funcpack\index.js:21:30) at Object.module.exports.module.exports.context.res (D:\home\site\wwwroot\.funcpack\index.js:34444:20). 2018-09-18T09:56:57.281 [Error] Function completed (Failure, Id=845c970a-f992-4000-8f9d-76ab41b5f670, Duration=120ms) ` I deleted node_modules directory from the function folder through the Kudu console and still getting this error. Am i missing something?
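On the `Object.entries is not a function` error: that method only exists from Node 7 onward, and older Azure Functions (v1) hosts defaulted to an older Node, so the webpacked bundle can hit code the runtime cannot satisfy. Assuming that is the cause here (a guess, not confirmed from the logs alone), either pin the Function App to a newer Node version or shim the method before the bundle runs. A minimal shim sketch:

```typescript
// Minimal Object.entries shim for pre-Node-7 runtimes, the suspected cause of
// "Object.entries is not a function" inside the funcpack bundle.
const O = Object as { entries?: (obj: Record<string, unknown>) => [string, unknown][] };
if (typeof O.entries !== 'function') {
  O.entries = (obj) => Object.keys(obj).map((key): [string, unknown] => [key, obj[key]]);
}

console.log(O.entries!({ a: 1, b: 2 })); // [ [ 'a', 1 ], [ 'b', 2 ] ]
```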
filecoin-project/specs
474736024
Title: TODO for August 6 Question: username_0: Closes out #378 ## @whyrusleeping: - [ ] Review PR #383 (P0, slashing mechanism from zx) - [ ] Review PR #395 (P0, PIP PR from porcu) - [ ] Review PR #367 (and close #333 if we merge 367) - [ ] Review PR #326 (old PR that looks ready to merge) - [ ] Review PR #324 (old PR that looks ready to merge) - [ ] Review PR #416 (jesse's PR about unlocking mechanism) - [ ] **Make sure the IPLD HAMT specs land:** Follow these PRs ipld/specs#109 and ipld/specs#131, and close issue #224 when those are merged - [ ] **Upgradeability:** Document the approach to upgrading the protocol, versioning, governance. Addresses #277, #197 ## @sternhenri: - [ ] **Improve the spec for deal arbitration:** #200, #89, #343, #353, #306, also address signed deal responses, miner automated response to arbitration challenges (if any), handle terminated data transfer/flaky miners and clients ## @dignifiedquire: - [ ] Review PR #412 - [ ] **Complete PoSt spec #332 :** Spans many issues, including #371, #332, #321, #375, #119 - [ ] **Resolve faults open issues/questions in #379 :** Spans many issues, including #92, #292, #328 ## @nicola: - [ ] **Finish Proofs PR:** Add bucket and DRG sampling to the proofs spec ## @laser: - [ ] Finish up PR #318 ## @
kimcoder/react-simple-image-slider
589609199
Title: Not Responsive on mobile screen Question: username_0: ![image](https://user-images.githubusercontent.com/27056663/77827752-88c86680-713d-11ea-9a9d-0d583d001d8c.png)

As of now, the slider is not responsive on different screens, and it is not even possible to make provision for that. It would be great if we could add the functionality of setting the width & height in percentage (& other formats). Let me know if there is some already existing functionality for the same purpose.

Thank You!
Answers:
username_1: Hi,
First of all, thanks for your feedback :)
I still have to develop what you described. (It means there is no setting for dynamic width & height at the moment.)
I will fix this as soon as possible.
Thanks.
username_1: @username_0 Hi. It is fixed :)!
You can see this feature in version 1.0.3.
https://www.npmjs.com/package/react-simple-image-slider
Thank you!
Status: Issue closed
username_0: That's awesome. Thanks!
realm/realm-js
664733595
Title: How to filter a list in Schema? Question: username_0: If I have a schema like this:
```
const FormularioSchema = {
  name: 'User',
  properties: {
    names: 'string?[]',
  },
};
```
How can I get all users that have the name "Jim" in the names array? Is there a query for that? I can't find a way to do it.
Answers:
username_1: You can use [the `ANY` operator](https://realm.io/docs/javascript/latest/api/tutorial-query-language.html) for this:
```
realm.objects('User').filtered('ANY names = $0', 'Jim');
```
Status: Issue closed
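In case it helps others: `ANY` succeeds when at least one element of the list property equals the value. Realm evaluates this inside its query engine, but the semantics match a plain `some` check, sketched here over hypothetical in-memory data:

```typescript
// Realm form (sketch; the list property is `names` per the schema):
//   realm.objects('User').filtered('ANY names = $0', 'Jim');
// Equivalent semantics over plain objects:
interface User {
  names: (string | null)[];
}

const users: User[] = [
  { names: ['Jim', 'James'] },
  { names: ['Ann', null] },
];

const jims = users.filter((u) => u.names.some((n) => n === 'Jim'));
console.log(jims.length); // 1
```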
lovell/sharp
848499529
Title: Animation removed from webp Question: username_0: As per https://github.com/username_1/sharp/issues/518 and https://github.com/libvips/libvips/issues/823 sharp should be able to handle animated webps. I am not very fluent in sharp nor JS in general, so there may be something I am doing wrong. If that is the case please enlighten me :)

Also, a question on the side: why can't I use `fileStream.pipe(sharp().toFile(destPath))`? (It results in `TypeError: dest.on is not a function`.) If this is out of scope that's ok as well.

Using sharp `0.28.0`

What are the steps to reproduce?

File used for testing: https://files.catbox.moe/lap1sg.webp
File saved by sharp: https://files.catbox.moe/3okmbm.webp

```
const fileStream = ... // FileStream of mentioned animated webp
const destPath = ... // string of destination path

const chunks = []
fileStream.on('data', (chunk) => { chunks.push(chunk) })
  .on('end', () => {
    const buffer = Buffer.concat(chunks);
    // fs.writeFile(destPath, buffer, () => {}); // results in working animated webp at destPath
    sharp(buffer).toFile(destPath) // Removes metadata by default
      .catch(abortWithError);
  });
```

What is the expected behaviour?
Metadata removed and animation untouched Running inside WSL 2.0 ``` ❯ npx envinfo --binaries --system npx: installed 1 in 0.811s System: OS: Linux 5.4 Ubuntu 20.04.2 LTS (Focal Fossa) CPU: (8) x64 Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz Memory: 4.19 GB / 7.74 GB Container: Yes Shell: 3.1.0 - /usr/bin/fish Binaries: Node: 14.15.3 - ~/.nvm/versions/node/v14.15.3/bin/node Yarn: 1.22.10 - ~/.nvm/versions/node/v14.15.3/bin/yarn npm: 6.14.10 - ~/.nvm/versions/node/v14.15.3/bin/npm ``` Answers: username_1: Hi, you can specify that you want all frames, rather than the default first frame, by using: ```diff - sharp(buffer).toFile(destPath) + sharp(buffer, { animated: true }).toFile(destPath) ``` https://sharp.pixelplumbing.com/api-constructor#parameters Status: Issue closed username_0: ahh darn, I looked at the documentation about webp and toFile not at the sharp constructor itself... 🤦 Thank you! username_1: I've added an animated WebP example to the `webp()` docs via commit https://github.com/username_1/sharp/commit/08a25a0c8fb4dfaa5bddd7d0f4f0eea55cce2f65
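On the side question about piping: `.pipe()` expects a writable stream as its argument, but `toFile()` returns a Promise, hence `dest.on is not a function`. The sharp instance itself is the duplex stream to pipe into, with `toFile` called on that instance afterwards. A sketch (the sharp line is commented out because it needs the library installed; the options-only constructor form is per the sharp docs, so double-check it against your sharp version):

```typescript
import { PassThrough } from 'stream';

// With sharp itself the fix would be (sketch, using fileStream/destPath
// from the issue above):
//   fileStream.pipe(sharp({ animated: true })).toFile(destPath);
// i.e. pipe INTO the duplex sharp instance, then call toFile on that instance.

// Why the original form throws: .pipe() expects a writable stream exposing
// .on(), while .toFile() returns a Promise, which has no .on() method.
const validPipeTarget = new PassThrough();
const promiseLike = { then: () => undefined }; // stand-in for toFile()'s Promise
console.log(typeof validPipeTarget.on);                   // 'function'
console.log(typeof (promiseLike as { on?: unknown }).on); // 'undefined'
```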
HERA-Team/hera_qm
371643151
Title: ant_metrics_run doesn't work for files containing all pols Question: username_0: Right now `ant_metrics_run` assumes the files given are single polarization and it goes off and finds the other pols assuming a filename structure. For H2C we need to be able to input files that already contain all pols. I think we can fix this by making `utils.generate_fullpol_file_list()` smart enough. Alternatively we could just add a keyword to bypass looking for other files.<issue_closed> Status: Issue closed
chundermike/rpi-fruitbox
345277816
Title: FT5406 error Question: username_0: Hello! I am working on making a custom jukebox and I'm having an issue setting up button controls. I got the software to run, put songs on it, etc.; however, a few things occurred. First off, I noticed that when I plug a USB keyboard into the Pi, the Pi recognizes the USB keyboard but the software does not. In other words, the buttons on the USB keyboard do nothing when I press them using fruitbox. I am using the WallSmall skin. Now I think this may be because the keyboard I'm using may not be compatible, but I have no way of proving that to be the case. I then tried to wire the buttons to the GPIO pins. With the WallSmall skin, there are only 4 number buttons, and I also added a left/right and play button. That was it. I perused the forum and found that retrogame could be used to map the GPIO to keys. So I edited the PiGRRL code accordingly to map the GPIO to the specific keys I needed. Still the buttons did nothing. From what I read, I have to go to the button config to map the GPIO pins to the keys/buttons. But when I try to open the button-config it gives me an error saying "couldn't find device FT5406", which, as I read on the forum, is the 7-inch Raspberry Pi touchscreen. I'm not using a touch screen for this, so I'm assuming that's the problem. Please let me know what I am doing wrong and how I can get the button controls working. I'm using a Raspberry Pi 3 B and PuTTY via SSH. Let me know if you can help me. Status: Issue closed Answers: username_1: Please try version 1.14 and re-open the issue if it still doesn't work. Thanks, Mike
appium/appium-desktop
498667901
Title: Appium is not able to recognize any element for iOS Real device with Xcode11 - #unrecognized selector sent to instance Question: username_0: ## The problem After Xcode got updated to v11, Appium is unable to recognize any element and throwing an error "An unknown server-side error occurred while processing the command. Original error: -[XCUIElement resolve]: unrecognized selector sent to instance 0x280ea0fa0" Same issue is happening for iOS App as well as iOS Safari. ## Environment * Appium version (or git revision) that exhibits the issue: appium.app. Version 1.14.0 (1.14.0.20190705.4). Replicable in 1.13 as well. * Last Appium version that did not exhibit the issue (if applicable): N/A * Desktop OS/version used to run Appium: Mac OS 10.14.5 * Node.js version (unless using Appium.app|exe): appium.app * Npm or Yarn package manager: N/A * Mobile platform/version under test: iOS 12.4 * Real device or emulator/simulator: Real Device * Appium CLI or Appium.app|exe: appium.app ## Details After overnight update of xCode from 10.2 to Xcode 11, Appium is not able to recognize any element. Though it is able to push the WDA to the device, app and safari is getting launched. But then it is throwing the error #unrecognized selector sent to instance. I have tried to replace the WDA v 1.3.5 (https://github.com/appium/WebDriverAgent/releases) for xCode11 and tried with latest xcuitest driver 2.133.1 (https://github.com/appium/appium-xcuitest-driver/releases) but unable to launch the app or Safari with the new versions. ## Link to Appium server logs for actual issue https://gist.github.com/username_0/0c912b65ad92b820520fb7170843dc6b ## Link to Appium server logs after trying with latest WDA and xcuitest version [in case required for debugging] https://gist.github.com/username_0/a1c3fc6dda711331ace94b3451180f9a Answers: username_0: Thanks @username_2. I will try out and update here. 
But can you please let us know when we are going to get the desktop app version1.15.0 username_1: Since we don't have an desktop app, So how can we use that version 1.15 to inspect elements?? username_2: https://github.com/appium/appium-desktop seems has the lates appium but not has the release tag. @dpgraham Can we build the module? ---- I usually use ruby console https://github.com/appium/appium/blob/438d6c3b38e785edc701354cf660aa9f76baceaf/docs/en/writing-running-appium/finding-elements.md#repl to take a look at elements interactively username_3: similar problem! have on board 1.15.0 appium( can't get any normal session with ios 13 and xcode 11 webDriver agent [DevCon Factory] Port #5281 is busy [XCUITest] Error: The port #5281 is occupied by an other process. You can either quit that process or select another free port. [XCUITest] at DeviceConnectionsFactory.requestConnection (/usr/local/Cellar/node/12.5.0/lib/node_modules/appium/node_modules/appium-xcuitest-driver/lib/device-connections-factory.js:147:15) [XCUITest] at XCUITestDriver.startWda (/usr/local/Cellar/node/12.5.0/lib/node_modules/appium/node_modules/appium-xcuitest-driver/lib/driver.js:487:5) [XCUITest] at XCUITestDriver.start (/usr/local/Cellar/node/12.5.0/lib/node_modules/appium/node_modules/appium-xcuitest-driver/lib/driver.js:438:5) [XCUITest] at XCUITestDriver.createSession (/usr/local/Cellar/node/12.5.0/lib/node_modules/appium/node_modules/appium-xcuitest-driver/lib/driver.js:214:7) [XCUITest] at AppiumDriver.createSession (/usr/local/Cellar/node/12.5.0/lib/node_modules/appium/lib/appium.js:353:35) [XCUITest] at AppiumDriver.executeCommand (/usr/local/Cellar/node/12.5.0/lib/node_modules/appium/node_modules/appium-base-driver/lib/basedriver/driver.js:376:13) [XCUITest] at AppiumDriver.executeCommand (/usr/local/Cellar/node/12.5.0/lib/node_modules/appium/lib/appium.js:482:14) [XCUITest] at asyncHandler 
(/usr/local/Cellar/node/12.5.0/lib/node_modules/appium/node_modules/appium-base-driver/lib/protocol/protocol.js:306:21) [DevCon Factory] Releasing connections for 00008020-000B045C0E42002E device on any port number [DevCon Factory] No cached connections have been found [debug] [XCUITest] Parsed BUILD_DIR configuration value: '/usr/local/Cellar/node/12.5.0/lib/node_modules/appium/node_modules/appium-webdriveragent/DerivedData/WebDriverAgent/Build/Products' [debug] [XCUITest] Got derived data root: '/usr/local/Cellar/node/12.5.0/lib/node_modules/appium/node_modules/appium-webdriveragent/DerivedData/WebDriverAgent' [debug] [XCUITest] Started background XCTest logs cleanup: find -E /private/var/folders -regex '.*/Session-WebDriverAgentRunner.*\.log$|.*/StandardOutputAndStandardError\.txt$' -type f -exec sh -c 'echo "" > "{}"' \; [XCUITest] Cleaning test logs in '/usr/local/Cellar/node/12.5.0/lib/node_modules/appium/node_modules/appium-webdriveragent/DerivedData/WebDriverAgent/Logs' folder [debug] [iOS] Clearing log files [debug] [iOS] Deleting '/usr/local/Cellar/node/12.5.0/lib/node_modules/appium/node_modules/appium-webdriveragent/DerivedData/WebDriverAgent/Logs'. Freeing 12K. [debug] [iOS] Finished clearing log files [debug] [BaseDriver] Event 'newSessionStarted' logged at 1569575986870 (12:19:46 GMT+0300 (Eastern European Summer Time)) [debug] [MJSONWP] Encountered internal error running command: Error: The port #5281 is occupied by an other process. You can either quit that process or select another free port. 
[debug] [MJSONWP] at DeviceConnectionsFactory.requestConnection (/usr/local/Cellar/node/12.5.0/lib/node_modules/appium/node_modules/appium-xcuitest-driver/lib/device-connections-factory.js:147:15) [debug] [MJSONWP] at XCUITestDriver.startWda (/usr/local/Cellar/node/12.5.0/lib/node_modules/appium/node_modules/appium-xcuitest-driver/lib/driver.js:487:5) [debug] [MJSONWP] at XCUITestDriver.start (/usr/local/Cellar/node/12.5.0/lib/node_modules/appium/node_modules/appium-xcuitest-driver/lib/driver.js:438:5) [debug] [MJSONWP] at XCUITestDriver.createSession (/usr/local/Cellar/node/12.5.0/lib/node_modules/appium/node_modules/appium-xcuitest-driver/lib/driver.js:214:7) [debug] [MJSONWP] at AppiumDriver.createSession (/usr/local/Cellar/node/12.5.0/lib/node_modules/appium/lib/appium.js:353:35) [debug] [MJSONWP] at AppiumDriver.executeCommand (/usr/local/Cellar/node/12.5.0/lib/node_modules/appium/node_modules/appium-base-driver/lib/basedriver/driver.js:376:13) [debug] [MJSONWP] at AppiumDriver.executeCommand (/usr/local/Cellar/node/12.5.0/lib/node_modules/appium/lib/appium.js:482:14) [debug] [MJSONWP] at asyncHandler (/usr/local/Cellar/node/12.5.0/lib/node_modules/appium/node_modules/appium-base-driver/lib/protocol/protocol.js:306:21) [HTTP] <-- POST /wd/hub/session 500 16454 ms - 246 [HTTP] org.openqa.selenium.WebDriverException: An unknown server-side error occurred while processing the command. Original error: The port #5281 is occupied by an other process. You can either quit that process or select another free port. (WARNING: The server did not provide any stacktrace information) Command duration or timeout: 17.10 seconds Build info: version: '2.53.1', revision: 'a36b8b1cd5757287168e54b817830adce9b0158d', time: '2016-06-30 19:26:09' System info: host: 'efg-mini-appium.local', ip: '10.105.1.235', os.name: 'Mac OS X', os.arch: 'x86_64', os.version: '10.14.6', java.version: '1.8.0_102' username_2: It sais the port `5281` is used by another process. 
Please attach the full log so that we can predict the situation. (And the error is a different one for this issue. So, please create as another one.) username_3: i resolve this!) main problem is about **[debug] [WD Proxy] Got response with status 200: {"value":"-[XCUIElementQuery elementSnapshotForDebugDescription]: unrecognized selector sent to instance 0x280ded360\n\n(\n\t0 CoreFoundation 0x000000022c3abf68 <redacted> + 256\n\t1 libobjc.A.dylib 0x000000022b5a8284 objc_exception_throw + 60\n\t2 CoreFoundation 0x000000022c2c10e0 <redacted> + 0\n\t3 CoreFoundation 0x000000022c3b19bc <redacted> + 1416\n\t4 CoreFoundation 0x000000022c3b3730 _CF_forwarding_prep_0 + 96\n\t5 WebDriverAgentLib 0x0000000106810d2c -[XCUIElement(FBUtilities) fb_lastSnapshot] + 60\n\t6 WebDriverAgentLib 0x0000000106808800 -[XCUIElement(WebDriverAttributesForwarding) fb_snapshotForAttributeName:] + 756\n\t7 WebDriverAgentLib 0x0000000106808a30 -[XCUIElement(WebDriverAttributesForwarding) forwardingTargetForSelector:] + 168\n\t8 CoreFoundation 0x000000022c3b14d8 <redacted> + 164\n\t9 Cor... [WD Proxy] The response has an unknown format [debug] [MJSONWP] Matched JSONWP error code 13 to UnknownError [debug] [MJSONWP (566f855e)] Encountered internal error running command: NoSuchElementError: An element could not be located on the page using the given search parameters. 
[debug] [MJSONWP (566f855e)] at XCUITestDriver.doNativeFind (/usr/local/Cellar/node/12.5.0/lib/node_modules/appium/node_modules/appium-xcuitest-driver/lib/commands/find.js:130:13) [debug] [MJSONWP (566f855e)] at runNextTicks (internal/process/task_queues.js:55:5) [debug] [MJSONWP (566f855e)] at processImmediate (internal/timers.js:412:9) [HTTP] <-- POST /wd/hub/session/566f855e-5be9-4927-ae9a-6a8405e0d59a/element 500 297 ms - 164 [HTTP] [HTTP] --> POST /wd/hub/session/566f855e-5be9-4927-ae9a-6a8405e0d59a/element [HTTP] {"using":"id","value":"Sign In"} [debug] [MJSONWP (566f855e)] Calling AppiumDriver.findElement() with args: ["id","Sign In","566f855e-5be9-4927-ae9a-6a8405e0d59a"] [debug] [XCUITest] Executing command 'findElement' [debug] [BaseDriver] Valid locator strategies for this request: xpath, id, name, class name, -ios predicate string, -ios class chain, accessibility id [debug] [BaseDriver] Waiting up to 0 ms for condition [debug] [WD Proxy] Matched '/element' to command name 'findElement' [debug] [WD Proxy] Proxying [POST /element] to [POST http://localhost:4312/session/DA659110-EE10-4261-8663-DA4FE7B64F15/element] with body: {"using":"id","value":"Sign In"} [debug] [XCUITest] Connection to WDA timed out [debug] [WD Proxy] Got response with status 200: {"value":"-[XCUIElementQuery elementSnapshotForDebugDescription]: unrecognized selector sent to instance 0x280dce260\n\n(\n\t0 CoreFoundation 0x000000022c3abf68 <redacted> + 256\n\t1 libobjc.A.dylib 0x000000022b5a8284 objc_exception_throw + 60\n\t2 CoreFoundation 0x000000022c2c10e0 <redacted> + 0\n\t3 CoreFoundation 0x000000022c3b19bc <redacted> + 1416\n\t4 CoreFoundation 0x000000022c3b3730 _CF_forwarding_prep_0 + 96\n\t5 WebDriverAgentLib 0x0000000106810d2c -[XCUIElement(FBUtilities) fb_lastSnapshot] + 60\n\t6 WebDriverAgentLib 0x0000000106808800 -[XCUIElement(WebDriverAttributesForwarding) fb_snapshotForAttributeName:] + 756\n\t7 WebDriverAgentLib 0x0000000106808a30 
-[XCUIElement(WebDriverAttributesForwarding) forwardingTargetForSelector:] + 168\n\t8 CoreFoundation 0x000000022c3b14d8 <redacted> + 164\n\t9 Cor... [WD Proxy] The response has an unknown format [debug] [MJSONWP] Matched JSONWP error code 13 to UnknownError [debug] [iProxy] recv failed: Operation not permitted [debug] [MJSONWP (566f855e)] Encountered internal error running command: NoSuchElementError: An element could not be located on the page using the given search parameters. [debug] [MJSONWP (566f855e)] at XCUITestDriver.doNativeFind (/usr/local/Cellar/node/12.5.0/lib/node_modules/appium/node_modules/appium-xcuitest-driver/lib/commands/find.js:130:13) [debug] [MJSONWP (566f855e)] at runNextTicks (internal/process/task_queues.js:55:5) [debug] [MJSONWP (566f855e)] at processImmediate (internal/timers.js:412:9) [HTTP] <-- POST /wd/hub/session/566f855e-5be9-4927-ae9a-6a8405e0d59a/element 500 261 ms - 164 EVEN WITH LATEST APPIUM 1.15.0 Who solved this problem??? username_3: [HTTP] --> GET /wd/hub/sessions [HTTP] {} [GENERIC] Calling AppiumDriver.getSessions() with args: [] [GENERIC] Responding to client with driver.getSessions() result: [] [HTTP] <-- GET /wd/hub/sessions 200 6 ms - 40 [HTTP] [HTTP] --> POST /wd/hub/session [HTTP] {"desiredCapabilities":{"app":"/Users/nade/Downloads/apps/ds-chat-ios.ipa","automationName":"XCUITest","deviceName":"iPhone XR","fullReset":false,"platformName":"iOS","platformVersion":"12.0","udid":"00008020-000B045C0E42002E","newCommandTimeout":0,"connectHardwareKeyboard":true}} [MJSONWP] Calling AppiumDriver.createSession() with args: [{"app":"/Users/nade/Downloads/apps/ds-chat-ios.ipa","automationName":"XCUITest","deviceName":"iPhone XR","fullReset":false,"platformName":"iOS","platformVersion":"12.0","udid":"00008020-000B045C0E42002E","newCommandTimeout":0,"connectHardwareKeyboard":true},null,null] [BaseDriver] Event 'newSessionRequested' logged at 1569590028580 (16:13:48 GMT+0300 (EEST)) [Appium] Appium v1.13.0 creating new 
XCUITestDriver (v2.113.2) session [Appium] Capabilities: [Appium] app: /Users/nade/Downloads/apps/ds-chat-ios.ipa [Appium] automationName: XCUITest [Appium] deviceName: iPhone XR [Appium] fullReset: false [Appium] platformName: iOS [Appium] platformVersion: 12.0 [Appium] udid: 00008020-000B045C0E42002E [Appium] newCommandTimeout: 0 [Appium] connectHardwareKeyboard: true [BaseDriver] Creating session with MJSONWP desired capabilities: {"app":"/Users/nade/Downloa... [BaseDriver] Session created with session id: 7449f0c6-8e42-45f9-86aa-79bcdd70d6bf [XCUITest] Current user: 'nade' [XCUITest] Available devices: 00008020-000B045C0E42002E [XCUITest] Creating iDevice object with udid '00008020-000B045C0E42002E' [XCUITest] Determining device to run tests on: udid: '00008020-000B045C0E42002E', real device: true [XCUITest] iOS SDK Version set to '13.0' [BaseDriver] Event 'xcodeDetailsRetrieved' logged at 1569590029170 (16:13:49 GMT+0300 (EEST)) [BaseDriver] Using local app '/Users/nade/Downloads/apps/ds-chat-ios.ipa' [BaseDriver] Unzipping '/Users/nade/Downloads/apps/ds-chat-ios.ipa' [BaseDriver] Extracted 476 item(s) from '/Users/nade/Downloads/apps/ds-chat-ios.ipa' [BaseDriver] Matched 475 item(s) in the extracted archive. 
Assuming 'Payload/ds-chat-ios.app' is the correct bundle [BaseDriver] Unzipped local app to '/<KEY>27-92361-1r595fw.x1b/Payload/ds-chat-ios.app' [BaseDriver] Event 'appConfigured' logged at 1569590032719 (16:13:52 GMT+0300 (EEST)) [XCUITest] Checking whether app '/<KEY>827-92<KEY>ds-chat-ios.app' is actually present on file system [XCUITest] App is present [iOS] Getting bundle ID from app '/<KEY>ds-chat-ios.app': 'ch.efg.mobile.chat.ios' [BaseDriver] Event 'resetStarted' logged at 1569590032725 (16:13:52 GMT+0300 (EEST)) [XCUITest] Reset: running ios real device reset flow [BaseDriver] Event 'resetComplete' logged at 1569590032726 (16:13:52 GMT+0300 (EEST)) [iOSLog] Attempting iOS device log capture via libimobiledevice idevicesyslog [iOSLog] Starting iOS device log capture with: 'idevicesyslog' [XCUITest] Crash reports root '/Users/nade/Library/Logs/CrashReporter/MobileDevice/Pizza' does not exist. Got nothing to gather. [BaseDriver] Event 'logCaptureStarted' logged at 1569590032883 (16:13:52 GMT+0300 (EEST)) [XCUITest] Setting up real device [XCUITest] Verifying application platform [XCUITest] CFBundleSupportedPlatforms: ["iPhoneOS"] [XCUITest] Calling: 'ios-deploy --exists --id 00008020-000B045C0E42002E --bundle_id ch.efg.mobile.chat.ios' [XCUITest] Stdout: '[....] Waiting for iOS device to be connected [XCUITest] [....] Using 00008020-000B045C0E42002E (D331pAP, D331pAP, uknownos, unkarch) a.k.a. 'Pizza'. [XCUITest] true [XCUITest] ' [XCUITest] Reset requested. Removing app with id 'ch.efg.mobile.chat.ios' from the device [XCUITest] Installing '/<KEY>ds-chat-ios.app' on device with UUID '00008020-000B045C0E42002E'... [XCUITest] The app has been installed successfully. 
[BaseDriver] Event 'appInstalled' logged at 1569590037109 (16:13:57 GMT+0300 (EEST)) [XCUITest] Using WDA path: '/Applications/Appium.app/Contents/Resources/app/node_modules/appium/node_modules/appium-xcuitest-driver/WebDriverAgent' [XCUITest] Using WDA agent: '/Applications/Appium.app/Contents/Resources/app/node_modules/appium/node_modules/appium-xcuitest-driver/WebDriverAgent/WebDriverAgent.xcodeproj' [XCUITest] No obsolete cached processes from previous WDA sessions listening on port 8100 have been found [Truncated] [MJSONWP (7449f0c6)] 64 CoreFoundation 0x00000001ae8524fc + 28 [MJSONWP (7449f0c6)] 65 CoreFoundation 0x00000001ae851de0 + 276 [MJSONWP (7449f0c6)] 66 CoreFoundation 0x00000001ae84cbec + 1052 [MJSONWP (7449f0c6)] 67 CoreFoundation 0x00000001ae84c4b8 CFRunLoopRunSpecific + 452 [MJSONWP (7449f0c6)] 68 GraphicsServices 0x00000001b0affbe8 GSEventRunModal + 104 [MJSONWP (7449f0c6)] 69 UIKitCore 0x00000001dc57c950 UIApplicationMain + 216 [MJSONWP (7449f0c6)] 70 WebDriverAgentRunner-Runner 0x0000000102eafab4 main + 192 [MJSONWP (7449f0c6)] 71 libdyld.dylib 0x00000001ae301050 + 4 [MJSONWP (7449f0c6)] ) [MJSONWP (7449f0c6)] at errorFromMJSONWPStatusCode (/Applications/Appium.app/Contents/Resources/app/node_modules/appium/node_modules/appium-base-driver/lib/protocol/errors.js:789:10) [MJSONWP (7449f0c6)] at ProxyRequestError.errorFromMJSONWPStatusCode [as getActualError] (/Applications/Appium.app/Contents/Resources/app/node_modules/appium/node_modules/appium-base-driver/lib/protocol/errors.js:683:14) [MJSONWP (7449f0c6)] at JWProxy.getActualError [as command] (/Applications/Appium.app/Contents/Resources/app/node_modules/appium/node_modules/appium-base-driver/lib/jsonwp-proxy/proxy.js:235:19) [HTTP] <-- GET /wd/hub/session/7449f0c6-8e42-45f9-86aa-79bcdd70d6bf/window/current/size 500 163 ms - 7749 [HTTP] [WD Proxy] Got response with status 200: "{\n \"value\" : 
\"<KEY>RQABBBBA\\r\\nAAEEEEAAAQQQcCBAHHWATi0IAggggAACCCCAAAIIIIAAcZQ4igACCCCAAAIIIIAA\\r\\nAggg4ECAOOoAnVoQBBBAAAEEEEAAAQQQQAAB4ihxFAEEEEAAAQQQQAABBBBAwIEA\\r\\ncdQBOrUgCCCAAAIIIIAAAggggAACxFHiKAIIIIAAAggggAACCCCAgAMB4qgDdGpB\\r\\nEEAAAQQQQAABBBBAAAEEiKPEUQQQQAABBBBAAAEEEEAAAQcCxFEH6NSCIIAAAggg\\r\\ngAACCCCAAAIIEEeJowgggAACCCCAAAIIIIAAAg4EiKMO0KkFQQABBBBAAAEEEEAA\\r\\nAQQQII4SRxFAAAEEEEA... [XCUITest] Connection to WDA timed out [iProxy] recv failed: Operation not permitted [MJSONWP (7449f0c6)] Responding to client with driver.getScreenshot() result: "<KEY>\r\nAAIIIIAAAggQR<KEY>... [HTTP] <-- GET /wd/hub/session/7449f0c6-8e42-45f9-86aa-79bcdd70d6bf/screenshot 200 349 ms - 603090 [HTTP] username_2: https://github.com/appium/appium-desktop/issues/1099#issuecomment-535933543 says `[Appium] Appium v1.13.0 creating new XCUITestDriver (v2.113.2) session`. Need to do https://github.com/appium/appium-desktop/issues/1099#issuecomment-535253078 so far to solve this issue. username_3: @username_2 its UI appium) cant find desctop version 1.15.0 username_2: Published https://github.com/appium/appium-desktop/releases/tag/v1.15.0 Please try out it, thanks Status: Issue closed
tensorflow/tensorflow
127079233
Title: Tensor Flow IN Android
Question: username_0: Can someone please help me set up the TensorFlow library on Android? I tried to follow the steps explained in the Android example but got stuck at the first one, "Get the recommended Bazel version", and I do not know how to do it.
Answers:
username_1: On macOS, I first upgraded Java to Java 8, and then installed Bazel. I've also prepared an Android example build environment with Android Studio and the NDK, without Bazel. https://github.com/username_1/TensorFlowAndroidDemo If you would like to quickly try the Android example, it might help.
Status: Issue closed
username_2: http://bazel.io/docs/install.html for installing bazel
square/workflow
588669389
Title: Convert Container Screens in Sample to Generic Containers [swift]
Question: username_0: Instead of using `AnyScreen` content, use a generic `ScreenType`.
* `CrossFadeScreen`
* `BackStackScreen`
Answers:
username_1: Just to leave a verbal-conversation comment trail, we talked about how this also inspires a SwiftUI-like `IfScreen<TrueScreen, FalseScreen>` that could be used to swap out a given screen should a condition evaluate to true. Relates to #1164
username_0: `CrossFadeScreen` is meant to handle transitions between different screens. We're not planning on switching that from `AnyScreen` at this time.
Status: Issue closed
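The same move, replacing type-erased `AnyScreen` content with a generic parameter, can be sketched outside Swift too. A minimal Python analogue using `typing.Generic` (the screen names here are hypothetical, not the sample's actual types):

```python
from dataclasses import dataclass
from typing import Generic, List, TypeVar

ScreenType = TypeVar("ScreenType")

@dataclass
class WelcomeScreen:
    title: str

@dataclass
class BackStackScreen(Generic[ScreenType]):
    """Container whose content type is carried in the type parameter
    instead of being erased to an AnyScreen-style wrapper."""
    screens: List[ScreenType]

    def top(self) -> ScreenType:
        return self.screens[-1]

stack = BackStackScreen[WelcomeScreen](screens=[WelcomeScreen(title="Welcome")])
print(stack.top().title)  # content keeps its concrete type for callers
```

The payoff is the same as in the Swift version: callers of `top()` get the concrete screen type back without casting.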
keanemind/rigor-checker
499989340
Title: Fix string searching algorithm
Question: username_0:
Pattern 1: The proof is quite obvious
Pattern 2: Quite simple is the proof
Text: The proof is quite simple is the proof
Currently, neither pattern will be matched. Pattern 2 should be matched.
Answers:
username_0: Completed by https://github.com/username_0/rigor-checker/commit/ecd1007f7f05b94178d8795bd55d0763db21027f.
Status: Issue closed
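The report comes down to restarting the scan at every word position rather than skipping past a partial match: the prefix of Pattern 1 overlaps the start of the Pattern 2 match, and a search that jumps past the failed match misses it. A minimal sketch of a correct (naive) phrase search, not the repository's actual implementation:

```python
def contains_phrase(text, phrase):
    """Return True if the words of `phrase` appear consecutively in `text`.

    Every start index is checked, so a partial match of one pattern
    cannot hide a real match that begins inside it.
    """
    words = text.lower().split()
    target = phrase.lower().split()
    for i in range(len(words) - len(target) + 1):
        if words[i:i + len(target)] == target:
            return True
    return False

text = "The proof is quite simple is the proof"
print(contains_phrase(text, "The proof is quite obvious"))  # False
print(contains_phrase(text, "Quite simple is the proof"))   # True
```

The quadratic worst case is fine for short sentence-level patterns; a KMP or Aho-Corasick scan would give the same answers in linear time if it ever matters.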
sul-dlss/searchworks_traject_indexer
804001732
Title: MODS, author sort and capitalization of the relator term Question: username_0: Hannah reported that, after searching for a collection, she used the "sort by" and selected author but the results list failed to sort the results list alphabetically by last name. Example search: https://searchworks.stanford.edu/?f%5Bcollection%5D%5B%5D=qf495kg6879&sort=author&view=list Jessie did some looking and reported: The issue, at least w/ the collection that is reported here, (or at least one of the issues) is that the author relator terms are not capitalized which is what is expected by the stanford-mods gem for identifying main authors. This is causing the author sort field to have a different author than is being displayed as the first/main author in the record. As an example, here are the first 3 records currently showing up for this result (and the author that is being sorted on): kg477dc3533: <NAME> jc844vf0304: <NAME> bw479pd2422: <NAME> This is because when a relator with "Author" or "Creator" role is not present it selects the first name w/o a role as the main author. At the 5th record in the result set we no longer have name's w/o roles, so we no longer have an author in the sort field and we begin the title sub-sort. If the roles are updated in these records to be correctly capitalized as described https://id.loc.gov/vocabulary/relators.html then I believe the sort will be updated appropriately. For what it's worth, it would also be possible to update the stanford-mods gem to be more permissive of the capitalization of the relator term (I already have a small change that I could propose), however we might also want to figure out what part of our process is generating invalid relator terms into the MODS. Status: Issue closed Answers: username_1: This was addressed in sul-dlss/stanford-mods#113. This search result should be fixed on our next full re-index (or the items in the SITE collection could be re-published, or whatever would trigger the re-index of them).
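The selection logic described above is easy to make tolerant of casing. A simplified sketch (this is an illustration of the rule, not the actual stanford-mods code): prefer a name whose relator term is Author or Creator compared case-insensitively, then fall back to the first name carrying no role at all.

```python
AUTHOR_ROLES = {"author", "creator"}

def main_author(names):
    """Pick the sort author from (name, role) pairs, ignoring relator case."""
    for name, role in names:
        if role and role.strip().lower() in AUTHOR_ROLES:
            return name
    # Fallback described in the report: first name without any role.
    for name, role in names:
        if not role:
            return name
    return None

# With a lowercase relator term, a case-sensitive comparison would miss
# the author entirely; the lowercased check still finds it.
names = [("Stanford, Jane", "donor"), ("Copley, John Singleton", "author")]
print(main_author(names))  # Copley, John Singleton
```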
github/rest-api-description
944813683
Title: [Schema Inaccuracy] lots of `type: array` missing the `items` property
Question: username_0: # Schema Inaccuracy
<!--- Describe the problem shortly. Include the specific operation / schema that contains an error. -->
## Expected
When a schema declares `type: array`, the `items` property should also be defined. The missing `items` is breaking some parsers. It seems most of these are supposed to be arrays of strings, but it would be great to make that explicit.
Answers:
username_1: This issue appears to be a duplicate of https://github.com/github/rest-api-description/issues/158
Status: Issue closed
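Offending schemas like these can be found with a quick lint pass before a parser chokes. A sketch that walks an already-loaded schema document (plain dicts/lists) and reports every `type: array` node with no `items`:

```python
def find_arrays_missing_items(node, path="$"):
    """Recursively collect JSON paths of schemas declaring an array type
    without an accompanying `items` definition."""
    hits = []
    if isinstance(node, dict):
        if node.get("type") == "array" and "items" not in node:
            hits.append(path)
        for key, value in node.items():
            hits.extend(find_arrays_missing_items(value, f"{path}.{key}"))
    elif isinstance(node, list):
        for i, value in enumerate(node):
            hits.extend(find_arrays_missing_items(value, f"{path}[{i}]"))
    return hits

schema = {
    "events": {"type": "array"},                               # invalid: no items
    "labels": {"type": "array", "items": {"type": "string"}},  # valid
}
print(find_arrays_missing_items(schema))  # ['$.events']
```

Running this over the full OpenAPI description (after `json.load`) gives a checklist of exactly which paths need an `items` added.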
OpenSourceBrain/PinskyRinzelModel
93880597
Title: twoCompartment Figure 2 spike times not exactly the same as in LEMS Figure 2
Question: username_0: The values for Vs/Vd in LEMS and the twoCompartment models are very close, but not exactly the same. For times < 100 ms, the values are off by a fraction of a ms. For larger times, the errors accumulate, resulting in spikes off by a few ms. It seems like it's some sort of numerical/rounding/integration issue. **The step size is the same. The model parameters are the same too.**
Even in the simplest case, with all the channels taken out and just the soma leak current left in both the LEMS and twoCompartment models, the values are still not exactly the same. I.e. in LEMS, set the initial Vs to -30 mV and make sure Vs is just this:
```
<TimeDerivative variable="Vs" value="(-gLs*(Vs-VL)) / Cm"/>
```
In Figure2Aand3.parameters.nml, delete all the soma channels except the somaLeak. In twoCompartmentCell.cell.nml, set initMembPotential to -30 mV, and set Vs:
```
<TimeDerivative variable="Vs" value="(iDensitySoma)/Cm" />
```
In both versions, Vs eventually drops down to -60 mV (as expected), but the exact values are not the same (see table below).
Time Step (LEMS) | Vs (LEMS) | Time Step (twoCompartment) | Vs (twoCompartment) ------------ | ------------- | -------------------- | ------------------ 0 | -0.03 | 0 | -0.03 1.00E-05 | -0.03001 | 1.00E-05 | -0.03001 2.00E-05 | **-0.030019997** | 2.00E-05 | **-0.03002** 3.00E-05 | **-0.03002999** | 3.00E-05 | **-0.030029997** 4.00E-05 | -0.03003998 | 4.00E-05 | -0.03003999 5.00E-05 | -0.030049967 | 5.00E-05 | -0.03004998 6.00E-05 | -0.03005995 | 6.00E-05 | -0.030059967 7.00E-05 | -0.03006993 | 7.00E-05 | -0.03006995 8.00E-05 | -0.030079907 | 8.00E-05 | -0.03007993 9.00E-05 | -0.03008988 | 9.00E-05 | -0.030089907 1.00E-04 | -0.03009985 | 1.00E-04 | -0.03009988 1.10E-04 | -0.030109817 | 1.10E-04 | -0.03010985 1.20E-04 | -0.03011978 | 1.20E-04 | -0.030119818 1.30E-04 | -0.03012974 | 1.30E-04 | -0.030129781 1.40E-04 | -0.030139698 | 1.40E-04 | -0.03013974 1.50E-04 | -0.03014965 | 1.50E-04 | -0.030149696 1.60E-04 | -0.0301596 | 1.60E-04 | -0.03015965 1.70E-04 | -0.030169547 | 1.70E-04 | -0.0301696 1.80E-04 | -0.030179491 | 1.80E-04 | -0.030179547 ... | ... | ... | ... 1.30E-01 | -0.0596046 | 1.30E-01 | -0.05960517 1.30E-01 | -0.05960473 | 1.30E-01 | -0.059605304 1.30E-01 | -0.059604865 | 1.30E-01 | -0.059605435 1.30E-01 | -0.059604995 | 1.30E-01 | -0.059605565 1.30E-01 | -0.059605125 | 1.30E-01 | -0.059605695 1.30E-01 | -0.05960526 | 1.30E-01 | -0.05960583 1.30E-01 | -0.05960539 | 1.30E-01 | -0.05960596 1.30E-01 | -0.05960552 | 1.30E-01 | -0.05960609 1.30E-01 | -0.059605654 | 1.30E-01 | -0.059606224 1.30E-01 | -0.059605785 | 1.30E-01 | -0.059606355 1.30E-01 | -0.059605915 | 1.30E-01 | -0.059606485 1.30E-01 | -0.05960605 | 1.30E-01 | -0.059606615 1.30E-01 | -0.05960618 | 1.30E-01 | -0.059606746 1.30E-01 | -0.05960631 | 1.30E-01 | -0.05960688 1.30E-01 | -0.05960644 | 1.30E-01 | -0.05960701 1.30E-01 | **-0.05960657** | 1.30E-01 | **-0.05960714**
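One plausible explanation for drifts of this size is single-precision rounding: two algebraically identical update rules that associate the arithmetic differently round differently at each step, and the difference accumulates. A sketch of the leak-only case above (float32 emulated via `struct`; the parameter values gLs = 0.1 mS/cm², Cm = 3 µF/cm², VL = -60 mV, dt = 10 µs are assumed from the model, and both integrators are deliberately simplified forward-Euler stand-ins, not the actual simulators):

```python
import struct

def f32(x):
    """Round a Python float to the nearest IEEE-754 single."""
    return struct.unpack("f", struct.pack("f", x))[0]

gL, Cm, VL, dt = 0.1e-3, 3.0e-6, -0.060, 1.0e-5  # S/cm2, F/cm2, V, s

def step_lems(V):      # Vs' = (-gLs*(Vs-VL)) / Cm
    return f32(V + f32(dt * f32(f32(-gL * f32(V - VL)) / Cm)))

def step_twocomp(V):   # Vs' = iDensitySoma / Cm, current computed first
    i = f32(f32(-gL) * f32(V - VL))
    return f32(V + f32(f32(dt / Cm) * i))

a = b = f32(-0.030)
for _ in range(13000):          # integrate to t = 0.13 s
    a, b = step_lems(a), step_twocomp(b)

print(a, b, a - b)  # both near -0.0596 V; any gap sits in the last digits
```

The analytic answer at t = 0.13 s is VL + (V0 - VL)·exp(-t·gL/Cm) ≈ -0.05961 V, matching the table; the two float32 trajectories agree to that level while potentially disagreeing below it, which is the shape of the reported discrepancy.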
gotthardp/lorawan-server
235425875
Title: Server Performance Simulation
Question: username_0: Hi Petr, I am able to successfully connect a few gateways and a few end nodes with the server. So far the performance is pretty good. But what would happen if 100s of gateways and 1000s of end nodes are connected to it? Is there a way to simulate the server performance using some software code or some other tool? One option could be sending lots of UDP packets to the server just as a gateway sends. Please suggest a solution.
Answers:
username_1: Hi. You could extend the test tools to simulate gateways and nodes. Currently the tests are run on the same machine. Alternatively one could use these modules to develop a load-test tool and run that from another machine.
username_0: What are these test tools, and how do we develop a load-test? Thanks
username_2: @username_0 @username_1 I would like to run that from another machine. Is there any available documentation? @username_0 Can we use Lorasim or lpwansim to send such a massive number of UDP packets to lorawan-server? Many thanks.
username_0: @username_2 : we currently don't have such a tool. We are also looking for it. Will share with you when I find/develop one such tool.
username_1: @username_2 @username_0 I have a packet_forwarder simulator and a LoRa-mote simulator already (in https://github.com/username_1/lorawan-server/tree/master/test) and I am about to extend it towards load-testing, but it will take me a few days.
username_1: I don't think the lorasim nor lpwansim is what you're looking for.
username_0: @username_1 : It will be really helpful to have the packet_forwarder simulator. Will be waiting for you to complete the module. Thanks
username_1: @username_0 Do you have any wishlist of what features you'd need? (I don't say I'll implement it all.) I would need just a simple load-tester to send a high volume of frames towards an application (likely the semtech mote), simulating traffic from multiple gateways and multiple devices.
username_0: @username_1 : Thanks for asking. I will be waiting for you to complete the testing module. And will be sharing the wishlist soon :P
username_3: This: https://gitlab.com/itk.fr/lorhammer May fulfil some of your needs :)
username_1: Oh, this looks like the right tool! Thanks for letting us know! @username_2 @username_0, will you please try to run the tool against your lorawan-server installations?
Status: Issue closed
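For a quick load probe without any framework, it is enough to replay Semtech packet-forwarder datagrams from many fake gateways. A minimal sketch of building a PUSH_DATA frame; it assumes the gateway protocol's usual layout (version byte, 2-byte token, identifier 0x00, 8-byte gateway EUI, JSON payload) and a hypothetical server address:

```python
import json
import os
import socket
import struct

PROTOCOL_VERSION = 2
PUSH_DATA = 0x00

def build_push_data(gateway_eui, rxpk):
    """Assemble one PUSH_DATA datagram as packet_forwarder would send it."""
    token = os.urandom(2)
    header = struct.pack("!B2sB8s", PROTOCOL_VERSION, token, PUSH_DATA, gateway_eui)
    return header + json.dumps({"rxpk": [rxpk]}).encode()

rxpk = {"freq": 868.1, "datr": "SF7BW125", "codr": "4/5",
        "rssi": -35, "size": 4, "data": "QAEBAQE="}
packet = build_push_data(bytes.fromhex("aa555a0000000001"), rxpk)

# To fire it at a server, send over UDP (address is hypothetical):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(packet, ("lorawan-server.example", 1680))
print(len(packet), packet[0], packet[3])
```

Looping this over thousands of random EUIs and payloads per second, from a second machine, gives a crude but honest approximation of gateway load; it does not exercise PULL_DATA keepalives or downlink ACKs.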
MicrosoftDocs/azure-docs
303519441
Title: log.LogInformation Question: username_0: where does log.LogInformation log to if you are running locally, and does it "automatically" log to AI when you are not running locally? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 13672b67-22ea-3dda-8b6b-3332c24e6030 * Version Independent ID: fdc60aeb-9971-1d9f-a6ba-b3ad818cf0d5 * Content: [Monitor Azure Functions | Microsoft Docs](https://docs.microsoft.com/en-us/azure/azure-functions/functions-monitoring#write-logs-in-c-functions) * Content Source: [articles/azure-functions/functions-monitoring.md](https://github.com/Microsoft/azure-docs/blob/master/articles/azure-functions/functions-monitoring.md) * Service: **functions** * GitHub Login: @tdykstra * Microsoft Alias: **tdykstra** Status: Issue closed Answers: username_0: i actually read the doco properly ;)
PeterJCLaw/test
1115498339
Title: Prepare an incident response plan Question: username_0: We should ensure that our incident response plans are current & applicable to the selected venue; we should update them as needed. ### Original [comp/safety/incident-plan](https://github.com/srobo/recurring-tasks/blob/master/comp/safety/incident-plan.yaml) ### Dependencies * #647 Find and book venue for SR0002 * #702 Have a means to record an incident * #703 Update and publish risk assessments for the competition
confluentinc/confluent-kafka-go
299331053
Title: Using proper time durations
Question: username_0: Description
===========
At the moment all of your duration parameters are of type `int` (e.g. `func (p *Producer) Flush(timeoutMs int) int`). I guess this has been done as an effort to make the Go library as close as possible to the underlying library. Unfortunately, this is not a very Go-friendly way of using time. In every single other project, Go developers would use `time.Duration`, so the above example would become `func (p *Producer) Flush(timeout time.Duration) int`. This would allow us to invoke it like this:
```
p.Flush(15 * time.Second)
```
instead of the way it is in your example in the `README.md`:
```
p.Flush(15 * 1000)
```
The underlying library would still receive milliseconds, but this wouldn't affect Go developers. As a Go developer I would recommend using `time.Duration` for better readability and sticking to the Go conventions. If you agree, I'd be happy to send a PR with this change.
Answers:
username_1: You are absolutely right, we should not have used int timeouts. There's been some discussion on whether we should move to `context`s, which provide both timeouts and cancellations, but it is not currently clear how to make cancellations work with the underlying C library - there's definitely room for some research in this area.
As for your suggestion, I suspect that would be a breaking change for existing applications, so it would need to be timed with a future 1.0.0 release.
username_0: I'm willing to improve the Go client and would be happy to discuss some ideas with you and contribute to the project. Indeed, using contexts would be ideal.
username_0: When you create a 1.0.0 branch, I'd be happy to contribute to it, even if it's not getting merged any time soon. Fixing the version of the Go client to the version of the C library makes it very hard to introduce changes like this.
username_1: We very much welcome discussions and contributions for improving the client :+1:
The version sync between Go and librdkafka is not strictly fixed; librdkafka is for example at 0.11.3 while Go is still at 0.11.0, and it is possible for the reverse to happen as well. But we aim to keep them somewhat in sync when possible. The upcoming 0.11.4 release of librdkafka will also be joined with 0.11.4 releases of the Go, Python and .NET clients.
As for API breakage, we try our hardest not to break the API unless required, so as not to mess up existing users. However, we do plan to release 1.0.0 in Q2 and that would allow us to break the API.
I think the best approach for this enhancement is to put the effort into making it based on context, since that is the end goal. librdkafka currently does not support cancellation of blocking calls, but there is demand for such an API from other venues as well (such as Ctrl-C in Python), so we'll add an API to librdkafka that lets you do this. It would be useful to have the Go context requirements when we do this.
username_0: Well, adding context support can be a non-breaking change in the Go world. Many projects just add the same method(s) with a `Context` suffix. For example, initially there was only a `Query` method in the `database/sql` package:
```Go
func (db *DB) Query(query string, args ...interface{}) (*Rows, error)
```
and later `QueryContext` was added:
```Go
func (db *DB) QueryContext(ctx context.Context, query string, args ...interface{}) (*Rows, error)
```
Since you already have existing consumers to support, I suggest that you introduce `*Context(ctx context.Context, ...)` methods where needed. This would be a non-breaking change (in Go).
username_1: That looks good. Would you like to look into contextualizing Poll(), figuring out what goes into the Go code and what is needed from the underlying C code?
username_0: I'll take a look soon and will get back to you...
username_0: I pushed #149 as an initial prototype.
I'd need some requirements from you, before I can continue with other things. username_1: We will not change the current APIs as that would break backwards compatibility and that's more hassle than value for the existing user base. What we can do is add Context-versions of the most popular APIs, but it is not something we'll tackle during 2021.
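The shape of the fix discussed above is independent of Go: keep the integer-millisecond core that the C library expects and layer a typed-duration API on top, added alongside the old method rather than replacing it. A throwaway Python sketch of that pattern (the class and method names are hypothetical, not the actual confluent-kafka API):

```python
from datetime import timedelta

class LegacyProducer:
    """Stand-in for the existing API that takes raw milliseconds."""
    def flush(self, timeout_ms: int) -> int:
        # ...would drain the delivery queue for up to timeout_ms...
        return 0  # number of messages still in the queue

class Producer(LegacyProducer):
    def flush_for(self, timeout: timedelta) -> int:
        """Typed wrapper: callers pass a duration, the core still sees ms."""
        return self.flush(int(timeout.total_seconds() * 1000))

p = Producer()
p.flush_for(timedelta(seconds=15))   # reads like p.Flush(15 * time.Second)
```

Because `flush` keeps its signature, existing callers are untouched, the same non-breaking property the `QueryContext` precedent relies on.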
scala/scala-parser-combinators
917309685
Title: .gitignore missing exclusions for recent sbt and for vscode/metals/bloop. Question: username_0: The current .gitignore file is missing exclusions for recent sbt: ``` /.bsp/ ``` and for vscode with metals and bloop: ``` .bloop/ /.metals/ /.vscode/ /project/**/metals.sbt ``` Answers: username_1: fixed in https://github.com/scala/scala-parser-combinators/pull/404. Status: Issue closed
urbit/arvo
301863858
Title: web talk filtering hangs Question: username_0: When I try to filter my web talk so it only displays one channel, it spins forever. I can click "clear" and normal operation resumes. This is the case even for channels with recent messages, so I don't think it's spinning as a result of being blank. I did see this error in the console, but couldn't find its line number for some reason: `Empty string passed to getElementById().` Answers: username_1: This seems related to urbit/talk#48. username_2: Is web talk still a thing? I'm going to assume it has been at effectively (if not entirely) deprecated by Landscape and close this one, but correct me if it's still a going concern somewhere. Status: Issue closed
Shoes3/exe-shoes
322182417
Title: Merging into Shoes Question: username_0: Long time, no action. I'm merging this code into Shoes 3.3.7. [Documentation](https://github.com/shoes/shoes3/wiki/Merge-Packaging-on-Windows) Answers: username_1: Great to hear! Let me know if you need anything from me.
Azure/arm-template-whatif
795547935
Title: Front door properties - too much noise Question: username_0: Resource type "apiVersion": "2020-05-01", "type": "Microsoft.Network/frontDoors", Client PowerShell ARM template: `"properties": { "copy": [ { "name": "backendPools", "count": "[length(parameters('backendPools').backendPools)]", "input": { "name": "[parameters('backendPools').backendPools[copyIndex('backendPools')].name]", "properties": { "backends": "[parameters('backendPools').backendPools[copyIndex('backendPools')].backends]", "loadBalancingSettings": { "id": "[resourceId('Microsoft.Network/frontDoors/loadBalancingSettings', parameters('frontDoorName'), parameters('backendPools').backendPools[copyIndex('backendPools')].loadBalancingSettings)]" }, "healthProbeSettings": { "id": "[resourceId('Microsoft.Network/frontDoors/healthProbeSettings', parameters('frontDoorName'), parameters('backendPools').backendPools[copyIndex('backendPools')].healthProbeSettings)]" } } } }, { "name": "routingRules", "count": "[length(parameters('routingRules').routingRules)]", "input": { "name": "[parameters('routingRules').routingRules[copyIndex('routingRules')].name]", "properties": { "frontendEndpoints": "[parameters('routingRules').routingRules[copyIndex('routingRules')].frontendEndpoint]", "acceptedProtocols": "[parameters('routingRules').routingRules[copyIndex('routingRules')].acceptedProtocols]", "patternsToMatch": "[parameters('routingRules').routingRules[copyIndex('routingRules')].patternsToMatch]", "routeConfiguration": "[parameters('routingRules').routingRules[copyIndex('routingRules')].routeConfiguration]", "enabledState": "Enabled" } } }, { "name": "frontendEndpoints", "count": "[length(parameters('frontendEndpoints').frontendEndpoints)]", "input": { "name": "[parameters('frontendEndpoints').frontendEndpoints[copyIndex('frontendEndpoints')].name]", "properties": { "hostName": "[parameters('frontendEndpoints').frontendEndpoints[copyIndex('frontendEndpoints')].hostName]", "sessionAffinityEnabledState": 
"Disabled", "sessionAffinityTtlSeconds": 0, "webApplicationFirewallPolicyLink": { "id": "[parameters('frontendEndpoints').frontendEndpoints[copyIndex('frontendEndpoints')].webApplicationFirewallPolicyLinkId]" } } } }, { "name": "healthProbeSettings", "count": "[length(parameters('healthProbeSettings').healthProbeSettings)]", [Truncated] "name": "[parameters('healthProbeSettings').healthProbeSettings[copyIndex('healthProbeSettings')].name]", "properties": { "path": "[parameters('healthProbeSettings').healthProbeSettings[copyIndex('healthProbeSettings')].path]", "protocol": "Https", "intervalInSeconds": "[parameters('healthProbeSettings').healthProbeSettings[copyIndex('healthProbeSettings')].intervalInSeconds]", "healthProbeMethod": "[parameters('healthProbeSettings').healthProbeSettings[copyIndex('healthProbeSettings')].healthProbeMethod]" } } } ], "loadBalancingSettings": "[parameters('loadbalancingSettings')]", "backendPoolsSettings": { "enforceCertificateNameCheck": "Enabled", "sendRecvTimeoutSeconds": 240 }, "enabledState": "Enabled" } }` Most all the resources are returned as noise with deployment of the same template Answers: username_1: Any chance you can send us a screenshot/raw text of the noise you are seeing? This will help us quickly identify if the noise is from defaultValues, read or writeOnly properties, etc. username_2: @username_1 , I'm also getting a lot of noise with this - can I email it to you or something? LinkedIn might not want it public :) username_2: If I get time, I'll try to make reproducible example with my own domains :)
tmolitor-stud-tu/mod_push_appserver
394897677
Title: Push handler for type 'fcm' not executed successfully: handler not found Question: username_0: I'm testing mod_push_appserver with Conversations and I'm getting this error in prosody.log. Conversations is patched. I'm not sure if this is a bug or if I misconfigured something. ``` Dec 30 19:21:17 push.example.com:push_appserver info Firing event 'incoming-push-to-fcm' (node = '123a567b901c34d6', secret = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx') Dec 30 19:21:17 push.example.com:push_appserver error Push handler for type 'fcm' not executed successfully: handler not found Dec 30 19:21:17 push.example.com:push_appserver warn Rate limit for node '123a567b901c34d6' reached, ignoring push request (and returning 'OK') Dec 30 19:21:17 push.example.com:push_appserver warn Rate limit for node '123a567b901c34d6' reached, ignoring push request (and returning 'OK') Dec 30 19:21:17 push.example.com:push_appserver warn Rate limit for node '123a567b901c34d6' reached, ignoring push request (and returning 'OK') ... ``` Answers: username_1: You have to load and configure the fcm submodule (push_appserver_fcm), too. username_0: I had configured it like this ``` push_appserver_fcm_key = "<key>" push_appserver_debugging = true Component "push.example.com" "push_appserver" ``` and changed it to that ``` Component "push.example.com" "push_appserver" push_appserver_fcm_key = "<key>" push_appserver_debugging = true ``` Is this the way to do it? It seems to work now. I'm running the p2 server as a separate component until all clients have updated their app. username_1: Well, it's strange if it works now, because you still didn't load the fcm submodule. Example: ``` Component "push.example.com" "push_appserver" push_appserver_fcm_key = "<key>" push_appserver_debugging = true modules_enabled = { "push_appserver_fcm" } ``` Don't forget to disable debugging if everything works as expected!! 
username_0: I did load the push_appserver and push_appserver_fcm modules in the global config. If push_appserver_fcm has to be enabled in the Component config, then we should put it in the README. I haven't seen this kind of configuration before and there is no mention of it in the obvious places at prosody.im.
The "handler not found" error is fixed now, but I still get "Rate limit" warnings if the client is not reachable. If I turn off the wifi at device 1, then send a message from device 2 to device 1, then after some time these warnings appear in the log until device 1 is online again. Is this normal?
username_1: Loading them globally is ok, I just didn't know you did it that way. To answer your question you should look at the debug log of the xmpp server device 1 uses (ideally a prosody server using mod_cloud_notify). You can send me the debug log for device 1 if you want me to check if everything is all right. Ideally accompanied by the debug log of the appserver showing the rate limit warnings.
username_0: The rate limit warnings are gone; I'm not sure why, or how to reproduce it. Unfortunately it seems mod_cloud_notify does not work at all on my server. I don't see any push messages delivered to the client. I tested it with Conversations and adb logcat. Using a conversations.im account, push messages are arriving; with an account on my server there are no push messages that arrive. According to the prosody.log, push notifications are enabled: `info Push notifications enabled for <EMAIL>/Conversations.s5cj (p2.siacs.eu<PBzsh37Ds)`
username_1: Activate prosody's debug logs on your server. This will make mod_cloud_notify log a lot of useful information. If you paste the cloud_notify portion of this debug log here, I'll try to help :)
username_1: Are you using the newest version of mod_cloud_notify?
Status: Issue closed username_1: I'll close this now because mod_cloud_notify debugging is kind of offtopic here, feel free to contact me in private for assistance here.
bfirsh/funker-go
196852004
Title: funker.Handle() does not respond if it takes longer than (about) 60 seconds
Question: username_0: `funker.Handle()` does not respond if it takes longer than (about) 60 seconds. `funker.Handle()` itself seems to return no error. However, looking at the packets, the handler does not seem to be TCP-PSHing the response body.
## How to Reproduce
Just a single Swarm node, just a single handler service replica, and just a single call is enough to reproduce this issue. The reproduction code is available at https://github.com/username_0/demo-funker-issue/tree/v20161221.0 . The handler code just does:
```go
type data struct {
	X    string    `json:"x"`
	Time time.Time `json:"time"`
}

func callee(d time.Duration) error {
	return funker.Handle(func(funkArgs *data) data {
		time.Sleep(d)
		ret := data{X: funkArgs.X, Time: time.Now()}
		return ret
	})
}
```
TravisCI log: https://travis-ci.org/username_0/demo-funker-issue/builds/185663029
You can see `test.sh 55s` (which injects a 55 s sleep into the handler) passes, but `test.sh 65s` is hanging.
In this automated test, I'm using `funker.Call()` as a caller, but I confirmed the issue is even reproducible with just `echo '{"X":"hello"}' | nc demo-callee 9999`. So the issue is definitely in `funker.Handle()`, not in `funker.Call()`.
Status: Issue closed
Answers:
username_0: The issue seems not specific to funker, so I opened another issue here: https://github.com/docker/docker/issues/29655 Sorry for confusing you :sweat_smile:
username_0: opened workaround https://github.com/bfirsh/funker-go/pull/6
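Since the symptom points at an intermediary silently dropping an idle TCP connection (the linked moby issue), a common mitigation is enabling TCP keepalives so the kernel refreshes the path while the handler sleeps. A hedged sketch of that knob in Python; this illustrates the general workaround, not necessarily what funker-go's workaround PR does:

```python
import socket

def enable_keepalive(sock, idle=30, interval=10, count=3):
    """Ask the kernel to probe an otherwise-idle TCP connection.

    The TCP_KEEPIDLE/KEEPINTVL/KEEPCNT knobs are platform-specific
    (Linux names used here), hence the hasattr guards.
    """
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    if hasattr(socket, "TCP_KEEPIDLE"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    if hasattr(socket, "TCP_KEEPINTVL"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    if hasattr(socket, "TCP_KEEPCNT"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
enable_keepalive(sock)
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))  # nonzero once set
sock.close()
```

With probes flowing every few tens of seconds, a connection-tracking middlebox with a roughly 60-second idle timeout never sees the handler's connection as idle.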
sveltejs/svelte
311227824
Title: Refactor — eliminate 'hidden classes' Question: username_0: [This Rollup issue](https://github.com/rollup/rollup/issues/2050) got me thinking. #992 moved away from the concept of 'visitors' to the current approach, whereby AST nodes are converted into classes that implement common methods like `build` alongside more specialised ones. It works, but: * it means we have to clone the AST, so that it can be included in the `svelte.compile` output * there are performance downsides to changing prototypes * we're locked in to the structure of the AST generated by `svelte.parse`, which isn't *necessarily* the best structure for later phases (e.g. bindings and event handlers are lumped in with attributes). I don't think there's any guarantee that the parser *could* create the optimal structure So what I propose, in essence, is that instead of augmenting the existing AST nodes we create entirely new objects. Instead of ``` Object.setPrototypeOf(node, EachBlock.prototype); ``` it would be ``` new EachBlock(node); ``` and the `EachBlock` constructor would be responsible for instantiating its children.<issue_closed> Status: Issue closed
cs340tabyu/cs340Winter2018
309584898
Title: Starting Game with incorrect destination cards Question: username_0: Received 0 destination cards at the beginning of the game (selected 2 of them, received none of them). Another person selected 3 and received 2. (Duluth - Houston failed twice on game start) Answers: username_0: Addonexus had drawn three, thomas drew 3, artisan drew 2. ![image](https://user-images.githubusercontent.com/35548890/38065116-b6b82374-32be-11e8-8a70-5c3ffbda91a2.png) username_1: The problem occurs when we select some destination cards as one player, then select and accept as another player, then accept as the first player. username_1: Fixed Status: Issue closed
JeffreySu/WeiXinMPSDK
338147304
Title: When will the XXE security vulnerability be fixed? Question: username_0: Regarding the XXE security vulnerability disclosed abroad, the WeChat Pay team's statement is here: https://pay.weixin.qq.com/wiki/doc/api/jsapi.php?chapter=23_5 Looking at it, the current SDK also uses XmlDocument and has not been fixed; I suggest fixing it. The fix:

```csharp
var xmlDoc = new XmlDocument();
xmlDoc.XmlResolver = null; // add this line
```

Answers: username_1: Can you reproduce the problem? username_0: Just search Baidu or Google for "XXE attack" and you'll understand what the problem is. username_1: Prior to .NET Framework version 4.5.2, System.Xml.XmlDocument is unsafe by default. The XmlDocument object has an XmlResolver object within it that needs to be set to null in versions prior to 4.5.2. In versions 4.5.2 and up, this XmlResolver is set to null by default. Status: Issue closed username_2: Released now, thanks @username_0 @username_1
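The fix above nulls the XmlResolver so external entities are never fetched. The same class of defense can be sketched in another language for illustration (a hypothetical helper, not part of this SDK): reject any untrusted XML that declares a DTD at all, since an XXE payload needs a DTD to define its external entity in the first place.

```python
import re
import xml.etree.ElementTree as ET

def parse_untrusted_xml(text):
    """Parse XML from an untrusted source, refusing DTDs outright.

    XXE payloads must declare their external entity in a DTD, so
    rejecting any <!DOCTYPE ...> up front blocks the attack class,
    analogous to setting XmlResolver = null in .NET.
    """
    if re.search(r"<!DOCTYPE", text, re.IGNORECASE):
        raise ValueError("DTD declarations are not allowed in untrusted XML")
    return ET.fromstring(text)

print(parse_untrusted_xml("<resp><code>OK</code></resp>").tag)  # resp
```

A payload such as `<!DOCTYPE r [<!ENTITY x SYSTEM "file:///etc/passwd">]>` would be rejected before any parsing happens.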
cms-sw/cmssw
259594226
Title: PathStatus modules show up incorrectly in StallMonitor Question: username_0: The internal modules which generated the PathStatus show up with incorrect stall amounts in the StallMonitor. This is probably because those modules do not call 'prefetch' signals correctly. Answers: username_0: assign core username_0: fixed in #20678 Status: Issue closed username_0: +1
kemi242/fubar
120291989
Title: Complicated: multiple Android versions Question: username_0: How do we make Fubar run on multiple Android versions? Will the game be aware of this, and of the screen size? What is the maximum size we need to fit the frames to? If there is one, I'll size the images and drawings to it, and then they only need to be scaled down for the older Android versions with smaller screens. Assuming that's how it works, of course. Will there be compatibility problems beyond screen size? Answers: username_1: This is another reason Android Studio is better: there you can pick what to preview the screen for, i.e. which phone, or which Android version plus screen resolution. It also has automatic scaling, and you can attach several different images to one object. See here: http://developer.android.com/training/basics/supporting-devices/screens.html I think we should discuss this in detail. Decide what we're developing for, and calculate and test the different sizes. username_0: What if we designed for the Android phone I have (the G2)? My question is: if I make the images as vector-based PNGs, can a larger screen stretch them without quality loss? If so, I can calculate the sizes for the G2, and then only the phone's width-to-height ratio will change the layout. username_1: I looked into this topic a bit more. There are two important things on Android: 1. Density (pixel density): by default Android adjusts it automatically for the given screen. But you can also provide multiple resolutions, e.g. 3 PNGs for low, medium, and high density. I don't think it's worth dealing with this for now; we can leave it to Android. Let's pick a specific dpi to use for every PNG. 2. Resolution: there are really 3 here: 360x640, 384x640, and 400x640, plus their multiples. E.g. the 1080x1920 G2 is 3*360 x 3*640. But in the first round this can also be automatic, so let's take your phone as the baseline (360x640). The minimum resolution worth supporting: 720x1280 (e.g. Galaxy Nexus).
lervag/vimtex
114007847
Title: autocompletion for glossaryentries Question: username_0: a completion for \gls{myacronym} à la complete-cites or complete-labels would be pretty neat. Answers: username_1: I am not acquainted with the glossary stuff. Could you please provide a more thorough explanation of the feature you want? It would be very helpful to have a minimal example tex file as well. username_0: Ok sorry, here it comes. Please remove the txt-endings, github won't allow me to upload them with those endings. [glossary_mwe.tex.txt](https://github.com/username_1/vimtex/files/49769/glossary_mwe.tex.txt) [.latexmkrc.txt](https://github.com/username_1/vimtex/files/49770/default.latexmkrc.txt) username_1: Ok, so, some initial thoughts: 0. A warning: I might not finish this for some time, as it seems it requires some clever parsing to get the completion right. 1. It seems that a nice way to handle this is the following: Parse the complete `tex` tree (i.e. all the project tex files) for any `\newglossaryentry` commands, add the first argument of each such command as completion candidates. This may be further improved by parsing the second argument for more info (e.g. description). 2. First, it is somewhat unclear: Do you use `\syc{...}` as well for the symbolslist in your examples? If that's the case, then I should also parse for the `\newglossary` commands to find patterns where the completion should activate. Do you always need the `\newglossary`? Does this seem right? Am I missing something here? Do you agree that all the necessary info can be parsed directly from the tex files? username_1: In the [LaTeX wiki](https://en.wikibooks.org/wiki/LaTeX/Glossary) it seems the `\newglossary` is never mentioned. Thus this can be viewed as an advanced command, I guess? That is, I propose a simple approach: 1. Parse tex files in project for all `\newglossaryentry` commands (this provides completion candidates with descriptions and context). 2.
Use a vimtex option to define where the glossary completions should occur, by default it would be something like `g:vimtex_complete_glossaries = '\glc'`. username_0: 1.1: take your time ;) 1.2: I think one of the three produced files (see below) contains all the information you need 1.3: I never used \syc{...}. \gls{...} works for all the glossaries. \newglossary is not necessary if you need only the default glossary 2.2: I think \gls{...} is the only command that needs completion We have set up the glossaries by using this beginners' guide: ftp://ftp.dante.de/tex-archive/macros/latex/contrib/glossaries/glossariesbegin.pdf We use option 2 (see p. 10), with makeindex and two glossaries (see p. 18). I think makeindex produces three files for each glossary. We have arbitrarily chosen the file-endings (with \newglossary[gla]{gloss}{glb}{glc}) .gla, .glb, .glc respectively .sya, .syb, .syc. The default (with starred \newglossary* ) is .glg .gls .glo. The whole documentation is here: ftp://ftp.rrzn.uni-hannover.de/pub/mirror/tex-archive/macros/latex/contrib/glossaries/glossaries-user.pdf In Chapter 12, p. 147, they describe the meaning of the three files; maybe there is something useful for you there. username_1: Thanks! I'll mark some tasks, then.

- [ ] Activate completion upon `\glc{`
- [ ] Parse `tex` files for `\newglossaryentry`s
- [ ] Add completion candidates from first argument
- [ ] Add more information from options in second argument

When I finish the above, we can discuss whether or not it needs refinements and updates. username_1: I've added a simple version now. It only parses the first argument. Parsing the rest is trickier. I might do it later, though. username_0: 1. The glossary command is \gls not \glc 2. It seems to work only if \newglossaryentry is in the same file. username_1: Fixed the command name now, sorry. It should work when `\newglossaryentry` is in a separate file.
It works for me with the current completion test files (see `test/feature/completions` in the vimtex repo).
tensorflow/models
618907717
Title: tensorflow.python.framework.errors_impl.UnimplementedError: /content/drive/My; Operation not supported Question: username_0: Code I used to export the specified model and inference graph:

```
!python /content/export_inference_graph.py \
    --input_type=image_tensor \
    --pipeline_config_path=/content/ssd_mobilenet_v2_coco.config \
    --output_directory={output_directory} \
    --trained_checkpoint_prefix={last_model_path}
```

Error I faced:

```
Traceback (most recent call last):
  File "/content/export_inference_graph.py", line 162, in <module>
    tf.app.run()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/content/export_inference_graph.py", line 158, in main
    write_inference_graph=FLAGS.write_inference_graph)
  File "/content/drive/My Drive/object_detection/models/research/object_detection/exporter.py", line 510, in export_inference_graph
    write_inference_graph=write_inference_graph)
  File "/content/drive/My Drive/object_detection/models/research/object_detection/exporter.py", line 402, in _export_inference_graph
    tf.gfile.MakeDirs(output_directory)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/lib/io/file_io.py", line 438, in recursive_create_dir
    recursive_create_dir_v2(dirname)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/lib/io/file_io.py", line 453, in recursive_create_dir_v2
    pywrap_tensorflow.RecursivelyCreateDir(compat.as_bytes(path))
tensorflow.python.framework.errors_impl.UnimplementedError: /content/drive/My; Operation not supported
```

After I trained the model and ran TensorBoard, I needed to export the graph. However, I faced this error. Can someone tell me what is wrong in my code? Thanks in advance; I am new to this field.
jdc20181/SpeedTest
213505579
Title: [Security - All Versions, Excluding App] Site can be accessed without HTTPS Question: username_0: I wrote an [article](https://github.com/username_0/SpeedTest/wiki/HTTPS-Security-Information) about this, which should help explain this issue. I didn't touch on the actual issue there, but it's self-explanatory: you can still access the site via HTTP. I switched to HTTPS for security. **Notice** This topic is "Closed" but open for informational purposes; I have locked it for that reason. If you have issues accessing the site, that is a separate issue. Maintenance is done 2-8 times a month or as needed. Answers: username_0: **Closed** I have posted about the issue, so there is no need to keep it open now. Questions about this matter will be closed, as I am not dealing with it. Status: Issue closed
openid/AppAuth-iOS
583279045
Title: support client_secret_post Question: username_0: Support client_secret_post on the access token request in the Authorization Code flow, using OIDAuthorizationRequest and subsequently OIDTokenRequest. Currently, there is support for client_secret_basic. client_id can be sent as a post parameter if client_secret is nil, based on this: https://github.com/openid/AppAuth-iOS/blob/65ef95c6b38c7889fdda7a22f242fa0156fd43b2/Source/OIDTokenRequest.m#L277-L293 Side note: we use AppAuth on Android, which has the client authentication options here: https://github.com/openid/AppAuth-Android#utilizing-client-secrets-dangerous Warnings about a client_secret in the app are understood (less than ideal; no static, only dynamic). Answers: username_1: This is unnecessary as RFC 6749 mandates that servers support Basic Auth. ``` The authorization server MUST support the HTTP Basic authentication scheme for authenticating clients that were issued a client password. ``` AppAuth uses Basic Auth when the client has a secret, as allowed by RFC 6749. Status: Issue closed username_0: thanks @username_1 !
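To make the distinction concrete, here is a small sketch (hypothetical helper, not AppAuth code) of a token request using HTTP Basic client authentication, i.e. the client_secret_basic scheme that RFC 6749 requires servers to support, as opposed to putting the secret in the form body (client_secret_post):

```python
import base64
from urllib.parse import urlencode

def token_request(client_id, client_secret, code, redirect_uri):
    """Build an authorization-code token request with Basic client auth.

    The credentials go in the Authorization header, not the form body.
    (Strictly, RFC 6749 section 2.3.1 form-urlencodes each credential
    before base64; that step is a no-op for simple ASCII ids like these.)
    """
    creds = f"{client_id}:{client_secret}".encode("utf-8")
    headers = {
        "Authorization": "Basic " + base64.b64encode(creds).decode("ascii"),
        "Content-Type": "application/x-www-form-urlencoded",
    }
    body = urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
    })
    return headers, body

print(token_request("id", "secret", "abc", "app://cb")[0]["Authorization"])
# Basic aWQ6c2VjcmV0
```

With client_secret_post, the `client_id` and `client_secret` would instead be two extra fields in `body` and the Authorization header would be absent.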
stephencelis/ghi
58716409
Title: What do the -p|--paginate|--no-pager options do? (add to README) Question: username_0: Hi, I couldn't find any doc on what these options do. The long options seem to be broken (I installed from the current `master`): Invalid option: --paginate Invalid option: --no-pager The `--no-truncate` option is also not referenced in the readme. @username_1, can you give me a short explanation of the first two? I'll send a Pull Request if needed. Answers: username_1: It's a top-level option that mirrors the same option on the `git` executable. _E.g._, the following won't feed items into `less`. ``` sh $ ghi --no-pager list -- rails/rails ``` Meanwhile, `--paginate` will. It's already documented if you run `man ghi`. Status: Issue closed username_0: Thanks @username_1 I guess I was forgetting the `list` command (since it's the default, ghi doesn't complain when there are no options, but `--paginate` makes it fail). I can't use the manpage (because it's not installed; I used the install procedure described in your README :/). The only available help for me is `ghi --help`... you may want to update the help text.
matkoch/resharper-cognitivecomplexity
625434120
Title: Complexity does not show for expression-bodied methods Question: username_0: Although it may make some kind of sense, the complexity does not seem to be calculated for expression-bodied methods. It does not show in any event. Example:

```csharp
public async Task<List<ServerTypeDto>> ListServerTypes(CancellationToken cancellationToken)
{
    return await _unitOfWorkFactory.Create()
        .GetRepository<ServerType>()
        .Query
        .Select(item => item.ToDto())
        .ToListAsync(cancellationToken);
}
```

This shows a complexity of 0. Which, by the way, tells me that LINQ method chains may not be evaluated... But if I refactor this method to an expression body, like this:

```csharp
public async Task<List<ServerTypeDto>> ListServerTypes(CancellationToken cancellationToken) =>
    await _unitOfWorkFactory.Create()
        .GetRepository<ServerType>()
        .Query
        .Select(item => item.ToDto())
        .ToListAsync(cancellationToken);
```

No complexity is shown at all. Answers: username_1: As the documentation says, "complexity" is calculated based on the control flow operators. There are no operators here, so no "complexity". username_0: Ok. But how about any control flow inside a LINQ method? Will that be counted? username_2: @username_0 I don't think it makes sense to show them for statement lambdas because those don't have any of the other CodeVision items shown, like last author, usages, or anything else. Status: Issue closed
ipfs/ipfs-docs
623490682
Title: [FEATURE REQUEST] Offline Search Question: username_0: The new Algolia search is awesome and really snappy. However, one of the key benefits of serving a website over IPFS is that it can work offline (I've actually used this to download our websites/tools for conferences when I don't trust the local network to work). It would be really awesome if we could find a way to do this offline. This issue is hardly critical, given that the new docs site already works quite well offline and search is more of a nice-to-have than a necessity. I'd just like it to be on the radar of things that would be nice to have. One option would be to support both: keep the online search and gracefully degrade to an offline (less featured) version.
CARTAvis/carta-backend-ICD-test
814068374
Title: Improve ICD test reliability: session Question: username_0: Maybe we should simply remove it. Answers: username_1: If we really need to test this, the echoing server should be the same as the server where the backend is running. Checking an external server does not help and introduces extra uncertainties. username_0: That is already getting done in the next sub-test: ✓ should connect to "ws://localhost:3002". (9ms) username_1: I mean setting up a wss echoing service locally on the same server that runs the backend tests. username_0: I know we used to do the ICD tests via wss:// across different systems, but later switched back to ws:// as it was easier to manage. If we really want to test wss, perhaps we could try connecting to the 'dev' branch running on our carta demo server, as that already uses wss. However, that would not make the tests portable if anyone else wanted to deploy the tests at their own institution. username_1: Alternatively, if we keep using an external service for wss checking, it would be good to retry a few times if the current trial fails. Status: Issue closed
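A local echo service of the kind suggested above can be spun up in a few lines. The sketch below is plain TCP rather than WebSocket (stdlib only, so no wss handshake), but it illustrates the pattern of hosting the echo endpoint on the same machine as the tests instead of relying on an external server:

```python
import socket
import socketserver
import threading

class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Echo every line back to the client verbatim.
        for line in self.rfile:
            self.wfile.write(line)

# Port 0 lets the OS pick any free local port, so tests never collide.
server = socketserver.TCPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

with socket.create_connection((host, port)) as conn:
    conn.sendall(b"ping\n")
    reply = conn.makefile("rb").readline()

server.shutdown()
print(reply)  # b'ping\n'
```

A real wss check would additionally need a WebSocket library and a TLS certificate, but the server would be started and torn down by the test suite in the same way.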
EXXETA/sonar-esql-plugin
771197502
Title: Maven build failed Question: username_0: I have forked the GitHub repository of the latest SonarQube plugin. But the Maven build failed, and I could not find out why. Is there any other way to get the "esql-plugin-.jar"? Answers: username_1: Hi @username_0, you can just download the jars right here on github: https://github.com/EXXETA/sonar-esql-plugin/tags Status: Issue closed
dotnet/runtime
962158509
Title: Math/MathF.Truncate isn't an intrinsic and results in inefficient codegen Question: username_0: ### Description Math/MathF.Truncate results in bad codegen that calls into native modf, which is pretty slow. Instead, it should be an intrinsic for vroundsd/vroundss like Ceiling and Floor are (and clang also does this for truncate). ### Configuration Sharplab Core CLR 5.0.721.25508 on amd64 ### Regression? No idea. ### Data [Sharplab for Math](https://sharplab.io/#v2:EYLgxg9gTgpgtADwGwBYA0AXEBDAzgWwB8ABAJgEYBYAKBuIGYACY8pRgVwDtdsAzGZqUYAVGLgw0A3jUazmTFmwAmEdsAA2A4VC5hsGGAAoVazY14BKGXOnU595gHZGAWX0ALAHTbd+o5YBua1kAX2DGcIZmVkYTDQEAYRgAS3VjVXjzKzsbcPtiZzcMLyTU5M4Ac0NA8LCc2UiFGLizADF1CGh00wFLcNsHOQLXD092zqhqiyD6xjqQoA=) [Sharplab for MathF](https://sharplab.io/#v2:EYLgxg9gTgpgtADwGwBYA0AXEBDAzgWwB8ABAJgEYBYAKBuIGYACY8pRgVwDtdsAzGZqUYAVGLgw0A3jUazmTFm14AbCNgwioXMOpgAKFWo28AlDLnTqc68wDsjALLqAFgDEAdMK2cdGfaYBuc1kAX2DGcIZmVkZDdUYAYRgAS2UDVXjTcMsbOWJ7Jww3dyTU5M4AcwMTIKs5MLrZSIUYuI1XVWh0o1izRsYc3LtHFw8OiC7A8IaQoA=) [Godbolt clang for double](https://godbolt.org/z/frY1vhhn9) [Godbolt clang for float](https://godbolt.org/z/5YE4oxnjT) Answers: username_1: This should be a relatively simple fix but is going to be for 7.0.0 at the earliest. username_2: @username_0 any interest in offering a PR? username_0: I've never worked with the JIT, so I'd prefer somebody else to do it, since I don't really know how, sadly. username_2: Fair enough. Still, if you have an interest I expect @username_1 could give pointers. 😄 Status: Issue closed
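For reference on the semantics involved: truncation rounds toward zero, which differs from floor for negative inputs; that is why Truncate needs its own rounding-mode selection on the hardware instruction rather than reusing Floor's. A quick Python illustration of the two behaviors:

```python
import math

# Truncation rounds toward zero; floor rounds toward negative infinity.
# The two agree for non-negative values and differ for negatives.
values = [2.7, -2.7, 0.5, -0.5]
truncated = [math.trunc(v) for v in values]
floored = [math.floor(v) for v in values]
print(truncated)  # [2, -2, 0, 0]
print(floored)    # [2, -3, 0, -1]
```

On x86, the ROUNDSD/ROUNDSS family selects the rounding mode via an immediate operand, so truncate is the same instruction as floor/ceiling with a different immediate, which is what makes the intrinsic fix straightforward.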
taikii/whiteplain
522111651
Title: can not use categories page Question: username_0: I added `categories=["type"]` in my front matter; it's in yaml and looks like:

```yaml
+++
author= "luosuu"
title= "title"
date= 2019-11-12T10:30:46+08:00
discription=""
categoires=["notes"]
weight=1
tags=["some tags"]
toc=true
+++
```

but on the categories page there is nothing. I don't know whether I should change something in config.toml or should add an html file somewhere. Thank you. Status: Issue closed Answers: username_0: i just did not correctly spell it
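For reference, the misspelling is presumably the `categoires` key (and likely `discription` too); Hugo only recognizes exact key names. A corrected front matter (the `+++` delimiters mean this is TOML, not YAML) would look like:

```toml
+++
author = "luosuu"
title = "title"
date = 2019-11-12T10:30:46+08:00
description = ""
categories = ["notes"]
weight = 1
tags = ["some tags"]
toc = true
+++
```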
RobotWebTools/rosbridge_suite
494076849
Title: Client unregisters while trying to publish a CompressedImage using ROSBridge Question: username_0: I am trying to publish a ROS sensor_msgs/CompressedImage using ROSBridge. When I put the JPEG image data (captured from the Unity3D platform) in the 'data' field of the ROS message and publish it, ROSBridge silently disconnects the client as below. There are no errors.

```
[INFO] [1568642603.085737]: Client connected. 1 clients total.
[INFO] [1568642604.647151]: Client disconnected. 0 clients total.
```

And when I listen to the topic, I see that I receive one ROS event before the client is disconnected. The text size of the event (captured after redirecting rostopic echo output) is ~1 MB. The compressed image is 640x480 resolution. Also, when I do not input any JPEG data, the client does not disconnect and the events are published. Is the size of the image data (~1 MB) an issue here? Is there a workaround or fix for this issue? ## Expected Behavior The expected behaviour is that the ROSBridge client stays connected and ROS events are received. ## Actual Behavior The client disconnects and ROSBridge communication is no longer possible. ## Steps to Reproduce the Problem 1. Capture the screenshot using the Unity3D platform and compress it using the JPEG algorithm. 2. Update the ROS event parameters (header) and publish the JPEG data. ## Specifications - ROS Version (`echo $ROS_DISTRO`): kinetic - OS Version (`grep DISTRIB_CODENAME /etc/lsb-release`): DISTRIB_CODENAME=xenial - Rosbridge Version (`roscat rosbridge_server package.xml | grep '<version>'`): <version>0.9.0</version> - Twisted Version (`python -c 'import twisted; print twisted.version'`): [twisted, version 16.0.0] Answers: username_1: That is a very old version of rosbridge, you should try at least 0.11.3, which should be available on kinetic.
username_0: Thanks for your response - I tried with 0.11.3 ROSBridge and the result is the same. Status: Issue closed username_0: I was able to solve the issue (though I do not know what was causing it). On the WebSocket client side, I was using the .NET websocket package in Unity (which is System.Net.WebSockets). I migrated to websocket-sharp, and the issue is not observed with 0.11.3 ROSBridge.
scala-exercises/exercises-monocle
818016374
Title: `sbt compile` fails on sbt.internal.inc.MappedVirtualFile (but works on sbt 1.3.x) Question: username_0: When I try to compile this project, it fails with:

```
[info] compiling 7 Scala sources to /home/ab/tmp2021/exercises-monocle/target/scala-2.13/classes ...
[error] java.lang.ClassCastException: class sbt.internal.inc.MappedVirtualFile cannot be cast to class java.io.File (sbt.internal.inc.MappedVirtualFile is in unnamed module of loader sbt.internal.MetaBuildLoader @587e5365; java.io.File is in module java.base of loader 'bootstrap')
[error] at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:285)
[error] at scala.collection.Iterator.foreach(Iterator.scala:943)
[error] at scala.collection.Iterator.foreach$(Iterator.scala:943)
[error] at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
[error] at scala.collection.IterableLike.foreach(IterableLike.scala:74)
[error] at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
[error] at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
[error] at scala.collection.TraversableLike.map(TraversableLike.scala:285)
[error] at scala.collection.TraversableLike.map$(TraversableLike.scala:278)
[error] at scala.collection.AbstractTraversable.map(Traversable.scala:108)
[error] at org.scalaexercises.plugin.sbtexercise.ExerciseCompilerPlugin$.$anonfun$generateExercisesTask$13(ExerciseCompilerPlugin.scala:214)
[error] at cats.syntax.EitherObjectOps$.catchNonFatal$extension(either.scala:338)
[error] at org.scalaexercises.plugin.sbtexercise.ExerciseCompilerPlugin$.invokeCompiler$1(ExerciseCompilerPlugin.scala:209)
[error] at org.scalaexercises.plugin.sbtexercise.ExerciseCompilerPlugin$.$anonfun$generateExercisesTask$33(ExerciseCompilerPlugin.scala:247)
[error] at cats.instances.ListInstances$$anon$1.$anonfun$traverse$2(list.scala:78)
[error] at cats.instances.ListInstances$$anon$1.loop$2(list.scala:68)
[error] at cats.instances.ListInstances$$anon$1.$anonfun$foldRight$2(list.scala:70)
[error] at cats.Eval$.advance(Eval.scala:271)
[error] at cats.Eval$.loop$1(Eval.scala:350)
[error] at cats.Eval$.cats$Eval$$evaluate(Eval.scala:368)
[error] at cats.Eval$Defer.value(Eval.scala:257)
[error] at cats.instances.ListInstances$$anon$1.traverse(list.scala:77)
[error] at cats.instances.ListInstances$$anon$1.traverse(list.scala:16)
[error] at cats.Traverse$Ops.traverse(Traverse.scala:19)
[error] at cats.Traverse$Ops.traverse$(Traverse.scala:19)
[error] at cats.Traverse$ToTraverseOps$$anon$2.traverse(Traverse.scala:19)
[error] at org.scalaexercises.plugin.sbtexercise.ExerciseCompilerPlugin$.$anonfun$generateExercisesTask$32(ExerciseCompilerPlugin.scala:247)
[error] at scala.util.Either.flatMap(Either.scala:341)
[error] at org.scalaexercises.plugin.sbtexercise.ExerciseCompilerPlugin$.$anonfun$generateExercisesTask$30(ExerciseCompilerPlugin.scala:246)
[error] at scala.util.Either.flatMap(Either.scala:341)
[error] at org.scalaexercises.plugin.sbtexercise.ExerciseCompilerPlugin$.$anonfun$generateExercisesTask$27(ExerciseCompilerPlugin.scala:243)
[error] at scala.util.Either.flatMap(Either.scala:341)
[error] at org.scalaexercises.plugin.sbtexercise.ExerciseCompilerPlugin$.$anonfun$generateExercisesTask$1(ExerciseCompilerPlugin.scala:240)
[error] at scala.Function1.$anonfun$compose$1(Function1.scala:49)
[error] at sbt.internal.util.$tilde$greater.$anonfun$$u2219$1(TypeFunctions.scala:62)
[error] at sbt.std.Transform$$anon$4.work(Transform.scala:68)
[error] at sbt.Execute.$anonfun$submit$2(Execute.scala:282)
[error] at sbt.internal.util.ErrorHandling$.wideConvert(ErrorHandling.scala:23)
[error] at sbt.Execute.work(Execute.scala:291)
[error] at sbt.Execute.$anonfun$submit$1(Execute.scala:282)
[error] at sbt.ConcurrentRestrictions$$anon$4.$anonfun$submitValid$1(ConcurrentRestrictions.scala:265)
[error] at sbt.CompletionService$$anon$2.call(CompletionService.scala:64)
[error] at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
[error] at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
[error] at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
[error] at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
[error] at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
[error] at java.base/java.lang.Thread.run(Thread.java:834)
[error] (Compile-generated-exercises / generateExercises) java.lang.ClassCastException: class sbt.internal.inc.MappedVirtualFile cannot be cast to class java.io.File (sbt.internal.inc.MappedVirtualFile is in unnamed module of loader sbt.internal.MetaBuildLoader @587e5365; java.io.File is in module java.base of loader 'bootstrap')
[error] Total time: 10 s, completed Feb 27, 2021, 9:47:18 PM
```

However, I can compile it with sbt 1.3.13.
pnp/pnpjs
441448923
Title: [2.0.0] sub module: SP-regional-settings Question: username_0: This issue tracks the review and preparation of a sub module for the 2.0.0 project. To finalize a sub-module the follow steps should be performed: - [ ] Code review for TODO (do), commented out code (remove), ensure interfaces are prefixed with an "I", or not using await - [ ] Ensure every property/method has at least one test - [ ] Ensure all of the interface method/properties are commented - [ ] Remove comments from class implementation files - [ ] Ensure the docs page is updated to mention each method/property with at least a minimal example - [ ] Search in closed issue by the label "area: sample" to see if any apply to the module you are review and add those as appropriate to the documentation - [ ] Review/compare the code in the 1.x branch to ensure any fixes, updates, or changes are in 2.0 Once complete submit one PR per module for final review. Please make the title match the issue title and reference the issue in the body of the PR. It will then be reviewed so please check back for any feedback or questions. Answers: username_1: I'll take this one as the example for the call. username_1: Hey folks, PR submitted and ready for review! Status: Issue closed
SitecorePowerShell/Console
759775439
Title: How to uninstall/remove PowerShell extensions from Sitecore? Question: username_0:

### Expected Behavior
_Please describe the expected behavior._

### Actual Behavior
_Please describe the actual behavior._

### Steps to Reproduce the Problem
_Please include the version number of SPE and Sitecore._

- [ ] Tested issue with clean install of Sitecore and the latest available version of SPE.
- [ ] Asked questions on the Sitecore Slack Chat channel.
- [ ] Reviewed questions and answers on the Sitecore Stack Exchange.

Answers: username_1: Can you provide a list of the security concerns? username_0: Hi Michael, We are in the process of moving our Sitecore to AWS, and based on the company security policies, Sitecore users are not allowed to have the ability to restart the app pool or any such access level. It is either removing the PowerShell extension from our Sitecore or removing admin rights. So we are trying to remove the PS. username_2: I believe these concerns are around having remote PowerShell access to a server. Are there limitations to the commands available to be executed from the console? I've noticed that the runas account is not configured as an administrator account, but if this is a full-blown PowerShell console, malicious or not, it has the ability to be pretty dangerous on a production server. username_1: The module does not have more access than what you give to it. If the service account running the website has access to change system settings then you're already off to a bad start. Since SPE is only installed for the standalone or content management role, the risk is limited to those servers. Ensure the service account has low privileges and you drastically reduce the problems it can cause. It's no more risky than writing malicious C# code. If someone can deploy their own DLL they could certainly do worse, and have unit tests to validate!
You are way better off having the module installed, leveraging the capabilities, than stressing out about the kind of problems a junior dev could cause in other ways. Read more about it in the book. username_0: Thank you for the information Michael. Will definitely try to follow the security guidelines. Right now we are very close to this migration to AWS and looking for every option to mitigate this issue by removing the Sitecore PowerShell module. The only way I can think of doing it is to unpublish the /sitecore/system/Modules/PowerShell item. Please suggest the right steps. username_0: @username_1 Please advise on how to remove the PS from Sitecore. Really appreciate any help. username_1: Removing follows the same process you would for any other module. You delete the configs, dlls, and any other files output by the package. Then you delete all of the items that were added by the package. username_0: Great! thank you Status: Issue closed
iovisor/bcc
386899083
Title: bcc_usdt_foreach and bcc_usdt_foreach_uprobe should allow for user data Question: username_0: `bcc_usdt_foreach` and `bcc_usdt_foreach_uprobe` do not currently accommodate passing user data through to the callback, which makes these interfaces awkward to bind from languages where closures cannot simply be flattened into bare function pointers. Answers: username_1: These two functions are supposed to be used in the bcc Python library, and they are not meant to be exposed to users. What is your use case for these two functions where the callbacks need more user data than just the probe itself? username_0: I was improving the `usdt` support in `bpftrace` by making the binary path optional, and adding support for listing of `usdt` probes, iovisor/bpftrace#280
ParisTypeScript/talks
427817318
Title: Migrating from Flow to TypeScript: the gentle way Question: username_0: # Talk proposal - Paris TypeScript ## Talk description * Title: Migrating from Flow to TypeScript: the gentle way * Content (briefly describe the content of the presentation): Migrating from one technology to another gives most developers cold sweats. So it was with some apprehension that we undertook it on one of our projects, a React application generator of a few thousand lines. It runs with the tools every web developer relies on: linter, tests, babel, webpack, etc., as well as several turnkey features such as JWT-based authentication. What reasons led us to change type-checking tools? How did we overcome the difficulties we ran into, such as keeping continuous integration working at all times? And what is the verdict on this migration? Small spoiler: we are really happy with the result! * Duration: - [ ] 10 minutes - [x] 20 minutes - [ ] 30 minutes - [ ] Other, please specify: * Level: - [x] Beginner - [x] Intermediate - [ ] Advanced ## About you * First and last name: <NAME> * Twitter: https://twitter.com/AlexandreBlo * Company: Theodo Answers: username_0: I'd welcome feedback :) Whether on the title, the content, etc. username_1: Hi Alexandre and thanks for your proposal! This is clearly a topic people will like, as discussions around Flow & TypeScript come up a lot at the meetup :) The content looks appealing, I can't wait to see it ^^ username_1: Hi Alexandre, would you be available on June 3rd to give your talk? We had to shuffle the schedule a bit to hold a special event on April 23rd, so the next regular meetup will be pushed back username_2: Hello, sorry, that was our mistake; we'll do it on June 4th... Still available? Also, can we move the hosting? Thanks and sorry again username_0: Hello!
Still OK for the hosting and my talk! :smiley: Status: Issue closed
HuangCongQing/Python
441548153
Title: How do I run a .sh script file on Windows? Question: username_0: ![image](https://user-images.githubusercontent.com/20675770/57350179-a12f3b80-718f-11e9-9b8b-c4e7867973fb.png) Answers: username_0: Install Cygwin: https://blog.csdn.net/LucyGill/article/details/60345706 Install Git: https://blog.csdn.net/weixin_42376686/article/details/82391410 username_0: [Introduction to using Cygwin - yanlaifan's blog - CSDN Blog](https://blog.csdn.net/yanlaifan/article/details/60878406)
baidu/amis
1059054611
Title: Can a dialog Action modify data in the parent data scope? Question: username_0: #### Scenario: I use a combo to edit an array of objects. Besides a few required fields, each item has a rich set of optional fields. Is it possible to show only the basic required fields in the current form, and edit the optional information in a dialog? Specifically: An object-array editor built with combo: ![image](https://user-images.githubusercontent.com/3793855/142713551-cdb13283-8a3c-4924-9be4-e3bdf54af076.png) I want to edit the optional fields through a dialog Action ![image](https://user-images.githubusercontent.com/3793855/142713592-64e81312-fc87-493f-8ed7-c9545907e4d0.png) #### The problem: Although the dialog can read data from the parent data scope, it cannot (by default) write changes back to the parent data scope. Is there some option that allows components inside the dialog to update data in the parent data scope? #### Current schema: Please paste the complete amis schema of your current solution...
```json
{
  "name": "items",
  "type": "combo",
  "items": [
    {
      "body": [
        {
          "name": "name",
          "type": "input-text",
          "label": "",
          "required": true,
          "placeholder": "计划项"
        },
        {
          "name": "deadlinePeriod",
          "type": "input-text",
          "label": "",
          "placeholder": "截止时间"
        },
        {
          "name": "executeDepartment",
          "type": "tree-select",
          "label": "",
          "options": [
            {
              "label": "选项A",
              "value": "A",
              "children": [
                {
                  "label": "选项C",
                  "value": "C"
                },
                {
                  "label": "选项D",
                  "value": "D"
                }
              ]
            },
            {
              "label": "选项B",
              "value": "B"
            }
          ],
          "required": true,
[Truncated]
          "actionType": "dialog"
        }
      ],
      "type": "group",
      "label": false
    },
  ],
  "label": false,
  "value": [],
  "messages": {},
  "multiple": true,
  "noBorder": false,
  "scaffold": {
  },
  "draggable": true,
  "multiLine": true,
  "joinValues": false,
  "draggableTip": "可通过拖动每行中的【交换】按钮进行顺序调整"
}
```
Answers: username_0: OK, I'll try it later. Status: Issue closed username_0: After testing: the dialog needs to contain a form, and then setting mergeData: true on the confirm button does the trick. Thanks for the reply.
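Following the fix described in this thread (a form inside the dialog, plus `mergeData: true` on the confirm button), a minimal illustrative fragment of the dialog action might look like the sketch below. The field name `requirement`, all labels, and the exact placement of `mergeData` are assumptions drawn only from the comment above, not verified against the amis documentation:

```json
{
  "type": "button",
  "label": "更多信息",
  "actionType": "dialog",
  "dialog": {
    "title": "选填信息",
    "body": {
      "type": "form",
      "body": [
        { "type": "input-text", "name": "requirement", "label": "具体要求" }
      ]
    },
    "actions": [
      { "type": "button", "actionType": "cancel", "label": "取消" },
      { "type": "button", "actionType": "confirm", "label": "确认", "mergeData": true }
    ]
  }
}
```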
SUNET/eduid-front
527967021
Title: DASHBOARD: Understanding the fetching functionality at startup of app Question: username_0: _reducers/DashboardConfig.js_ Are the fetching actions in the reducer related to the fetching context that disables buttons, or are they more crucial to the actual startup of the app? - Is ‘const fetchingActions/unFetchingActions’ related to the FetchingContext component, or to actually fetching what is needed to load the app? Answers: username_1: Both things are related to the fetching context. username_1: What should be refactored here? Status: Issue closed
skylot/jadx
358258604
Title: Can't fix incorrect switch cases order Question: username_0: I get these types of errors: `ERROR - Can't fix incorrect switch cases order, method: bex.a(org.xmlpull.v1.XmlPullParser, bfa):bfa` What are they caused by? To replicate, use the APK linked Version 10.39.6.0 for Android https://apkpure.com/snapchat/com.snapchat.android Answers: username_1: The APK produces many errors, which means Jadx still has issues. You can help by trying to isolate small test cases, so they can be fixed one at a time. Sample cases are [here](https://github.com/skylot/jadx/blob/master/jadx-core/src/test/java/jadx/tests/integration/switches/TestSwitchReturnFromCase.java) username_2: I'm also getting lots of this ERROR: Can't fix incorrect switch cases order, method: com.google.android.exoplayer.text.ttml.TtmlParser.parseStyleAttributes(org.xmlpull.v1.XmlPullParser, com.google.android.exoplayer.text.ttml.TtmlStyle):com.google.android.exoplayer.text.ttml.TtmlStyle ERROR: Can't fix incorrect switch cases order, method: com.google.android.exoplayer.text.ttml.TtmlParser.parseStyleAttributes(org.xmlpull.v1.XmlPullParser, com.google.android.exoplayer.text.ttml.TtmlStyle):com.google.android.exoplayer.text.ttml.TtmlStyle ERROR: Can't fix incorrect switch cases order, method: com.google.android.exoplayer.text.webvtt.WebvttCueParser.isSupportedTag(java.lang.String):boolean ERROR: Can't fix incorrect switch cases order, method: com.google.android.exoplayer.text.webvtt.WebvttCueParser.parseTextAlignment(java.lang.String):android.text.Layout$Alignment ERROR: Can't fix incorrect switch cases order, method: com.google.android.gms.internal.cast.zzdh.zzn(java.lang.String):void ERROR: Can't fix incorrect switch cases order, method: com.google.android.gms.internal.cast.zzdh.zzn(java.lang.String):void ERROR: Can't fix incorrect switch cases order, method: com.google.android.gms.cast.framework.media.MediaNotificationService.zza(android.support.v4.app.NotificationCompat$Builder, java.lang.String):void
ERROR: Can't fix incorrect switch cases order, method: com.google.android.gms.common.api.internal.zal.onActivityResult(int, int, android.content.Intent):void ERROR: Can't fix incorrect switch cases order, method: com.google.android.gms.internal.measurement.zzvd.zza(com.google.android.gms.internal.measurement.zzyq, java.lang.Object):void ERROR: Can't fix incorrect switch cases order, method: com.google.android.gms.internal.measurement.zzwx.equals(java.lang.Object, java.lang.Object):boolean ERROR: Can't fix incorrect switch cases order, method: com.google.android.gms.internal.measurement.zzwx.zza(java.lang.Object, com.google.android.gms.internal.measurement.zzxi, com.google.android.gms.internal.measurement.zzuz):void ERROR: Can't fix incorrect switch cases order, method: com.google.android.gms.internal.measurement.zzwx.zzae(java.lang.Object):int ERROR: Can't fix incorrect switch cases order, method: com.google.gson.stream.JsonReader.peekNumber():int ERROR: Can't fix incorrect switch cases order, method: com.google.gson.internal.bind.TypeAdapters.2.read(com.google.gson.stream.JsonReader):java.util.BitSet username_2: I looked closer at one of these errors when decompiling my own app, and it's a very trivial method with a switch (someString); very trivial code
docpad/docpad-plugin-ghpages
13892382
Title: Error: exited with a non-zero status code Question: username_0: When I run "docpad deploy-ghpages" I get this error: info: Welcome to DocPad v6.31.6 info: Plugins: cleanurls, ghpages, jade, livereload, stylus, uglify info: Environment: static info: Deployment to GitHub Pages starting... info: Generating... info: Generated all 50 files in 1.073 seconds error: Something went wrong with the action error: An error occured: Error: exited with a non-zero status code at ChildProcess.<anonymous> (/Users/will/Dropbox/web/signshop2.0/plugins/docpad-plugin-ghpages/node_modules/bal-util/out/lib/modules.js:114:17) at ChildProcess.EventEmitter.emit (events.js:98:17) at Process.ChildProcess._handle.onexit (child_process.js:784:12) Please help me!!! Answers: username_1: I have the same problem. This is my blog code https://github.com/username_1/username_1.github.com (`develop` branch). When I run `docpad deploy-ghpages`, I get this in the end: ``` info: Generated 37/43 files in 14.777 seconds error: Something went wrong with the action error: An error occured: Error: exited with a non-zero status code at ChildProcess.<anonymous> (/Users/username_1/WebDev/Personal/username_1.github.com/node_modules/safeps/out/lib/safeps.js:165:23) at ChildProcess.emit (events.js:98:17) at maybeClose (child_process.js:756:16) at Process.ChildProcess._handle.onexit (child_process.js:823:5) ``` Previously I used the plugin successfully. No satellite npm package was updated. Something got broken recently and I cannot figure out what. username_1: My mistake was that the remote had been renamed but I had not made the corresponding changes to the config. However, this was not clear from the error message above. Would it be possible to pass git's error output through to stdout?
alfa-laboratory/core-components
1049595016
Title: The Tabs component renders incorrectly with the scrollable prop Question: username_0: # Describe the problem The [`Tabs`](https://alfa-laboratory.github.io/core-components/master/?path=/docs/%D0%BA%D0%BE%D0%BC%D0%BF%D0%BE%D0%BD%D0%B5%D0%BD%D1%82%D1%8B-tabs--tabs) component renders incorrectly with the `scrollable={true}` prop. Specifically, some extra spacing appears at the bottom. This is probably related to the attempt to hide the scrollbar. # Steps to reproduce An example is available [at this link](https://codesandbox.io/s/admiring-pike-rgk47?file=/src/App.js). # Expected behavior The component always renders the same way, regardless of whether the tabs scroll or not. ## Desktop (leave this block empty if there is no data): - OS: MacOS - Browser: Google Chrome - Version: 95.0.4638.69 (Official Build), (x86_64) Status: Issue closed Answers: username_2: :tada: This issue has been resolved in version 21.3.3 :tada: The release is available on: - [npm package (@latest dist-tag)](https://www.npmjs.com/package/@alfalab/core-components/v/21.3.3) - [GitHub release](https://github.com/alfa-laboratory/core-components/releases/tag/v21.3.3) Your **[semantic-release](https://github.com/semantic-release/semantic-release)** bot :package::rocket:
knehez/beton-teka
478341634
Title: remaining tasks Question: username_0: - [ ] identifier instead of name (experiment) - [ ] date -> date of the examination - [ ] description editing - [ ] refresh (experiment search (measurement)) - [ ] the id does not need to be displayed - [ ] identifier unique (search based on it) - [ ] file handling Status: Issue closed
backdrop/backdrop-issues
89871176
Title: [UX] Block configuration: "dirty" form flag fired even if user cancels adding a condition. Question: username_0: Steps to reproduce: 1. Edit any layout that has blocks. 2. Click the "configure" button for one of the blocks. 3. Expand the "Conditions" fieldset and click on "Add condition". 4. Select any condition (for example "Front page") and click on "Add condition". Notice how the "dirty" form flag has been fired and there is a "The form has unsaved changes ..." message in the background. 5. Hit the "Cancel" button instead of the "Add condition" one. Notice how you return to the layout config page with the condition not saved (since you cancelled instead of saving), but the "The form has unsaved changes ..." message is still there. 6. Refresh the page and see the message is still there. 7. Hit the "cancel" button at the bottom of the page in order to remove the "dirty" form flag. PS: This is one of the reasons why I think we should move all the steps required for adding a condition into the same dialogue form: #1020 Answers: username_1: The issue over in https://github.com/backdrop/backdrop-issues/issues/1020 and associated PR will solve this problem as a side-effect of the new implementation for adding conditions, as there won't be a separate dialog for configuring a condition any more; it'll be combined with the existing dialog for adding a condition. Let's close to consolidate issues. Status: Issue closed
tuna/issues
1063072981
Title: [tuna]404 at /homebrew-bottles/bottles/openldap-2.5.8.mojave.bottle.tar.gz Question: username_0: <!-- Please use this template while reporting a bug and provide as much info as possible. --> #### What happened #### What you expected to happen #### How to reproduce it #### Anything else we need to know - Has this issue been raised in a previous issue: #### Your environment - OS Version: - Browser version, if applicable: - Others: Answers: username_1: dup with #1392 Status: Issue closed
aerokube/ggr
503904119
Title: moon behind ggr Question: username_0: Hi, we are trying to set up Moon behind our ggr instance that uses some IE instances, but without success. You stated here https://github.com/aerokube/moon/issues/117#issuecomment-454457805 that this should work. For testing purposes I set up a Moon instance in Kubernetes. It is reachable at http://mykubernetesnode:30774/wd/hub. My sample test works against it, so Moon by itself works. Now I put a ggr Docker instance on my local machine in front of it, adding this config: ``` our IE config ... <browser name="firefox" defaultVersion="64.0"> <version number="64.0"> <region name="selenoid-on-moon-kubernetes"> <host name="mykubernetesnode" port="30774" count="4" scheme="http"/> </region> </version> </browser> ``` Now if I run my test against GGR using IE, it works. So GGR by itself is also working. Now, when using Firefox, I assume it should connect to Moon. But no session is created. Only three lines of log are displayed in ggr: ``` 2019/10/08 08:28:53 [7] [0.00s] [SESSION_ATTEMPTED] [test] [10.0.2.2] [firefox-64.0] [mykubernetesnode:30774] [-] [1] [-] 2019/10/08 08:28:53 [7] [0.12s] [SESSION_FAILED] [test] [10.0.2.2] [firefox-64.0] [mykubernetesnode:30774] [-] [1] [] 2019/10/08 08:28:53 [7] [0.12s] [SESSION_NOT_CREATED] [test] [10.0.2.2] [firefox-64.0] [-] [-] [-] [-] ``` No log entry appeared in Moon. What am I doing wrong? Answers: username_1: @username_0 is your Ggr running in a Docker container? Is it aware of the `mykubernetesnode` host name? username_0: @username_1 yes, it is running in a docker container. I just did an exec into the ggr container and executed `ping mykubernetesnode`. No problem to ping it. I also did a `wget http://mykubernetesnode:30774/wd/hub/status` and it downloaded the Moon status page, so this works too. username_1: @username_0 if that's possible, could you dump the traffic between ggr and moon with `tcpdump` and check whether any HTTP requests are being sent to Moon?
username_0: Yes it is ;-) I get this: ``` Host: mykubernetesnode:30774 User-Agent: selenium/3.141.59 (java windows) Content-Length: 195 Accept-Encoding: gzip Authorization: Basic dGVzdDp0ZXN0LXBhc3N3b3Jk Connection: Keep-Alive Content-Type: application/json {"capabilities":{"firstMatch":[{"acceptInsecureCerts":true,"browserName":"firefox"}]},"desiredCapabilities":{"acceptInsecureCerts":true,"browserName":"firefox","platform":"ANY","version":"64.0"}} Content-Type: text/plain Www-Authenticate: Basic realm="Moon" Date: Wed, 09 Oct 2019 07:55:18 GMT Content-Length: 17 401 Unauthorized ``` Since using Moon directly without ggr works without "user@password", I assume that I don't actually need this in ggr either. I used your Helm chart for the Moon installation without adding/changing any user/password. I also tried `<host name="mykubernetesnode" port="30774" count="4" scheme="http" username="test" password="<PASSWORD>"/>`, as this is always in your examples, but it didn't work. So any idea why I get a 401 using ggr although using Moon directly doesn't? Might there be a different issue? username_1: @username_0 in your example request you are sending `test:test-password` as credentials. You have: ``` Authorization: Basic dGVzdDp0ZXN0LXBhc3N3b3Jk ``` ... that is to say ... ``` $ echo '<PASSWORD>' | base64 -D test:test-password ``` Do you have such credentials in Moon? username_0: Ah, so the user and pw from the call to ggr will also be used for Moon? I thought the <host username... password...> is the one responsible for it. I didn't set this, so I assumed that no user/pw would be sent to Moon, and since Moon currently works without user/pw I expected it to work. Will try to add user and pw to Moon. username_1: @username_0 Ggr does not explicitly remove the `Authorization` header, and this works like a charm with Selenoid just because it does not have any authentication. Using Moon behind Ggr is a rare case. username_0: @username_1 ok, that was the issue.
So ggr does not use the guest quota if ggr itself requires a login. username_1: @username_0 correct. Status: Issue closed