| column    | type          | lengths      |
|-----------|---------------|--------------|
| repo_name | stringlengths | 4 to 136     |
| issue_id  | stringlengths | 5 to 10      |
| text      | stringlengths | 37 to 4.84M  |
mcclayton/react-state-patterns
506952168
Title: Asynchronous E2E Test Question: username_0: Ensure the library works for an asynchronous E2E example by writing a test for it, i.e.:
```jsx
const Counter = statePatterns(
  stateHook(
    (props) => ({ users: [], loading: false, error: null }),
    {
      fetchUsers: state => (sucessCb, errorCB) => {
        /* Make call and utilize callbacks */
      },
      fetchUsersSuccess: () => users => ({ users, error: null, loading: false }),
      fetchUsersError: () => error => ({ users: [], error, loading: false })
    },
    "userStore"
  ),
);
```
pymc-devs/pymc
1048896224
Title: Improve default seeds used for each chain Question: username_0: @ColCarroll mentioned that we might want to be more careful with how we seed different chains/processes. Right now this is our default: https://github.com/pymc-devs/pymc/blob/52a126dd31c68d8417495cd240af700800d2878b/pymc/sampling.py#L453 Whereas this seems to be the NumPy recommended strategy: https://numpy.org/doc/stable/reference/random/parallel.html#seedsequence-spawn I am not sure if this also applies to other places where we create seeds such as in https://github.com/pymc-devs/pymc/blob/bdd4d1992f11cab9202774b14b8044ddf0cb7674/pymc/model.py#L965 Answers: username_1: Upon closer inspection, it seems like NumPy is recommending us to move to `Generator`s instead of `RandomState`s: https://numpy.org/doc/stable/reference/random/legacy.html?highlight=randomstate#numpy.random.RandomState I'm also mentioning this because each `SeedSequence` can spawn `Generator` objects and I'm not sure if this would break our code if we were to use their recommended strategy for seeding parallel processes. I can get a PR started on this and hear what you think about all of this.
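For reference, a minimal sketch of the NumPy-recommended spawning strategy linked above (plain NumPy only; the variable names are illustrative and this is not PyMC's actual seeding code):

```python
import numpy as np

# One root SeedSequence per sampling run; spawn one child per chain/process.
root = np.random.SeedSequence(entropy=12345)
children = root.spawn(4)

# Each child seeds an independent Generator with statistically
# non-overlapping streams, which is the property we want across chains.
rngs = [np.random.default_rng(child) for child in children]
draws = [rng.standard_normal(3) for rng in rngs]
```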
arsduo/koala
16862225
Title: Strange comment id format Question: username_0: I have several old posts; here are two as examples:
```
162805193749522_196070250430270
162805193749522_212568245424768
```
When I request 'comments' connections
```
post_id = '162805193749522_196070250430270'
fb.get_connections(post_id, 'comments', {fields: 'id,from,created_time,like_count', filter: :stream, date_format: :U}) }
```
I get comment ids in a very strange format
```
'162805193749522:196070250430270:63_2715946'
```
I'm trying to get comment likes. This request returns an error
```
fb.get_connections('162805193749522:196070250430270:63_2715946', 'likes', { date_format: :U}) }
```
```
type: OAuthException, code: 1, message: An unknown error has occurred. [HTTP 500]
```
Ok, let's modify the id a little =)
```
'162805193749522:196070250430270:63_2715946' —> '162805193749522_196070250430270_2715946'
```
It's OK
```
fb.get_connections('162805193749522_196070250430270_2715946', 'likes', { date_format: :U}) }
```
```
=> []
```
Graph API Explorer understands this strange comment id format ![screen shot 2013-07-17 at 16 33 42](https://f.cloud.github.com/assets/415928/811660/4d1467a4-eedd-11e2-8510-37b59484707e.png) But it doesn't return likes ("like_count": 1) =( ![screen shot 2013-07-17 at 16 34 15](https://f.cloud.github.com/assets/415928/811668/66a70208-eedd-11e2-915b-56fc9b8c5fcc.png) ![screen shot 2013-07-17 at 16 34 30](https://f.cloud.github.com/assets/415928/811669/718ee12c-eedd-11e2-98bb-9bc9f8a08d78.png) Not returning comment likes – that's an FB bug. I understand. We can't do anything about that. But at least Koala should understand this strange comment id format and not return an error. Answers: username_1: Closing since it's been two years. Please reopen if this is still an issue. Status: Issue closed
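As a reference for the id rewrite described above, a minimal Ruby sketch (the helper name is hypothetical and not part of Koala; it just encodes the 'page:post:fragment_comment' to 'page_post_comment' transformation shown in the thread):

```ruby
# Hypothetical normalizer for the odd comment id format shown above.
def normalize_comment_id(id)
  m = id.match(/\A(\d+):(\d+):\d+_(\d+)\z/)
  m ? "#{m[1]}_#{m[2]}_#{m[3]}" : id
end

puts normalize_comment_id('162805193749522:196070250430270:63_2715946')
# => 162805193749522_196070250430270_2715946
```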
jlippold/tweakCompatible
356320912
Title: `Priority Hub` partial on iOS 11.3.1 Question: username_0:
```
{
  "packageId": "com.kunderscore.priorityhub",
  "action": "working",
  "userInfo": {
    "arch32": false,
    "packageId": "com.kunderscore.priorityhub",
    "deviceId": "iPhone9,4",
    "url": "http://cydia.saurik.com/package/com.kunderscore.priorityhub/",
    "iOSVersion": "11.3.1",
    "packageVersionIndexed": true,
    "packageName": "Priority Hub",
    "category": "Tweaks",
    "repository": "Kunderscore's Repo",
    "name": "Priority Hub",
    "installed": "0.0.6-5",
    "packageIndexed": true,
    "packageStatusExplaination": "This package version has been marked as Not working based on feedback from users in the community. The current positive rating is 25% with 1 working reports.",
    "id": "com.kunderscore.priorityhub",
    "commercial": false,
    "packageInstalled": true,
    "tweakCompatVersion": "0.1.0",
    "shortDescription": "Organize your notifications!",
    "latest": "0.0.6-7",
    "author": "<NAME>",
    "packageStatus": "Not working"
  },
  "base64": "<KEY>",
  "chosenStatus": "partial",
  "notes": "Partially works - still in beta tho so okay"
}
```
ESMValGroup/ESMValTool
295443392
Title: Provide error code/exit code if data set is not cmor compliant, etc. Question: username_0: The backend should provide error information/codes on exit explaining why it failed. Examples:
- File not found
- File not readable
- File empty
- File not cmor compliant
Answers: username_1: The stack trace you get when the program stops because it encounters an error should provide most of this information. That is, if you're on Python 3 or only using a single process; Python 2 will only give you the stack trace of the main process. I agree that it would be good to work on making nicer error messages at some point, but it will be a lot of work, if not impossible, to cover all possible errors. username_0: I think the most important errors with a need for specification are those a user can easily correct. And these usually are file related. All the mentioned errors need to be checked at some point and are necessary/required for some projects when ESMValTool should be automated. E.g. if ESMValTool cannot read data due to missing CMOR standards, the data should be CMORized. It is redundant to check within the automation process if ESMValTool needs to check either way. username_2: I agree with Bouwe that it would be a nice feature, but at the moment we should focus on other issues. I'm adding this to the goals for the beta release of v2.0. username_2: Closing since already covered by #873 Status: Issue closed
kubernetes/kubernetes
307001430
Title: kubernetes web ui: Unable to access the dashboard Question: username_0: please suggest .... Answers: username_1: /sig ui username_0: please suggest .... username_0: I tried reading docs from different sites, however I am not able to resolve the issue. Can someone please give some pointers / direction? ... username_2: Which network plugin are you using? I have the same type of problem with Calico, but it works with Flannel. Jeff
inducer/pudb
120102080
Title: Cross-platform Question: username_0: This module doesn't work on Windows, because it uses some Linux-specific code. This should be noted in the system requirements. Answers: username_1: Supposedly [urwid works in cygwin](http://lists.excess.org/pipermail/urwid/2006-August/000304.html) (I don't have Windows to test). Is there stuff specifically in pudb that assumes POSIX? username_2: Don't think so. username_3: Port to the https://github.com/nsf/termbox API? username_2: Termbox would be something to consider at the urwid level, not pudb. A quick and dirty solution would be an ssh server (like pudb.remote), at which point we could make use of, say, PuTTY's terminal emulation. Status: Issue closed username_2: This module doesn't work on Windows, because it uses some Linux-specific code. This should be noted in the system requirements. username_2: Whoops. username_3: What high-level API is needed for `pudb`? I see that it uses a 'loop' that is provided by any TUI framework. Is `urwid` strictly necessary? Maybe it is possible to replace its API calls with something that could be easily implemented on top of the existing `termbox` or even `curses` modules. username_2: Urwid isn't strictly necessary for pudb in the same way that Qt isn't strictly necessary for KDE. username_1: Urwid is a GUI framework. It provides abstractions like dialogue boxes and input cells and handles things like resizing and arrowing automatically. username_3: @username_1 a dialog box abstraction that is POSIX-only seems very weird. Are there any abstractions that can work across different low-level TUI backends? username_1: @username_3 I think what you are basically looking for is urwid support on Windows. I would contact them about it (I did find that thread I linked to above, but it's from 2006). If urwid supported Windows, pudb support would be basically automatic at that point. username_3: @username_1 I am not sure the maintainers want to mess with Windows internals; they would have to port their inner loop and use the Windows API. It is more effective to just build a system where the event loop is decoupled from the widget library. username_4: As this was never fixed, can the documentation please be updated to make it clear that this is a Linux-only tool out of the box? username_2: It isn't. It works on Macs for sure, and WSL1/2 also. It might work on Windows by using a Telnet client with the remote debugging support. username_4: Apologies, you're correct. As a Windows user I can sometimes be a bit biased :/ I guess what I was trying to say was that I found it a little misleading trying to use this on Windows and couldn't find anything in the docs as to why a simple `import pudb; pudb.set_trace()` wasn't working. Requiring Windows users to either use telnet or cygwin rather than it "just working out of the box" could at least be flagged in the documentation. This looks like a really cool tool and I was excited to use it, but I can't see myself having the time to test and set it up correctly anytime soon. If I ever get around to that, I'd be happy to help you out with some Windows-specific documentation! username_2: https://en.wikipedia.org/wiki/Windows_Terminal says it supports ANSI escape sequences, which IMO should be all that's needed to support Urwid. (Note that "Windows Terminal" is different from the "Command Prompt" app (`cmd.exe`) that ships with Windows.) username_1: I suspect pudb is doing other things that won't work right on Windows. For example, the xdg stuff for the config is wrong for Windows.
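Since the thread above points Windows users at the remote-debugging route, here is a minimal sketch using pudb's `pudb.remote.set_trace` API (the port noted in the comment is the library's usual default; verify both against your installed version):

```python
# Run inside the program being debugged, then connect from any
# telnet-capable client (e.g. PuTTY on Windows) to the printed port.
from pudb.remote import set_trace

def buggy(x):
    set_trace(term_size=(120, 40))  # opens a TCP listener (default port 6899)
    return x * 2

buggy(21)
```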
denzilferreira/aware-client
553923840
Title: Fitbit Intraday granularity doesn't match dashboard/within app Question: username_0: I have the intraday setting on the dashboard set to 1 minute ("1min"). When I open the plugin in the app, it reads 1min on the main screen. If I tap on the setting, though, "15 minutes" is selected. It is unclear which setting is actually active, since the app feedback is inconsistent.
Krusty84/Teamcenter_ZABBIX_Templates
992309297
Title: Initial teamcenter setup Question: username_0: Hello, Thanks for your work identifying the queries to monitor a Teamcenter. I would like to reproduce it with Prometheus + the JMX exporter. Could you please help me set it up? What services did you enable on the Teamcenter side? Could you give me some of the JMX queries to type into jconsole, so that I can understand what to write in the JMX exporter config? In the end, I will share my results with you and also with the JMX exporter team. BR <NAME> Answers: username_1: Hi, Do you have access to the Teamcenter Documentation? username_0: Yes, of course; my Teamcenter is under maintenance username_1: Okay, you can get more details from the TC Docs (you should use **jmx** as the keyword).
**For FSC:**
```
bin\TcFSCService.exe -install "%SERVICE_NAME%" %FSC_JVM% -Xms%FSC_MEM% -Xmx%FSC_MEM% -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9010 -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.class.path=%FSC_CP% -Djava.library.path=%FMS_HOME%/lib -Dfms.home=%FMS_HOME% -Dfms.config=fmsmaster_%FSCID%.xml -Dfsc.config=%FSCID%.xml -Dsun.net.client.defaultConnectTimeout=90000 -Dsun.net.client.defaultReadTimeout=90000 -Dcom.teamcenter.mld.logging.prefix=%FSCID% -start com.teamcenter.fms.servercache.FMSServerCache -stop com.teamcenter.fms.servercache.FMSServerCache -method stop -out %FSCID%stdout.log -err %FSCID%stderr.log -current %FMS_HOME% -path lib;%JAVA_HOME%\bin
```
**For Pool Manager:** in mgrenv.bat or in Installmgr.bat (for installing Pool Manager as a daemon):
```
set JVM_OPTS=%SERVER% %JAVA_MEM_ARGS% %JacORB_ARGS% -Djava.net.preferIPv4Stack=true -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9040 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.teamcenter.jeti.util.log.category=ServerManager -Djava.class.path="%CLASSPATH%" -Djava.library.path="%REL_LOC%"
```
**For Web-Tier:** Open the **webtierMonitorConfig.xml** from ....tc.ear\lib\**mldcfg.jar** and in the <ApplicationConfig set mode as Normal. Save this file and redeploy tc.war; after first login in the TC you can get various metrics about the WebTier and partial metrics about the Pool Manager.
For WebLogic, in **setDomainEnv.cmd**:
```
set SERVER_CLASS=weblogic.Server
set JAVA_PROPERTIES=%JAVA_PROPERTIES% %WLP_JAVA_PROPERTIES%
set JAVA_OPTIONS=%JAVA_OPTIONS% -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9020 -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false %JAVA_PROPERTIES%
```
I hope this helps you. username_0: Hello, I just updated my Teamcenter in order to enable JMX. I looked in the Windows registry to add the missing JVM parameters. All Teamcenter services are under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\ and the trick is to add a new JVM parameter for each service and to renumber the existing ones. In the picture below: on the left, the registry before; on the right, the registry after. ![image](https://user-images.githubusercontent.com/10389749/133748612-07396519-ae89-4dc0-b2ca-77092fa3d115.png) Thanks a lot for your help. We can close this issue.
Status: Issue closed username_0: Hello, As a follow-up, please find:
- The Prometheus JMX exporter HTTP server config files I set up for all 4 parts (FSC/POOL/SOLR/TOMCAT) and their cmd files (I defined Windows services for them with nssm)
- The Grafana dashboard
- Some pictures of the dashboard
Hope it will help others. BR
[tomcat_full.yml.txt](https://github.com/username_1/Teamcenter_ZABBIX_Templates/files/7233283/tomcat_full.yml.txt)
[fsc_full.yml.txt](https://github.com/username_1/Teamcenter_ZABBIX_Templates/files/7233284/fsc_full.yml.txt)
[jmx_http_exporter_fsc.cmd.txt](https://github.com/username_1/Teamcenter_ZABBIX_Templates/files/7233285/jmx_http_exporter_fsc.cmd.txt)
[jmx_http_exporter_pool.cmd.txt](https://github.com/username_1/Teamcenter_ZABBIX_Templates/files/7233286/jmx_http_exporter_pool.cmd.txt)
[jmx_http_exporter_solr.cmd.txt](https://github.com/username_1/Teamcenter_ZABBIX_Templates/files/7233287/jmx_http_exporter_solr.cmd.txt)
[jmx_http_exporter_tomcat.cmd.txt](https://github.com/username_1/Teamcenter_ZABBIX_Templates/files/7233288/jmx_http_exporter_tomcat.cmd.txt)
[pool_full.yml.txt](https://github.com/username_1/Teamcenter_ZABBIX_Templates/files/7233289/pool_full.yml.txt)
[prometheus_teamcenter_monitoring.pdf](https://github.com/username_1/Teamcenter_ZABBIX_Templates/files/7233290/prometheus_teamcenter_monitoring.pdf)
[grafana_dashboard_teamcenter_monitoring.json.txt](https://github.com/username_1/Teamcenter_ZABBIX_Templates/files/7233291/grafana_dashboard_teamcenter_monitoring.json.txt)
[solr_full.yml.txt](https://github.com/username_1/Teamcenter_ZABBIX_Templates/files/7233292/solr_full.yml.txt)
sigalor/whatsapp-web-reveng
1020825426
Title: decryption media message error Question: username_0: decryption media message error
```python
decryptedMessage = AESDecrypt(self.loginInfo["key"]["encKey"], messageContent[32:]);
print("decryptedMessage :", decryptedMessage)
try:
    processedData = whatsappReadBinary(decryptedMessage, True);
    print("data:", processedData)
    messageType = "binary";
    if processedData[0] == "action":
        ms = processedData[1]
        if ms:
            rep_data = json.dumps({"message": processedData, "type": messageType})
except Exception as e:
    processedData = {"traceback": traceback.format_exc().splitlines()};
    messageType = "error";
```
decryptedMessage is
```
b'\xf8\x04\t\nh\xf8\x01\xf8\x024\xfd\x00\x01\xe8\nB\n\[email protected]\x10\x01\x1a 62531E6BA3BECAACCD755110E97539D1\x12\xda\x02B\xd7\x02\nMhttps://mmg.whatsapp.net/d/f/AvxVW2AmDcMumfYD7YJ4YC-VqBX6B1TKSNPSD0qhsKIo.enc\x12\x16audio/ogg; codecs=opus\x1a h\xbf\xe3&D^\xda\x94GM\xb1\xdb\xff\x01\x03\xfc\xf0i\xc1`\x11\xdf^J\xd9\x81\xbb\x8at\xd8\xdb\x90 \xabQ(\x050\x01: \x98\x86p\x840\xd8Q\xd4\xb6\xe7\x9f\xedz\xaf\xce\xda0\xca\xaf\x88$\x8aN\xf2l\x828\x18\xcd5\xa5\xadB \x17M\xa1t\xf3\xe0\x0bp\x8e\xa4\xa1P\xc2\xc0|\xec\xcaJ\xb8T\x1b\x18`g\xed\xf5+\xe6\x17\xe1\x94\x7fJ{/v/t62.7117-24/11752422_2084075465065903_2803483545252691965_n.enc?ccb=11-4&oh=12ffb0a3447fb70ac523f02118626856&oe=618448E5P\x86\xf7\xff\x8a\x06\x18\x86\xf7\xff\x8a\x06 \x01\xb2\x02<\n:Media/WhatsApp Voice Notes/202141/PTT-20211008-WA0006.opus'
```
Status: Issue closed
stanford-ppl/spatial-lang
209323799
Title: Fix testbench garbage collection issue Question: username_0: All test objects are in global space, meaning they're never garbage collected. Memory usage when running "sbt test" will therefore be the sum of the memory required for each test which has already been run. Should be able to fix this just by creating objects in the ScalaTest class instead of globally. Status: Issue closed Answers: username_0: Sub-part of #36
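A minimal Scala illustration of the proposed fix (generic ScalaTest shapes from that era; the names are illustrative, not Spatial's actual suites):

```scala
import org.scalatest.FlatSpec

// Before: a top-level object is a static singleton, so its test state
// stays reachable (and thus uncollectable) for the whole "sbt test" run.
object GlobalFixture {
  val state = Array.ofDim[Byte](16 * 1024 * 1024)
}

// After: state owned by the suite instance can be garbage collected
// once the test runner is finished with the suite.
class ExampleSuite extends FlatSpec {
  val state = Array.ofDim[Byte](16 * 1024 * 1024)

  "a suite" should "keep fixtures instance-scoped" in {
    assert(state.length > 0)
  }
}
```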
hyb1996-guest/AutoJsIssueReport
242575533
Title: [147]java.lang.RuntimeException: An error occured while executing doInBackground() Question: username_0: Description: ---
```
java.lang.RuntimeException: An error occured while executing doInBackground()
 at android.os.AsyncTask$3.done(AsyncTask.java:299)
 at java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:352)
 at java.util.concurrent.FutureTask.setException(FutureTask.java:219)
 at java.util.concurrent.FutureTask.run(FutureTask.java:239)
 at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:230)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1080)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:573)
 at java.lang.Thread.run(Thread.java:838)
Caused by: java.lang.NullPointerException
 at com.jecelyin.editor.v2.highlight.ModeObjectHandler.process(ModeObjectHandler.java:66)
 at com.jecelyin.editor.v2.highlight.SyntaxParser.loadMode(SyntaxParser.java:43)
 at com.jecelyin.editor.v2.highlight.jedit.Mode.loadIfNecessary(Mode.java:121)
 at com.jecelyin.editor.v2.highlight.jedit.Mode.getTokenMarker(Mode.java:97)
 at com.jecelyin.editor.v2.highlight.Buffer.setMode(Buffer.java:69)
 at com.jecelyin.editor.v2.ui.Document.onAsyncReaded(Document.java:155)
 at com.jecelyin.editor.v2.ui.Document$ReadFileTask.doInBackground(Document.java:257)
 at com.jecelyin.editor.v2.ui.Document$ReadFileTask.doInBackground(Document.java:238)
 at android.os.AsyncTask$2.call(AsyncTask.java:287)
 at java.util.concurrent.FutureTask.run(FutureTask.java:234)
 ... 4 more
```
Device info: ---
<table>
<tr><td>App version</td><td>2.0.14 Beta</td></tr>
<tr><td>App version code</td><td>147</td></tr>
<tr><td>Android build version</td><td>eng.root.1383761317</td></tr>
<tr><td>Android release version</td><td>4.2.2</td></tr>
<tr><td>Android SDK version</td><td>17</td></tr>
<tr><td>Android build ID</td><td>R829T_11_150124</td></tr>
<tr><td>Device brand</td><td>OPPO</td></tr>
<tr><td>Device manufacturer</td><td>OPPO</td></tr>
<tr><td>Device name</td><td>R829T</td></tr>
<tr><td>Device model</td><td>R829T</td></tr>
<tr><td>Device product name</td><td>OPPO82_13065</td></tr>
<tr><td>Device hardware name</td><td>mt6582</td></tr>
<tr><td>ABIs</td><td>[armeabi-v7a, armeabi]</td></tr>
<tr><td>ABIs (32bit)</td><td>null</td></tr>
<tr><td>ABIs (64bit)</td><td>null</td></tr>
</table>
greiman/SdFat-beta
359278211
Title: sharing SPI with another device Question: username_0: Sir, I am using your low latency data logger code, but I want to share the SPI bus with another device (a 2.7V 12-Bit A/D Converter with SPI™ Serial Interface). My requirement is to read from this ADC and log the readings to the SD card. But when I tie the MISO pins together to share SPI, my reads from the SD card and my reads from the ADC don't work. The voltage on the MISO pin goes to a steady 3.3V; it's not pulsing according to the data, so my ADC reads show 4096 (all high) and my SD card reads fail. This happens only when I tie the MISO pins together. Is it because of clock speed issues? I understand that SD card writes happen at a much faster rate, but for the ADC this speed has to be reduced. How do I solve this issue? I am using an Arduino Mega and a micro SD card module to connect the SD card. Status: Issue closed Answers: username_1: Closed, this version of SdFat-beta is no longer supported.
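For context on the clock-speed question above: the usual Arduino pattern for sharing one SPI bus between devices with different speed requirements is per-device transactions with separate chip-select pins. A generic sketch follows (pin numbers and the ADC framing are illustrative assumptions, not taken from the datalogger code). Note also that a MISO line stuck high while all devices are deselected often points at an SD module whose level shifter does not tri-state MISO, which software alone cannot fix.

```cpp
#include <SPI.h>

const int ADC_CS = 9;   // illustrative chip-select pins
const int SD_CS  = 10;

void setup() {
  pinMode(ADC_CS, OUTPUT); digitalWrite(ADC_CS, HIGH);  // deselect both devices
  pinMode(SD_CS,  OUTPUT); digitalWrite(SD_CS,  HIGH);
  SPI.begin();
}

uint16_t readAdcRaw() {
  // A slow transaction just for the ADC; SD libraries run their own,
  // much faster transactions when they own the bus.
  SPI.beginTransaction(SPISettings(1000000, MSBFIRST, SPI_MODE0));
  digitalWrite(ADC_CS, LOW);
  uint16_t value = (SPI.transfer(0) << 8) | SPI.transfer(0);  // framing depends on the exact ADC
  digitalWrite(ADC_CS, HIGH);
  SPI.endTransaction();
  return value & 0x0FFF;  // keep the 12-bit result
}

void loop() {
  readAdcRaw();
}
```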
gcivil-nyu-org/fall2019-cs-gy-6063-team-stellar
529021803
Title: Email notification for matching does not provide ability to respond to invitation. Question: username_0: Issue tracker is ONLY used for reporting bugs. New features should be discussed in their own thread on the #general channel of Slack. The email notification did not give an option to respond to the invite. I was able to respond to the email invitation when the feature was presented during the first demo.
## Expected Behavior
Should give the ability to respond to the email invitation
## Current Behavior
Does not show a calendar invitation.
## Ideas for Improvement
Needs to send the notification as a calendar invite
## Steps to Reproduce
1. Open the email invitation.
2. Check if you can see an option to click yes/no/maybe anywhere in the email
## Context (Environment)
Trying to respond to the email notification ![email notification 1.PNG](https://images.zenhubusercontent.com/5d8390f225cb27000166855e/e5c307e8-f931-4074-912a-387efdea078a)
Status: Issue closed
getsentry/sentry-php
425380865
Title: How to config error level to send in v2? Question: username_0: Previously I was using Sentry 1.* and notice-level errors weren't reported to Sentry. But I just recently upgraded to 2.0.1 and I'm getting a lot of notice-level errors. The code I'm using is:
```
\Sentry\init([
    'environment' => 'production',
    'dsn' => 'MY_DSN',
]);
```
Does anyone know if/how I can suppress notice-level errors? Answers: username_1: You have to use the `error_types` option, which accepts a bitmask, in the same way that the native `error_reporting` PHP function does. In your case, you'll want to use `E_ALL & ~E_NOTICE`. Thanks for reporting this; it seems that the option is not documented very well. username_0: Thanks for the quick reply. Can you please confirm this is what I should use?
```
\Sentry\init([
    'environment' => 'production',
    'dsn' => 'MY_DSN',
    'error_types' => E_ALL & ~E_NOTICE,
]);
```
username_1: Yes, that's correct. Status: Issue closed username_2: Thanks for spotting this; the option was probably removed during development and then re-added later. We will update the documentation asap
ocsigen/lwt
238692771
Title: Running tests should not require package react Question: username_0: Currently, if one does `make test` with package `react` not installed, the result is
```
jbuilder runtest
Error: External library "react" not found.
-> required by "src/react/jbuild (context default)"
Hint: try: jbuilder external-lib-deps --missing @runtest
make: *** [test] Error 1
```
I guess this is because we have tests for `lwt_react`, and the tests expect it to be present unconditionally. What's the right way to make this conditional? `(optional)` in `test/react/jbuild` didn't work (nor did I expect it to), neither in the alias nor in the executable. The other alternative I can quickly think of is giving `lwt_react` its own testing system, which seems dubious. @username_1, @username_2 any suggestions? Answers: username_1: It should work with
```
jbuilder runtest --only-packages lwt
```
username_0: Thanks, that works. However, it still isn't quite right – the way the build system and testing are set up now, we *do* want to run `lwt_react` tests if `lwt_react` is being built. It's relatively easy using an `ocamlfind query` command in the `Makefile`, but I wonder if there is a better way using Jbuilder itself. username_0: I suppose we could use the restricted command in `make test`, to match `make build`, and add another `make test-all` to match `make all`. username_1: I think you will have to do something conditional in the Makefile to test for lwt_react since it is a separate library. Or use separate commands as above :) PS I was never really happy with the `all` target. If you add `test-all`, I think it should be renamed `build-all` Status: Issue closed username_0: I went with the two targets, and renamed `all` to `build-all` as you suggested. My overall impression of this is of some kind of tension between configuration/detection steps, multiple packages in one repo with different dependencies, depopts, and so on. Not sure we have the ultimate solution to this problem yet. However, most Lwt development is focused on package `lwt`, so it's good enough for now. username_2: I suppose that tests could be conditional indeed. @hhugo wants this as well for jsoo
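A sketch of the `ocamlfind query` conditional mentioned in the thread (an illustrative Makefile fragment only, not the change that actually landed in Lwt; recipe lines must be tab-indented):

```make
# Run the full test suite only when the react package is installed;
# otherwise restrict jbuilder to the core lwt package.
REACT_INSTALLED := $(shell ocamlfind query react 2> /dev/null)

.PHONY: test
test:
ifdef REACT_INSTALLED
	jbuilder runtest
else
	jbuilder runtest --only-packages lwt
endif
```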
facebook/flipper
666956521
Title: iOS - Flipper Folly Typedef redefinition with different types ('uint8_t' (aka 'unsigned char') vs 'enum clockid_t') Question: username_0:
## 🐛 Bug Report
Failed to compile project after adding Flipper Pods to iOS App
## To Reproduce
1. Add FlipperKit pods
```
platform :ios, '10.0'
install! 'cocoapods', :disable_input_output_paths => true
use_frameworks!
..
..
def flipperkit_pods
  $flipperkit_version = '0.51.0'
  # need to declare all transitive dependency to make sure the configuration is debug only
  pod 'FlipperKit', '~>' + $flipperkit_version, :configuration => 'Debug'
  pod 'FlipperKit/FlipperKitLayoutComponentKitSupport', '~>' + $flipperkit_version, :configuration => 'Debug'
  pod 'FlipperKit/SKIOSNetworkPlugin', '~>' + $flipperkit_version, :configuration => 'Debug'
  pod 'FlipperKit/FlipperKitUserDefaultsPlugin', '~>' + $flipperkit_version, :configuration => 'Debug'
  pod 'Flipper-DoubleConversion', :configuration => 'Debug'
  pod 'Flipper-Folly', :configuration => 'Debug'
  pod 'Flipper-Glog', :configuration => 'Debug'
  pod 'Flipper-PeerTalk', :configuration => 'Debug'
  pod 'CocoaLibEvent', :configuration => 'Debug'
  pod 'boost-for-react-native', :configuration => 'Debug'
  pod 'OpenSSL-Universal', :configuration => 'Debug'
  pod 'CocoaAsyncSocket', :configuration => 'Debug'
  pod 'ComponentKit', '~> 0.30'
end
```
2. Pod Install
3. Compile
4. Error <img width="1672" alt="Screen Shot 2020-07-28 at 17 15 22" src="https://user-images.githubusercontent.com/10940190/88653172-ed98cb00-d0f5-11ea-8a64-561aa4ec40de.png">
## Environment
Mac Os 10.15.5, Xcode 11.5, iOS target 10, Flipper Version 0.51.0
Answers: username_0: My bad, this is because my project modifies the pods' iOS target to 10.0 Status: Issue closed
z2oh/sexe
365658075
Title: Add log2 function Question: username_0: A common function that one may wish to use is the log2 function. The log2 function computes the logarithm base 2 of some number. For example:
`log2(2) = 1`
`log2(8) = 3`
`log2(0.5) = -1`
`log2(1.5) ~= 0.584963`
Rust has a log2 function built in on all primitive numeric types. `x.log2()` will evaluate to the logarithm base 2 of `x`. You will need to
* add log2 function to the expression evaluator
* add support for parsing `log2` as a unary function
Please note that log2 is not defined for non-positive numbers, so if the lower bound is 0 or below the program will crash. Don't worry about this for now, error handling will be added at a future date. As an example, check out this commit where I added the tan function: 865d2221e7d29af6db2d9e9c31377932712156d6 If you are participating in hacktoberfest, please comment on this issue claiming it so others don't try to do the same work! Thank you! Answers: username_1: Hi, can I claim this issue? username_0: @username_1 Go for it! Status: Issue closed username_0: Closed via #10
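As a quick sanity check of the expected values above, the built-in Rust behavior can be exercised directly (a standalone snippet, independent of sexe's evaluator):

```rust
fn main() {
    // f64::log2 computes the base-2 logarithm, matching the examples above.
    for x in [2.0_f64, 8.0, 0.5, 1.5].iter() {
        println!("log2({}) = {}", x, x.log2());
    }
}
```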
perifacode/conteudo-gratuito
608631974
Title: Include content about testing Answers: username_1: Content about testing with RSpec added in #39 username_2: Content about test automation with Python + Selenium WebDriver. Added in PR [#44 ] username_3: Added: Testes na Veia. A free e-book, in Portuguese - PR [#320](https://github.com/perifacode/conteudo-gratuito/pull/320)
gnembon/fabric-carpet
754396953
Title: Dynamically set events in player apps affect all other instances of the same app for other players Question: username_0: Events are shared between app instances in per-player apps. This means that one player will affect the handling of events for other players. Since handlers can be dynamically added / removed, this breaks app isolation. Fixing it means all events should be keyed by host AND target and acted upon appropriately. This entails that creating a new player host for an app requires copying all events to the child host. It probably makes sense that global host events in player apps are not executed, as they are irrelevant there. This also means that all 'global' events, like 'tick', should now be able to execute on player hosts, making global events somewhat more useful, with the caveat that with multiple players on, such an event will execute multiple times, once for each player. Status: Issue closed
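A minimal sketch of the (host, target) keying idea in generic Java (illustrative only; these are not Carpet's actual classes or scarpet's event API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical registry: handlers are stored per (host, target) pair, so a
// handler dynamically added by one player's app instance never fires for
// another player's instance of the same app.
final class PlayerScopedEvents {
    record Key(String host, String target) {}

    private final Map<Key, List<Runnable>> handlers = new HashMap<>();

    void add(String host, String target, Runnable handler) {
        handlers.computeIfAbsent(new Key(host, target), k -> new ArrayList<>()).add(handler);
    }

    void dispatch(String host, String target) {
        handlers.getOrDefault(new Key(host, target), List.of()).forEach(Runnable::run);
    }
}
```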
TheThingsNetwork/lorawan-stack
1049233702
Title: Display gateway connection stats in the Console Question: username_0:
#### Summary
It would be nice if the Console would show more gateway connection statistics, if they are available, in the gateway overview form
#### Why do we need this?
Greater gateway debugging capabilities
#### What is already there? What do you see now?
Frame counters (up/down), and last activity meters
#### What is missing? What do you want to see?
- The protocol name
- A small table with round trip times (min/max/median)
- A small table with the sub band utilization quotas
#### Environment
`v3.16`
#### How do you propose to implement this?
Add the fields to the overview page.
#### How do you propose to test this?
Check on `staging1` for the existing gateways how the feature looks.
#### Can you do this yourself and submit a Pull Request?
Can review.
rossfuhrman/_why_the_lucky_markov
537606499
Title: THREE: BILLBOARDS, PART II** How about making fun of asthmatics directly? Man A man should not have to die again? Question: username_0: Toot: THREE: BILLBOARDS, PART II** How about making fun of asthmatics directly? Man A man should not have to die again? One comment = 1 upvote. Sometime after this gets 2 upvotes, it will be posted to the main account at https://mastodon.xyz/@_why_toots
apache/apisix
735871269
Title: bug: the latest version of the skywalking plugin always outputs error log Question: username_0:
### Issue description
I pulled the latest code and found persistent error-level output in logs/error.log:
```
2020/11/04 14:54:50 [error] 20481#20481: *28563 [lua] client.lua:92: reportServiceInstance(): Instance report fails, connection refused, context: ngx.timer
2020/11/04 14:54:53 [error] 20481#20481: *28739 [lua] client.lua:92: reportServiceInstance(): Instance report fails, connection refused, context: ngx.timer
2020/11/04 14:54:56 [error] 20481#20481: *28918 [lua] client.lua:92: reportServiceInstance(): Instance report fails, connection refused, context: ngx.timer
```
I found this to be code from the skywalking module, even though I did not configure the `skywalking` plugin. The function call stack is as follows: `skywalking.lua#init()` --> `client.lua#startBackendTimer()` --> `client.lua#reportServiceInstance()`. It may be necessary to verify that the user has configured the `skywalking` plugin during the `init` phase.
### Environment
* apisix version (cmd: `apisix version`): latest
* OS: centos7
### Minimal test code / Steps to reproduce the issue
Just start apisix, then watch the error.log.
### What's the actual result? (including assertion message & call stack if applicable)
### What's the expected result?
Do not output any logs from the skywalking module when the skywalking plugin is not enabled.
Answers: username_1: "just start apisix, then watch the error.log": can you tell me how long I need to wait? This is very helpful for designing test cases. username_2: Current solution: you can copy the plugins you need from `config-default.yaml` to `config.yaml`, without skywalking, like this:
```yaml
plugins:
  ... # plugins you need, without - skywalking
```
So the skywalking background timer will not be triggered anymore. username_2: @username_4 Maybe we can make skywalking disabled by default so the background timer won't be a problem. username_0: 3 seconds. https://github.com/apache/skywalking-nginx-lua/blob/cda47ae0a507ab86a378a298325c3c94d9a773c2/lib/skywalking/client.lua#L28-L48 username_3: I also suffer from this problem in my dev environment, which may disturb the results of unit tests (`no_error_log` or its analogues). username_1: thanks username_4: agree ^_^ Status: Issue closed
l-lin/angular-datatables
136748826
Title: How to get the sorted index in angular way Question: username_0: Hello, I'm working on a project in which, on the first page, I have angular-datatables rendered using the angular way. Now, when the user re-sorts any column, I need to grab the re-sorted index and navigate to the next page, where the details of each row load up. The user is given the option to load the "next item" based on the index. How do I get this re-sorted index? Thank you. Answers: username_1: From this [answer](https://datatables.net/forums/discussion/26760/get-row-number-in-display-order-after-sort#Comment_72988):
```html
<div ng-controller="AngularWayWithOptionsCtrl as showCase">
  <table datatable="ng" dt-options="showCase.dtOptions" dt-column-defs="showCase.dtColumnDefs" class="row-border hover" dt-instance="showCase.dtInstance">
    <thead>
      <tr>
        <th>ID</th>
        <th>FirstName</th>
        <th>LastName</th>
      </tr>
    </thead>
    <tbody>
      <tr ng-repeat="person in showCase.persons" ng-click="showCase.displayIndex(person.id, $event)">
        <td>{{ person.id }}</td>
        <td>{{ person.firstName }}</td>
        <td>{{ person.lastName }}</td>
      </tr>
    </tbody>
  </table>
</div>
```
```js
angular.module('showcase.angularWay.withOptions', ['datatables', 'ngResource'])
  .controller('AngularWayWithOptionsCtrl', AngularWayWithOptionsCtrl);

function AngularWayWithOptionsCtrl($resource, DTOptionsBuilder, DTColumnDefBuilder) {
  var vm = this;
  vm.persons = [];
  vm.dtOptions = DTOptionsBuilder.newOptions()
    .withPaginationType('full_numbers')
    .withDisplayLength(2);
  vm.dtColumnDefs = [
    DTColumnDefBuilder.newColumnDef(0),
    DTColumnDefBuilder.newColumnDef(1).notVisible(),
    DTColumnDefBuilder.newColumnDef(2).notSortable()
  ];
  $resource('data.json').query().$promise.then(function(persons) {
    vm.persons = persons;
  });
  vm.dtInstance = {};
  vm.displayIndex = function(id, $event) {
    var index = vm.dtInstance.DataTable
      .rows({order: 'applied'})
      .nodes()
      .indexOf($event.currentTarget);
    console.log(id, index);
  };
}
```
username_0: Hey Lin, Thank you for your response. Really appreciated. This solution logs the current row index. I'm seeking a way to get the entire re-sorted table. This solution gave me a great head start and now I'm trying to use it to get what I need for the project. If you happen to know how to get it done then please let me know. Thank you so much. username_1: What about `vm.dtInstance.DataTable.rows({order: 'applied'}).nodes()`? username_0: I got it to work. This console logs the entire re-sorted table.
```
vm.displayIndex = function(id, $event) {
    var index = vm.dtInstance.DataTable.rows({order: "applied"})[0];
    for (var i = 0; i < index.length; i++) {
        console.log(vm.persons[index[i]]);
    }
};
```
username_0: Thank you so much Lin for that head start Status: Issue closed
matrix-org/synapse
1125656699
Title: None Question: username_0: This is due to mypy failing:
```
synapse/rest/media/v1/upload_resource.py:92: error: Argument 4 to "create_content" of "MediaRepository" has incompatible type "str"; expected "int" [arg-type]
```
Status: Issue closed
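For readers unfamiliar with this class of mypy error, a generic illustration (a hypothetical function, not Synapse's actual `create_content` signature):

```python
# Hypothetical function with the same shape of annotation mismatch.
def create_content(media_type: str, name: str, content: bytes, length: int) -> None:
    ...

# Runs fine at runtime, but mypy flags argument 4:
# incompatible type "str"; expected "int"  [arg-type]
create_content("image/png", "a.png", b"...", "100")
```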
ruHaskell/ruhaskell
306903175
Title: Bump year Question: username_0: I have noticed that the year in the footer is not up-to-date. <img width="673" alt="2018-03-20 17 58 10" src="https://user-images.githubusercontent.com/4660275/37662544-5a0be32e-2c68-11e8-9be5-ef7218ae7b3e.png"> Answers: username_1: Indeed. Would you like to submit the fix? username_0: No, I would not. username_1: done Status: Issue closed
python/pythondotorg
636973841
Title: Hello everyone, I am making an exe file from python(.py) using pyinstaller. The code requires connection to SQL database through cx_Oracle. Exe file gets crated successfully and runs on my PC properly. However, when I share it with a team member of mine, it does not run on some other PC and gives this error: "cannot open self abc or archive abc.pkg" (abc being the name of exe). It would be very nice if someone could help on the same. Status: Issue closed Answers: username_1: This issue tracker is for issues related to Python dot org website, not for questions about using Python. You can probably ask those questions in forums like stack overflow or python-list mailing list. Also, in the future, it would be great if you could post to post a short and concise issue title, and leave the long details in the description instead of in the subject. Thanks.
testing-cabal/mock
566530601
Title: 4.0.1: 'ValueError: Sentinels must not start with _' Question: username_0: Mostly a placeholder for now, more details forthcoming. With 4.0.1 (didn't try with 4.0.0 yet) several Nova unit tests start failing with traces like:
```
b'Traceback (most recent call last):'
b' File "/home/efried/openstack/nova/nova/tests/unit/virt/libvirt/test_host.py", line 589, in test_cpu_features_bug_1217630'
b" with mock.patch('nova.virt.libvirt.host.libvirt') as mock_libvirt:"
b' File "/home/efried/openstack/nova/.tox/py37/lib/python3.7/site-packages/oslotest/mock_fixture.py", line 171, in __enter__'
b' _lazy_autospec_method(mocked_method, original_attr, eat_self)'
b' File "/home/efried/openstack/nova/.tox/py37/lib/python3.7/site-packages/oslotest/mock_fixture.py", line 27, in _lazy_autospec_method'
b' _lazy_autospec = mock.create_autospec(original_method)'
b' File "/home/efried/openstack/nova/.tox/py37/lib/python3.7/site-packages/mock/mock.py", line 2679, in create_autospec'
b' name=_name, **_kwargs)'
b' File "/home/efried/openstack/nova/.tox/py37/lib/python3.7/site-packages/mock/mock.py", line 2076, in __init__'
b' _safe_super(MagicMixin, self).__init__(*args, **kw)'
b' File "/home/efried/openstack/nova/.tox/py37/lib/python3.7/site-packages/mock/mock.py", line 439, in __init__'
b' self._mock_add_spec(spec, spec_set, _spec_as_instance, _eat_self)'
b' File "/home/efried/openstack/nova/.tox/py37/lib/python3.7/site-packages/mock/mock.py", line 494, in _mock_add_spec'
b' if iscoroutinefunction(getattr(spec, attr, None)):'
b' File "/home/efried/openstack/nova/.tox/py37/lib/python3.7/site-packages/mock/backports.py", line 34, in iscoroutinefunction'
b" getattr(obj, '_is_coroutine', None) is _is_coroutine"
b' File "/home/efried/openstack/nova/.tox/py37/lib/python3.7/site-packages/oslo_utils/fixture.py", line 82, in __getattr__'
b" raise ValueError('Sentinels must not start with _')"
b'ValueError: Sentinels must not start with _'
```
There are a couple of other error types. Other projects are seeing this as well. Switching from `import mock` to `from unittest import mock` fixes at least a couple of them. Answers: username_0:
```
b"2020-02-17 15:51:28,040 INFO [nova.console.websocketproxy] handler exception: Expected int or long, got <class 'mock.mock._AutospecMagicMock'>"
...
b'Traceback (most recent call last):'
b' File "/home/efried/openstack/nova/nova/tests/unit/console/test_websocketproxy.py", line 627, in test_tcp_rst_no_compute_rpcapi'
b' self.assertIsNone(self.wh._compute_rpcapi)'
b' File "/home/efried/openstack/nova/.tox/py37/lib/python3.7/site-packages/testtools/testcase.py", line 426, in assertIsNone'
b' self.assertThat(observed, matcher, message)'
b' File "/home/efried/openstack/nova/.tox/py37/lib/python3.7/site-packages/testtools/testcase.py", line 498, in assertThat'
b' raise mismatch_error'
b'testtools.matchers._impl.MismatchError: <nova.compute.rpcapi.ComputeAPI object at 0x7f865ddfa710> is not None'
```
permalink: https://opendev.org/openstack/nova/src/commit/e69dbfa0d34d6b3f51282ac0ab51cdab3c2115e4/nova/tests/unit/console/test_websocketproxy.py#L627 username_1: We are seeing something similar:
```
Captured traceback:
~~~~~~~~~~~~~~~~~~~
Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/octavia/octavia/tests/unit/certificates/manager/test_barbican_legacy.py", line 81, in setUp
    self.secret1 = mock.Mock(spec=secrets.Secret)
  File "/home/zuul/src/opendev.org/openstack/octavia/.tox/py36/lib/python3.6/site-packages/mock/mock.py", line 1084, in __init__
    _spec_state, _new_name, _new_parent, **kwargs
  File "/home/zuul/src/opendev.org/openstack/octavia/.tox/py36/lib/python3.6/site-packages/mock/mock.py", line 439, in __init__
    self._mock_add_spec(spec, spec_set, _spec_as_instance, _eat_self)
  File "/home/zuul/src/opendev.org/openstack/octavia/.tox/py36/lib/python3.6/site-packages/mock/mock.py", line 494, in _mock_add_spec
    if iscoroutinefunction(getattr(spec, attr, None)):
  File "/home/zuul/src/opendev.org/openstack/octavia/.tox/py36/lib/python3.6/site-packages/mock/mock.py", line 2899, in __get__
    return self()
  File "/home/zuul/src/opendev.org/openstack/octavia/.tox/py36/lib/python3.6/site-packages/mock/mock.py", line 1100, in __call__
    return _mock_self._mock_call(*args, **kwargs)
  File "/home/zuul/src/opendev.org/openstack/octavia/.tox/py36/lib/python3.6/site-packages/mock/mock.py", line 1104, in _mock_call
    return _mock_self._execute_mock_call(*args, **kwargs)
  File "/home/zuul/src/opendev.org/openstack/octavia/.tox/py36/lib/python3.6/site-packages/mock/mock.py", line 1161, in _execute_mock_call
    raise effect
ValueError
```
username_2: @username_0 Please add a note on the version of Python. mock 4.0 added support for detecting coroutines so it can return AsyncMock. It tries to detect this by getting `_is_coroutine`. oslotest tries to do autospec and has a restriction that getattr cannot be used with `_is_coroutine`. From the traceback, if you are on Python 3.7, this is something that might affect your upgrade to 3.8 too. So in one way I see it as an upstream issue that is detected by the backport. username_2: @username_1 Please add a simplified reproducer showing how to reproduce the failure. From the traceback, yours seems to be a different issue. Thanks. username_2: The issue reported by @username_1 can be reproduced as below. Please correct me if I understood the issue incorrectly.
```
from unittest.mock import Mock, PropertyMock

class Foo:
    prop = PropertyMock(side_effect=ValueError)

mock = Mock(spec=Foo)
```
With upstream cpython mock on 3.7 there is no error. On Python 3.8 and master it triggers a ValueError. The cause in both reported issues is that we rely on getattr to detect coroutines, and that getattr can now trigger side effects that were not present before. Both will be upstream issues when the projects upgrade to 3.8. I am not sure of a way to get the attribute without triggering the side effect of getattr, which could possibly solve the issue. username_3: @username_0 / @username_1 - thanks for the reports, but these are separate issues and would have been better filed as such. @username_2 - thanks for the digging! Sounds like these are all upstream issues and so will need to be reported and fixed there so they can be backported. @username_0 / @username_1 - please can you report these issues on https://bugs.python.org/, and file each one separately. Feel free to ping the bpo urls here when you have them, and any PRs, so that I can get a new backport release out as soon as the fixes land on cpython master. Status: Issue closed username_4: I think the original issue reported here ('ValueError: Sentinels must not start with _') is caused by the violation of the `__getattr__` protocol in oslo utils [1]. I filed a bug in oslo utils [2] and pushed a fix [3]. [1] https://github.com/openstack/oslo.utils/blob/b9938230f992935e8332b6e288937be890724cd2/oslo_utils/fixture.py#L82 [2] https://bugs.launchpad.net/oslo.utils/+bug/1885281 [3] https://review.opendev.org/#/c/738207/
pry0cc/axiom
930735651
Title: No final result output file on FFUF module scan Question: username_0: I set up Axiom today and tried to do some test scans using the standard scan modules ffuf.json and ffuf_base.json. Both modules run without any problems. The only problem is that the final output csv file is empty, even though there are findings I see in the terminal output. I ran my fleet named "test" (10 instances) with the following commands:
```
axiom-fleet test -i=10
axiom-scan urls.txt -m ffuf_base -o ffuf-output.csv
```
After the scan is done, the terminal prints the following:
```
zsh:1: no matches found: test09:/home/op/scan/ffuf_base+1624720626/output/*
zsh:1: no matches found: test01:/home/op/scan/ffuf_base+1624720626/output/*
zsh:1: no matches found: test07:/home/op/scan/ffuf_base+1624720626/output/*
zsh:1: no matches found: test08:/home/op/scan/ffuf_base+1624720626/output/*
zsh:1: no matches found: test03:/home/op/scan/ffuf_base+1624720626/output/*
zsh:1: no matches found: test05:/home/op/scan/ffuf_base+1624720626/output/*
zsh:1: no matches found: test02:/home/op/scan/ffuf_base+1624720626/output/*
zsh:1: no matches found: test06:/home/op/scan/ffuf_base+1624720626/output/*
Generated 10 commands in total
Repeat set to 1
zsh:1: no matches found: test04:/home/op/scan/ffuf_base+1624720626/output/*
zsh:1: no matches found: test10:/home/op/scan/ffuf_base+1624720626/output/*
100%|██████████| 10/10 [00:00<00:00, 482.14it/s]
Mode set to CSV, merging...
Module [ ffuf_base ] | Time: [ 00h:08m:33s ] | Input: [ 20 targets ]
```
However, the file ffuf-output.csv is empty. Did I do something wrong or is the problem somewhere else? Thanks for your help in advance! Answers: username_1: @username_0 can you show me the findings displayed to the terminal? I haven't been able to recreate this. Those `no matches found` error messages seem to indicate there aren't any results. Status: Issue closed username_1: closing, unable to reproduce. feel free to reopen if you think it was a mistake to close
MiguelCastillo/amd-resolver
54643547
Title: After switching to Grunt for tests, compilation no longer happens before testing. Question: username_0: The gulp version of the tests ran the compilation step before running tests. Grunt does not do this. Solutions:
- Fix Gulp running tests so that tests work correctly.
- Make Grunt run gulp compilation task.
Answers: username_1: Yeah I had to switch back to grunt because the mocha-phantomjs integration in gulp is really shitty... XHR does not work. Once that's fixed, we can go back to an all gulp workflow again.
facebook/react-native
219973473
Title: ScrollView Inconsistency Between Platforms Question: username_0: Hi, I have a ScrollView with a TextInput, and when I click the TextInput the keyboard shows and the TextInput changes position to avoid being under the keyboard. This would be fine if iOS had the same behavior, but it doesn't. I don't understand why this happens, especially when we have the KeyboardAvoidingView component.
### Reproduction Steps and Sample Code
Just create a ScrollView with a TextInput with enough margin to be under the keyboard.
### Additional Information
* React Native version: 0.41.2
* Platform: Android
* Development Operating System: MacOS
Status: Issue closed Answers: username_1: Hi, on iOS the app goes behind the keyboard, and usually on Android the app resizes when the keyboard opens. So this isn't a difference with the RN ScrollView as much as it is a difference between iOS and Android username_0: @username_1 thanks for the answer, do you know any way to disable this on Android? username_2: @username_0 try changing the value of `android:windowSoftInputMode` in your AndroidManifest.xml file. https://developer.android.com/guide/topics/manifest/activity-element.html#wsoft
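For concreteness, the manifest change username_2 points at would look roughly like this (a hedged sketch; `adjustPan` is one of several documented `windowSoftInputMode` values, and the right choice depends on the desired behavior):

```xml
<!-- AndroidManifest.xml: stop the window from resizing when the keyboard opens. -->
<activity
    android:name=".MainActivity"
    android:windowSoftInputMode="adjustPan" />
```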
cockroachdb/docs
713307567
Title: Convert the docs website to a Progressive Web App Question: username_0: ISSUE / FEATURE REQUEST: We could convert the website to a PWA (Progressive web app) by providing an "Add to Homescreen" button, which when clicked, can add it to our home screen to be read whenever needed. WOULD YOU LIKE TO WORK ON IT? Yes, I'd like to take this issue up and work on it. Answers: username_1: Can you elaborate on what value this will add to the user experience? Also, if you are looking for Hacktoberfest projects, the to-do apps repo might be of more interest to you: https://github.com/cockroachdb/cockroachdb-todo-apps username_0: It might not add much to the user experience, but since a PWA also works offline, the docs can be read by the user whenever they want and also since PWAs have access to a user's phone's push notifications, it can lead to higher reach. Talking about the to-do repo, yeah I've looked into it. I'll start working on it soon😊 username_1: Users can bookmark the site and access it when they need it. Not sure we need an app that pushes notifications to drive readership to the docs -- we don't want to spam the users. username_0: OK cool then. I'll work on the to-do app👍 Status: Issue closed username_2: Closing this issue for now!
roots/trellis-cli
765710984
Title: Discussion: How should we support using `trellis-cli` as a library? Question: username_0:
#### Problem
As of https://github.com/roots/trellis-cli/commit/5ec0cea57c35b130f76e0ea4976b3acc25703114 `trellis-cli` uses [`mitchellh/cli/ui`](https://github.com/mitchellh/cli/blob/5454ffe87bc5c6d8b6b21c825617755e18a07828/ui.go) everywhere. Third-party developers shouldn't be forced to use `mitchellh/cli`, especially when they are using [`spf13/cobra`](https://github.com/spf13/cobra/) or do not intend to print the outputs. In `itinerisltd/trellis-cyberduck`, I ended up rewriting [`Playbook`](https://github.com/roots/trellis-cli/blob/5ec0cea57c35b130f76e0ea4976b3acc25703114/cmd/playbook.go) and [`AdHocPlaybook`](https://github.com/roots/trellis-cli/blob/5ec0cea57c35b130f76e0ea4976b3acc25703114/cmd/ad_hoc_playbook.go) with [cobra's io writers](https://github.com/ItinerisLtd/trellis-cyberduck/blob/d1bb184abf70e69b55213979a48f525e4edb4d81/lib/io.go#L24-L29). See: https://github.com/ItinerisLtd/trellis-cyberduck/tree/d1bb184abf70e69b55213979a48f525e4edb4d81/lib
#### Discussion
How should we expose/re-structure/refactor `trellis-cli` code so that developers can use it as a library with minimal dependencies? Answers: username_1: Counterpoint: why not? Consistency is a big part of a good developer experience, and that becomes way harder if plugins use different CLI libraries. In my opinion we should just ensure that the trellis-cli package is properly structured to be consumed in plugins so that they don't have to re-invent any main functionality themselves. Status: Issue closed
opserver/Opserver
34200667
Title: Is there any wiki on adding new features to this? Question: username_0: I am trying to modify this to add in some more features that would be useful to our organization and I'm having trouble figuring out where I will need to make changes. It's built to be pretty extensible, but I'm just having a bit of trouble on where to get started. Answers: username_1: It's *mostly* built to be extensible, but it's not really friendly outside the core project yet. Plugins are a very complicated thing, mostly because of the controllers, routes, and views. I'm working with the Microsoft side to improve the situation here. I'm also doing a very large overhaul of the entire codebase, views, and styles ([in the overhaul branch](https://github.com/opserver/Opserver/commits/overhaul)). The views and styles work is one piece, since being on Bootstrap means a) it's themeable, and b) there's a full, established set of styles to make the view side much easier for anyone adding bits to core or plugins. Actually getting the views and controllers pluggable is a bigger problem though; I'm prodding to work on this, but it'll likely be post-Core 1.0 MVC RTM before there's any good solution in place. Another *huge* piece is documentation, which is an artifact of lack of time. Along with the overhaul -> master branch I want to do a ton of setup and usage documentation, prepping for plugins there as well. Regardless of *actual* plugins though, I'll be working on eliminating enums, etc. in a branch, to make the actual codebase easier to extend. Right now it's doable, but there are many, many things that can be made better. I need to do it for plugins, but there's no reason to delay; I can improve the branch approach now. username_2: Any way we could help you? username_3: Any news on the plugin side?
aloneguid/parquet-dotnet
841733689
Title: Creator metadata field contains placeholders instead of Parquet.Net version and commit hash Question: username_0:
**Version:** Parquet.Net v3.8.1
**Runtime Version:** .Net 5.
**OS:** macOS.
#### Expected behavior
The creator string embedded in Parquet files should include the Parquet.Net library name and version, like `Parquet.Net version 3.8.1 (build 7cc73e098ee08943c9a87d4eca4bba1795fbc1ba)`
#### Actual behavior
The creator string is literally `Parquet.Net version %Version% (build %Git.LongCommitHash%)`
### Steps to reproduce the behavior
1. Create a parquet file using Parquet.Net
2. Inspect this parquet file using e.g. `parquet-tools`
#### Code snippet reproducing the behavior
```sh
parquet-tools meta any-file-created-by-Parquet.Net.parquet 2>/dev/null | grep creator
creator: Parquet.Net version %Version% (build %Git.LongCommitHash%)
```
Status: Issue closed Answers: username_1: fixed in the next release
jlippold/tweakCompatible
309131845
Title: `NoLockScreenCam` working on iOS 10.3.3 Question: username_0:
```
{
  "packageId": "com.i0stweak3r.nolockscreencam",
  "action": "working",
  "userInfo": {
    "arch32": false,
    "packageId": "com.i0stweak3r.nolockscreencam",
    "deviceId": "iPhone7,1",
    "url": "http://cydia.saurik.com/package/com.i0stweak3r.nolockscreencam/",
    "iOSVersion": "10.3.3",
    "packageVersionIndexed": false,
    "packageName": "NoLockScreenCam",
    "category": "Tweaks",
    "repository": "BigBoss",
    "name": "NoLockScreenCam",
    "packageIndexed": true,
    "packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.",
    "id": "com.i0stweak3r.nolockscreencam",
    "commercial": false,
    "packageInstalled": true,
    "tweakCompatVersion": "0.0.6",
    "shortDescription": "Disallows accessing camera from lock screen.",
    "latest": "1.0.3-1",
    "author": "i0s_tweak3r",
    "packageStatus": "Unknown"
  },
  "base64": "<KEY>",
  "chosenStatus": "working",
  "notes": "Had to turn enable switch off and back on then worked flawlessly."
}
```
kalexmills/github-vet-tests-dec2020
759181695
Title: projectatomic/atomic-enterprise: Godeps/_workspace/src/github.com/GoogleCloudPlatform/kubernetes/pkg/kubelet/dockertools/manager.go; 55 LoC Question: username_0: [Click here to see the code in its original context.](https://github.com/projectatomic/atomic-enterprise/blob/fe3f07ee509de38099565693aecc74e555f971ee/Godeps/_workspace/src/github.com/GoogleCloudPlatform/kubernetes/pkg/kubelet/dockertools/manager.go#L1318-L1372) <details> <summary>Click here to show the 55 line(s) of Go which triggered the analyzer.</summary> ```go for index, container := range pod.Spec.Containers { expectedHash := kubecontainer.HashContainer(&container) c := runningPod.FindContainerByName(container.Name) if c == nil { if kubecontainer.ShouldContainerBeRestarted(&container, pod, &podStatus, dm.readinessManager) { // If we are here it means that the container is dead and should be restarted, or never existed and should // be created. We may be inserting this ID again if the container has changed and it has // RestartPolicy::Always, but it's not a big deal. glog.V(3).Infof("Container %+v is dead, but RestartPolicy says that we should restart it.", container) containersToStart[index] = empty{} } continue } containerID := kubeletTypes.DockerID(c.ID) hash := c.Hash glog.V(3).Infof("pod %q container %q exists as %v", podFullName, container.Name, containerID) if createPodInfraContainer { // createPodInfraContainer == true and Container exists // If we're creating infra containere everything will be killed anyway // If RestartPolicy is Always or OnFailure we restart containers that were running before we // killed them when restarting Infra Container. if pod.Spec.RestartPolicy != api.RestartPolicyNever { glog.V(1).Infof("Infra Container is being recreated. %q will be restarted.", container.Name) containersToStart[index] = empty{} } continue } // At this point, the container is running and pod infra container is good. // We will look for changes and check healthiness for the container. containerChanged := hash != 0 && hash != expectedHash if containerChanged { glog.Infof("pod %q container %q hash changed (%d vs %d), it will be killed and re-created.", podFullName, container.Name, hash, expectedHash) containersToStart[index] = empty{} continue } result, err := dm.prober.Probe(pod, podStatus, container, string(c.ID), c.Created) if err != nil { // TODO(vmarmol): examine this logic. glog.V(2).Infof("probe no-error: %q", container.Name) containersToKeep[containerID] = index continue } if result == probe.Success { glog.V(4).Infof("probe success: %q", container.Name) containersToKeep[containerID] = index continue } glog.Infof("pod %q container %q is unhealthy (probe result: %v), it will be killed and re-created.", podFullName, container.Name, result) containersToStart[index] = empty{} } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: fe3f07ee509de38099565693aecc74e555f971ee
saltstack/salt
129520752
Title: pkg.get_locked_pkgs is not available Question: username_0: Hi, When using the pkg module to lock packages, I am not able to list the packages that are currently locked. Both pkg.hold and pkg.unhold are working as expected. ```` [root@paprika app]# salt --version salt 2015.5.5 (Lithium) [root@paprika app]# salt 'mulch*' pkg.hold httpd mulch: ---------- httpd: ---------- changes: ---------- new: hold old: comment: Package httpd is now being held. name: httpd result: True [root@paprika app]# salt 'mulch*' pkg.get_locked_pkgs mulch.e4bh.internal: 'pkg.get_locked_pkgs' is not available. ERROR: Minions returned with non-zero exit code [root@paprika app]# salt 'mulch*' pkg.list_holds mulch.e4bh.internal: 'pkg.list_holds' is not available. ERROR: Minions returned with non-zero exit code [root@paprika app]# salt 'mulch*' pkg.unhold httpd mulch: ---------- httpd: ---------- changes: ---------- new: old: hold comment: Package httpd is no longer held. name: httpd result: True ```` Answers: username_1: @username_0, `pkg.get_locked_pkgs` is only available on yum-based systems. username_0: @username_1 I am running Oracle Linux which uses yum as the package handler username_1: @username_0, thanks for the extra information. I am not sure why some `pkg` functions are available and others are not. If you do `salt-call pkg.get_locked_pkgs` directly on the minion, does it result in the same error. What happens if you upgrade to 2015.5.9 or 2015.8.4? username_0: @username_1 i was unsuccessful running `salt-call pkg.get_locked_pkgs ` from the minion ```` [root@mulch ~]# salt-call pkg.get_locked_pkgs [DEBUG ] Configuration file path: /etc/salt/minion [DEBUG ] Reading configuration from /etc/salt/minion [DEBUG ] Including configuration from '/etc/salt/minion.d/_schedule.conf' [DEBUG ] Reading configuration from /etc/salt/minion.d/_schedule.conf [DEBUG ] Including configuration from '/etc/salt/minion.d/schedule.conf' [DEBUG ] Reading configuration from /etc/salt/minion.d/schedule.conf [DEBUG ] The `dmidecode` binary is not available on the system. GPU grains will not be available. [DEBUG ] Initializing new SAuth for ('/etc/salt/pki/minion', 'mulch.e4bh.internal', 'tcp://172.19.33.11:4506') [DEBUG ] Decrypting the current master AES key [DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem [DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem [DEBUG ] LazyLoaded jinja.render [DEBUG ] LazyLoaded yaml.render [DEBUG ] Error loading module.npm: npm execution module could not be loaded because the npm binary could not be located [ERROR ] You should upgrade pyOpenSSL to at least 0.14.1 to enable the use of X509 extensions [DEBUG ] Error loading module.nacl: libnacl import error, perhaps missing python libnacl package [DEBUG ] Error loading module.ipmi: No module named pyghmi.ipmi [DEBUG ] Could not LazyLoad pkg.get_locked_pkgs 'pkg.get_locked_pkgs' is not available. ``` When I updated to 2015.5.9 I had the same results as on 2015.5.5 (the command didnt work) When I updated to 2015.8.4 `salt-call pkg.get_locked_pkgs ` didn't work, but `salt-call pkg.list_holds` was successful, as was running `salt 'mulch*' pkg.list_holds` from the master. (From the docs - salt.modules.yumpkg.list_holds Changed in version Boron,2015.8.4,2015.5.10: Function renamed from pkg.get_locked_pkgs to pkg.list_holds.) Status: Issue closed username_2: Closing since this is working with the new(er) function name (`pkg.list_holds` instead of `pkg.get_locked_pkgs`) on updated versions.
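For illustration, a minimal sketch of calling the hold-listing function from the master through Salt's Python client across the rename (`pkg.get_locked_pkgs` became `pkg.list_holds` in 2015.5.10 / 2015.8.4). The target pattern mirrors the thread, and the fallback logic is an assumption, not something from the issue:
```python
# Sketch only: list package holds on minions that may predate the
# pkg.get_locked_pkgs -> pkg.list_holds rename.
import salt.client

local = salt.client.LocalClient()
ret = local.cmd("mulch*", "pkg.list_holds")
for minion, holds in ret.items():
    if isinstance(holds, str) and "is not available" in holds:
        # Older minions only expose the pre-rename function name.
        holds = local.cmd(minion, "pkg.get_locked_pkgs").get(minion)
    print(minion, holds)
```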
taniman/profit-trailer-enhancements
300751415
Title: Restart on Error Question: username_0: Example error message from the log: "There is something wrong with your bot. If this happens 3 times in a row, restart your bot." It would be nice if the bot restarted itself. Answers: username_1: In my humble opinion we would need to attack the source, that is to say the actual reason why "something is wrong with your bot", rather than just create a loop of restarting errors... username_2: Hope this one and #264 win
TaiBIF/camera-trap-vueapp
455748746
Title: 3.1 "計畫管理"按鈕在1024寬的解析度下文字斷行 Question: username_0: "計畫管理"按鈕在1024寬的解析度下文字斷行 (如下圖所示) <img width="1011" alt="螢幕快照 2019-06-13 下午9 30 14" src="https://user-images.githubusercontent.com/9169534/59437051-64f09a00-8e23-11e9-95c4-427d5d83d854.png"> Answers: username_0: Fixed <img width="1010" alt="螢幕快照 2019-06-14 下午2 41 44" src="https://user-images.githubusercontent.com/9169534/59488671-c01d9d80-8eb2-11e9-971a-6ff6e14721f1.png"> username_0: Merged at commit 52562d433a1639428ed10295b309ee44fb219bdc [52562d4] Status: Issue closed
VATSIM-UK/UK-Sector-File
197498139
Title: SCO line display Question: username_0: In GitLab by @hsugden on Dec 31, 2015, 18:23 All SCO lines displaying to appropriate sectors Answers: username_0: In GitLab by @hsugden on Feb 3, 2016, 22:59 mentioned in commit 153757f6e2b4f3b9e826137dee8870ffffde48f6 username_0: In GitLab by @hsugden on Feb 3, 2016, 23:05 Milestone removed username_0: In GitLab by @hsugden on Feb 3, 2016, 23:05 Now an ongoing project to update line display! Status: Issue closed
Yelp/swagger_spec_validator
335900767
Title: No way to define empty array as default value Question: username_0: Hello!
We had code working with v2.1.0 that got broken recently. Our spec had two array params (one CSV, one SSV) that are optional on the client side, with a default in the spec so that the server code (which gets parameters automatically validated / transformed according to the swagger spec) can work with a list (empty or not) without needing type checks or none checks everywhere.
Spec: `type: array`, `collectionFormat: csv`, `default: ''`
This used to be transformed into an empty list automatically. This seemed perfect: clients don't have to send the param, the lib applies transforms to the default (split on comma or space), the server code always gets a list.
After updating to 2.3.1, `default: ''` fails because it's not a list, and `default: []` fails because a list does not have a split method.
From a quick search, it seems like the default value should be of the specified type (so list and not string), which I think was done in #95, but then it should be allowed to specify an empty list as default.
Thanks!
Status: Issue closed
Answers:
username_1: Hello @username_0 thanks a lot for reporting this.
Starting from version 2.2.0 swagger-spec-validator ensures that the default values are valid according to the specs (this was done in #82).
In the case of `type: array` the default value `''` is not valid as it is not an array; `collectionFormat` defines how the items will be represented on _the wire_, not what the objects should look like in your code. So I would say it is correct that spec validation fails if `default: ''`.
swagger-spec-validator>=2.2.0 also enhanced validation capabilities. According to the [swagger specs](https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md), in the case of `type: array` the attribute `items` is required, so in this case too I would say that the tool is doing its job.
My suggestion would be to define your specs as:
```yaml
type: array
collectionFormat: csv
default: []
items:
  type: string  # string is just an example
```
I'm closing this issue, feel free to reopen it if you think that's the case
username_0: I fully understand and agree with the first part; it makes sense that `default: ''` is rejected, like I said in my OP. I did not include the full spec for brevity, but we do have `items: type: string` in both cases. `default: []` causes a type error from some part of swagger_spec_validator that tries to split a string.
username_1: Could you publish the whole specs or equivalent specs that are able to trigger the issue? Without having access to the problematic specs it's hard to identify where the issue could be username_0: ```yaml - name: include in: query type: array items: type: string enum: - first - second collectionFormat: csv default: [] ``` username_1: Tried to reproduce the issue on swagger-spec-validator (versions 2.1.0, 2.2.0, 2.3.0 and 2.3.1) with the following script ```python from swagger_spec_validator.__about__ import __version__ from swagger_spec_validator.validator20 import validate_spec from yaml import safe_load spec_yaml = """ swagger: '2.0' info: title: Example of Specs version: '1.0' paths: /endpoint: get: parameters: - name: include in: query type: array items: type: string enum: - first - second collectionFormat: csv default: [] responses: default: description: Any Response """ spec_dict = safe_load(spec_yaml) print('swagger-spec-validator=={}'.format(__version__)) validate_spec(spec_dict) print('Valid specs') ``` and specs are under all the versions ``` C02QJ1JPG8WL:Desktop maci$ virtualenv venv --python python3.6 Running virtualenv with interpreter /usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/bin/python3.6 Using base prefix '/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6' New python executable in /Users/maci/Desktop/venv/bin/python3.6 Also creating executable in /Users/maci/Desktop/venv/bin/python Installing setuptools, pip, wheel...done. maci-host:Desktop maci$ source venv/bin/activate (venv) maci-host:Desktop maci$ pip install pyyaml swagger-spec-validator==2.1.0 &> /dev/null && python t.py sys.version_info(major=3, minor=6, micro=5, releaselevel='final', serial=0) swagger-spec-validator==2.1.0 Valid specs (venv) maci-host:Desktop maci$ pip install pyyaml swagger-spec-validator==2.2.0 &> /dev/null && python t.py sys.version_info(major=3, minor=6, micro=5, releaselevel='final', serial=0) swagger-spec-validator==2.2.0 Valid specs (venv) maci-host:Desktop maci$ pip install pyyaml swagger-spec-validator==2.3.0 &> /dev/null && python t.py sys.version_info(major=3, minor=6, micro=5, releaselevel='final', serial=0) swagger-spec-validator==2.3.0 Valid specs (venv) maci-host:Desktop maci$ pip install pyyaml swagger-spec-validator==2.3.1 &> /dev/null && python t.py sys.version_info(major=3, minor=6, micro=5, releaselevel='final', serial=0) swagger-spec-validator==2.3.1 Valid specs (venv) maci-host:Desktop maci$ ``` username_0: Thanks for the checks! The error I observed comes from bravado_code, which does additional processing after validate_spec is done. Status: Issue closed username_2: I am having this same issue. Swagger is returning a null or corrupt data. username_1: @username_2 could you publish your input and the received stacktrace? Having such info might help us to provide some guidance
phan/phan
567149473
Title: Infer that DOMNodeList is a collection of DOMElements Question: username_0: Currently, Phan doesn't do this (e.g. for `foreach`), and can't represent it in the internal stubs.
Answers:
username_1: DOMNodeList can also contain DOMText (which is not a subclass of DOMElement).
username_0: I forgot about that. Methods such as getElementsByTagName() should return `DOMNodeList<DOMElement>` (which Phan should infer contains DOMElements); properties such as DOMElement->childNodes should return a regular `DOMNodeList` (which Phan should infer contains DOMNodes).
dokku/dokku
734917911
Title: HTTPS Deployment Question: username_0: Does dokku support pushing to git over HTTPS? If not, are there any plans to add support in the future?
I thought of creating a git container with Smart HTTP set up that has dokku's home directory mounted as the repositories directory in the container. Are there any security concerns about this setup other than maintaining the credentials and serving over HTTPS? And how can a dokku deployment be triggered, considering the git server is containerized?
Answers: username_1: There is a project we've contributed to [here](https://github.com/dokku/git-http-backend) that implements the git-http-backend in golang (well, really it's just a wrapper around the git CLI, but good enough) that you can use as the basis of your own work (and it is the basis of similar functionality in my own, internal UI project).
Regarding mounting the path into a container, you'd still need a way to execute the `dokku` binary (and all the things it depends on), which run outside of that container. You could probably use our Docker image as a base though, and just have all the directories be mounts, which would work fine.
That said, this isn't something we're going to do in the Dokku core. There are lots of unanswered questions around authentication and git-server functionality that I'm not well equipped to address. If others want to investigate this, they are more than welcome to, but it's not going to be something I can do for free.
Hope that helps. Feel free to jump on slack with further questions.
Status: Issue closed
ManageIQ/manageiq-api
272240824
Title: bulk query doesn't work if model guid is implemented as a virtual attribute. Question: username_0: The problem exposes itself when trying to add support for bulk POST "query" for /api/container_images: guid is an alias method to docker_id. Answers: username_1: @username_0 is this still a valid issue? If yes, please remove the stale label. If not, can you close it? If there's no update by next week, I'll be closing this issue.
Hexworks/zircon
758822283
Title: Add the option to keep the Component's properties when it is attached. Question: username_0: Currently if a `Component` is attached to a `Container` its properties (`ComponentProperties`) will be updated from the `Container`'s. This is rather confusing sometimes. We should add an `updateOnAttach` flag to let the user choose the update strategy (whether or not to update when a component is attached).<issue_closed> Status: Issue closed
ruricolist/serapeum
503399870
Title: Guix package compilation error Question: username_0: ```lisp ; compiling (DEFUN FLOAT-PRECISION-CONTAGION ...) ; file: /gnu/store/81mwbkhih0v063r2cbs2b9kp70dk4l39-sbcl-serapeum-0.0.0-0.9cc0f9c/share/common-lisp/sbcl-source/serapeum/numbers.lisp ; in: DEFUN FLOAT-PRECISION-CONTAGION ; (SERAPEUM:OP ; (+ SERAPEUM::_ SERAPEUM::ZERO)) ; ; caught ERROR: ; during macroexpansion of ; (OP ; (+ _ ZERO)). ; Use *BREAK-ON-SIGNALS* to intercept. ; ; The function SERAPEUM/OP::EXTRACT-OP-ENV is undefined. ; (MAPCAR ; (SERAPEUM:OP ; (+ SERAPEUM::_ SERAPEUM::ZERO)) ; SERAPEUM::NS) ; --> LET ; ==> ; (SB-KERNEL:%COERCE-CALLABLE-TO-FUN ; (SERAPEUM:OP ; (+ SERAPEUM::_ SERAPEUM::ZERO))) ; ; note: The first argument never returns a value. ; (SERAPEUM:OP ; (* SERAPEUM::_ 0)) ; ; caught ERROR: [...] ``` Any idea where that could come from? Answers: username_0: A clue: Guix uses `compile-bundle-op` and since `extract-op-env` is used by the `op` macro while unexported, it does not exist at compile time for the other files using `op`. username_0: Possible fixes: - Either export `extract-op-env` (at compile time). - Or turn it into a macro. username_1: I've pushed a potential fix; would you mind trying again? username_0: It worked, thanks! Can you explain why this works? I thought that `serial t` was enough to ensure all preceding files were a dependency of the current file. username_1: Different modules in a system can be serial or not serial. In Serapeum, the `:serial t` in the system definition only applies to the top level (the ordering of modules) -- the individual modules have `:serial nil`, just to avoid needless recompilation. But evidently there were some dependencies I missed. username_0: Oh, just notcied the `serial nil` now, makes total sense, thanks! Status: Issue closed
NRGI/rgi-assessment-tool
191725617
Title: Move assessment to reviewer failures Question: username_0: 1. Resubmit assessment
 * Country: Algeria
 * Issue: the reviewer could not see the flags
 * Solved: the assessment status was changed to `reviewer_started`
1. Manual change of the assessment status
 * Country: Cuba
 * Issue: the assessment did not appear in the list
 * _Solved_: Reassign the reviewer to the assessment
codeamp/panel
325477555
Title: Inherit environment Question: username_0: We should have an option to create a new environment that inherits secrets, services, etc. If you define a secret with the same name as the parent's, it overrides the parent's value. This would be useful for staging environments or PR branches. Status: Issue closed Answers: username_1: Closing this in favor of #244. This one is sort of confusing because it makes it seem like one env might be dependent on another's changes. The other makes it more clear that we only want this to occur when a new environment is created.
project-koku/koku
479121576
Title: Include CLI generation in Koku build/release pipeline Question: username_0: ## User Story As a Koku CLI user, I want a new version of the CLI to become available for use whenever changes are made to Koku. This means calling https://github.com/project-koku/koku/issues/979 during the build/release pipeline(s) of koku itself. ## Assumptions and Questions - End users are comfortable with downloading zip/tarball files from GitHub. - We rely on GitHub to provide downloads automatically on the Releases page as indicated by git tags. - Do we build and commit new bash CLI code upon every master commit? - Do we automatically tag the CLI repo for release whenever koku is tagged/released? - Do we manually tag the CLI repo for release? ## Acceptance Criteria - [ ] Tox or Jenkins runs the openapi-generator build image automatically. - [ ] Automatically tag the CLI repo upon release.<issue_closed> Status: Issue closed
testdouble/testdouble.js
214453904
Title: add `withContext` option Question: username_0: Proposal: allow test double stubbing/verification configuration to specify the value of `this` that's bound when the test double function is called.
```js
td.when(myTd(), {withContext: window}).thenReturn(5)

myTd() // undefined
myTd.bind(window)() // 5
```
And for verify:
```js
myTd()
td.verify(myTd(), {withContext: window}) // blows up

myTd.bind(window)()
td.verify(myTd(), {withContext: window}) // passes
```
Also, argument matchers should be supported:
```js
myTd.bind(new Promise(() => {}))()
td.verify(myTd(), {withContext: td.matchers.argThat(arg => arg instanceof Promise)}) // passes
```
Thoughts?
Answers: username_1: I'm wondering: if the function under test depends on `this` being some value, wouldn't it fail if called with the wrong context?
hermidalc/nci-ctd2-dashboard
137383445
Title: some references to mouse genes versus human genes? Question: username_0: *Originally reported by*: @paulclemons Looks like some of the genes in this page: http://cbio.mskcc.org/ctd2-dashboard/#submission/2175456 refer to mouse genes in the website, but I'm not sure that was the submitter intent (Ken in one case, Karin in others) Benjamin and I talked about a couple of these on the phone, but I'm not sure all were resolved, possibly due to case-sensitive versus insensitive matches?: **from the telephone discussion (and subsequent email): ** 3. corrected gene symbols -- these three stand as corrected: 3a: FLJ32065 -> AMZ2P1 is correct per genenames.org (occurs in columbia_marina_analysis) 3c: FGFR -> FGFR1 is correct per genenames.org (occurs in cshl_tier4_fgf19_story) 3d: WNT -> WNT1 is correct per genenames.org (occurs in dfci_tier4_beta-catenin_story) this one also is a correct re-assignment, but I think it happens in a different submission than you report: 3b: C/EBP -> CEBPA is correct per genenames.org (actually occurs in columbia_mra_fet_analysis rather than columbia_joint_mr_shrna_diff) and as discussed these 2 'dropped' ones you were going to add back (since they correctly occur in columbia_tier4_glioma_story): C/EBPb -> CEBPB is correct per genenames.org (actually occurs in columbia_mra_fet_analysis rather than columbia_joint_mr_shrna_diff) C/EBPd -> CEBPD is correct per genenames.org (actually occurs in columbia_mra_fet_analysis rather than columbia_joint_mr_shrna_diff) **here's what i find investigating the same genes in the website:** FLJ32065 -> AMZ2P1 observation: http://cbio.mskcc.org/ctd2-dashboard/#/observation/2167434 appears to be human: http://cbio.mskcc.org/ctd2-dashboard/#/subject/1486486 FGFR -> FGFR1 observation: http://cbio.mskcc.org/ctd2-dashboard/#/observation/2177503 appears to map to mouse: http://cbio.mskcc.org/ctd2-dashboard/#/subject/1595693 rather than human: http://cbio.mskcc.org/ctd2-dashboard/#subject/1387397 WNT -> WNT1 observation: http://cbio.mskcc.org/ctd2-dashboard/#/observation/2177514 appears to (correctly, I think) map to human here: http://cbio.mskcc.org/ctd2-dashboard/#/subject/1413389 and not to mouse: http://cbio.mskcc.org/ctd2-dashboard/#subject/1623075 C/EBP -> CEBPA observation: http://cbio.mskcc.org/ctd2-dashboard/#/observation/2175514 appears to map to human: http://cbio.mskcc.org/ctd2-dashboard/#/subject/1381519 and not to mouse: http://cbio.mskcc.org/ctd2-dashboard/#subject/1590156 C/EBPb -> CEBPB both observations: http://cbio.mskcc.org/ctd2-dashboard/#/observation/2177492 http://cbio.mskcc.org/ctd2-dashboard/#/observation/2175647 appear to map to human: http://cbio.mskcc.org/ctd2-dashboard/#/subject/1381524 and not to mouse: http://cbio.mskcc.org/ctd2-dashboard/#subject/1590169 C/EBPd -> CEBPD both observations: http://cbio.mskcc.org/ctd2-dashboard/#/observation/2177492 http://cbio.mskcc.org/ctd2-dashboard/#/observation/2175533 appear to map to mouse: http://cbio.mskcc.org/ctd2-dashboard/#subject/1590180 and not to human: http://cbio.mskcc.org/ctd2-dashboard/#subject/1381533 maybe this will resolve automatically with a future update of background data and submission data -- are the gene symbol matches case-sensitive in general? Status: Issue closed Answers: username_0: *Original comment by*: @paulclemons possible explanation is that gene symbols are using case-insensitive match (and therefore getting mouse) while Entrez Gene ID (which can only be one or the other) pulls the human. 
check out this example: http://cbio.mskcc.org/ctd2-dashboard/#/observation/2175838 and note potential cross-reference of this issue with ISSUE 53: https://bitbucket.org/cbio_mskcc/ctd2-dashboard/issue/53/submitter-uses-redundant-columns-to-same username_0: *Original comment by*: @paulclemons based on quick non-systematic inspection of some of the other observations, it seems that sometimes both human and mouse are mapped (though i think only one is meant by the submitter). Compare: http://cbio.mskcc.org/ctd2-dashboard/#subject/1377285 http://cbio.mskcc.org/ctd2-dashboard/#subject/1586365
nodes-ios/bitrise-step-nodes-custom-script
440423593
Title: iOS Add code obfuscation step in CI Question: username_0: Originally created by @username_1 in a different project and now migrated here.
## History
_comment by @nickskull on 26.03.2019_
@username_1 could you please add some more information to the issue? I think we need to set up some rules about titles, body, labels.
_comment by @username_1 on 26.03.2019_
@nickskull Of course, here it is.
### The Problem and the Solution:
So we have a client security requirement that requires source code obfuscation. To do so we have found the SwiftShield framework, which can perform the obfuscation for us before the archive step. In doing so it will rename classes, variables, etc. to something random, so that it will deter attackers from trying to reverse engineer the app and/or trying to access API keys from the project.
### Risks:
After the obfuscation step the code might not be archivable if there were errors while obfuscating the code. The framework's GitHub page contains a list of dos and don'ts to help with the headaches. I would recommend, though, that the obfuscation is done locally first so we can spot possible errors before they reach our CI.
### Alternatives:
https://github.com/Polidea/SiriusObfuscator
### Next Step:
Research and implement
conan-io/conan
735404683
Title: [question] how to force compiler in required packages? Question: username_0: Some packages, such as elfutils, do not support clang, but I need to use clang to build most of my other packages.
I know I can use `-s elfutils:compiler=gcc -e elfutils:CC=gcc`, but I would like to do this programmatically in conanfile.py during configure() or requirements(). How can I do this?
- [ ] I've read the [CONTRIBUTING guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
Answers: username_1: Hi @username_0
The "configuration" cannot be defined in recipes; it needs to be defined in profiles. Using profiles (and not command line arguments) is the recommended approach for using Conan in production. They can be composed and included, in case you don't want to repeat yourself.
Status: Issue closed
username_1: This response still holds. Adding settings values for dependencies in recipes is not a good idea; it creates a lot of problems, configuration conflicts, etc. The correct place to define configurations and settings is profile files, not recipes. Profiles have kept improving: they can be composed and included, and now they can also be templatized with jinja2 templates and made conditional on the platform, env-vars, etc. So I am closing this question as answered; let us know if you have any other question, thanks.
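To make the profile-based approach concrete, here is a rough sketch of such a profile under Conan 1.x syntax; the exact sections and the g++ pairing are assumptions, not taken from the thread:
```
# clang.profile (illustrative): per-package overrides live in profiles,
# not in conanfile.py.
include(default)

[settings]
compiler=clang
elfutils:compiler=gcc

[env]
elfutils:CC=gcc
elfutils:CXX=g++
```
Such a profile could then be selected with something like `conan install . -pr=clang.profile`.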
ionic-team/capacitor
442298485
Title: Override capacitor config from the CLI Question: username_0: Currently, my `webDir` option in `capacitor.config.json` is set to `src`: in this way I can develop on native using live-reload. The problem is when I actually want to build the app. Because the output folder is `www`, I would like to change the `webDir` to `www` as well before running `ionic capacitor run ios`. Apart from this specific use case, I think it would be generally good to have the option to override the configuration of `capacitor.config.json` from the CLI. The actual workaround would be to update the file on the CI for the specific use case. Answers: username_1: You don't have to set the `webDir` to `src` to use live reload. When using live reload, a new `server` object is added to `capacitor.config.json` with a `url` pointing to the live-reload server. We don't plan to add new commands, at least not in the short term. You can use `npx cap init` with the `--web-dir` option to set the `webDir` again, but it's not optimal as it will also ask you for the app name and id again. Status: Issue closed
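For reference, a rough sketch of what that live-reload `server` entry looks like in `capacitor.config.json`; the app id, name, and URL below are placeholders, not values from the thread:
```json
{
  "appId": "com.example.app",
  "appName": "example",
  "webDir": "www",
  "server": {
    "url": "http://192.168.1.68:8100"
  }
}
```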
curiouslearning/workshop_pick_one_1
228330189
Title: User testing reading Question: username_0: @username_0 commented on [Fri May 12 2017](https://github.com/curiouslearning/workshop_drag_into_place_2/issues/17) Please read through the following [Google Doc](https://docs.google.com/document/d/1LwflWZRQkhxMZbiI1<KEY>0nmV55tWbUQCzU64/edit?usp=sharing) for this week's required reading on user testing.<issue_closed> Status: Issue closed
awslabs/aws-service-catalog-puppet
445461812
Title: Stuck: Need to wait for stack completion Question: username_0: When deploying an update to a product that was a dependency of another product, the build hung on the below log line:
`Need to wait for stack completion`
The build waited on this step for 10 minutes before I stopped it, even though the stack finished deploying in about 30 seconds. A subsequent run of the pipeline (after stopping the initial hung build) succeeds. Looks like the waiter never returns?
Answers: username_1: Thanks for reporting this. I will try to recreate it.
username_1: Hi @username_0 I have not been able to recreate this. I followed these steps:
- product a is v1
- product b is v2
- product a depends on b
- change b version from v1 to v2
What sort of resources were changing? Was it adding resources, removing resources, or changing existing resources?
Status: Issue closed
username_0: Can't replicate this again, so calling it a one-off and closing for now.
coveooss/platform-client
482311540
Title: Do not use vapor for diff (too heavy) Question: username_0: ![image](https://user-images.githubusercontent.com/12199712/63269012-6f844100-c263-11e9-8613-e173ac6ce060.png)
It sometimes causes the page to crash.
Answers: username_0: Should replace `<script src="http://react-vapor.surge.sh/assets/bundle.js"></script>` with `<link rel="stylesheet" href="https://coveo.github.io/vapor/dist/css/CoveoStyleGuide.css" />`
Status: Issue closed
ansible/galaxy
439351370
Title: setup.py build drops hidden galaxy data files (e.g. .travis.yml) Question: username_0: Note: I have not examined the `package_data` in `setup.py` for any other potentially missing hidden file globs. There may be other issues; I was just trying to figure out what was causing `.travis.yml` to be omitted from the galaxy skeleton. Status: Issue closed Answers: username_1: @username_0 Thanks for bringing this to our attention. This should actually be filed at [the Ansible project](https://github.com/ansible/ansible/issues).
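For context, a minimal sketch of why this tends to happen: setuptools expands `package_data` patterns with glob semantics, where `*` does not match dotfiles, so hidden files need explicit entries. The package name and paths below are illustrative, not Galaxy's actual layout:
```python
# Hypothetical setup.py excerpt: '*' globs skip dotfiles, so hidden
# skeleton files such as .travis.yml must be listed explicitly.
from setuptools import setup, find_packages

setup(
    name="galaxy",
    packages=find_packages(),
    package_data={
        "galaxy": [
            "data/role_skeleton/*",            # matches regular files only
            "data/role_skeleton/.travis.yml",  # dotfile: explicit entry
        ]
    },
)
```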
swissbib/searchconf
102728460
Title: Improving phrase search Question: username_0: From a piece of user feedback:

Good day <NAME> xxxx

Thank you very much for your feedback, and thank you for still enjoying the swissbib service despite the rough edges that occasionally show up in search, even though I have repeatedly had to explain to you why search does not work so well in certain special cases.

From my point of view, the positive side of these explanations is that, as a rule, we know why something delivers poor results, and that we *fundamentally* have room for improvement, above all because our use of open-source technologies gives us great freedom to shape the system. At the moment we simply have too few hands, which is why we sometimes settle for merely average results. But I don't want to complain; instead, here are a few notes on the search you described:

The specific document you were looking for was not found because you searched for the author as a phrase: "<NAME>". A search for the terms K<NAME> would have been successful, although the result set is then larger.

What would also work in your case is specifying a so-called adjacency (distance value) for the terms within the phrase:

"<NAME>"~100

That means the search engine transposes the positions of the terms within the document up to a hundred times in total and thereby tries to find a match for the phrase.

A search for

"Inflation der Menschheit" "<NAME>"~100

returns exactly that document.

If you reduce the distance value (the number of transpositions), the results become more precise:

"Inflation der Menschheit" "<NAME>" => 0 hits

"Inflation der Menschheit" "<NAME>" => exact hit (since the terms were indexed in this order)

"Inflation der Menschheit" "<NAME>"~2 => exact hit (the engine tried twice to transpose the positions and found a match)

This adjacency search is poorly documented. To me it is also only *one* possibility; there are a number of others whose potential we have not yet exploited, above all in ways that would not require users to have any specialist knowledge of search strategies. In this field Google is my (technical) role model. If we can get close to that and, unlike Google, disclose our search strategies at the same time, then in my view we are on the right track. I am hopeful that we can still deliver quite a bit here.

Have a good day!

<NAME>

Dear Sir or Madam

Earlier I searched for the book "Inflation der Menschheit" by <NAME> and, since I did not have the title in my head, simply searched for "<NAME>". Surprisingly, I did not find the book, even though I had borrowed it from a library some time ago. On closer inspection I found that the author is listed in the catalogue record in question as "Fiedler, Kuno" and that swissbib's internal search engine therefore did not find the title. Perhaps it can be adapted so that it also finds the books in such cases?

Kind regards
xxxxxx, Basel
(a user who is otherwise  v e r y  happy with swissbib)
Answers: username_1: No, that normally works just fine, provided the name actually appears like that in the catalogue record. The author statement $$c in field 245 is missing. https://www.swissbib.ch/Record/260175773
Mein Anspruch: Die Suchmaschine hat etwas zu finden, auch wenn, wie in diesem Fall halt kein Eintrag in 245c gemacht wurde. Da ist Google um Meilen besser - die interessiert so etwas nicht. Die finden einfach... Günter username_1: ok :-) username_2: I don't think it is necessary to solve this <NAME> <NAME> -> this is fine "<NAME>" "<NAME>" don't deliver the same results. Which is intended with the ". Google also delivers different results for such cases. Status: Issue closed
JuliaLang/julia
162926289
Title: `identity(;kw...)` should work. Question: username_0: The `identity` function should accept keywords: `identity((2,4,6)...; Dict(:odd=>1,:odd2=>3)...)` #=should return=# `(2,4,6,Semicolon,(:odd,1),(:odd2,3))` ...for some value of `Semicolon`, probably `Symbol(";")`. If people don't want to do this (because it would break `f(#=stuff=#)==f(identity(#=stuff=#)...)`), then there should be something like `identity_with_kw`. Answers: username_0: On gitter, it was pointed out that this shouldn't be called `identity`, because `identity(1,2)` shouldn't equal `identity((1,2))`. So `argvalues` or (if "arg" sounds as if it means just positional arguments) `paramvalues` seems like a better name. Status: Issue closed username_1: `identity` shouldn't accept multiple arguments. This is instead describing the `tuple` function, where the kwargs form would need to return a structural tuple.
microsoft/onnxruntime
662952023
Title: Access to custom metadata of an .onnx model when creating an InferenceSession in a C# WPF application. Question: username_0: **Is your feature request related to a problem? Please describe.**
No.

**System information**
- ONNX Runtime version (you are using): Microsoft.ML.OnnxRuntime Nuget Package v1.4.0
- .Net Framework 4.7.2

**Describe the solution you'd like**
I want to access the custom metadata of my .onnx file in the C# application. It is important for my application to know the model type, which version it is, the numerical range of the input data, etc.

**Describe alternatives you've considered**
If the custom metadata is not accessible from the model file, I will be forced to create a separate file that contains the needed information. It would be very annoying to always load two files, and harder to keep track of.

**Additional context**
I am working with image segmentation, and the user will be able to choose which model to use for his problem. So I need to provide additional info about the different models. From the `InputMetadata` of the `InferenceSession` I can get the image size and dtype that are needed, but the `ModelMetadata` property is always empty.

I populate the .onnx model with custom metadata in Python with the onnxmltools (version 1.6.1) library:
```
import onnxmltools
model = onnxmltools.load_model("../model.onnx")
meta = model.metadata_props.add()
meta.key = "version"
meta.value = "0.0.1"
onnxmltools.utils.save_model(model, "../model_1.onnx")
```
After saving the model again I can see the custom properties, e.g. in [Netron](https://lutzroeder.github.io/netron/), but the C# API does not show them to me.

Is there a way to have the custom data I put into the .onnx model read when creating an InferenceSession?
Thanks in advance!
Daniel Answers: username_0: I found the empty `ModelMetadata` class with a TODO in InferenceSession.cs:
```
internal class ModelMetadata
{
    //TODO: placeholder for Model metadata. Currently C-API does not expose this.
}
```
Is someone currently working on this? username_1: You're welcome to contribute. The C API (now) exposes methods to fetch metadata associated with a model. username_0: So the TODO is outdated, and it would be possible to wrap the C methods in C#? Digging into the `NativeMethods.cs` class I found that the C# API is using `OrtApi` version 1. [NativeMethods.cs](https://github.com/microsoft/onnxruntime/blob/master/csharp/src/Microsoft.ML.OnnxRuntime/NativeMethods.cs#L157) <img width="745" alt="NativeMethods" src="https://user-images.githubusercontent.com/68106736/88262453-8380c000-ccc8-11ea-8535-631c0ee54d2b.png"> In the C API there are plenty of methods regarding metadata, but they all have this `ORT_API2_STATUS`: <img width="698" alt="C_api" src="https://user-images.githubusercontent.com/68106736/88262690-f4c07300-ccc8-11ea-8c1d-16bd54ab98c0.png"> [C api](https://github.com/microsoft/onnxruntime/blob/v1.4.0/include/onnxruntime/core/session/onnxruntime_c_api.h#L766) Do I need to update the whole C# API to work with this version 2? What would happen if I replace this 1 with a 2 in the OrtGetApi call? What if I want to import just these few methods; are versions 1 and 2 compatible in this regard? How does the import work? (I am not really good at C...) Thanks for your help!
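As a side note, the metadata can be read back with the onnxruntime Python API to confirm it is actually embedded in the file, independently of the C# bindings. This is a sketch assuming the Python API of that era, with the file name taken from the snippet above:
```python
# Sketch: verify that the custom metadata round-trips.
import onnxruntime

sess = onnxruntime.InferenceSession("../model_1.onnx")
meta = sess.get_modelmeta()
print(meta.producer_name)
print(meta.custom_metadata_map)  # expected to contain {'version': '0.0.1'}
```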
username_1: `Digging into the NativeMethods.cs class I found that the C# API is using the OrtApi version 1`: We have some work to do with respect to C# API versioning, but the good news is we get access to the same Api struct whatever version we request (for now), so no change is required: https://github.com/microsoft/onnxruntime/blob/ace41b80647a627805c766ef6b05e78a1e5cd206/onnxruntime/core/session/onnxruntime_c_api.cc#L1625
`Do I need to update the whole C# api to work with this version 2?` You need to bring in methods (but they don't need delegates) up to the metadata methods in the Api struct. You can use this PR for reference: https://github.com/microsoft/onnxruntime/pull/3934
If you are still having issues, I can get to this sometime next week. username_0: Hi! I have now forked the repository and created a branch for this feature. I was able to write the wrapping functions for the C methods. I can get most of the information of the model while debugging one of the unit tests.
[First commit with metadata methods](https://github.com/username_0/onnxruntime/commit/e521d8f185915447b86cab4f4922309b87406876)
I encountered several problems:
1. The custom key-value pairs are not there, or maybe the C methods do not return the right things. More likely, I am using them wrong.
2. The unit test which tests inference of squeezenet.onnx fails: the outputs of the network are all zero.
I also need help with the ModelMetadata class. I want to treat it the same as NodeMetadata, but I am not sure if that is the right thing. username_0: I just wanted to check if you had time to look into this problem. I was not able to get any further. Thanks for your efforts! username_1: Hi @username_0, Thanks for checking in. Sorry I missed your earlier message. I will definitely have this in the next release. Hopefully sooner. So just to confirm: you only want what is available via the C API, right? username_0: Hi @username_1, That is some good news! Yes, mainly access to `ModelMetadataLookupCustomMetadataMap` and `ModelMetadataGetCustomMetadataMapKeys`. I need those to provide custom information to the user. The rest, like `ModelMetadataGetProducerName`, `ModelMetadataGetGraphName`, etc., worked when I was trying this myself. username_1: ok - I am unable to provide a timeline right now, but I'll make sure it goes into the next release (hopefully it will be in master much earlier than that) username_1: Hi @username_0 - I have a PR for this now username_0: Great! Thanks a lot man, I appreciate the work 👍 If I want to use this before the next release, I have to build it from master after the PR gets accepted, right? Status: Issue closed
webjars/webjars
66590002
Title: Jqm-autocomplete Question: username_0: Hi there, here's a repository for jqm-autocomplete: https://github.com/username_0/jqm-automplete Status: Issue closed Answers: username_1: Thanks! Here is the fork: https://github.com/webjars/jqm-automplete This will be released shortly. username_0: Hi James, sorry for the trouble, but it seems like I messed this one up :( The upstream.url points to: https://github.com/commadelimited/autoComplete.js/blob/master/jqm.autoComplete-1.5.2.js which is an HTML page, instead of: https://raw.githubusercontent.com/commadelimited/autoComplete.js/master/jqm.autoComplete-1.5.2.js Sorry about that. Do you want me to change it in my repo, or are you going to change it in the webjars repo? username_1: Can you send a pull request? username_0: Done.
flyteorg/flyte
991338921
Title: [BUG] Differentiate between node overrides and platform default resources Question: username_0: **Describe the bug** The introduction of https://github.com/flyteorg/flytepropeller/pull/310 means that platform resource defaults are now applied as task resource defaults in the absence of task decorator resource specifications. For container tasks, this is fine. For pod tasks this results in the pod spec primary container definition resource spec being overwritten by the platform defaults. The latter really ought to be a default in the absence of a spec and not the primary value, which leads to pod spec attrs not being respected.

**Expected behavior** Node resource overrides and static task resource definitions should be treated as separate specs which get applied as necessary by plugin handlers when creating a pod spec to execute a task.
Azure/azure-sdk-for-python
373958828
Title: Azure Service Bus Proxy Broken? Question: username_0: Hi, i'm trying to use the Azure Service Bus SDK behind a Proxy, but no matter what i do i keep getting the exception below. I know that I don't have any problems with my proxy service, since it works fine with other Azure services such as the Azure BlockBlobService. I'm setting the proxy as follows sb.set_proxy('corp.xxx', '8080', 'username', '<PASSWORD>') But when I try to connect to the service i get this error. Any help would be much appreciated. Traceback (most recent call last): File "C:\Program Files\Python37\lib\site-packages\flask\app.py", line 2309, in __call__ return self.wsgi_app(environ, start_response) File "C:\Program Files\Python37\lib\site-packages\flask\app.py", line 2295, in wsgi_app response = self.handle_exception(e) File "C:\Program Files\Python37\lib\site-packages\flask\app.py", line 1741, in handle_exception reraise(exc_type, exc_value, tb) File "C:\Program Files\Python37\lib\site-packages\flask\_compat.py", line 35, in reraise raise value File "C:\Program Files\Python37\lib\site-packages\flask\app.py", line 2292, in wsgi_app response = self.full_dispatch_request() File "C:\Program Files\Python37\lib\site-packages\flask\app.py", line 1815, in full_dispatch_request rv = self.handle_user_exception(e) File "C:\Program Files\Python37\lib\site-packages\flask\app.py", line 1718, in handle_user_exception reraise(exc_type, exc_value, tb) File "C:\Program Files\Python37\lib\site-packages\flask\_compat.py", line 35, in reraise raise value File "C:\Program Files\Python37\lib\site-packages\flask\app.py", line 1813, in full_dispatch_request rv = self.dispatch_request() File "C:\Program Files\Python37\lib\site-packages\flask\app.py", line 1799, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "C:\Users\username_0m\git\converter-dsr-to-ulog\main.py", line 16, in parse_dsr send_to_sb(parser.device_id, parser.ulogs) File "C:\Users\username_0m\git\converter-dsr-to-ulog\main.py", line 45, in send_to_sb sb.send_ulog_batch(device_id, ulogs) File "C:\Program Files\Python37\lib\site-packages\draeger_ulog\azure.py", line 33, in send_ulog_batch self.create_topic(topic_name) File "C:\Program Files\Python37\lib\site-packages\azure\servicebus\servicebusservice.py", line 344, in create_topic self._perform_request(request) File "C:\Program Files\Python37\lib\site-packages\azure\servicebus\servicebusservice.py", line 1228, in _perform_request resp = self._filter(request) File "C:\Program Files\Python37\lib\site-packages\azure\servicebus\_http\httpclient.py", line 181, in perform_request self.send_request_body(connection, request.body) File "C:\Program Files\Python37\lib\site-packages\azure\servicebus\_http\httpclient.py", line 143, in send_request_body connection.send(request_body) File "C:\Program Files\Python37\lib\site-packages\azure\servicebus\_http\requestsclient.py", line 81, in send self.response = self.session.request(self.method, self.uri, data=request_body, headers=self.headers, timeout=self.timeout) File "C:\Program Files\Python37\lib\site-packages\requests\sessions.py", line 512, in request resp = self.send(prep, **send_kwargs) File "C:\Program Files\Python37\lib\site-packages\requests\sessions.py", line 622, in send r = adapter.send(request, **kwargs) File "C:\Program Files\Python37\lib\site-packages\requests\adapters.py", line 507, in send raise ProxyError(e, request=request) requests.exceptions.ProxyError: HTTPSConnectionPool(host='ulogdispatcher.servicebus.windows.net', port=443): Max 
retries exceeded with url: /8400 (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 407 authenticationrequired'))) Answers: username_1: @username_2 could AMQP help? Or do you know what's happening? username_2: Hi @username_0, @username_1 Unfortunately this is not yet supported in the new preview SDK either, for two reasons: - The HTTP control plane operation has not been changed, so the above error will still apply. - For AMQP operations this is not supported until we have websocket support, which is open issue #4250. Hopefully both of these will be resolved in the next preview release. username_3: Hey, @username_1, We are using version 0.2.11 of the service bus and we want to move to the AMQP implementation, but we need to use an http_proxy for all the operations; is this still an issue for version 0.50.2? username_4: Hello @username_0, @username_3, apologies for the delayed response. We will do a patch release for servicebus 0.50.2 which would bring in the amqp http proxy support. Apart from that, we have released a brand-new version of servicebus -- [azure-servicebus 7.0.0b2](https://pypi.org/project/azure-servicebus/7.0.0b2/) which already integrates http proxy support. It would be great if you could try out our latest preview version, which is part of our efforts to create a user-friendly client library. Status: Issue closed
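For anyone landing here later, a hedged sketch of what proxy configuration looks like in the new-style SDK (azure-servicebus 7.x); the connection string, queue name, and proxy details below are placeholders, and the shape of the `http_proxy` dict follows the uAMQP-backed clients:
```python
# Sketch only: HTTP proxy support in azure-servicebus 7.x.
from azure.servicebus import ServiceBusClient

http_proxy = {
    "proxy_hostname": "corp.xxx",  # proxy host from the question
    "proxy_port": 8080,
    "username": "username",
    "password": "password",
}

client = ServiceBusClient.from_connection_string(
    "Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=...",
    http_proxy=http_proxy,
)
with client:
    sender = client.get_queue_sender(queue_name="myqueue")
```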
cyberark/secretless-broker
341957790
Title: Clean up socket files after exit Question: username_0: Some handlers/listeners (like ssh-agent) do not clean up sockets after secretless exits, and don't try to recreate them either, so in some cases you can effectively run Secretless only once. AC: - [ ] Secretless can be started multiple times with all socket-based listeners<issue_closed> Status: Issue closed
anvaka/panzoom
186518288
Title: is it possible to zoom on click? Question: username_0: Instead of zooming on mousewheel, is it possible to zoom on double click? Answers: username_1: Yup, I think I can do it. Do you want to completely disable mouse wheel for zoom? Or have both mouse wheel and double click? username_0: I guess having the option to do both/disable one or the other would be cool, but if it was just adding the option to double click, that would be great. username_1: Oops, forgot to update here. Double click works for zoom. Status: Issue closed
pulibrary/cicognara-catalogo
139136037
Title: Automatic "bibl" and "note" tagging stopped Question: username_0: When I reached item 271 (ish), the items are no longer tagged with "bibl" and "note". This is the case for as far as I can see in the rest of Catalog 1. Is there a reason for this? Should I continue to add them manually? Answers: username_1: I'm not sure why I stopped tagging the "bibl" and "note" elements at that point. Yes: please do continue to add them by hand. Status: Issue closed
laurb9/rich-traceback
87405394
Title: Multiline tracebacks in syslog Question: username_0: A multiline traceback log message gets recorded as a single line with newlines replaced by #012. Setting the following in rsyslog.conf could prevent this, but see if there's a better way; maybe convert to multiple log records, individually tagged.
$EscapeControlCharactersOnReceive off
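One way to sidestep the escaping entirely is to emit each traceback line as its own record, so the syslog daemon never sees an embedded newline. A minimal standard-library sketch (the logger name and socket path are assumptions):
```python
# Sketch: log a traceback one line per syslog record instead of one
# multiline message that syslog flattens with #012.
import logging
import logging.handlers
import traceback

handler = logging.handlers.SysLogHandler(address="/dev/log")
log = logging.getLogger("rich-traceback-demo")
log.addHandler(handler)

try:
    1 / 0
except ZeroDivisionError:
    for line in traceback.format_exc().splitlines():
        log.error(line)
```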
recharts/recharts
1100256172
Title: add a new language; also, there are grammar mistakes in the docs Question: username_0: - [ ] I have searched the [issues](https://github.com/recharts/recharts/issues) of this repository and believe that this is not a duplicate.

### What problem does this feature solve?
Developers who do not speak English or Chinese will be able to understand the docs properly; also, there are a few words in the docs which I found a little bit confusing.

### What does the proposed API look like?
Language / Internationalization?
spring-projects/spring-boot
113844972
Title: RequestContextFilter Ordering Question: username_0: We attempted to fix RequestContextFilter ordering so that Spring Session (and other APIs) have a chance to wrap the HttpServletRequest before RequestContextFilter sets the RequestContextHolder. This is good, but we now have a bit of a chicken and the egg problem. For example, Spring Security OAuth typically uses session scoped beans which require the RequestContextHolder to be populated. Honestly, at this point I'm not sure what we want to do in this instance, but wanted to ensure we got this logged and discussed. An example stacktrace: ``` 2015-10-28 10:11:27.033 ERROR 29039 --- [nio-8080-exec-6] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception java.lang.IllegalStateException: No thread-bound request found: Are you referring to request attributes outside of an actual web request, or processing a request outside of the originally receiving thread? If you are actually operating within a web request and still receive this message, your code is probably running outside of DispatcherServlet/DispatcherPortlet: In this case, use RequestContextListener or RequestContextFilter to expose the current request. at org.springframework.web.context.request.RequestContextHolder.currentRequestAttributes(RequestContextHolder.java:131) ~[spring-web-4.2.2.RELEASE.jar:4.2.2.RELEASE] at org.springframework.web.context.request.SessionScope.get(SessionScope.java:91) ~[spring-web-4.2.2.RELEASE.jar:4.2.2.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:339) ~[spring-beans-4.2.2.RELEASE.jar:4.2.2.RELEASE] ... 63 common frames omitted Wrapped by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'scopedTarget.standardGitHubClient': Scope 'session' is not active for the current thread; consider defining a scoped proxy for this bean if you intend to refer to it from a singleton; nested exception is java.lang.IllegalStateException: No thread-bound request found: Are you referring to request attributes outside of an actual web request, or processing a request outside of the originally receiving thread? If you are actually operating within a web request and still receive this message, your code is probably running outside of DispatcherServlet/DispatcherPortlet: In this case, use RequestContextListener or RequestContextFilter to expose the current request. 
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:354) ~[spring-beans-4.2.2.RELEASE.jar:4.2.2.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:196) ~[spring-beans-4.2.2.RELEASE.jar:4.2.2.RELEASE] at org.springframework.aop.target.SimpleBeanTargetSource.getTarget(SimpleBeanTargetSource.java:35) ~[spring-aop-4.2.2.RELEASE.jar:4.2.2.RELEASE] at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:187) ~[spring-aop-4.2.2.RELEASE.jar:4.2.2.RELEASE] at com.sun.proxy.$Proxy61.getEmails(Unknown Source) ~[na:na] at com.gopivotal.cla.security.AdminEmailDomainFilter.isValidAdminUser(AdminEmailDomainFilter.java:58) ~[classes/:na] at com.gopivotal.cla.security.AdminEmailDomainFilter.doFilter(AdminEmailDomainFilter.java:50) ~[classes/:na] at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330) ~[spring-security-web-4.0.2.RELEASE.jar:4.0.2.RELEASE] at org.springframework.security.oauth2.client.filter.OAuth2ClientContextFilter.doFilter(OAuth2ClientContextFilter.java:60) ~[spring-security-oauth2-2.0.7.RELEASE.jar:na] at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330) ~[spring-security-web-4.0.2.RELEASE.jar:4.0.2.RELEASE] at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:114) ~[spring-security-web-4.0.2.RELEASE.jar:4.0.2.RELEASE] at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330) ~[spring-security-web-4.0.2.RELEASE.jar:4.0.2.RELEASE] at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:122) ~[spring-security-web-4.0.2.RELEASE.jar:4.0.2.RELEASE] at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330) ~[spring-security-web-4.0.2.RELEASE.jar:4.0.2.RELEASE] at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:169) ~[spring-security-web-4.0.2.RELEASE.jar:4.0.2.RELEASE] at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330) ~[spring-security-web-4.0.2.RELEASE.jar:4.0.2.RELEASE] at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:48) ~[spring-security-web-4.0.2.RELEASE.jar:4.0.2.RELEASE] at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330) ~[spring-security-web-4.0.2.RELEASE.jar:4.0.2.RELEASE] at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:120) ~[spring-security-web-4.0.2.RELEASE.jar:4.0.2.RELEASE] at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330) ~[spring-security-web-4.0.2.RELEASE.jar:4.0.2.RELEASE] at org.springframework.security.web.csrf.CsrfFilter.doFilterInternal(CsrfFilter.java:96) ~[spring-security-web-4.0.2.RELEASE.jar:4.0.2.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-4.2.2.RELEASE.jar:4.2.2.RELEASE] at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330) ~[spring-security-web-4.0.2.RELEASE.jar:4.0.2.RELEASE] at 
org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:64) ~[spring-security-web-4.0.2.RELEASE.jar:4.0.2.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-4.2.2.RELEASE.jar:4.2.2.RELEASE] at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330) ~[spring-security-web-4.0.2.RELEASE.jar:4.0.2.RELEASE] at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:91) ~[spring-security-web-4.0.2.RELEASE.jar:4.0.2.RELEASE] at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330) ~[spring-security-web-4.0.2.RELEASE.jar:4.0.2.RELEASE] at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:53) ~[spring-security-web-4.0.2.RELEASE.jar:4.0.2.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-4.2.2.RELEASE.jar:4.2.2.RELEASE] at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330) ~[spring-security-web-4.0.2.RELEASE.jar:4.0.2.RELEASE] at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:213) ~[spring-security-web-4.0.2.RELEASE.jar:4.0.2.RELEASE] at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:176) ~[spring-security-web-4.0.2.RELEASE.jar:4.0.2.RELEASE] at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346) ~[spring-web-4.2.2.RELEASE.jar:4.2.2.RELEASE] at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:262) ~[spring-web-4.2.2.RELEASE.jar:4.2.2.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239) ~[tomcat-embed-core-8.0.28.jar:8.0.28] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) ~[tomcat-embed-core-8.0.28.jar:8.0.28] at org.springframework.web.filter.HttpPutFormContentFilter.doFilterInternal(HttpPutFormContentFilter.java:87) ~[spring-web-4.2.2.RELEASE.jar:4.2.2.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-4.2.2.RELEASE.jar:4.2.2.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239) ~[tomcat-embed-core-8.0.28.jar:8.0.28] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) ~[tomcat-embed-core-8.0.28.jar:8.0.28] at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:77) ~[spring-web-4.2.2.RELEASE.jar:4.2.2.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-4.2.2.RELEASE.jar:4.2.2.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239) ~[tomcat-embed-core-8.0.28.jar:8.0.28] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) ~[tomcat-embed-core-8.0.28.jar:8.0.28] at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:85) ~[spring-web-4.2.2.RELEASE.jar:4.2.2.RELEASE] [Truncated] at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239) ~[tomcat-embed-core-8.0.28.jar:8.0.28] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) ~[tomcat-embed-core-8.0.28.jar:8.0.28] at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:217) ~[tomcat-embed-core-8.0.28.jar:8.0.28] at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:106) [tomcat-embed-core-8.0.28.jar:8.0.28] at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:502) [tomcat-embed-core-8.0.28.jar:8.0.28] at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:142) [tomcat-embed-core-8.0.28.jar:8.0.28] at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79) [tomcat-embed-core-8.0.28.jar:8.0.28] at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88) [tomcat-embed-core-8.0.28.jar:8.0.28] at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:518) [tomcat-embed-core-8.0.28.jar:8.0.28] at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1091) [tomcat-embed-core-8.0.28.jar:8.0.28] at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:673) [tomcat-embed-core-8.0.28.jar:8.0.28] at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1500) [tomcat-embed-core-8.0.28.jar:8.0.28] at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1456) [tomcat-embed-core-8.0.28.jar:8.0.28] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_51] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_51] at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-embed-core-8.0.28.jar:8.0.28] at java.lang.Thread.run(Thread.java:745) [na:1.8.0_51] ``` cc @dsyer @philwebb Answers: username_1: I attempted to fix this in d9d4cc2. username_0: @username_1 Thanks for the response. The problem with d9d4cc2ef5ab40f55e7a0cde68b1a5e78aefb4b3 is that any Filter that is registered after `RequestContextFilter` will no longer register a wrapped version of the `HttpServletRequest` and `HttpServletResponse`. This means anyone accessing the the `HttpServletRequest` and `HttpServletResponse` will no longer get a wrapped version. More concretely, setting the `RequestContextHolder` too early will break things like Spring Session integration as illustrated in #2637 This happens because Spring Session will wrap the `HttpServletRequest` to replace the `HttpServletRequest.getSession()` (and related) methods. If `RequestContextHolder` is not using the wrapped request, the overridden `HttpSession` will not be used. Spring Security also wraps the `HttpServletRequest` and the `HttpServletResponse` to provide things like access to the current user on `HttpServletRequest.getRemoteUser()` and `HttpServletRequest.getUserPrincipal()`. These are two concrete examples of why Filters that wrap requests must be placed before the `RequestContextHolder` is populated and why the changes in 88cc883e9499805f90fe2c5ef82ba3816dc25930 were made. username_1: I understand all of that. I (hopefully) chose an order which means that `RequestContextFilter` goes after all filters that wrap the request but before Spring Security's filter. 
Note the changes to the tests that I made in the commit which now verify much of the ordering. What I've neglected to do is to rework the `REQUEST_WRAPPER_FILTER_MAX_ORDER` constant. Ideally, `RequestContextFilter` needs to use that order +1 and Spring Security's filter needs to use that order +2 (or more). We can use this issue to do that.
username_1: If people want to be able to access Spring Security's wrapped request via `RequestContextHolder` then I guess we need to register `RequestContextFilter` with Spring Security's filter chain somehow. Correct?
username_0: Sort of. More concretely, for the user to be accessible through the `RequestContextHolder` the `RequestContextFilter` must be after `SecurityContextHolderAwareRequestFilter`. That means it could be registered with Spring Security after `SecurityContextHolderAwareRequestFilter`. However, I should point out that Spring Security also wraps the `HttpServletRequest` in other places. For example, `RequestCacheAwareFilter` uses `SavedRequestAwareWrapper`. Generally, it should be assumed that Spring Security is going to wrap the `HttpServletRequest`. This means it is likely a better approach to place `RequestContextFilter` after Spring Security's `FilterChainProxy` within the servlet container itself. I think it is important to keep in mind that this is a general problem (i.e. MultipartFilter) and the reason that the constant `REQUEST_WRAPPER_FILTER_MAX_ORDER` exists. This is why we have a bit of a chicken and the egg problem.
username_1: Unfortunately, we can't do that due to `OAuth2ClientContextFilter`. It runs as part of Spring Security's filter chain and uses `RequestContextHolder`.
username_0: Agreed. Hence the chicken and the egg problem.
username_1: I've added an integration test that verifies the current ordering. Without a change being made to Spring Security OAuth, or a reliable position in Spring Security's filter chain where we know that all the request wrapping will have occurred before OAuth2ClientContextFilter, this feels like the best that we can do.
username_2: Please can we have an update here?
username_1: @username_2 As I said above, we believe we've done the best we can in Spring Boot at this point. I believe that we need a change in Spring Security or Spring Security OAuth to do any better. Have you encountered a specific problem?
username_2: @username_1 - Thanks for the clarification. Yes, I have a specific problem. I've written a Spring Security / Spring Boot library for a business, and we have had reports from users that Spring Session integration does not work. I took a look today and set up a simple sample (a Spring Boot app that uses Spring Session with Redis as the session store) and discovered that, in the Spring Security AuthenticatorProvider implementation, for some reason we are not getting back the Redis-aware session but a standard Session. In Redis, I can see all the correct values in the hash; it is just that Spring Security is not receiving the wrapped Session. It looks like a filter ordering issue. Is there something I can do here? For instance, we do reference `RequestContextHolder.currentRequestAttributes();` in that same Provider class, but I am less clear on whether this is the cause. I don't mind referencing a Spring Bean declaration for `RequestContextFilter` etc. I just need a workaround that users can implement when they are going to be using Spring Session. Makes sense?
username_1: IIRC, it should work if all you're using is Spring Session and Spring Security. 
Can you provide a small sample that mimics your setup and shows the problem you're having?
username_2: @username_1 - Spring Boot, Spring Session, and Spring Security all used together. Thanks for offering to assist here. Re. the code snippet - sure, [here](https://github.com/auth0/auth0-spring-security-mvc/blob/master/src/main/java/com/auth0/spring/security/mvc/Auth0AuthenticationProvider.java#L60) is the library itself, which ordinarily works unless `Spring Session` is involved. The line that link points at is where the failure arises, because the incorrect session is being retrieved. I can see the Auth0User value in Redis, but it is incorrectly retrieving from a standard session. [Here](https://github.com/tawawa/Spring-Session-Issue) is a hacky sample I just put together that is broken because of the aforementioned issue. I put some Docker capabilities in there so it is easy to get it running without installing anything locally. However, bear in mind it integrates with Auth0 using OAuth2 / OpenID Connect, so in src/main/resources/auth0.properties you would need legitimate settings to actually get this running... if you are really keen, [please set up an account](https://auth0.com/)!
username_2: Tried adding [RequestContextFilter](https://github.com/tawawa/Spring-Session-Issue/blob/master/src/main/java/com/auth0/example/AppConfig.java#L22) to the sample config out of desperation, having just read several issues related to not doing so, some from @username_0. He talks about a possible solution of sorts [here](https://github.com/spring-projects/spring-session/issues/129#issuecomment-71909453); I have not tried it.
username_1: @username_2 I don't think you're suffering from the exact same problem as is described in this issue. Spring Boot will, by default, auto-configure a `RequestContextFilter` for you that runs after Spring Session's filter and before Spring Security's filter. This is exactly what you want, as you attempt to use the holder within `Auth0AuthenticationProvider`, which is called by Spring Security. However, this doesn't happen in your sample. The problem is that `Auth0Config` has registered a `RequestContextListener` bean. This switches off the auto-configured `RequestContextFilter` bean. Your attempt at explicitly adding one almost fixed the problem, but it was unordered so it wasn't in the right place in the filter chain. You can fix the problem by using Boot's `OrderedRequestContextFilter` instead:

```java
@Bean
public OrderedRequestContextFilter requestContextFilter() {
    return new OrderedRequestContextFilter();
}
```

With this change made to your sample, I can log in.
username_2: @username_1 - Thank you for this - several customers will be happy to see this fix :D Genuinely, thanks - and let me know if Auth0 ever appeals as a career option in the future. [Interesting times](http://www.geekwire.com/2016/auth0-raises-15m-trinity-ventures-others-boost-identity-platform-add-new-security-features/), and Spring / Spring Boot is becoming an increasingly important SDK / library choice for our customers.
Status: Issue closed
username_3: According to the latest status, it looks like this should have been closed. Let me know if I missed anything.
username_4: We were suffering from the same problem. Spring Boot registers its own RequestContextFilter at the beginning of the chain (pos 3), but we wrap the HttpServletRequest in "Filter_A" (pos 5) and pull from RequestContextHolder in "Filter_B" (pos 7). 
We were hoping to have our own RequestContextFilter at pos 6 (that is the order we set in FilterRegistrationBean) updating the new wrapped request, but it was not at that position (in fact, it was not in the chain at all, only the one from Spring Boot), and therefore Filter_B did not see the wrapped HttpServletRequest from Filter_A. I discovered that I was registering our RequestContextFilter at pos 6 with the filter name "requestContextFilter" (the same name as the one from Spring Boot). Changing the filter name to "requestContextFilter2" fixed the problem. Apparently now the first RequestContextFilter registered by Spring Boot is updating the request wrapper from Filter_A. But I do not see my requestContextFilter2 in the populated filter chain. The funny thing is that if I completely remove our requestContextFilter2, then the first requestContextFilter from Spring Boot does not update my wrapped request from Filter_A. Can someone explain why this is happening?
username_1: @username_4 It's hard to say what's happening without seeing a complete example. I can say, however, that if you declare your own `RequestContextFilter` bean, the one that's auto-configured by Spring Boot should back off. There shouldn't be any need for a filter registration bean as you can use Boot's `OrderedRequestContextFilter` and `setOrder(int)`. If you're observing behaviour that doesn't match what I've described above and believe you've found a bug, please open a new issue with a complete minimal example of the problem. If you have any follow-up questions, please ask on Stack Overflow or Gitter because, as mentioned in [the guidelines for contributing](https://github.com/spring-projects/spring-boot/blob/master/CONTRIBUTING.adoc#using-github-issues), we prefer to use GitHub issues only for bugs and enhancements.
username_4: @username_1 if you use OrderedRequestContextFilter, I understand you are moving the autoconfigured one from Spring Boot to a specific position? If I move it too far off, Spring Security will not work right?
username_1: There's no mention of Spring Security in your description of your situation or the order that you've assigned to its filter. As I said above, please ask follow-up questions on Stack Overflow or Gitter. If you do so, please take the time to describe everything that's involved (a minimal, complete, and verifiable example is the best way to do that) so that someone can try to help you as efficiently as possible.
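For readers skimming this thread, a minimal sketch of the bean declaration described above. The order value 6 is hypothetical, chosen to sit between the Filter_A (5) and Filter_B (7) positions from username_4's example, and the package location of `OrderedRequestContextFilter` moved between Boot 1.x releases:

```java
import org.springframework.boot.web.filter.OrderedRequestContextFilter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class FilterOrderingConfig {

    // Declaring this bean makes Boot's auto-configured RequestContextFilter
    // back off; setOrder() then pins it to an explicit position in the chain.
    // The value 6 is hypothetical: after a request-wrapping filter at 5 and
    // before a filter that reads RequestContextHolder at 7.
    @Bean
    public OrderedRequestContextFilter requestContextFilter() {
        OrderedRequestContextFilter filter = new OrderedRequestContextFilter();
        filter.setOrder(6);
        return filter;
    }
}
```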
oracle/weblogic-deploy-tooling
514054202
Title: CredentialEncrypted in Security configuration is not encrypted after createDomain
Question: username_0: CredentialEncrypted in the Security configuration is not encrypted after createDomain.

```yaml
topology:
    Server:
        AdminServer:
            ListenPort: 7001
    SecurityConfiguration:
        CredentialEncrypted: 'some-password'
```

In the config.xml under the security configuration: `<credential-encrypted>some-password</credential-encrypted>`
Answers: username_0: This is an offline WLST bug, and not a WDT bug. See bug 30275024 in OTN.
Status: Issue closed
username_1: Hi @username_0, I ran into the same issue as you've described here, but I was not able to find any info on the support.oracle.com side regarding bug 30275024. Can you please provide more info on that? Is there a patch available from Oracle? Please advise.
username_0: @username_1 You may need to file an SR with Oracle Support so that they will notify you when it becomes available. Or, you can wait a few days, and check again. Sorry, it looks like this bug was only fixed recently and hasn't made its way through all the channels, yet.
mjordan/islandora_workbench
1028160623
Title: Have Workbench check which version of Islandora Workbench Integration is running
Question: username_0: #312 introduces a major breaking change in the way that Workbench interacts with Drupal. To prevent this change from breaking existing sites, it's very important that sites are running the correct version of the Integration module. Breaking changes of this type may occur again in the future as Workbench becomes more optimized. Workbench already checks the Drupal core version, so a pattern for checking the version of the Integration module already exists.
Answers: username_0: Resolved with 1b39f45e2307f5cb69a873393769cccbc0182c81.
Status: Issue closed
username_0: 4e5f52b2cc43a0269778234224604f73f07fbca5 added a check for this outside of `--check`.
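A hypothetical sketch of what such a version check could look like; the endpoint, response shape, and minimum version below are all invented for illustration and are not Workbench's actual implementation:

```python
import requests

# Hypothetical endpoint and JSON shape -- for illustration only.
def check_integration_version(host, minimum=(1, 0)):
    response = requests.get(f"{host}/islandora_workbench_integration/version")
    version = tuple(int(part) for part in response.json()["version"].split("."))
    if version < minimum:
        raise SystemExit(
            "This version of Workbench requires Islandora Workbench "
            "Integration >= " + ".".join(str(p) for p in minimum) + "."
        )
```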
NVIDIA/nvidia-docker
151210910
Title: Wiki manual installation instructions missing a step
Question: username_0: This section of the wiki: https://github.com/NVIDIA/nvidia-docker/wiki/Installation#building-from-sources is missing this step after the manual installation:

    sudo ./nvidia-docker volume setup

Yes, it is in this doc section: https://github.com/NVIDIA/nvidia-docker/wiki/Using-nvidia-docker#standalone-version

As this is a required step to run the samples, I think it should be in the installation instructions, since many people like myself would jump from the installation section to the samples section to test the installation, and only later take the time to read how to use it in more detail.
Answers: username_1: Actually, `nvidia-docker volume setup` is only required if you don't want to rely on the volume plugin, see: https://github.com/NVIDIA/nvidia-docker/wiki/Using-nvidia-docker#with-the-nvidia-docker-plugin
An alternative is therefore to either install the deb (after `make deb`) or to launch the plugin with `sudo -b nohup nvidia-docker-plugin` (after `make install`). We recommend the plugin approach because volume plugins are now part of the Docker ecosystem.
username_0: Ok. What about adding a link or a note in the section https://github.com/NVIDIA/nvidia-docker/wiki/Installation#building-from-sources to mention this step to get something working?
Status: Issue closed
username_1: I added a note to the documentation, thank you!
rancher/rancher
167719833
Title: etcd service stuck in updating active state
Question: username_0:
**Rancher Version:** 1.1.2
**Docker Version:** 1.10.3
**Environment Type (Cattle/Kubernetes/Swarm/Mesos):** Kubernetes
**Steps to Reproduce:**
1. Create a Kubernetes environment with 3 hosts using https://github.com/LLParse/testing-catalog
2. Create pods/services
3. Turn off two hosts and bring back one host
**Results:** etcd recovers and Kubernetes is functional. The etcd service is stuck in the "Updating active" state.
<img width="1141" alt="screen shot 2016-07-26 at 2 40 31 pm" src="https://cloud.githubusercontent.com/assets/18536626/17156615/9f1faa94-533f-11e6-9f2e-74e1af04dbd4.png">
Answers: username_1: @username_0 please test and reopen if you still see it on the latest.
username_0: Tested with 1.2.0-pre4-rc4. I still see that the etcd service is in the updating-active state after executing the steps as described above:

```
{
  "id": "1s1",
  "type": "service",
  "links": {
    "self": "…/v2-beta/projects/1a5/services/1s1",
    "account": "…/v2-beta/projects/1a5/services/1s1/account",
    "consumedbyservices": "…/v2-beta/projects/1a5/services/1s1/consumedbyservices",
    "consumedservices": "…/v2-beta/projects/1a5/services/1s1/consumedservices",
    "instances": "…/v2-beta/projects/1a5/services/1s1/instances",
    "networkDrivers": "…/v2-beta/projects/1a5/services/1s1/networkdrivers",
    "serviceExposeMaps": "…/v2-beta/projects/1a5/services/1s1/serviceexposemaps",
    "serviceLogs": "…/v2-beta/projects/1a5/services/1s1/servicelogs",
    "stack": "…/v2-beta/projects/1a5/services/1s1/stack",
    "storageDrivers": "…/v2-beta/projects/1a5/services/1s1/storagedrivers",
    "containerStats": "…/v2-beta/projects/1a5/services/1s1/containerstats"
  },
  "actions": {
    "update": "…/v2-beta/projects/1a5/services/1s1/?action=update",
    "remove": "…/v2-beta/projects/1a5/services/1s1/?action=remove",
    "setservicelinks": "…/v2-beta/projects/1a5/services/1s1/?action=setservicelinks",
    "removeservicelink": "…/v2-beta/projects/1a5/services/1s1/?action=removeservicelink",
    "addservicelink": "…/v2-beta/projects/1a5/services/1s1/?action=addservicelink",
    "deactivate": "…/v2-beta/projects/1a5/services/1s1/?action=deactivate"
  },
  "baseType": "service",
  "name": "etcd",
  "state": "updating-active",
  "accountId": "1a5",
  "assignServiceIpAddress": false,
  "createIndex": 82,
  "created": "2016-10-17T22:36:11Z",
  "createdTS": 1476743771000,
  "currentScale": 3,
  "description": null,
  "externalId": null,
  "fqdn": null,
  "healthState": "started-once",
  "instanceIds": [
    "1i1",
    "1i2",
    "1i4",
    "1i5",
    "1i118",
    "1i119"
  ],
```

Status: Issue closed
username_0: Tested with v1.2.0-pre4-rc4. If we start a Kubernetes setup with 3 hosts, turn off 2 hosts and turn back on only one, then etcd will always try to reach its max scale, which is 3. It is expected that it can be stuck in the "updating active" state. The fix above ensures the following, which has been tested and is working: if we start Kubernetes with a one-host or a two-host setup, etcd is not stuck in the "updating active" state.
dzhw/zofar
654005594
Title: MDM export instruments: corrections
Question: username_0: Actually, we need these images (pages) `"\\faust\Abt4\Austausch\MDM\instrument_template"`
- [ ] separate documents for
  - [ ] de
  - [ ] en
- [ ] desktop resolution only
- [ ] table of contents: page numbers must not overlap
- [ ] with page headings (e.g. "A05")
- [ ] no more ODT, as long as
Answers: username_1: \\faust\Abt4\Austausch\MDM\instrument_template was unfortunately no longer available. The changes will be in place in the next export.
username_2: I tested it against WeGe. It looks good, so I'm closing this.
Status: Issue closed
dominoanty/Pi-Car
260772256
Title: How many batteries?
Question: username_0: Both motors don't run at the same RPM, even when powered by two 6V batteries plus one 9V battery, all connected in parallel. However, during MOVE_LEFT_FORWARD, MOVE_RIGHT_FORWARD, etc., when only one motor is operating, a significant RPM is observed.
coenvalk/TablingScheduling
374511504
Title: Participation at table for longer times
Question: username_0: Currently everything is compared in half-hour increments. I would like to be able to compare in hour-and-a-half (or other, longer) periods as well.
- [ ] Change the duration functionality to be more flexible
- [ ] Implement duration as a function argument. The default can be 1.5 hrs.
fostermadeco/ansible-roles
312251281
Title: Error w/ role elasticsearch-2.x when running Siphon
Question: username_0: Trying to stand up Siphon locally. Received the following error after running `vagrant up`:

```
default: Running ansible-playbook... PYTHONUNBUFFERED=1 ANSIBLE_FORCE_COLOR=true ANSIBLE_HOST_KEY_CHECKING=false ANSIBLE_SSH_ARGS='-o UserKnownHostsFile=/dev/null -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s' ansible-playbook --connection=ssh --timeout=30 --limit="default" --inventory-file=/Users/shawnmaida/Sites/siphon/.vagrant/provisioners/ansible/inventory -v ansible/provision.yml
No config file found; using defaults
ERROR! the role 'elasticsearch-2.x' was not found in /Users/shawnmaida/Sites/siphon/ansible/roles:/etc/ansible/roles:/Users/shawnmaida/Sites/siphon/ansible

The error appears to have been in '/Users/shawnmaida/Sites/siphon/ansible/provision.yml': line 8, column 9, but may be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

    - node
    - role: elasticsearch-2.x
      ^ here

Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
```

Answers: username_0: Thanks @likeuntomurphy. I should have caught that in the install docs before posting. I'll close this.
Status: Issue closed
dotnet/samples
616827846
Title: Move sample from winforms folder to windowsforms sample. Question: username_0: This issue is here to track the steps involved in moving the Formatting Utility sample from the `winforms` folder to the `windowsforms` folder. - [x] Instead of moving the content, you need to clone the content into the new place and update the `urlFragment` in the metadata for both the VB and CS readme files. Change the part that is `winforms` to `windowsforms` (#2881) - [ ] Publish the sample (@username_0 will handle) - [ ] File an issue in the dotnet/docs repo to update links to the sample to the new location (@thraka will handle) - [ ] File an issue in the dotnet/dotnet-api-docs repo to update links to the sample to the new location (@thraka will handle) - [ ] Delete the sample from the winforms folder (@username_1 or @Youssef1313 can do that) Answers: username_0: @username_1 You can open the PR to delete the sample, just note that it shouldn't be merged until the rest of the steps on this issue are completed. username_1: Thanks @username_0. PR #2887 created to remove the files Status: Issue closed
openshift/origin
140229681
Title: Create [conformance] tags for extended.test
Question: username_0: We should have a standard way of running downstream conformance tests, like we do for the upstream ones (`Conformance`). The current convention in OpenShift is to use lower-case tags to avoid colliding with the upstream upper-case tags. So, following this convention, I think some tests (e.g. `openshift can execute jobs` or `openshift routers`) should have a `[conformance]` label added. Open to other ideas on how to filter core conformance.
Answers: username_0: cc @username_1
username_1: Can't state enough how much this is needed... Validating a large installation currently takes days because running the entire suite takes days. /cc @jeremyeder
username_2: Being addressed in #13551
username_2: Addressed by the linked PR /close
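For illustration, a sketch of how such a bracketed tag is typically attached to a Ginkgo extended test; the test name, package, and focus-flag usage are assumptions, not taken from the linked PR:

```go
package networking

import (
	g "github.com/onsi/ginkgo"
)

// Tagging the Describe string lets a run select only conformance tests,
// e.g. with a focus regex such as --ginkgo.focus="\[conformance\]".
var _ = g.Describe("[conformance] openshift routers", func() {
	g.It("should serve routes", func() {
		// test body elided
	})
})
```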
electron/electron
805822002
Title: Preload not working in BrowserViews
Question: username_0: <!-- As an open source project with a dedicated but small maintainer team, it can sometimes take a long time for issues to be addressed so please be patient and we will get back to you as soon as we can. -->
### Preflight Checklist
<!-- Please ensure you've completed the following steps by replacing [ ] with [x]-->
* [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/master/CONTRIBUTING.md) for this project.
* [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/master/CODE_OF_CONDUCT.md) that this project adheres to.
* [x] I have searched the issue tracker for an issue that matches the one I want to file, without success.
### Issue Details
* **Electron Version:** v11.2.3
* **Operating System:** macOS 10.14.6
* **Last Known Working Electron version:** Unknown...
### Expected Behavior
BrowserView should inherit BrowserWindow properties. I am able to preload a script when using only one BrowserWindow. If I create one BrowserWindow and then add BrowserViews using the addBrowserView method, the webPreferences options don't seem to be respected. I can't get the preload to work: in my local HTML's renderer JS file, Node's require is not available, and even if the preload JS file contains only a console.log or alert, it is not loaded. This happens even when I set nodeIntegration to true.
### Actual Behavior
BrowserView options, specifically webPreferences, should be respected. One should be able to use the preload option in any BrowserWindow or BrowserView.
### To Reproduce
https://github.com/username_0/electron-quick-start/blob/master/main.js
https://github.com/username_0/electron-quick-start/blob/master/preload.js
<!-- If you provide a URL, please list the commands required to clone/setup/run your repo e.g. ``` # Go into the repository cd electron-quick-start # Install dependencies npm install # Run the app npm start ``` -->
### Additional Information
This issue was reported previously and closed due to issues with the preload file's reference location. But I confirmed that is not the issue, since the exact same options definition works in a project with only one BrowserWindow, yet for me it never works when set on an attached BrowserView, or on multiple added BrowserViews.
Answers: username_1: gist of @username_0's testcase: https://gist.github.com/username_1/d467fb8d1c884b306b0c5fc0f4d36fd5
username_0: Thanks @username_1! What are the next steps from this? Do you think this bug can be addressed soon? Currently my workaround in my app is to use two distinct `BrowserWindows` that open separately, but I really would like to have it all in just one application window using a layout split with `BrowserViews`... Any expectation of when the dev team can pick this up, and when should I expect it to be solved in a new Electron release?
username_2: My browser view doesn't work either; only the `partition` param works:
```typescript
const view = new BrowserView({
  webPreferences: {
    partition: `persist:${args.id}`, // this works
    nodeIntegration: true, // does not work
    contextIsolation: false, // does not work
    preload: '/tmp/preload.js', // does not work
  },
});
```
username_3: I'm confused about the example, which includes this snippet:
```
mainView.setBounds({
  x: 0,
  y: 0,
  width: 961,
  height: 800,
  minWidth: 665,
  webPreferences: {
    preload: path.join(__dirname, '/preload.js'), //this does not work!
    nodeIntegration: true //This does not seem to work either
  }
});
```
[`setBounds()`](https://www.electronjs.org/docs/api/browser-view#viewsetboundsbounds-experimental) does not take `webPreferences` as an option, so it's not surprising that this has no effect.
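To make the distinction concrete, a minimal sketch of the working pattern (the bounds values mirror the example above, and the window setup is assumed boilerplate): `webPreferences` goes to the `BrowserView` constructor, while `setBounds()` only accepts the bounds rectangle.

```js
const { app, BrowserWindow, BrowserView } = require('electron')
const path = require('path')

app.whenReady().then(() => {
  const win = new BrowserWindow({ width: 1280, height: 800 })

  // webPreferences (including preload) belong here, at construction time...
  const mainView = new BrowserView({
    webPreferences: {
      preload: path.join(__dirname, 'preload.js'),
    },
  })
  win.addBrowserView(mainView)

  // ...while setBounds() only takes x/y/width/height.
  mainView.setBounds({ x: 0, y: 0, width: 961, height: 800 })
  mainView.webContents.loadURL('https://electronjs.org')
})
```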
material-components/material-components-web
425774888
Title: [email protected] crashes when no item is active Question: username_0: ## Bugs MdcDrawer (modal variant) without any active item crashes when opened: https://codepen.io/username_0/pen/wObyWR But dismissable variant works fine: https://codepen.io/username_0/pen/WmBMwW Got it in console: Error: You can't have a focus-trap without at least one focusable element index.js:198 y index.js:198 b index.js:268 n index.js:72 trapFocus component.ts:147 opened_ foundation.ts:39 handleTransitionEnd foundation.ts:146 handleTransitionEnd_ component.ts:100 ### What MDC Web Version are you using? 1.1.0 ### What browser(s) is this bug affecting? Firefox 66.0.1 Chrome 73.0.3683.86 ### What OS are you using? Linux Mint 19.1 ### What are the steps to reproduce the bug? 1. Open codepens above. 2. Open console. 3. Open drawer. 4. See behavior. ### What is the expected behavior? No errors in console, no crashes ### What is the actual behavior? App crashed, console has error message ### Any other information you believe would be useful? Possible workaround - adding empty anchor to menu top: `<a href="#"></a>` Answers: username_1: This might have already been reported @username_2 can you remember was this about drawer or dialog? username_2: This seems to be dup of #762 Please see this comment https://github.com/material-components/material-components-web/issues/762#issuecomment-448918913 for explanation. username_0: I read the comment, and example of dismissible drawer you provided: https://stackblitz.com/edit/mdc-drawer-demo-ytwosc?file=index.html This still not clear for me. I notice that drawer in your example has no item with class `mdc-list-item--activated` or `mdc-list-item--selected`. I have the same drawer content in my _modal_ drawer, it crashes if no item has a class `mdc-list-item--activated`, if i add it to entry, problem disappears, but i really need menu in my app without already selected item(s). I has `mdc-list` with items like `<a class="mdc-list-item" href="#">...<a>`, why this anchors is not focusable, but empty is? `aria-selected="true"` does not help as well. username_2: Thanks for spotting it! The list expects at least one item to have `tabindex="0"`. I fixed your example by simply adding `tabindex="0"` to first list item. See [MDCList's Accessibility section](https://github.com/material-components/material-components-web/tree/master/packages/mdc-list#accessibility). Here is the [updated codepen](https://codepen.io/username_2/pen/pBaKXm) that has fix. Status: Issue closed username_0: @username_2, thanks, this helped.
Azure-Samples/cognitive-services-speech-sdk
413973272
Title: speech_recognize_continuous_from_file()
Question: username_0:
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. ...
2. ...
**Expected behavior**
A clear and concise description of what you expected to happen.
**Version of the Cognitive Services Speech SDK**
Which version of the SDK are you using.
**Platform, Operating System, and Programming Language**
- OS: [e.g. Windows, Linux, Android, iOS, ...] - please be specific
- Hardware - x64, x86, ARM, ...
- Programming language: C#, C++, Java, JavaScript, Objective-C, Python
- Browser [e.g. Chrome, Safari] (if applicable) - please be specific
**Additional context**
- Error messages, log information, stack trace, ...
- If you report an error for a specific service interaction, please report the SessionId and time (incl. timezone) of the reported incidents. The SessionId is reported in all call-backs/events you receive.
- Any other additional information
Answers: username_1: For continuous recognition, the recognition result is not delivered as the result of a function call (to start_continuous_recognition, or any other function), but as an event in the callbacks for the `recognizing` and `recognized` events, as demonstrated in the example you mention ([this line](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/258949599ec32c8a7b03fb38a53f2033010b6b26/samples/python/console/speech_sample.py#L203)). Does this solve the issue?
username_0: I saw you mentioned the `recognizing` and `recognized` functions, and I can also see the recognized result, but I need to get the result and save it to the database. Can this happen? I can't save it.
username_0: @username_1
username_1: Try replacing the current callback `lambda evt: print('RECOGNIZING: {}'.format(evt))`, which just prints the result, with a function that handles the result (available as the `result` property on the event, i.e. `evt.result`) in the way you intend to use it. You can also register multiple callbacks for a signal by calling `connect` multiple times, once for each callback registration.
username_0: thanks a lot @username_1
Status: Issue closed
username_0: thanks
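A minimal sketch of that advice applied to continuous recognition; the key, region, and filename are placeholders, and the "database save" is stubbed as a list append:

```python
import time
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourServiceRegion")
audio_config = speechsdk.audio.AudioConfig(filename="your-audio-file.wav")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)

results = []
done = False

def handle_final_result(evt):
    # evt.result.text holds the recognized text; collect it here and write
    # the collected results to your database after recognition stops.
    results.append(evt.result.text)

def stop_cb(evt):
    global done
    done = True

recognizer.recognized.connect(handle_final_result)
recognizer.session_stopped.connect(stop_cb)
recognizer.canceled.connect(stop_cb)

recognizer.start_continuous_recognition()
while not done:
    time.sleep(0.5)
recognizer.stop_continuous_recognition()
# now persist `results` to the database
```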
c-base/meta
569852327
Title: Whiten the greenboard
Question: username_0: The greenboard should get a white, projector-capable fabric on its back, and should be modified so that it can be flipped around and hung on the wall with reasonable effort. This would preserve the greenboard while at the same time creating a projection surface for the projector.
GMAP/NPB-GPU
982326326
Title: IS updates
Question: username_0: Hi, I built and ran the IS program for the class D problem size, but the verification fails. It is quite fast to run the programs with smaller problem sizes on a P100 GPU. Thank you for your updates.
Answers: username_1: Hello. The verification routine is failing because an overflow error occurs with class D (the int type is not large enough for this class). We will fix it as soon as possible. Thank you for using the NPB-GPU, and thank you for reporting this bug.
username_0: Thank you for explaining the error.
username_1: Hello. Yes, in practice, there is an upper bound. However, the NPB documentation does not explain what the maximum supported number of iterations is (it may vary according to the hardware). Our implementation followed the original NPB version, and the default value of MAX_ITERATIONS set by NASA is 10. On my machine, I tested other values, and the benchmark worked with up to 24 iterations. With more than 24 iterations, the IS benchmark started failing the correctness verification, even when using the long int type.
username_0: Using the long int type implies the class D problem. Have you updated your program to support the class D problem?
username_1: We haven't had time to fix this issue yet.
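For context, a sketch of the kind of change being described; this mirrors how the reference NPB IS selects a wider key type per class, and the exact macro or type names in NPB-GPU may differ:

```c
/* Class D key values exceed the range of a 32-bit int, so the key type
   must widen for that class; smaller classes keep the faster 32-bit type. */
#if CLASS == 'D'
typedef long INT_TYPE;
#else
typedef int INT_TYPE;
#endif
```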
COPCSE-NTNU/thesis-NTNU
478835570
Title: Make the inclusion of papers more flexible Question: username_0: The author should be able to format the separating page before the included papers more to his own liking. One possible way to achieve this, is to make the paper an environment, \begin{paper} ... \end{paper}, containing whatever information, graphics etc. the author would prefer to show on the separating page.<issue_closed> Status: Issue closed
fabric8io-images/java
507541092
Title: Is there a docker image for fabric8/java-alpine-openjdk11-jdk published to Docker Hub?
Question: username_0: I looked for fabric8/java-alpine-openjdk11-jdk on Docker Hub and I can't seem to find it. However, I did find a CentOS-flavored image for JDK 11, fabric8/java-centos-openjdk11-jdk. Are there known issues with JDK 11 on Alpine?
Answers: username_1: The PR introducing the Alpine image has just been merged and no new release has been scheduled in the meantime. I hope I can make a new release by the weekend, including Alpine.
username_2: @username_1 any updates on pushing the Alpine tags?
username_1: Apologies that I haven't pushed recently. I just updated all packages and created a 1.6.4 tag/release so that the Docker automated builds have been triggered. They should be finished soon.
username_2: Hey @username_1, it looks like it still didn't create the new images on Docker Hub. Is it still under /u/fabric8? Meaning https://hub.docker.com/r/fabric8/java-alpine-openjdk11-jdk would be an example.
username_1: @username_2 thanks for the heads up. It was just that the repository had not been created on Docker Hub :) So I did this and created a 1.6.5 tag, so this release should show up soon at the URL you mentioned. Sorry ...
username_2: No problem, thanks for building this out!
username_1: 👍
Status: Issue closed
godotengine/godot
952129166
Title: Godot's language server reports functions as... interfaces????
Question: username_0:
### Godot version
3.3.2 stable
### System information
OpenSUSE Tumbleweed (other details deemed unnecessary)
### Issue description
I was making a custom VS Code extension for Godot and noticed that functions show up as interfaces. I hopped over to `godotengine/godot-vscode-plugin`, and they're reported as interfaces there as well.
![Screenshot_20210725_014919](https://user-images.githubusercontent.com/25323231/126878460-824c747e-7158-44d0-bce6-740c2b791940.png)
And this is not just the icons, because VS Code explicitly says they're interfaces if I hover my cursor over them in the outline panel.
### Steps to reproduce
Basically, use Godot's language server.
### Minimal reproduction project
[TestScript.gd.zip](https://github.com/godotengine/godot/files/6872936/TestScript.gd.zip)
Answers: username_1: cc @username_2
username_2: Looks like the SymbolKind constants were put in wrong, or got updated after the LSP support was written. Will write up a fix.
Status: Issue closed
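For reference, the relevant constants from the LSP specification (an excerpt of the spec's TypeScript definitions, not Godot code); since `Interface` and `Function` are adjacent, an off-by-one in the reported constants would make every function show up as an interface:

```typescript
export namespace SymbolKind {
    export const Method = 6;
    // ...
    export const Interface = 11;
    export const Function = 12;
    export const Variable = 13;
}
```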
LambdaNote/professional-ipv6-feedbacks
284104661
Title: Feedback on page 71
Question: username_0: Please point out problems. Alternative suggestions are not necessary. Note that, due to manpower constraints, there is a high chance we will not respond on the issue itself. Sorry.
* Points we would specifically like pointed out:
  - errors in the content
  - topics that need more explanation
  - paragraphs whose meaning cannot be understood
* The following kinds of reports are also welcome (we will grep for them, so just noting their existence is enough):
  - inconsistent technical terminology
  - spelling mistakes, typos, omissions
* For the following, no reports are needed (they will be fixed naturally by March):
  - inconsistent Japanese orthography (kanji and okurigana)
  - layout problems
  - other localized Japanese-language issues<issue_closed>
Status: Issue closed
EOL/tramea
129541112
Title: Test the load of batch API use and restrict sizes if needed
Question: username_0: If nobody pipes up with a more alarmist view, (cough @jrice cough) this sounds like a great test to me...
Answers: username_1: I'm about to code up a site that asks for info on batches of (maximum) 50 pages, just to get the common name and the best pd|cc-by|cc-by-sa image available. E.g. http://eol.org/api/pages/1.0.json?batch=true&id=1178406%2C1178405%2C1178408%2C1178407%2C1178399%2C1178400%2C1178403%2C1178402%2C1178401%2C663227%2C1178398%2C1178416%2C1178414%2C1178418%2C1178415%2C1178412%2C1178417%2C1178378%2C1178386%2C1178384%2C1178383%2C1178377%2C1178385%2C1178379%2C1178388%2C1178397%2C1178381%2C1178387%2C1178396%2C1178392%2C1178395%2C1178380%2C1178389%2C1178390%2C1178393%2C1178382%2C1178394%2C1178391%2C1178409%2C1178411%2C1178410%2C1178372%2C1178376%2C1178373%2C1178374%2C1178375%2C1178371%2C1178370%2C130164%2C&images=1&videos=0&sounds=0&maps=0&text=0&iucn=false&subjects=overview&licenses=pd%7Ccc-by%7Ccc-by-sa&details=true&common_names=true&synonyms=false&references=false&taxonomy=false&vetted=1&cache_ttl=10000
Is this too much for the API? I'm guessing not, but I will rein in the numbers if it's a problem. Note that this batch request isn't called repeatedly in a loop, only a single time when someone visits a particular page (although there are lots of potential pages to visit).
username_2: I say Go For It™. We'll notice pretty quickly if you've ruined our lives. Sorry for the cop-out answer; limited time to run my own tests. :S
username_1: Not a cop-out at all. Perfectly reasonable. I'll let you know on this issue when I start testing it seriously.
username_0: @username_1 just checking: if you've tried this yet, how is it behaving for you? Thanks!
Status: Issue closed
6thsolution/EasyMVP
275170813
Title: Injection is broken with the latest Dagger 2 release
Question: username_0: EasyMVP works fine with Dagger 2.11. It is broken when migrating to Dagger 2.12. You can easily replicate this by compiling the sample tvProgram_android project against this Dagger version. When the view tries to inject the presenter you will get an error like:

```
java.lang.NullPointerException: Attempt to invoke interface method 'java.lang.Object javax.inject.Provider.get()' on a null object reference
    at easymvp.loader.SupportPresenterLoader.onForceLoad(SupportPresenterLoader.java:32)
    at android.support.v4.content.Loader.forceLoad(Loader.java:329)
    at easymvp.loader.SupportPresenterLoader.onStartLoading(SupportPresenterLoader.java:26)
    at android.support.v4.content.Loader.startLoading(Loader.java:272)
    at android.support.v4.app.LoaderManagerImpl$LoaderInfo.start(LoaderManager.java:270)
    at android.support.v4.app.LoaderManagerImpl.doStart(LoaderManager.java:770)
    at android.support.v4.app.FragmentHostCallback.doLoaderStart(FragmentHostCallback.java:243)
    at android.support.v4.app.FragmentController.doLoaderStart(FragmentController.java:386)
    at android.support.v4.app.FragmentActivity.onStart(FragmentActivity.java:566)
    at android.support.v7.app.AppCompatActivity.onStart(AppCompatActivity.java:177)
    at com.leonard.www.tvprog.feature.channelList.view.ChannelListActivity.onStart(ChannelListActivity.java:0)
```

This happens because the providerFactory passed to the view delegate class is null. I suspect this may be caused by a change in the way Dagger generates its internal classes, so the Dagger2Extension.apply() method is failing to correctly detect the Dagger classes. The simple fix would be to update this class to support the new class format; however, a better solution would be to rewrite the class so that its detection is based on the public annotations, etc., rather than inferring information from Dagger's internal implementation.
Answers: username_1: Are there any updates?
TA2k/ioBroker.vw-connect
844106932
Title: Adapter no longer works for ID
Question: username_0: Hello, for the last 1-2 days, logging in via the adapter has no longer worked.

```
2021-03-30 07:55:33.096 - info: host.rpi4htr stopInstance system.adapter.vw-connect.0 (force=false, process=true)
2021-03-30 07:55:33.102 - info: host.rpi4htr stopInstance system.adapter.vw-connect.0 send kill signal
2021-03-30 07:55:33.103 - info: vw-connect.0 (23431) Got terminate signal TERMINATE_YOURSELF
2021-03-30 07:55:33.105 - info: vw-connect.0 (23431) cleaned everything up...
2021-03-30 07:55:33.107 - info: vw-connect.0 (23431) terminating
2021-03-30 07:55:33.109 - info: vw-connect.0 (23431) Terminated (ADAPTER_REQUESTED_TERMINATION): Without reason
2021-03-30 07:55:33.821 - info: host.rpi4htr instance system.adapter.vw-connect.0 terminated with code 11 (ADAPTER_REQUESTED_TERMINATION)
2021-03-30 07:55:36.315 - info: host.rpi4htr instance system.adapter.vw-connect.0 started with pid 23757
2021-03-30 07:55:40.301 - info: vw-connect.0 (23757) Plugin sentry Sentry Plugin disabled for this process because sending of statistic data is disabled for the system
2021-03-30 07:55:40.367 - info: vw-connect.0 (23757) starting. Version 0.0.30 in /opt/iobroker/node_modules/iobroker.vw-connect, node: v10.23.3, js-controller: 3.2.16
2021-03-30 07:55:42.973 - error: vw-connect.0 (23757) Login was not successful, please check your login credentials and selected type
2021-03-30 07:55:42.974 - error: vw-connect.0 (23757) TypeError: Cannot read property 'split' of undefined
2021-03-30 07:55:42.978 - error: vw-connect.0 (23757) TypeError: Cannot read property 'split' of undefined
    at Request.request.post [as _callback] (/opt/iobroker/node_modules/iobroker.vw-connect/main.js:469:105)
    at Request.self.callback (/opt/iobroker/node_modules/request/request.js:185:22)
    at Request.emit (events.js:198:13)
    at Request. (/opt/iobroker/node_modules/request/request.js:1154:10)
    at Request.emit (events.js:198:13)
    at IncomingMessage. (/opt/iobroker/node_modules/request/request.js:1076:12)
    at Object.onceWrapper (events.js:286:20)
    at IncomingMessage.emit (events.js:203:15)
    at endReadableNT (_stream_readable.js:1145:12)
    at process._tickCallback (internal/process/next_tick.js:63:19)
2021-03-30 07:55:42.979 - error: vw-connect.0 (23757) Login Failed
```

Answers: username_0: Resolved: deleting all data points under Objects fixed the problem.
Status: Issue closed
knausb/vcfR
696883748
Title: Error in extract.gt(vcf) - ID column contains non-unique names
Question: username_0: Hi, I am trying to extract the GT from a VCF file for a few specific populations, and every time I am getting the error below. How do we make the ID column unique? Is there any easy R code to assign unique names in the ID column so that I won't get this error, please? I am not an expert in R and only started working with VCF files this summer.

The command was:

```
vcf1 <- extract.gt(vcf, element = 'GT', as.numeric = TRUE, IDtoRowNames = TRUE)
```

The error:

```
Error in extract.gt(vcf10pop1, element = "GT", as.numeric = TRUE, IDtoRowNames = TRUE) :
  ID column contains non-unique names
```

Data: from a 1000 Genomes file for 2 populations from each super population, for the SNPs between 17500001-20000000.
Answers: username_1: This appears highly redundant to #170. The [VCFv4.3 Specification](http://samtools.github.io/hts-specs/) in section 1.6.1 states that the ID should contain unique identifiers. The ERROR you're reporting is trying to tell you that there are issues with your VCF file. This does not appear to have anything to do with vcfR. I've previously (#170) showed you how to identify these issues. I feel the real concern is "why do you have non-unique names". Is it because of the processing steps you mentioned in #170? We've invested a lot of time and effort providing documentation for new users such as yourself.
https://username_1.github.io/vcfR_documentation/
http://grunwaldlab.github.io/Population_Genetics_in_R/index.html
https://username_1.github.io/vcfR_documentation/reporting_issue.html
Please take the time to work through these documents. They appear to address many of your issues.
Status: Issue closed
username_0: Hi @username_1, I do understand; I saw the similar issue in #170. The query here is how to address the error when we see non-unique names in the rows. I do need to make the row names unique, and I was struggling to find the R code to make these rows unique by appending the chromosome ID and position. The suggestion was to add a subscript to the duplicates, but I am a little unfamiliar with finding and replacing the SNP values. The query `sort(table(getID(VCFm)), decreasing = TRUE)[1:10]` returned the duplicates below, but I am unable to proceed further to append them with _1, _2 or _3. So I believe I need to append the first 4 SNPs with _1, _2:

```
rs141796829  rs11471553 rs202131091  rs71329353  esv3644940  esv3644941  esv3644942  esv3644943  esv3644944
          3           2           2           2           1           1           1           1           1
```
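For anyone hitting the same wall, a small sketch of one way to do this in R (not from the package author; `make.unique()` suffixes the duplicates, and the CHROM/POS rebuild is the convention mentioned above; the input filename is a placeholder):

```r
library(vcfR)

vcf <- read.vcfR("my_variants.vcf.gz")  # hypothetical input file

# Option 1: suffix duplicated IDs (the first occurrence is left unchanged)
vcf@fix[, "ID"] <- make.unique(getID(vcf), sep = "_")

# Option 2: rebuild IDs from chromosome and position instead
vcf@fix[, "ID"] <- paste(getCHROM(vcf), getPOS(vcf), sep = "_")
```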
theusaf/kahoot-session
792095903
Title: this.gameid errors
Question: username_0: In the "message" function, "this.gameid" is referenced, but it returns an error stating "cannot find variable gameid of undefined".
Answers: username_1: This can be solved by calling `bind(this)` on `this.message`. For example: `this.message.bind(this)`
username_2: I have solved this issue in a different way, but thanks anyway!
Status: Issue closed
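A sketch of that workaround; the surrounding class and the event registration are hypothetical, since the library internals aren't shown in the thread:

```js
class Session {
  constructor(socket, gameid) {
    this.gameid = gameid;
    // Without bind(this), `this` inside message() is the event emitter
    // (or undefined), so this.gameid cannot be resolved.
    socket.on("message", this.message.bind(this));
  }

  message(data) {
    console.log(`game ${this.gameid} received`, data);
  }
}
```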
Robinzhao69/F1M2ONT
789214206
Title: Assignment 4 developed incorrectly
Question: username_0: Lesson 4, Display types: https://blanken5.home.xs4all.nl/webSlidesPresentaties/display-types.html#slide=1
The instruction above teaches that you develop the assignment starting from lesson 2. So you start with Stedelijk Museum and NOT with Cobra!! Please redo this, and please make a commit after each video.
pouchdb/pouchdb
105012192
Title: 'active' and 'paused' events not firing during live replication PouchDB 4.0.1
Question: username_0: I upgraded from 4.0.0 to 4.0.1 and the 'active' and 'paused' events stopped firing on live two-way sync between a memory adapter and an HTTP adapter. I downgraded back to 4.0.0 and they started working again. Tested with Safari 8.0.8 and Chrome 45 on OS X Yosemite.
Answers: username_1: Yup, we are looking into this. Going to mark as a dupe of https://github.com/pouchdb/pouchdb/issues/4251 since that has the start of tests attached, cheers
Status: Issue closed
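For reproduction purposes, a minimal browser-style sketch of the setup described (the database name and remote URL are placeholders, and it assumes the 4.x-era memory-adapter build, e.g. pouchdb.memory.js, has been loaded alongside PouchDB):

```js
// Live two-way sync between a memory adapter and an HTTP adapter; these are
// the handlers that reportedly stop firing in 4.0.1 but fire in 4.0.0.
var local = new PouchDB('local', { adapter: 'memory' });
var remote = new PouchDB('http://localhost:5984/testdb');

local.sync(remote, { live: true })
  .on('active', function () { console.log('replication active'); })
  .on('paused', function (err) { console.log('replication paused', err); });
```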
betagouv/recosante-mail
1049738719
Title: Displaying pollution episodes in the indicator email
Question: username_0:
#### Back
- [ ] Add an attribute in Sendinblue for the recommendations tied to the pollution episode (`RECOMMANDATION_EPISODE`?)
- [ ] Make sure the correct recommendation is sent for the ongoing episode
- [ ] Display the classic recommendations for the ATMO index
- [ ] Change the `polluant` attribute to display the pollutants in the format `PM10, O3 et NO2`
#### Front
- [ ] Display the episode-related recommendation in the red block
- [ ] Remove the `En savoir plus` button
Answers: username_0: Except in this case: https://github.com/betagouv/recosante-api/issues/182 So we have to choose between:
- either adding a new attribute with the zone of the pollution episode
- or always displaying the municipality (because if it holds at the department level, it holds at the municipality level?)
username_2: About the "En savoir plus" button, I don't see how it's a problem, given that it lets me get more information on Airparif. But I haven't followed all of your exchanges, so if you think it makes sense to remove it, I trust you.
username_0: Same here, I don't think it's necessary to remove it (and on top of that, ATMO will be happy).
username_1: OK to keep the "en savoir plus" button. And for municipality vs. department: the simplest would indeed be to use the municipality, but since we already know that ATMO wants us to use the department, isn't that adding work for nothing (given that Delphine will probably come back to us)?
username_0: @username_4 `RECOMMANDATION_EPISODE` is the same as `RECOMMANDATION_RAEP` for now
username_0: @username_1 So we add a new attribute specific to the validity zone of the pollution episode? Department for most regions, city for Auvergne-Rhône-Alpes (https://github.com/betagouv/recosante-api/issues/182)
username_1: Yes, that works for me
username_1: So what remains is:
- adding a new attribute specific to the validity zone of the pollution episode => department for most regions, city for Auvergne-Rhône-Alpes => For now we keep department so we can ship to production.
- adding `RECOMMANDATION_EPISODE` in the red block
@username_4 will take care of it, then ship to production
Status: Issue closed
username_0:
#### Back
- [x] Add an attribute in Sendinblue for the recommendations tied to the pollution episode (`RECOMMANDATION_EPISODE`?)
- [x] Make sure the correct recommendation is sent for the ongoing episode
- [x] Display the classic recommendations for the ATMO index
- [x] Change the `polluant` attribute to display the pollutants in the format `PM10, O3 et NO2`
#### Front
- [x] Display the episode-related recommendation in the red block
username_0: `RECOMMANDATION_EPISODE` is the same as `RECOMMANDATION_RAEP` (received at my email `<EMAIL>` today). This seems to be the same problem as https://github.com/betagouv/recosante-mail/issues/63#issuecomment-971313856. I can't find any trace of `RECOMMANDATION_EPISODE` being displayed in the email template. Was the addition made directly in Sendinblue? And the word "département" is still hard-coded in the template.
username_3: 1) We don't have the history for a contact on the SIB side, but currently `RECOMMANDATION_EPISODE` has strictly the same value as `RECOMMANDATION_RAEP`: `<ul> <li>En cas de gêne respiratoire ou cardiaque, prendre conseil auprès d'un professionnel de santé... privilégier les activités modérées.</li> </ul>`.
On the admin side, I see this in the case of a pollution episode involving nitrogen dioxide, sulfur dioxide or fine particles, but not in the case of a pollen alert:
<img width="941" alt="reco_episode" src="https://user-images.githubusercontent.com/23194091/149896774-9246299d-7f36-46b8-966b-5e80b5303d4b.png">
On the API side, if we go back to the conditions at the time, there were 2 different recommendations:
```
"episodes_pollution": {
    "advice": {
        "details": "",
        "main": "<ul>\n<li>En cas de g\u00eane respiratoire ou cardiaque, prendre conseil aupr\u00e8s d\u2019un professionnel de sant\u00e9.</li>\n<li>Privil\u00e9gier des sorties plus br\u00e8ves et celles qui demandent le moins d\u2019effort.</li>\n<li>R\u00e9duire, voire reporter, les activit\u00e9s physiques et sportives intenses (dont les comp\u00e9titions).</li>\n</ul>\n<p>Si vous \u00eates une personne sensible ou vuln\u00e9rable :</p>\n<ul>\n<li>prendre conseil aupr\u00e8s de votre m\u00e9decin pour savoir si votre traitement m\u00e9dical doit \u00eatre adapt\u00e9 le cas \u00e9ch\u00e9ant ; </li>\n<li>\u00e9viter les zones \u00e0 fort trafic routier, aux p\u00e9riodes de pointe ;</li>\n<li>privil\u00e9gier les activit\u00e9s mod\u00e9r\u00e9es.</li>\n</ul>"
    },
},
"raep": {
    "advice": {
        "details": "<p>\u2139\ufe0f Les cheveux retiennent les pollens et gramin\u00e9s qui vont se d\u00e9poser sur\nl'oreiller, ce qui peut g\u00eaner la respiration pendant le sommeil.</p>\n<p>\ud83d\udca1Apr\u00e8s une douche ou un bain ne pas oublier d'a\u00e9rer la pi\u00e8ce.</p>",
        "main": "<p>En saison pollinique, brosser ou rincer ses cheveux avant de se coucher le\nsoir.</p>"
    },
}
```
So presumably an issue with the update of the RECOMMANDATION_RAEP variable on our contacts.
2) Regarding municipality / department, the API returns this for ARA. What is not OK?
```
"sources": [{
    "label": "Atmo Auvergne-Rh\u00f4ne-Alpes",
    "url": "https://www.atmo-auvergnerhonealpes.fr/"
}],
"validity": {
    "area": "le bassin d\u2019air Bassin Grenoblois",
    "end": "2022-01-16T12:29:59",
    "start": "2022-01-15T12:30:00"
}
```
3) From what I understand, different modifications were made between the SIB template and here https://github.com/betagouv/recosante-mail/blob/master/src/pages/indicateurs.html ?
username_0: Yes, I can't find the display of `RECOMMANDATION_EPISODE`. If there are other modifications that were made directly in Sendinblue, I need to know about them before working on this template to add the weather vigilance.
Status: Issue closed
username_0:
#### Back
- [x] Add an attribute in Sendinblue for the recommendations tied to the pollution episode (`RECOMMANDATION_EPISODE`?)
- [x] Make sure the correct recommendation is sent for the ongoing episode
- [x] Display the classic recommendations for the ATMO index
- [x] Change the `polluant` attribute to display the pollutants in the format `PM10, O3 et NO2`
#### Front
- [x] Display the episode-related recommendation in the red block
Status: Issue closed
username_0: On the alert email (pollution episode) received for Grenoble on 30/01: I am warned of a PM10 pollution episode, while in the ATMO index it is PM2.5 that is rated bad.
<img width="510" alt="Screenshot 2022-02-02 at 11 20 04" src="https://user-images.githubusercontent.com/8115933/152135683-a2ad1bff-515b-4ddb-978b-1b287ced6b50.png">
After checking, I also have a consistency problem for the emails of 28/01, 16/01, 15/01 and 23/12.
username_0:
#### Back
- [x] Add an attribute in Sendinblue for the recommendations tied to the pollution episode (`RECOMMANDATION_EPISODE`?)
- [x] Make sure the correct recommendation is sent for the ongoing episode
- [x] Display the classic recommendations for the ATMO index
- [x] Change the `polluant` attribute to display the pollutants in the format `PM10, O3 et NO2`
#### Front
- [x] Display the episode-related recommendation in the red block
username_1: What do we see in the API on our side @username_3 or @username_4?
username_3: We return what is displayed. In any case, I share the user's confusion. It's a deeper problem. The source transmits a code_pol that specifies the nature of the episode, and it is 5. Maybe it has a slightly different meaning depending on the AASQA, something like fine particles in the broad sense rather than PM10 here?
username_3: For information, it is indeed documented as Particules PM10 in the provider's label: https://data-atmoaura.opendata.arcgis.com/datasets/17db246020cc4c1c8b2556543e36299f_0/explore?filters=eyJjb2RlX3pvbmUiOlsiMjAwMCJdfQ%3D%3D&showTable=true
username_3: So maybe a point to clarify with the AASQA, @username_1
username_0: I think there is also a problem with the ATMO index not being up to date at the time of sending. For example, for the alert email of 28/01, I received a medium ATMO index by email while the API returns a bad index (https://api.recosante.beta.gouv.fr/v1/?insee=38185&date=2022-01-28&show_raep=true)
username_3: The API returns the last one published for the day, so that is entirely possible, and I confirm it matches what is in the database. The last publication was more a correction than a forecast. Hard to predict the accuracy of the index in order to delay the email.
argoproj/argo-events
393094735
Title: Add annotations to Webhook Gateway
Question: username_0: We are using Ambassador as our API gateway, and we are now using Argo Events to trigger workflows via the webhook. Ambassador uses annotations on services to configure them, allowing developers to determine their endpoints themselves. To be able to use this in the Gateway definition, we would have to be able to pass annotations specifically for the Service created by the Gateway. I think this is similar to https://github.com/argoproj/argo-events/issues/125, but I'm not sure whether applying that solution to this issue is the best option. An alternative would be to just add a field that allows you to define annotations on things other than the Gateway itself. The only alternatives at the moment are either creating an additional service linking to the Webhook Gateway or manually adding the annotations after it is live; both of these seem hacky.
Status: Issue closed
Answers: username_1: this is added in v0.7
/close
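For context, the "additional service" workaround looks roughly like this; the names, port, and selector are hypothetical, while the annotation format follows standard Ambassador v1 service-annotation config:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webhook-gateway-ambassador
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: webhook_gateway_mapping
      prefix: /webhook/
      service: webhook-gateway-svc:12000
spec:
  selector:
    gateway-name: webhook-gateway   # hypothetical label on the gateway pod
  ports:
    - port: 12000
      targetPort: 12000
```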
Ensembl/ensembl-vep
750967113
Title: Are vep variant categories redundant? Question: username_0: I've annotated some VCFs and noticed that some variants are annotated using overlapping/redundant categories, for example:
```
intron_variant
intron_variant,non_coding_transcript_variant
```
or
```
missense_variant
missense_variant,splice_region_variant
missense_variant,stop_retained_variant
```
Is the same variant annotated multiple times? If I count how many variants are annotated `intron_variant` for a given gene, am I counting the total of intronic variants, or should I sum up the occurrences of anything annotated `intron_variant + intron_variant,non_coding_transcript_variant`, and so on, to get that total?
Answers:
username_1: Hi, could you please send me your VEP command? It is possible to choose options which allow you to summarise consequence annotations across all transcripts that overlap your variant. Taking a look at the command should hopefully help me to explain your results better.
Best wishes, Anja
username_0: Hi, this is the command I used:
`vep -i vcf --gff ZFgenomic_tabixprep_nomiRNA.gff.gz --fasta ZFgenomic.fa -o vep.gz --fork 4 --symbol --tab --compress_output gzip --force_overwrite`
But I would prefer not to have to annotate it again, as it consumes a lot of time. I'm comfortable with text processing. No problem if the only way is annotating again, though. Thank you!
username_1: Thank you for providing the command. I just wanted to make sure that you are not using `--summary`, which outputs only a comma-separated list of all observed consequences per variant.
It is possible to get different consequence annotations for the same variant. The consequence is calculated per overlapping transcript, and if a variant overlaps several transcripts you can end up with a different consequence annotation for each variant/transcript pair.
For your question: "If I count how many variants are annotated intron_variant for a given gene, am I counting the total of intronic variants, or should I sum up the occurrences of anything annotated intron_variant + intron_variant,non_coding_transcript_variant"
-- if you count by gene, you need to take into account that you have several transcripts for a gene
-- if you want to count all intron variants, you need to count all occurrences of intron_variant, including the ones from e.g. intron_variant,non_coding_transcript_variant
Best wishes, Anja
username_0: Hello Anja, thanks for the reply! I manually filtered my annotations to only contain the longest transcript. So, yes, I'm analyzing one transcript per gene. In this case, would the same variant be annotated with different terms, for example intron_variant and intron_variant,non_coding_transcript_variant? Or will different intronic variants have non-overlapping labels specific to them? Thanks again, -mdz
username_1: I hope I understand you correctly, but if you only annotate the longest transcript you should only get one annotation per variant. The reason why you see, for example, both intron_variant and intron_variant,non_coding_transcript_variant is this: if you only get intron_variant, it means that the variant overlaps the intron region of a protein-coding transcript. If you see intron_variant,non_coding_transcript_variant, it means that the intron variant overlaps a non-coding transcript. A variant can have more than one consequence if the criteria for each consequence annotation are fulfilled. We explain all the different consequences [here](https://www.ensembl.org/info/genome/variation/prediction/predicted_data.html).
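To make the counting advice above concrete, here is a small text-processing sketch that tallies every consequence term in the tab-delimited output produced by the command above, splitting comma-separated annotations such as `intron_variant,non_coding_transcript_variant` so that each term contributes to its own total. It assumes the default `--tab` columns (which include a `Consequence` column); the file name matches the `-o vep.gz` output from the command.

```python
# Sketch: count each consequence term in VEP --tab output, so that a row
# annotated "intron_variant,non_coding_transcript_variant" adds one to the
# intron_variant total and one to the non_coding_transcript_variant total.
import csv
import gzip
from collections import Counter

def count_consequences(path: str) -> Counter:
    counts = Counter()
    with gzip.open(path, "rt") as handle:
        # Skip the '##' metadata lines; the '#Uploaded_variation ...' line
        # that follows them is the real column header.
        rows = (line for line in handle if not line.startswith("##"))
        reader = csv.DictReader(rows, delimiter="\t")
        for row in reader:
            for term in row["Consequence"].split(","):
                counts[term] += 1
    return counts

if __name__ == "__main__":
    for term, n in count_consequences("vep.gz").most_common():
        print(f"{term}\t{n}")
```

With the annotations filtered to one transcript per gene, each row corresponds to a single variant/transcript pair, so these per-term totals give the counts discussed above without re-running VEP.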
username_1: Hello, I will close the issue now. But please reopen it if you have any related questions.
Best wishes, Anja
Status: Issue closed