| column | type | min length | max length |
|-----------|--------|------------|------------|
| repo_name | string | 4 | 136 |
| issue_id | string | 5 | 10 |
| text | string | 37 | 4.84M |
coronasafe/care
1170059140
Title: Update `/add_user` endpoint to return status code instead of user object Question: username_0: **Describe the bug** Update `/add_user` endpoint to return status code instead of user object **To Reproduce** Steps to reproduce the behavior: 1. call `/add_user` api 2. inspect the success response from the api **Expected behavior** The API should respond with a success status if the user is created, else return error messages.
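A minimal sketch of the requested behaviour, assuming a Django REST Framework style view; the serializer, fields, and function names below are illustrative stand-ins, not the project's actual code:
```python
from rest_framework import serializers, status
from rest_framework.decorators import api_view
from rest_framework.response import Response

# Stand-in serializer: the real project would use its own User serializer.
class UserSerializer(serializers.Serializer):
    username = serializers.CharField()
    email = serializers.EmailField()

    def create(self, validated_data):
        return validated_data  # placeholder; the real code would create a User

@api_view(["POST"])
def add_user(request):
    serializer = UserSerializer(data=request.data)
    if serializer.is_valid():
        serializer.save()
        # Success: return only a status code, no user object in the body.
        return Response(status=status.HTTP_201_CREATED)
    # Failure: return the validation error messages.
    return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
```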
izderadicka/audioserve-android
618528620
Title: feature request: google cast support? Question: username_0: I'm not sure it is possible with a truly open source package, but Google Cast support would be great. It would be great if I could easily cast the audio to my Google Home(s). Currently my workaround is to 'mirror' the audio with the speaker, but true casting would also require server support.
Anuken/Mindustry-Suggestions
655329206
Title: Multiplayer Pause Question: username_0: **The Multiplayer Pause : [ Its a Pause for Lan or Community servers ] Mechanics: [ A vote for pause the game, can be pressed like in single player (or space bar), then under the map will be a voting for pause] Voting Mechanics : [ 2 players : [ both need to vote ] 3 players : [ 2/3 players need to vote ] 4 players : [ 3/4 players need to vote ] and so on..]** Can improve the issues of losing right when you enter in a server/wait for someone in Lan to ### enter 1. - [X] I have done a quick search in the list of suggestions to make sure this has not been suggested yet. 2. - [X] I have checked the [Trello](https://trello.com/b/aE2tcUwF/mindustry-trello) to make sure my suggestion isn't planned or implemented in a development version. 3. - [X] I am familiar with all the content already in the game or have glanced at the wiki to make sure my suggestion doesn't exist in the game yet. Answers: username_1: I think this should only be implemented as so far as waiting for more players. Not as a pause in during multiplayer gameplay. username_2: Duplicate. Status: Issue closed username_1: For reference. This is a duplicate of #56 or #261 username_3: Not sure if I'd be better asking for this issue to be revived, or making a new issue, but I would like to see a real multiplayer pause functionality, kind of like _AI War: Fleet Command_ (if you are familiar). Any player can pause or un-pause whenever they want. Ideally two different key-bindings for pausing and un-pausing, so if both players decide to "toggle" pausing at the same time, they do not cancel out. There are several aspects of the single player that would be really enjoyable and fun during coop, like the planning and fluid strategic thinking instead of chaotic rehearsed frenzy builds. It seems like a mechanic that work really great here. Many Thanks! 🧡 <sub>P.S. Please let me know if this would be better made into a new issue, or if this one could be revived/revisited.</sub>
bhuvnesh123/FFmpeg-Video-Editor-Android
252207625
Title: How to improve reverse performance Question: username_0: Hi, I am using your example application to reverse a video. The video is 9 seconds long and its size is 14.3 MB, but it takes about 15 minutes to produce the reversed video. What should I do to improve the performance? Thanks. Answers: username_1: You can use "-preset", "ultrafast" to increase speed. https://trac.ffmpeg.org/wiki/Encode/H.264 Ideally it should NOT take 15 minutes for a 9 second video, and I have tested with different videos. If possible, you can share the video you are having problems with at <EMAIL> Status: Issue closed
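For reference, a hedged sketch of invoking ffmpeg directly with the suggested `-preset ultrafast` option; the exact command the Android plugin builds may differ, and the file names are placeholders:
```python
import subprocess

# Reverse both streams; "-preset ultrafast" trades output size for encode speed.
subprocess.run(
    [
        "ffmpeg", "-y",
        "-i", "input.mp4",
        "-vf", "reverse",        # reverse the video stream
        "-af", "areverse",       # reverse the audio stream
        "-preset", "ultrafast",  # fastest x264 preset, larger output file
        "output_reversed.mp4",
    ],
    check=True,
)
```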
NGEET/fates
708435399
Title: C13disc_SCPF does not pass restart tests Question: username_0: I added this variable to the AllVars list recently and it does not PASS. This was not previously on the AllVars list, and I added during some testing for the nutrient enabled version. I'm going to remove this from the new testlist as it has no precedent of passing. ``` C13disc_SCPF (lon,lat,fates_levscpf,time) t_index = 7 7 4531 516672 ( 23, 35, 144, 1) ( 1, 1, 1, 1) ( 23, 35, 144, 1) ( 61, 14, 1, 1) 228696 2.700000000000000E+01 0.000000000000000E+00 2.7E+01 2.700000000000000E+01 2.0E-02 2.446018600463867E+01 228696 8.686255455017090E+00 0.000000000000000E+00 0.000000000000000E+00 0.000000000000000E+00 516672 ( 22, 26, 118, 1) ( 1, 1, 1, 1) avg abs field values: 3.845278620719910E-01 rms diff: 2.8E+00 avg rel diff(npos): 2.0E-02 6.057988852262497E-03 avg decimal digits(ndif): 0.0 worst: 0.0 RMS C13disc_SCPF 2.8153E+00 NORMALIZED 1.4416E+01 ```
apache/rocketmq
387205560
Title: Why not use FileChannel.write()? RocketMQ writes files via mmap, which can use a lot of shared memory Question: username_0: Here is a benchmark comparing FileChannel.write() with mmap file writes: https://github.com/jkreps/valencia/blob/master/src/test/java/valencia/TestLinearWritePerformance.java What's more, MaxDirectMemorySize does not limit the memory held by mmap'd files. Answers: username_0: ![image](https://user-images.githubusercontent.com/2367243/49436622-86c5f000-f7f4-11e8-88ce-332e7483bc24.png) ![image](https://user-images.githubusercontent.com/2367243/49436639-8c233a80-f7f4-11e8-935f-bc823e7e852a.png) username_1: @username_0 Compared with FileChannel, mmap can write to the page cache directly, whereas FileChannel with a direct buffer must write to that buffer first and then to the page cache. So RocketMQ has an advantage when writing a lot of small records into a file. username_0: @username_1 But RocketMQ may then hold 100 GB or even more of SHR memory. **How can the memory RocketMQ uses through mmap'd files be controlled?** ![image](https://user-images.githubusercontent.com/2367243/49513011-96b00380-f8ca-11e8-9cfd-4338ed7b53f5.png) Status: Issue closed username_2: Maybe using direct I/O (DIO) in the future would be a good choice, with memory management done entirely by the application layer.
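To illustrate the mechanism being discussed (this is a conceptual Python sketch, not RocketMQ's actual Java code): a memory-mapped write lands in the page cache directly and shows up as shared (SHR) memory, while an ordinary write copies through a user-space buffer first.
```python
import mmap
import tempfile

SIZE = 16 * 1024 * 1024  # pre-sized segment, similar in spirit to a CommitLog file

with tempfile.NamedTemporaryFile() as f:
    f.truncate(SIZE)

    # Path 1: memory-mapped write -- bytes go straight into the page cache.
    mapped = mmap.mmap(f.fileno(), SIZE)
    mapped[0:5] = b"hello"
    mapped.flush()   # msync: ask the OS to persist the dirty pages
    mapped.close()

    # Path 2: ordinary buffered write -- data is copied from a user-space
    # buffer into the page cache by the write() system call.
    f.seek(0)
    f.write(b"hello")
    f.flush()
```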
greenplum-db/gpdb
177299960
Title: elog_mock failed the PR Question: username_0: Seems like we have a consistent failure of the elog_mock. Here is the error: ``` No entries for symbol errmsg. ERROR: ../../../../src/test/unit/mock/backend/utils/error/elog_mock.c:338 - Could not get value to check parameter fmt of function errmsg Previously declared parameter value was declared at postgres_test.c:67 [ FAILED ] test__ProcessInterrupts__ClientConnectionLost [ RUN ] test__ProcessInterrupts__DoingCommandRead No entries for symbol errmsg. ERROR: ../../../../src/test/unit/mock/backend/utils/error/elog_mock.c:338 - Could not get value to check parameter fmt of function errmsg Previously declared parameter value was declared at postgres_test.c:143 [ FAILED ] test__ProcessInterrupts__DoingCommandRead [=============] 5 tests ran [ PASSED ] 3 tests [ FAILED ] 2 tests, listed below [ FAILED ] test__ProcessInterrupts__ClientConnectionLost [ FAILED ] test__ProcessInterrupts__DoingCommandRead ``` Answers: username_1: See #1127 for work on this. username_1: With #1127 pushed this should now be fixed. username_2: @username_0 can you confirm if this fixed now and close the same if it is. username_3: @username_0 is it resolved? username_4: Yes. Closing. Status: Issue closed
paulmillr/chokidar
184604023
Title: Fire change event without making an actual change to a file Question: username_0: I am using this lib to run individual tests when they themselves change, or to run the whole test suite if certain files in the project change. What I end up doing sometimes is adding comments ('//') to files and then saving them, just to get the watcher's 'change' events to fire. Are there some keystrokes I can use to fire a change event if a file on the filesystem hasn't experienced a change in its contents? Status: Issue closed Answers: username_1: ``` const watcher = chokidar.watch('dir') watcher.emit('change', 'filepath') ``` that should do the trick username_0: thanks, but I meant not so much programmatically - I meant via some keystrokes which somehow change the file without changing its contents - is there some recommended way to do that? Otherwise maybe I could use some sort of stdin thing and then use the simple methodology you described. username_2: Like [`touch`](http://www.unix.com/man-page/POSIX/1posix/touch/)? username_0: maybe, yeah, would that fire a change event in this library? Or maybe a 'touch' event? username_2: There is no `touch` event in chokidar. It will fire a `change` event when you `touch` a file on POSIX platforms username_0: ok good to know! username_0: works for me. I guess it would be interesting for the docs to document what kind of low-level events the library hooks into. I frankly don't know which lower-level events the "change" events originate from.
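As a side note, the same effect can be produced programmatically by bumping a file's modification time without touching its contents, which is all that `touch` does. A minimal sketch, in Python purely for illustration (the watcher itself is Node), with a placeholder path:
```python
import os
import time

def touch(path: str) -> None:
    """Update the file's atime/mtime without changing its contents."""
    now = time.time()
    os.utime(path, (now, now))

touch("test/some-spec.js")  # placeholder path; triggers a 'change' on POSIX watchers
```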
kapi2289/vulcan-api
994158576
Title: Internal Server Error -- cannot use library at all Question: username_0: The following code worked a few months back ```py import asyncio from vulcan import Keystore, Account, Vulcan # Loading keystore & account with open("keystore.json") as f: keystore = Keystore.load(f) with open("account.json") as f: account = Account.load(f) async def main(): # Client creation client = Vulcan(keystore=keystore, account=account) await client.select_student() # select the first available student print(client.student) await client.close() if __name__ == "__main__": asyncio.run(main()) ``` When trying to run it now, I get the following error ```py Traceback (most recent call last): File "D:\coding-projects\python\vulcan-marks\main.py", line 21, in <module> asyncio.run(main()) File "C:\Users\Kwiecinski\AppData\Local\Programs\Python\Python39\lib\asyncio\runners.py", line 44, in run return loop.run_until_complete(main) File "C:\Users\Kwiecinski\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 642, in run_until_complete return future.result() File "D:\coding-projects\python\vulcan-marks\main.py", line 16, in main await client.select_student() # select the first available student File "D:\coding-projects\python\vulcan-marks\venv\lib\site-packages\vulcan\_client.py", line 42, in select_student students = await self.get_students() File "D:\coding-projects\python\vulcan-marks\venv\lib\site-packages\vulcan\_client.py", line 61, in get_students self._students = await Student.get(self._api) File "D:\coding-projects\python\vulcan-marks\venv\lib\site-packages\vulcan\model\_student.py", line 78, in get data = await api.get(STUDENT_LIST, **kwargs) File "D:\coding-projects\python\vulcan-marks\venv\lib\site-packages\vulcan\_api.py", line 155, in get return await self._request("GET", url, body=None, **kwargs) File "D:\coding-projects\python\vulcan-marks\venv\lib\site-packages\vulcan\_api.py", line 142, in _request raise RuntimeError(status["Message"]) RuntimeError: Internal Server Error (ArgumentException) Unclosed client session client_session: <aiohttp.client.ClientSession object at 0x00000184CA106910> Unclosed connector connections: ['[(<aiohttp.client_proto.ResponseHandler object at 0x00000184CA0E96A0>, 158185.046)]'] connector: <aiohttp.connector.TCPConnector object at 0x00000184CA106940> Fatal error on SSL transport protocol: <asyncio.sslproto.SSLProtocol object at 0x00000184CA1183D0> transport: <_ProactorSocketTransport fd=868 read=<_OverlappedFuture cancelled>> Traceback (most recent call last): File "C:\Users\Kwiecinski\AppData\Local\Programs\Python\Python39\lib\asyncio\sslproto.py", line 684, in _process_write_backlog self._transport.write(chunk) File "C:\Users\Kwiecinski\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 359, in write self._loop_writing(data=bytes(data)) File "C:\Users\Kwiecinski\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 395, in _loop_writing self._write_fut = self._loop._proactor.send(self._sock, data) AttributeError: 'NoneType' object has no attribute 'send' Exception ignored in: <function _SSLProtocolTransport.__del__ at 0x00000184C8472430> Traceback (most recent call last): File "C:\Users\Kwiecinski\AppData\Local\Programs\Python\Python39\lib\asyncio\sslproto.py", line 321, in __del__ File "C:\Users\Kwiecinski\AppData\Local\Programs\Python\Python39\lib\asyncio\sslproto.py", line 316, in close File "C:\Users\Kwiecinski\AppData\Local\Programs\Python\Python39\lib\asyncio\sslproto.py", line 593, in 
_start_shutdown File "C:\Users\Kwiecinski\AppData\Local\Programs\Python\Python39\lib\asyncio\sslproto.py", line 598, in _write_appdata File "C:\Users\Kwiecinski\AppData\Local\Programs\Python\Python39\lib\asyncio\sslproto.py", line 706, in _process_write_backlog File "C:\Users\Kwiecinski\AppData\Local\Programs\Python\Python39\lib\asyncio\sslproto.py", line 720, in _fatal_error File "C:\Users\Kwiecinski\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 151, in _force_close File "C:\Users\Kwiecinski\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 746, in call_soon File "C:\Users\Kwiecinski\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 510, in _check_closed RuntimeError: Event loop is closed ``` I'm really unsure what the issue could be, followed Answers: username_1: The following code works perfectly fine for me. Try registering the keystore again (with a token, symbol and PIN). If that does not help, provide additional info by enabling debugging: ```py import logging client = Vulcan(keystore, account, logging.DEBUG) # .. rest of the code ``` Status: Issue closed username_0: Yeah there was some issue with the keystore, thanks a lot for the help, have a good day.
ngnjs/NGN
354053042
Title: Data Virtual Fields Question: username_0: From ngn-core created by [username_0](https://github.com/username_0) : ngnjs/ngn-core#16 Virtual values should be cached automatically upon first creation. They should only change if the model values change. _Note:_ It's possible to read and parse the virtual field function, then identify any locally scoped attributes it depends on. These are the fields to monitor. It should be possible to disable this feature so it _always_ reruns the virtual field function.<issue_closed> Status: Issue closed
18F/pulse
225718152
Title: any reason federalregister.gov is not included in DAP section? Question: username_0: federalregister.gov is a live, public-facing, non redirecting site. Thanks. Answers: username_1: According to the .gov domain list, federalregister.gov is a property of the Government Publishing Office, a legislative branch agency. (Note that yes, the [Office of the Federal Register](https://www.ofr.gov) is an office of the National Archives, an executive branch agency. But they do a lot of cross-branch/agency sharing of information, and you can even see that the certificate for https://www.ofr.gov says "Government Publishing Office" next to the URL bar. Government is weird.) username_0: thanks for the explanation @username_1! Will close. Status: Issue closed
ionic-team/ionic-storage
1056963148
Title: Unable to Store data in ANDROID 11 (API 30) Question: username_0: Values not store in android 11 i.e API 30 , from below android 30 all work good. It not store any value in ionic/storage , Is there any new update or any new configuration for android 11 in ionic 4 Thank you...!! Answers: username_1: Pensé que solo me pasaba en los celulares XIOAMI pero al estar investigando solo pasa en EL API 30 Igual estoy esperando una actualización username_2: I'm seeing this too. It works on previous versions of API (29 specifically) username_3: Yesterday got same issue. Storage doesn't work for Android 11, Galaxy A32 5. username_4: I'm also getting the same issue! My app is not working properly in some Android 11 devices. I've tested on Samsung s20 fe and Samsung Tab A7. However, I was able to run the app without problems on Android Emulator and also in some other devices with Android 11. It looks like this problem doesn't happen to all Android 11 devices. username_2: To solve this, we have changed our Ionic storage to use IndexDB rather than SQLite. In Angular, we change the root instantiation to this: ``` IonicStorageModule.forRoot({ driverOrder: ['indexeddb', 'sqlite', 'websql'] }), ``` username_5: Same problem here, Android Version 11 - API (30) ``` 12-14 22:45:36.901 E/SQLitePlugin(17734): unexpected error, stopping db thread 12-14 22:45:36.901 E/SQLitePlugin(17734): java.lang.NullPointerException: Attempt to invoke interface method 'java.lang.String io.liteglue.SQLDatabaseHandle.getLastErrorMessage()' on a null object reference 12-14 22:45:36.901 E/SQLitePlugin(17734): at io.liteglue.SQLiteGlueConnection.<init>(SQLiteGlueConnection.java:12) 12-14 22:45:36.901 E/SQLitePlugin(17734): at io.liteglue.SQLiteConnector.newSQLiteConnection(SQLiteConnector.java:20) 12-14 22:45:36.901 E/SQLitePlugin(17734): at io.sqlc.SQLiteConnectorDatabase.open(SQLiteConnectorDatabase.java:55) 12-14 22:45:36.901 E/SQLitePlugin(17734): at io.sqlc.SQLitePlugin.openDatabase(SQLitePlugin.java:213) 12-14 22:45:36.901 E/SQLitePlugin(17734): at io.sqlc.SQLitePlugin.access$000(SQLitePlugin.java:28) 12-14 22:45:36.901 E/SQLitePlugin(17734): at io.sqlc.SQLitePlugin$DBRunner.run(SQLitePlugin.java:328) 12-14 22:45:36.901 E/SQLitePlugin(17734): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167) 12-14 22:45:36.901 E/SQLitePlugin(17734): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641) 12-14 22:45:36.901 E/SQLitePlugin(17734): at java.lang.Thread.run(Thread.java:923) ``` @username_2 solution worked for as a workaround. Is there any idea when this bug will be addressed? username_4: @username_5, It looks like this problem is actually on cordova-sqlite-storage plugin, as can be seen here https://github.com/storesafe/cordova-sqlite-storage/issues/954. I was able to fix this problem by updating the cordova-sqlite-storage to version 6.0.0. The following steps worked for me: (1) Delete node_modules directory (2) Change the cordova-sqlite-storage plugin version to 6.0.0 In package.json ` "cordova-sqlite-storage": "^6.0.0", ` (3) Run npm install In the case you need to test your application in a variety of devices, the site www.browserstack.com can help you. They provide many virtual devices that are ready to run you application. It helped me a lot to reproduce this bug and confirm the solution. username_0: I don't no what is happen but i tried with this solutions and it works . . Remove the dependencies and install it again. 
This helped me resolve the issue. Status: Issue closed username_6: This solved the issue for me (Ionic 5, Angular 11, Capacitor 3, ionic-native/sqlite 5.36, cordova-sqlite-storage: 6.0.0): $ npm install cordova-sqlite-storage $ npm install @awesome-cordova-plugins/sqlite $ ionic cap sync Hope it helps. username_7: Hi, did you manage to find a solution? username_6: Hi, I solved it by uninstalling what I had for SQLite and installing: $ npm install cordova-sqlite-storage $ npm install @awesome-cordova-plugins/sqlite $ ionic cap sync That is what the updated Capacitor documentation recommends. You will probably have to change some of your code. Good luck. https://ionicframework.com/docs/native/sqlite
ng-alain/ng-alain
464446201
Title: The sf component needs a setValue method that can set multiple form values at once Question: username_0: As the title says. The existing setValue method can only set one value at a time. Suppose there is a field a, and when its value is modified I also want to modify the value of field b; then I have to call setValue twice. This creates a problem: the formChange event is fired twice, which leads to some uncontrollable behavior. Answers: username_1: Validation and data updates in `sf` are both asynchronous, so it is hard to control which particular update does not trigger an event; if you need to update several values, use `refreshSchema` to reset. Status: Issue closed username_0: What I need is not to suppress the event on an update. What I mean is: could the SFComponent.setValue method be extended? Previously, setValue changed the value of a single element in the form. Now I want to customize some elements so that changing their value also changes the value of other form elements. Suppose there are two inputs a and b, with b = 2*a. What I want is: when 1 is entered into a, the value of b automatically becomes 2. So the sf component needs a setValue for a that can write both values in one call. The code currently in use, inheriting from ControlWidget, is: this.setValue(this.obj['Id']); const item = this.sfComp.getProperty('/' + this.ui['name']); item.setValue(this.obj, false); The problem with this code is that it calls setValue twice; I would like to assign the values with a single setValue call.
podpis/kodi
341354940
Title: wizja.tv - support Question: username_0: When will wizja.tv be supported? The kodiver add-on currently does not work. Answers: username_1: @username_0 https://forum.kodiwpigulce.pl/showthread.php?tid=911&highlight=wizja username_1: @username_0 https://forum.kodiwpigulce.pl/showthread.php?tid=911&highlight=wizja
RMLio/RMLStreamer
550157148
Title: Error that GroupID is missing, but it's not Question: username_0: When executing the following ``` ./run.sh -p $(pwd)/../mapping.rml.ttl -f ../flink-1.9.0/bin/flink -o ../test.nt ``` with the following RML rules ``` @prefix rr: <http://www.w3.org/ns/r2rml#>. @prefix rml: <http://semweb.mmlab.be/ns/rml#> . @prefix ql: <http://semweb.mmlab.be/ns/ql#> . @prefix ex: <http://www.example.com/> . @prefix rmls: <http://semweb.mmlab.be/ns/rmls#> . @base <http://example.com/base> . <#TripleMap> a rr:TriplesMap; rml:logicalSource [ rml:source [ a rmls:KafkaStream ; rmls:broker "n076-21.wall1.ilabt.iminds.be:9092" ; rmls:groupid "rmlstreamer"; rmls:topic "people"; ]; rml:referenceFormulation ql:JSONPath; ]; rr:subjectMap [ rr:template "http://example.com/{id}" ]; rr:predicateObjectMap [ rr:predicate ex:firstname; rr:objectMap [ rml:reference "first_name" ] ]. ``` I get this error ``` streamer jar: target/RMLStreamer-1.2.1-SNAPSHOT.jar job name: mapping: /home/consumer/RMLStreamer/../mapping.rml.ttl output: ../test.nt socket: kafkaBrokerList: kafkaTopic: parallelism: 1 // RML Run Script ------------------------------------------ Starting execution of program ------------------------------------------------------------ The program finished with the following exception: org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: requirement failed: exactly 1 groupID needed at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:593) at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438) [Truncated] at io.rml.framework.core.extractors.std.StdTriplesMapExtractor$$anonfun$extract$1.apply(StdTriplesMapExtractor.scala:55) at io.rml.framework.core.extractors.std.StdTriplesMapExtractor$$anonfun$extract$1.apply(StdTriplesMapExtractor.scala:55) at scala.collection.immutable.List.flatMap(List.scala:338) at io.rml.framework.core.extractors.std.StdTriplesMapExtractor.extract(StdTriplesMapExtractor.scala:54) at io.rml.framework.core.extractors.std.StdTriplesMapExtractor.extract(StdTriplesMapExtractor.scala:35) at io.rml.framework.core.extractors.std.StdMappingExtractor.extract(StdMappingExtractor.scala:43) at io.rml.framework.core.extractors.std.StdMappingExtractor.extract(StdMappingExtractor.scala:33) at io.rml.framework.core.extractors.std.StdMappingReader.read(StdMappingReader.scala:47) at io.rml.framework.Main$.readMappingFile(Main.scala:180) at io.rml.framework.Main$.main(Main.scala:109) at io.rml.framework.Main.main(Main.scala) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576) ... 9 more ``` This is weird because the Group ID is present in the RML rules: `rmls:groupid "rmlstreamer";` Answers: username_1: It's not supported indeed. This is on the to-do list... Status: Issue closed username_1: Now output to file is supported when having a stream as input.
chris-paul/no-step-no-mile
883181612
Title: Draw a triangle with CSS Answers: username_1: ```css div { width: 0; height: 0; border-left: 100px solid transparent; border-top: 100px solid red; border-bottom: 100px solid transparent; border-right: 100px solid transparent; } ``` applied to an empty `<div></div>`. With zero width and height, only the borders are rendered, and the three transparent borders leave the red top border visible as a downward-pointing triangle.
sedovalx/taxi
63400179
Title: A filter is needed for the user list Question: username_0: Above the user list there should be a panel with fields for entering filter parameters: - Full name - Login Next to the panel there should be a Refresh button; clicking it should reload the list using the entered filter parameters. There should also be a Clear button that clears the entered filter parameters and reloads the list. [Here is how](http://emberjs.com/guides/routing/query-params/) to specify filter parameters on the client. In our case, all filter parameters sent to the server must have names with the filter. prefix, i.e. filter.fio or filter.login. Filter parameter support is not yet implemented on the server; that is a separate task. Answers: username_1: @username_0 Please clarify which fields the server supports filtering on. Filtering by role does not seem to be supported, while the other listed fields are. username_0: @username_1 The role filter should be a drop-down list, and the role value (Administrator, Accountant, etc.) should be sent to the server, not the Russian display name. username_1: @username_0 In my case it does not work at all, even when I send the English value. If I try to filter by this field, the filter also stops working for the other fields. username_0: @username_1 Post the query string that is sent to the server here. username_1: @username_0 I assume that is the request URL? Request URL: http://localhost:9000/api/users?role=test There are no errors, and the full list is returned without filtering. username_1: @username_0 Got it now. Request URL: http://localhost:9000/api/users?role=Administrator works, i.e. if I make it a drop-down list, it will work. username_1: @username_0 I need a role value for filtering that would return all roles. Right now this request (filtering by login, but without filtering by role) http://localhost:9000/api/users?firstName=&lastName=&login=Test&middleName=&role= returns the whole list, i.e. as I understand it, the server only understands a role value that already exists and accepts neither an empty value nor null. I cannot make a request without role, i.e. like this: http://localhost:9000/api/users?firstName=&lastName=&login=Test&middleName= because of what is described here: http://guides.emberjs.com/v1.10.0/routing/query-params/ If I have not expressed myself clearly, let's talk on the phone :) username_0: @username_1 Look at how I fixed the filter markup. It is still not great, since the filter takes up far too much space. Several input fields should be placed on one line, two per row for example. For that, read about http://getbootstrap.com/css/#grid and look at our {{property-row}} component. It should probably use parameterizable column widths instead of fixed ones, with the current values kept as defaults. username_0: @username_1 Or we can do something simpler and hide the filter in an expander. See [here](http://getbootstrap.com/javascript/#collapse) for how to make an expander. Status: Issue closed
luniehq/lunie
489657732
Title: Fetch Proposals Using GraphQL Question: username_0: **Is your feature request related to a problem? Please describe.** With the new backend, it is now possible to retrieve Proposal information using GraphQL. **Describe the solution you'd like** Fetch lists of, and individual proposals using GraphQL. Answers: username_0: Fixed in #3022 Status: Issue closed
steelbrain/linter
227091742
Title: Show old output until new output is ready Question: username_0: I am using this in conjunction with a Scala compiler. This language is very slow to compile so, I fix one error, hit save to trigger a recompile. Unfortunately right at the same point in time, the previous output is removed. It would be nice if it didn't get removed immediately but replaced with the new errors when they are ready. Answers: username_1: That's how things already work: Linter only updates messages when a provider gives it results. The only other way that the text could be losing its marker is if it is modified enough that Atom no longer considers the marker valid and automatically removes it. Do you have external tools running during build that are re-writing the files? There is a bug in Atom where occasionally it replaces the entire file instead of only the differences when the file is updated outside Atom. username_0: I don't think any tools do write operations to the file. Maybe the provider is at fault here? https://github.com/inkytonik/atom-sbt this is the one. username_1: Looks like that provider is using the "Indie" API, which means it controls when messages are updated. It only does so in two places: [here](https://github.com/inkytonik/atom-sbt/blob/v0.10.0/lib/project.coffee#L136) and [here](https://github.com/inkytonik/atom-sbt/blob/v0.10.0/lib/project.coffee#L168). If those two cases don't look like they are causing this you should file an issue on that provider. username_0: Okay, cool I will try to take it in the provider, thanks a lot! Status: Issue closed
facebookresearch/detectron2
1006483193
Title: KeyError: 'image_id' on inferencing step Question: username_0: I'm trying to use Retinanet_R_50_FPN_3x for my object detection task. The training process starts fine, but then breaks pointing to a `KeyError: 'image_id'`. I assume there could be a problem with my preprocessing function `get_board_dicts`. I was surprised by the fact that Detectron2 rearranges the standard COCO structure into its own, so there could be some convertion isues. ## Instructions To Reproduce the Issue: Here's my code: ``` import numpy as np import os, json, cv2, random import matplotlib.pyplot as plt from detectron2.utils.logger import setup_logger setup_logger() from detectron2 import model_zoo from detectron2.engine import DefaultPredictor from detectron2.config import get_cfg from detectron2.utils.visualizer import Visualizer from detectron2.data import MetadataCatalog, DatasetCatalog from detectron2.structures import BoxMode from detectron2.engine import DefaultTrainer from detectron2.evaluation import COCOEvaluator class CocoTrainer(DefaultTrainer): @classmethod def build_evaluator(cls, cfg, dataset_name, output_folder=None): if output_folder is None: os.makedirs("coco_eval", exist_ok=True) output_folder = "coco_eval" return COCOEvaluator(dataset_name, cfg, False, output_folder) def get_board_dicts(path_to_images_dir, path_to_COCO_dir, COCO_file): json_file = path_to_COCO_dir + COCO_file #Fetch the json file with open(json_file) as f: dataset_dicts = json.load(f) annos = dataset_dicts["annotations"] for i in dataset_dicts["images"]: filename = i["file_name"] i["file_name"] = path_to_images_dir + filename j = list(filter(lambda anno: anno['image_id'] == i['id'], annos)) for a in j: a["category_id"] = 0 a["bbox_mode"] = BoxMode.XYWH_ABS i["annotations"] = j new_dicts = dataset_dicts["images"] return new_dicts #Registering the Dataset mapper = {"train": {"images_dir": "E:/Detectium/ready_images/templates/1_train/", "COCO_dir": "E:/Detectium/COCO/1_train/", "COCO_file": "1_train_COCO.json"}, "valid": {"images_dir": "E:/Detectium/ready_images/templates/2_valid/", "COCO_dir": "E:/Detectium/COCO/2_valid/", "COCO_file": "2_valid_COCO.json"}} for d in ["train", "valid"]: images_path = mapper[d]["images_dir"] COCO_path = mapper[d]["COCO_dir"] COCO_file = mapper[d]["COCO_file"] DatasetCatalog.register("boardetect_" + d, lambda d=d: get_board_dicts(images_path, COCO_path, COCO_file)) MetadataCatalog.get("boardetect_" + d).set(thing_classes=["FLOWER"]) board_metadata = MetadataCatalog.get("boardetect_train") [Truncated] [[255, 255, 255, ..., 255, 255, 255], [255, 255, 255, ..., 255, 255, 255], [255, 255, 255, ..., 255, 255, 255], ..., [255, 255, 255, ..., 255, 255, 255], [255, 255, 255, ..., 255, 255, 255], [255, 255, 255, ..., 255, 255, 255]]], dtype=torch.uint8)} ``` There's no key named "image_id", but as far as I understand "image_id" is the key for annotations dict, not images dict. ## Expected behavior: The training process runs fine until completion. ## Environment: OS Platform: Windows 10 64-bit Python version: 3.7 CUDA/cuDNN version: 10.1 GPU model and memory: GTX 1080 Ti Answers: username_0: The problem was caused by inappropriate input format. The correct input format's described here: https://detectron2.readthedocs.io/en/latest/tutorials/datasets.html#standard-dataset-dicts Status: Issue closed
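For reference, the standard dataset dict format linked above expects one record per image shaped roughly like the sketch below, so the conversion function needs to copy each COCO image's `id` into `image_id` and add `height`/`width`; the field values here are illustrative:
```python
from detectron2.structures import BoxMode

record = {
    "file_name": "E:/Detectium/ready_images/templates/1_train/example.png",  # illustrative path
    "image_id": 0,        # the key the evaluator failed to find
    "height": 480,
    "width": 640,
    "annotations": [
        {
            "bbox": [100.0, 120.0, 60.0, 80.0],
            "bbox_mode": BoxMode.XYWH_ABS,
            "category_id": 0,
        }
    ],
}
# The registered dataset function should return a list of such records.
```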
jagrosh/Selfbot
234684485
Title: Bot doesn't load up Question: username_0: Try to run it and the Java executable doesn't load. Any solutions Answers: username_1: Can you provide examples of what you've tried so far, and screenshots of any errors you've gotten? username_0: Just reinstalled and it worked, thank you! Status: Issue closed
m-lab/etl
225570884
Title: SIGBUS from web100_snapshot_alloc_from_log Question: username_0: ``` [signal SIGBUS: bus error code=0x80 addr=0x0 pc=0x89684c] runtime stack: runtime.throw(0x9ee9cc, 0x2a) /usr/lib/google-golang/src/runtime/panic.go:599 +0x9e runtime.sigpanic() /usr/lib/google-golang/src/runtime/signal_unix.go:274 +0x2db goroutine 1092 [syscall, locked to thread]: runtime.cgocall(0x895d60, 0xc427635358, 0x27635380) /usr/lib/google-golang/src/runtime/cgocall.go:131 +0xe2 fp=0xc427635328 sp=0xc4276352e8 pc=0x405282 github.com/m-lab/etl/web100._Cfunc_web100_snapshot_alloc_from_log(0x7ff4b07103e0, 0x0) github.com/m-lab/etl/web100/_obj/_cgo_gotypes.go:443 +0x4a fp=0xc427635358 sp=0xc427635328 pc=0x82d90a github.com/m-lab/etl/web100.Open.func2(0x7ff4b07103e0, 0x7ff4b07103e0) /usr/local/google/home/gfr/go/src/github.com/m-lab/etl/web100/web100.go:66 +0x60 fp=0xc427635390 sp=0xc427635358 pc=0x82f090 github.com/m-lab/etl/web100.Open(0xc42050a5e0, 0x1c, 0xc42559f5c0, 0x0, 0x0, 0x0) /usr/local/google/home/gfr/go/src/github.com/m-lab/etl/web100/web100.go:66 +0xdb fp=0xc4276353f8 sp=0xc427635390 pc=0x82de1b ```<issue_closed> Status: Issue closed
dotnet/runtime
876154848
Title: missing some commands in namespace `Intrinsics` Question: username_0: ## Background and Motivation Some instructions are missing like `_mm_set1_epi8` in sse2. Sometimes this command is useful, we could get a Vector<128> like [17,17...17](repeated 16 times) in one cpu cycle. <!-- We welcome API proposals! We have a process to evaluate the value and shape of new API. There is an overview of our process [here](https://github.com/dotnet/runtime/blob/main/docs/project/api-review-process.md). This template will help us gather the information we need to start the review process. First, please describe the purpose and value of the new API here. --> ## Proposed API Personally, I do not care the name, but I know it is important, this is just an example. ``` namespace System.Runtime.Intrinsics.X86 { public abstract class Sse2 : Sse{ + // _mm_set1_epi8 in SSE2 + static Vector128<byte> BroadcastByte<byte>(byte _byte); } } ``` <!-- Please provide the specific public API signature diff that you are proposing. For example: ```diff namespace System.Collections.Generic { - public class HashSet<T> : ICollection<T>, ISet<T> { + public class HashSet<T> : ICollection<T>, ISet<T>, IReadOnlySet<T> { } ``` You may find the [Framework Design Guidelines](https://github.com/dotnet/runtime/blob/main/docs/coding-guidelines/framework-design-guidelines-digest.md) helpful. --> ## Usage Examples ``` var tmp = Sse2.BroadcastByte(8); // Then I got a Vector<128> `tmp` full with byte 8 repeated 16 times. ``` For a real case, I want to implement SwissTable, but find missing this command. <!-- Please provide code examples that highlight how the proposed API additions are meant to be consumed. This will help suggest whether the API has the right shape to be functional, performant and useable. You can use code blocks like this: ``` C# // some lines of code here ``` --> ## Alternative Designs <!-- Were there other options you considered, such as alternative API shapes? How does this compare to analogous APIs in other ecosystems and libraries? --> Well, as far as I know, rust provide it: https://doc.rust-lang.org/core/arch/x86_64/index.html ## Risks <!-- Please mention any risks that to your knowledge the API proposal might entail, such as breaking changes, performance regressions, etc. --> I guess there was a reason for not adding this command at first, but maybe there is a chance to review and add more commands now? Answers: username_1: @username_0 `Vector128.Create()` and other overloads should do it for you username_0: @username_1 Thanks a lot, bro. This really helps. Status: Issue closed
nodejs/node-addon-api
882366064
Title: [Tests] Test document coverage for ObjectWrap Question: username_0:
| class | methods |
|-------------------------|----------------------------------------------------------------------------------------------------------------------------------|
| ObjectWrap | |
| Covered | ObjectWrap(const CallbackInfo& callbackInfo) |
| | static T* Unwrap(Object wrapper) |
| Covered #125 | Function DefineClass(Napi::Env env, const char * utf8name, const std::initializer_list<PropertyDescriptor>& properties) |
| Covered #125 | Function DefineClass(Napi::Env env, const char * utf8name, const std::vector<PropertyDescriptor>& properties) |
| | PropertyDescriptor StaticMethod(const char* utf8name, StaticVoidMethodCallback method, napi_property_attributes) |
| Covered #280 | PropertyDescriptor StaticMethod(const char* utf8name, StaticMethodCallback method) |
| | PropertyDescriptor StaticMethod(Symbol name, StaticVoidMethodCallback method) |
| Covered #280 | PropertyDescriptor StaticMethod(Symbol name, StaticMethodCallback method) |
| Covered #604 | PropertyDescriptor StaticMethod(Symbol name) template<StaticVoidMethodCallback method> |
| Covered #604 | PropertyDescriptor StaticMethod(Symbol name) template<StaticMethodCallback method> |
| Covered #604 | PropertyDescriptor StaticMethod(const char * utf8name) template<StaticVoidMethodCallback method> |
| Covered #604 | PropertyDescriptor StaticMethod(const char * utf8name) template<StaticMethodCallback method> |
| Covered #604 | PropertyDescriptor StaticAccessor(const char * utf8name) template<StaticGetterCallback getter, StaticSetterCallback setter> |
| Covered #604 | PropertyDescriptor StaticAccessor(Symbol name) template<StaticGetterCallback getter, StaticSetterCallback setter> |
| Covered #280 | PropertyDescriptor StaticAccessor(const char* utf8name, StaticGetterCallback getter, StaticSetterCallback setter) |
| Covered #280 | PropertyDescriptor StaticAccessor(Symbol name, StaticGetterCallback getter, StaticSetterCallback setter) |
| Covered #280 | PropertyDescriptor StaticValue(const char *utf8name, Napi::Value value) |
| Covered #280 | PropertyDescriptor StaticValue(Symbol name, Napi::Value value) |
thedilletante/utils
221364686
Title: Configurable thread model Question: username_0: As of now, the active object creates its own thread and executes all requests there. It would be useful to be able to provide some other thread, so that the operations of several objects can be performed on one chosen thread.
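A minimal sketch of the requested design, in Python rather than the project's own language and with invented names: the active object accepts an optional externally owned executor, so several objects can share one worker thread.
```python
from concurrent.futures import Executor, Future, ThreadPoolExecutor
from typing import Optional

class ActiveObject:
    def __init__(self, executor: Optional[Executor] = None) -> None:
        # Fall back to a private single-thread executor when none is supplied.
        self._owns_executor = executor is None
        self._executor = executor if executor is not None else ThreadPoolExecutor(max_workers=1)

    def request(self, fn, *args, **kwargs) -> Future:
        # Every request runs on whichever thread(s) the executor owns.
        return self._executor.submit(fn, *args, **kwargs)

    def close(self) -> None:
        # Only shut down an executor this object created itself.
        if self._owns_executor:
            self._executor.shutdown()

# Two active objects whose requests all execute on one shared thread:
shared = ThreadPoolExecutor(max_workers=1)
a, b = ActiveObject(shared), ActiveObject(shared)
print(a.request(sum, [1, 2, 3]).result(), b.request(len, "abc").result())
shared.shutdown()
```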
clangen/musikcube
280866258
Title: endless metadata syncing loop Question: username_0: Hi, I just installed musikcube from master and am facing this issue. Musikcube scans all my music (and I can see all my albums/tracks in library and browse/play everything just fine. But as soon as all music is added to library musikcube rescans everything. The library is not reset, but scanning my files never stops. Any hints how I can debug this. Would love to help to solve this one. Answers: username_1: Just noticed this issue... how do you know your files are getting scanned multiple times? Does the banner up top reset to 0 four times? Or do four times as many files show up as counted? username_0: It reset to 0 each time. Will have to test again with a nuked database to be sure it's reproducable username_1: Closing for now -- I gave this a test on Windows, macOS and Linux (and even my Raspberry Pi) and didn't see anything weird. Please re-open if necessary. Status: Issue closed
brotherlogic/githubreceiver
807019053
Title: Error for /pullrequester.PullRequesterService/UpdatePullRequest Question: username_0: newrunner [3ea3b75adfa4b26682759e3056a4ef91]: 11 calls 9 errors (rpc error: code = Unknown desc = Unable to locate PR update:{checks:{source:"Analyze (go)" pass:FAIL} shas:"378ae3106c536a171877f739c5cda6f735519af5"})<issue_closed> Status: Issue closed
SpoilThePrincess/Spoil-The-Princess1
157373433
Title: Giovanni Question: username_0: > Sex sex and more sex I'm married and need some strange Must be discreet and get staight to the point if u want a fun … [Read More](http://www.growlichat.com/giovanni/) Source/Repost => http://www.growlichat.com/giovanni/ growlichat-Blogger http://www.growlichat.com/ [View On WordPress](http://growlichats.wordpress.com/2016/05/29/giovanni-3) Source/Repost => http://goongprincess.tumblr.com/post/145090782101 GrowliChat-username_0 http://goongprincess.tumblr.com/
dotnet/cli
153561321
Title: dotnet pack does not support the "suppressParam" flag Question: username_0: ## Steps to reproduce When trying to build a cross plat app using dotnet cli with reference to Microsoft.NETCoreApp with a ```"suppressParent": "all" ``` set, the NETCoreApp dependency is not suppressed. It would be helpful in scenarios where we do not wish to take a dependency on the entire NETCoreApp framework as described here https://github.com/dotnet/cli/issues/2913 ## Expected behavior NETCoreApp dependency should be suppressed ## Actual behavior NETCoreApp dependency is not suppressed ## Environment data `dotnet --info` output: cc: @ericstj Answers: username_1: You can set `<DisableImplicitFrameworkReferences>true</DisableImplicitFrameworkReferences>` and then not add a reference to the M.NC.App package to achieve this. Also, if you want to depend on it but not have that dependency flow, you can set PrivateAssets="All" as metadata to your package reference. Though, this is set implicitly for you as well. Status: Issue closed
ShiqiYu/libfacedetection
281239199
Title: Hello Professor Yu, what exactly do scale and min_neighbors mean? The demo's default parameters produce false detections. Question: username_0: ![image](https://user-images.githubusercontent.com/9295204/33864896-e7b63cec-df29-11e7-9467-fa24ffa375de.png) ![image](https://user-images.githubusercontent.com/9295204/33864897-e7c9f098-df29-11e7-846b-a7ad29c4253a.png) Original image: ![web](https://user-images.githubusercontent.com/9295204/33864936-20752c5a-df2a-11e7-8253-671a70525068.jpg) Answers: username_1: scale: the ratio by which the image is shrunk at each step; changing it is not recommended. min_neighbors: neighbors is an attribute of a detected face box; the larger it is, the more likely the box is a face. Face boxes with fewer than min_neighbors neighbors are filtered out. username_0: Thank you, Professor Yu. Is the base size used for scaling the parameter passed in the function call, i.e. the 48 in the demo, the third-to-last parameter? `pResults = facedetect_multiview_reinforce(pBuffer, (unsigned char*)(gray.ptr(0)), gray.cols, gray.rows, (int)gray.step, 1.2f, 2, 48, 0, doLandmark);` username_2: scale is the scaling factor, i.e. the 1.2f: the window size is multiplied by 1.2 at each step. The 48 you mention is the minimum face size threshold. Status: Issue closed
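For readers who want a runnable reference point, OpenCV's cascade detector (a different library from libfacedetection, shown only to illustrate what the three knobs mean) exposes the same parameters by name; the image path is a placeholder:
```python
import cv2

gray = cv2.imread("web.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder image path
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = cascade.detectMultiScale(
    gray,
    scaleFactor=1.2,   # shrink the search scale by 1.2x per pyramid step
    minNeighbors=2,    # discard detections confirmed by fewer than 2 neighbors
    minSize=(48, 48),  # ignore faces smaller than 48x48 pixels
)
```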
okseong/EnactBrowser
728628427
Title: 10/24 (Sat) meeting notes Question: username_0: # Meeting notes - Run BrowserAudit with --disable-web-security + When running with this flag, the --user-data-dir option must also be supplied. + See this [link](https://chromium.googlesource.com/chromium/src/+/master/docs/user_data_dir.md) - Run BrowserAudit with --no-sandbox + The results differ very little, so we need to look further into whether BrowserAudit actually checks the vulnerabilities of this flag and think about a scenario - Questions to ask the senior engineer - The next meeting is Sunday at 10 a.m. Answers: username_1: ## Log / 김홍균 - BrowserAudit / Results / organized the results and screenshots per flag username_0: ## Log / 박은천 - Had computer trouble, so set up the environment again... - Studied JavaScript syntax - Studied the JavaScript code for the SOP scenario username_2: ## Log / 박은천 - Built our own website with React for more specific testing - Implemented parent and child sites via an iframe and tested how SOP behaves with and without the --disable-web-security flag
apcountryman/picolibrary
1064733136
Title: Fix TCP socket concepts shutdown documentation Question: username_0: Fix TCP socket concepts shutdown (`::picolibrary::IP::TCP::Client_Concept::shutdown()` and `::picolibrary::IP::TCP::Server_Concept::shutdown()`) documentation: - [ ] `picolibrary::Generic_Error::CONNECTION_LOST` -> `picolibrary::Generic_Error::NOT_CONNECTED`<issue_closed> Status: Issue closed
treverhines/RBF
584420389
Title: Reuse interpolator for repeated evaluation of different variables Question: username_0: Is there a way to reuse the rbf interpolator (or parts of it) to speed up repeated evaluation of different variables between the same two meshes/point distributions? I want to interpolate results from a numerical simulation between to different grids. The two grids are always the same, but since there are several variables, currently I have to run the whole interpolation in a loop: ``` for key in oldMesh.keys(): interpolator = RBFInterpolant( np.array([oldMesh["X"], oldMesh["Y"]]).T, oldMesh[key], phi="phs3", order=1, sigma=0.0001, ) interpolationPositions = np.reshape( (xx,yy), (2, nInterpolationPoints*nInterpolationPoints) ).T fieldInterpolated = interpolator(interpolationPositions) newMesh[key] = fieldInterpolated.flatten() ``` Is there a way to speed this up? Answers: username_1: It is definitely possible to make this faster. When we first instantiate the `RBFInterpolant`, we can cache the LU decomposition and reuse it to build the interpolant for new variables. I added this functionality to a new branch called "faster_rbf_interpolation". In this new branch `RBFInterpolant` has a method called `fit_to_new_data` (I am open to other name suggestions), where you can fit a new interpolant assuming that the observation points are the same as when you instantiated it. So your code would look something likes this: ```python interpolator = None for key in oldMesh.keys(): if interpolator is None: interpolator = RBFInterpolant( np.array([oldMesh["X"], oldMesh["Y"]]).T, oldMesh[key], phi="phs3", order=1, sigma=0.0001, ) else: interpolator.fit_to_new_data(oldMesh[key]) interpolationPositions = np.reshape( (xx,yy), (2, nInterpolationPoints*nInterpolationPoints) ).T fieldInterpolated = interpolator(interpolationPositions) newMesh[key] = fieldInterpolated.flatten() ``` As for the evaluation points, it is a bit trickier. You could potentially save the RBF values at the interpolation points. I am envisioning that this could be done by memoizing the __call__ method of the RBF class. I would have to think about this some more. For now, could you test out the new branch and let me know if that helps you out? username_0: Thank you for your quick reply and the new functionality. I just tried it for one of my cases, and it reduced runtime for the interpolation by a factor of four (for eight variables), so it seems to work well! Concerning your question about the evaluation points, I'm not sure I understand what you mean. Do you want to reuse part of the interpolator even for different evaluation points? That would be nice to have, but for me the new branch already helped substantially. username_1: That is great to hear. There are a couple more things that I want to do to the new branch and then I will integrate it into master. For example, I want to add a flag in `__init__` indicating whether or not to save the LU decomposition, since it can take up a lot of space. I should have better explained my idea about the evaluation points. When you call the interpolant, a lot (or maybe most) of the time is spent evaluating the function `rbf.basis.phs3` at your interpolation points. Since you are interpolating at the same points for each iteration, we can save and reuse the values of `rbf.basis.phs3` at the those points. I am going to try implementing this, and I will get back to you. username_1: Can you try the following code? 
This is my hacky solution to speed up evaluating the interpolants at the new mesh points ```python from rbf.basis import phs3 from rbf.utils import MemoizeArrayInput # cache a numerical function for phs3 in two-dimensional space phs3._add_diff_to_cache((0, 0)) # memoize the numerical function that was just created phs3._cache[(0, 0)] = MemoizeArrayInput(rbf.basis.phs3._cache[(0, 0)]) interpolator = None for key in oldMesh.keys(): if interpolator is None: interpolator = RBFInterpolant( np.array([oldMesh["X"], oldMesh["Y"]]).T, oldMesh[key], phi="phs3", eps=np.ones_like(oldMesh["X"]), # need to specify eps for memoization to work order=1, sigma=0.0001, ) else: interpolator.fit_to_new_data(oldMesh[key]) interpolationPositions = np.reshape( (xx,yy), (2, nInterpolationPoints*nInterpolationPoints) ).T fieldInterpolated = interpolator(interpolationPositions) newMesh[key] = fieldInterpolated.flatten() ``` username_0: First of all, thanks for the explanation, now I better understand what you try to achieve. Second, I just tested the new version, and now the speedup compared to the original loop is a factor of ~7.7 (for eight variables), pretty impressive! username_1: That is correct.
justinhunt/moodle-mod_wordcards
656578848
Title: Namespace mod_wordcards\report violates the namespace naming guidelines Question: username_0: Please see https://docs.moodle.org/dev/Coding_style#Rules_for_level2 If you really want to use a dedicated namespace for the reporting features, the namespace should be `mod_wordcards\local\report` Status: Issue closed Answers: username_1: Thanks. I moved the reports feature into namespace: mod_wordcards\local\report
varvet/pundit
576380046
Title: Testing `permitted_attributes` for `nested_attributes` Question: username_0: I have used the gem to implement `permitted_attributes` for a subset of the `nested_attributes` on a model for a specific user, i.e. a user with a certain role can only update certain attributes on a nested model and none on the model itself. I can currently test this manually but am struggling to add a spec to my test suite. The README encourages the use of [pundit-matchers](https://github.com/chrisalley/pundit-matchers) and I have found it useful for `permitted_attributes` but unfortunately it appears as though the gem doesn't support `nested_attributes` ([issue #7](https://github.com/chrisalley/pundit-matchers/issues/7)) and may also be unmaintained at this point? I imagine I'm not alone in wanting to test this at policy level so would be interested in learning how others are achieving this. Once I know I'm happy to update the documentation to help others in future.
phusion/passenger
93594334
Title: nginx_config_template in Passengerfile.json is not using relative path Question: username_0: If you start passenger not by app root (where Passengerfile.json is located) then passenger standalone won't be able to find `nginx_config_template`, so you'll have to set manually by using `--nginx-config-template` in your command-line. Answers: username_1: We'll have a look at this. For now, you can work around this issue with absolute paths. Status: Issue closed username_1: This has been fixed in commit 900f3e6e. username_2: This commit breaks the Node.js loader's loadApplication(), because startupFile is now absolute, appRoot was already absolute, so you get require("/home/user/app/"+"/home/user/app/x.js"). ```javascript var startupFile = PhusionPassenger.options.startup_file || 'app.js'; require(appRoot + '/' + startupFile); ``` username_2: If you start passenger not by app root (where Passengerfile.json is located) then passenger standalone won't be able to find `nginx_config_template`, so you'll have to set manually by using `--nginx-config-template` in your command-line. Status: Issue closed
DataBiosphere/azul
636424536
Title: Enable error logs on Elasticsearch domain Question: username_0: Error logs are essential in diagnosing issues like the recently stuck snapshot. See third section in screen shot below. I think the logs will be voluminous so we should make sure the log group has a short retention of, say one month. ![image](https://user-images.githubusercontent.com/5143256/84298470-0fb59a00-ab04-11ea-83b1-e4e1e3830fe4.png)<issue_closed> Status: Issue closed
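A hedged sketch of the moving parts involved, using boto3 only for illustration; the project would presumably wire this up through its own infrastructure code, and the domain, log group, account, and region names below are made up:
```python
import boto3

logs = boto3.client("logs")
es = boto3.client("es")

log_group = "/aws/aes/domains/azul-dev/application-logs"  # invented name
logs.create_log_group(logGroupName=log_group)
logs.put_retention_policy(logGroupName=log_group, retentionInDays=30)  # ~one month

# A CloudWatch Logs resource policy allowing es.amazonaws.com to write to the
# group is also required (omitted here for brevity).
es.update_elasticsearch_domain_config(
    DomainName="azul-dev",  # invented domain name
    LogPublishingOptions={
        "ES_APPLICATION_LOGS": {  # the "error logs" option shown in the console
            "CloudWatchLogsLogGroupArn": (
                "arn:aws:logs:us-east-1:123456789012:log-group:" + log_group
            ),
            "Enabled": True,
        }
    },
)
```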
davidgohel/flextable
329778144
Title: Word equations in col_keys and footnote Question: username_0: I am trying to use `flextable::regulartable` to output a table that includes LaTeX math syntax in the col_keys and in the footnote (specifically, I am trying to use RMarkdown to render a .docx document). If I use `knitr::kable`, the math syntax is converted into Word equations when output to .docx. However, if I use `flextable::regulartable`, the syntax is not converted and rendered as-is. Is there any way to get flextable to render Word equations, and if not, is there any possibility this functionality might be added to flextable/officer? $\mathbfit{SD_{r}}$ $\mathbfit{SD_{res}}$ $\mathbf{\overline{\rho}}$ $\mathbfit{SD_{r_{c}}}$ $\mathbfit{SD_{\mathbf{\rho}}}$ Answers: username_1: Hi Sorry for the delay. I don't have a solution for that now but I'd like to. username_1: I am closing that issue as I still did not have time for it. I may reopen later Status: Issue closed username_2: @username_1 do you think it'd be possible to work with body_add_xml in officer to add math mode equations? if you think so i'll take a look and see if i can get some rudimentary equations working. thanks! username_1: @username_2 Hi. What is hard there is to be able to provide the format expected by Word (https://docs.microsoft.com/fr-fr/dotnet/api/documentformat.openxml.math?view=openxml-2.8.1). This will be extremely painful... At ArData, we thought about it and gave away because we lack the time to do it, I think it's a huge work (but I may be wrong!). Plus, people will want to write latex, so it not only means to be able to embed the equations but also transform them from latex to openxml format. However, if you find a solution, I will be happy merge the PR :) username_2: @username_1 thanks for your response. but, at a high level body_add_xml could do the trick? i'm thinking of starting with some specific things we need regularly for academic papers, like some greek for table labels, instead of implementing the whole spec. the xml i'm seeking didn't look too complicated when i looked at it from word. agreed about the latex to openxml, that would be a bear. username_1: OK, sorry for my misunderstanding Adding greek letters, symbols, etc is already possible. There is a simple example here: https://username_1.github.io/flextable/articles/display.html#sugar-functions-for-complex-formatting <img width="138" alt="Capture d’écran 2019-08-20 à 18 03 11" src="https://user-images.githubusercontent.com/4331618/63363912-d0e80500-c374-11e9-9e25-744da99121e7.png"> Is that what you are after? If not, could you post a screen shot so that I can check there is no solution? PS: flextable is not using `body_add_xml` (too slow). Instead, it create the xml. username_2: this is great, almost what i need. question: can i store the as_paragraph(as_b("µ"), as_sup("blah")) object in a cell of a data.frame then have flextable convert it later somehow? i couldn't figure out the notation for that. (btw, happy to post this as a separate issue or SO question since we've gotten away from math mode...) username_1: I don't know. Probably with rlang (used in flextable also)... Yes, please open a separate issue with what you want to achieve and I will try to help. (and sorry, I will not be able to answer during the following next days) username_1: There is support for 'MathJax' equations since `flextable 0.6.5`. 
For now, only by using `as_equation`: * The Word and PowerPoint results are filled with a real Word/PPT equation * The HTML version is an SVG image * The PDF version is a latex equation ``` eqs <- c( "(ax^2 + bx + c = 0)", "a \\ne 0", "x = {-b \\pm \\sqrt{b^2-4ac} \\over 2a}") df <- data.frame(formula = eqs) df ft <- flextable(df) ft <- compose( x = ft, j = "formula", value = as_paragraph(as_equation(formula, width = 2, height = .5))) ft <- align(ft, align = "center", part = "all") ft <- width(ft, width = 2) ft ``` ![flextable-012](https://user-images.githubusercontent.com/4331618/114839838-87182200-9dd6-11eb-9a43-c06dc9093998.png) username_3: Hi, I'd love to use the `as_equation()` to render equations within my tables, but I am having some difficulty getting this to work. I've followed the instructions [here:](https://username_1.github.io/flextable/reference/as_equation.html): ie.e., "To use this function, package 'equatags' is required; also equatags::mathjax_install() must be executed only once to install necessary dependencies" I've installed the `equatags` package. However, even running the example code chunk it fails with the error: Error in as_equation(formula, width = 2, height = 0.5) : could not find function "as_equation" Is anyone else having difficulty? Here is my sessionInfo `R version 4.0.5 (2021-03-31) Platform: x86_64-apple-darwin17.0 (64-bit) Running under: macOS Catalina 10.15.7 Matrix products: default BLAS: /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib LAPACK: /Library/Frameworks/R.framework/Versions/4.0/Resources/lib/libRlapack.dylib locale: [1] en_AU.UTF-8/en_AU.UTF-8/en_AU.UTF-8/C/en_AU.UTF-8/en_AU.UTF-8 attached base packages: [1] stats graphics grDevices utils datasets methods base other attached packages: [1] equatags_0.1.0 flextable_0.6.4 loaded via a namespace (and not attached): [1] Rcpp_1.0.6 bookdown_0.21 packrat_0.6.0 digest_0.6.27 rappdirs_0.3.3 R6_2.5.0 [7] evaluate_0.14 zip_2.1.1 rlang_0.4.10 gdtools_0.2.3 uuid_0.1-4 data.table_1.14.0 [13] xml2_1.3.2 rmarkdown_2.7 locatexec_0.1.0 xslt_1.4.2 tools_4.0.5 officer_0.3.18 [19] rsconnect_0.8.16 xfun_0.22 yaml_2.2.1 compiler_4.0.5 systemfonts_1.0.1 base64enc_0.1-3 [25] htmltools_0.5.1.1 knitr_1.31 ` username_1: @username_3 There is support for 'MathJax' equations since `flextable 0.6.5`, you need to update your package. username_3: @username_1 Of course. That was a dumb question! Thanks for catching that. I did a fresh install of flextable thinking I had the most updated version, but clearly I did not! All is working now.
SSL92/hyperIQA
815106291
Title: None Question: username_0: Hi, I re-formatted his code so that the model can be trained on multiple GPUs. Please check my repo: https://github.com/username_0/oneIQA. Meanwhile, I also trained his model for cross-database validation. The corresponding code will be uploaded shortly. I'm currently trying to save the trained model for future use. Best, Shuyue Answers: username_1: Hello, your link is invalid, can you send a new one?
wekan/wekan
311495927
Title: Disabling "show cards count" not possible Question: username_0: ## Issue **Server Setup Information**: * Deployment Method (snap/sandstorm/mongodb bundle): Docker 0.79 **Problem description**: I have enabled the "show cards count" setting by entering a number into the field. However, now I can't disable it again; I can only work around it by entering a number high enough that the count is never shown.<issue_closed> Status: Issue closed
TabakoffLab/PhenoGen
325017555
Title: TypeError: that.gsvg.selectSvg is undefined Question: username_0: View details in Rollbar: [https://rollbar.com/username_0/Phenogen/items/397/](https://rollbar.com/username_0/Phenogen/items/397/) ``` TypeError: that.gsvg.selectSvg is undefined File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.6.12.js", line 5879, in GeneTrack/that.setupDetailedView File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.6.12.js", line 5693, in GeneTrack/that.setSelected File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.6.12.js", line 6703, in GeneTrack/that.draw File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.6.12.js", line 6052, in GeneTrack/that.updateData/< File "https://phenogen.ucdenver.edu/PhenoGen/javascript/d3.v4.8.0.min.js", line 7, in send/< File "https://phenogen.ucdenver.edu/PhenoGen/javascript/d3.v4.8.0.min.js", line 4, in call File "https://phenogen.ucdenver.edu/PhenoGen/javascript/d3.v4.8.0.min.js", line 7, in e File "https://cdnjs.cloudflare.com/ajax/libs/rollbar.js/2.3.9/rollbar.min.js", line 2, in i.prototype.instrumentNetwork/</< File "https://phenogen.ucdenver.edu/PhenoGen/javascript/jquery-1.12.2.min.js", line 4, in send File "https://phenogen.ucdenver.edu/PhenoGen/javascript/jquery-1.12.2.min.js", line 4, in ajax File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.6.12.js", line 2095, in GenomeSVG/that.getAddMenus File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.6.12.js", line 2918, in GenomeSVG File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.6.12.js", line 5820, in GeneTrack/that.setupDetailedView File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.6.12.js", line 5693, in GeneTrack/that.setSelected File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.6.12.js", line 6703, in GeneTrack/that.draw File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.6.12.js", line 6052, in GeneTrack/that.updateData/< File "https://phenogen.ucdenver.edu/PhenoGen/javascript/d3.v4.8.0.min.js", line 7, in send/< File "https://phenogen.ucdenver.edu/PhenoGen/javascript/d3.v4.8.0.min.js", line 4, in call File "https://phenogen.ucdenver.edu/PhenoGen/javascript/d3.v4.8.0.min.js", line 7, in e File "https://cdnjs.cloudflare.com/ajax/libs/rollbar.js/2.3.9/rollbar.min.js", line 2, in i.prototype.instrumentNetwork/</< File "https://phenogen.ucdenver.edu/PhenoGen/javascript/jquery-1.12.2.min.js", line 4, in send File "https://phenogen.ucdenver.edu/PhenoGen/javascript/jquery-1.12.2.min.js", line 4, in ajax File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.6.12.js", line 2095, in GenomeSVG/that.getAddMenus File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.6.12.js", line 2918, in GenomeSVG File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.6.12.js", line 5820, in GeneTrack/that.setupDetailedView File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.6.12.js", line 5693, in GeneTrack/that.setSelected File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.6.12.js", line 6703, in GeneTrack/that.draw File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.6.12.js", line 6052, in GeneTrack/that.updateData/< File "https://phenogen.ucdenver.edu/PhenoGen/javascript/d3.v4.8.0.min.js", line 7, in send/< File 
"https://phenogen.ucdenver.edu/PhenoGen/javascript/d3.v4.8.0.min.js", line 4, in call File "https://phenogen.ucdenver.edu/PhenoGen/javascript/d3.v4.8.0.min.js", line 7, in e File "https://cdnjs.cloudflare.com/ajax/libs/rollbar.js/2.3.9/rollbar.min.js", line 2, in i.prototype.instrumentNetwork/</< File "https://phenogen.ucdenver.edu/PhenoGen/javascript/jquery-1.12.2.min.js", line 4, in send File "https://phenogen.ucdenver.edu/PhenoGen/javascript/jquery-1.12.2.min.js", line 4, in ajax File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.6.12.js", line 2095, in GenomeSVG/that.getAddMenus File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.6.12.js", line 2918, in GenomeSVG File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.6.12.js", line 5820, in GeneTrack/that.setupDetailedView File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.6.12.js", line 5693, in GeneTrack/that.setSelected File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.6.12.js", line 6703, in GeneTrack/that.draw File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.6.12.js", line 6052, in GeneTrack/that.updateData/< File "https://phenogen.ucdenver.edu/PhenoGen/javascript/d3.v4.8.0.min.js", line 7, in send/< File "https://phenogen.ucdenver.edu/PhenoGen/javascript/d3.v4.8.0.min.js", line 4, in call File "https://phenogen.ucdenver.edu/PhenoGen/javascript/d3.v4.8.0.min.js", line 7, in e File "https://cdnjs.cloudflare.com/ajax/libs/rollbar.js/2.3.9/rollbar.min.js", line 2, in i.prototype.instrumentNetwork/</< File "https://phenogen.ucdenver.edu/PhenoGen/javascript/jquery-1.12.2.min.js", line 4, in send File "https://phenogen.ucdenver.edu/PhenoGen/javascript/jquery-1.12.2.min.js", line 4, in ajax File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.6.12.js", line 2095, in GenomeSVG/that.getAddMenus File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.6.12.js", line 2918, in GenomeSVG File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.6.12.js", line 5820, in GeneTrack/that.setupDetailedView File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.6.12.js", line 5693, in GeneTrack/that.setSelected File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.6.12.js", line 6703, in GeneTrack/that.draw File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.6.12.js", line 6052, in GeneTrack/that.updateData/< File "https://phenogen.ucdenver.edu/PhenoGen/javascript/d3.v4.8.0.min.js", line 7, in send/< File "https://phenogen.ucdenver.edu/PhenoGen/javascript/d3.v4.8.0.min.js", line 4, in call File "https://phenogen.ucdenver.edu/PhenoGen/javascript/d3.v4.8.0.min.js", line 7, in e File "https://cdnjs.cloudflare.com/ajax/libs/rollbar.js/2.3.9/rollbar.min.js", line 2, in i.prototype.instrumentNetwork/</< File "https://phenogen.ucdenver.edu/PhenoGen/javascript/jquery-1.12.2.min.js", line 4, in send File "https://phenogen.ucdenver.edu/PhenoGen/javascript/jquery-1.12.2.min.js", line 4, in ajax File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.6.12.js", line 2095, in GenomeSVG/that.getAddMenus File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.6.12.js", line 2918, in GenomeSVG File "https://phenogen.ucdenver.edu/PhenoGen/javascript/GenomeDataBrowser2.<issue_closed> Status: Issue closed
cilium/cilium
640109436
Title: Clustermesh guide with direct etcd connection requires --set global.identityAllocationMode=kvstore Question: username_0: As per the issue title, basically following the current clustermesh guide (1.7 or earlier, and 1.8 as of today), it is easy to miss that `global.identityAllocationMode` is set to `crd` by default. If this is not reconfigured to `kvstore` then identities for pods in remote clusters will not be propagated to other clusters, so cross-cluster policy will not work correctly. Potential mitigations: * Mention in clustermesh documentation to enable this option * Auto-enable this option in helm charts if `global.etcd.enabled` is set to true Answers: username_1: @username_0 can you assign it to me. Status: Issue closed
cmsc22000-project-2018/spellcheck
319705531
Title: Determining Best Algorithm For Computing Suggestions Question: username_0: 1) https://www.wikiwand.com/en/Levenshtein_distance https://stackoverflow.com/questions/346757/how-do-spell-checkers-work 2) http://stevehanov.ca/blog/index.php?id=114 https://blog.afterthedeadline.com/2010/01/29/how-i-trie-to-make-spelling-suggestions/ Answers: username_1: I think I have an efficient solution to this one. Will refine it tonight and post the details tomorrow.
You's need a list of some form to keep all of it in, and the code is a mix of pseudo-python and pseudo-C. Look at suggestions() since it's the driver function. def try_delete(int edits_left, char* prefix, char* suffix) { // If the prefix has children, continue, otherwise stop if (has_children(prefix) == True) { // Just remove the first character of the suffix and look for suggestions with one less edit suggestions(edits_left - 1, prefix, suffix[1, len(suffix - 1)]) } } def try_insert(int edits_left, char* prefix, char* suffix) { // If the prefix + an insert has children, continue, otherwise stop for (c in valid_chars) { // Valid chars is a list of th chars that exist in the dictionary, this can be easily done somwhere else // Just try adding a valid char to the prefix then check for children if (has_children(prefix + c) == True) { // Continue looking for suggestions with the new prefix suggestions(edits_left - 1, prefix + c, suffix) } } } //i’m not sure what this replace function is doing: is it going through all the characters in a string and replacing each character position with all possible chars? def try_replace(int edits_left, char* prefix, char* suffix) { // If the prefix with the most recent character switched with a different character has children, continue, otherwise stop for (c in valid chars) { // Just separated because this got long new_prefix = prefix[0, len(prefix - 2)] + c //is this concatenating strings? // Swap the last character in the prefix and look for children if (has_children(new_prefix) == True) { suggestions(edits_left - 1, new_prefix, suffix) } } } def try_transpose(int edits_left, char* prefix, char* suffix) { // note, this function could probably be optimized/debugged but here's a proof of concept anyways // Just add in swapped first 2 chars of suffixto prefix swap_prefix = prefix + suffix[1] + suffix[0] if (has_children(swap_prefix) == True) { // Make sure to take off the front of the suffix suggestions(edits_left - 1, swap_prefix, suffix[2, len(suffix) - 1]) //you need to write a shave_suffix function } } def move_on(int edits_left, char* prefix, char* suffix) { [Truncated] return; } //function here that divides the word into suffixes and prefixes (for example, for the world HELLO, into H, ELLO, HE,LLO, HEL,LO, etc—I don’t think there’s anything that does this within the delete function try_delete(edits_left, prefix, suffix) try_insert(edits_left, prefix, suffix) try_replace(edits_left, prefix, suffix) try_transpose(edits_left, prefix, suffix) //if the logic above is correct, i don’t see why a move_on function is needed? the functions all advance through each character in the string move_on(edits_left, prefix, suffix) } username_1: try_replace will: 1. Move the last character from the suffix to the prefix 2. Attempt to replace it with one of any valid character 3. If the prefix containing the replaced character has children, continue with suggestions new_prefix is not concatenating strings in try_replace shave_suffix is done by just using pointer arithmetic on the string pointer. For example, if `char *s = "hello"`, then `printf("%s", s+1)` will print `ello` You don't need to explicitly divide into prefixes and suffixes, this is done implicitly by move_on move_on takes the first character away from the suffix and appends it to the prefix without using an edit. This allows moving through the string and any children it that are created by the recursion. Status: Issue closed
pascalabcnet/pascalabcnet
175432047
Title: Restore the foreach speed-up for arrays Question: username_0: Restore the foreach speed-up for arrays as soon as we get rid of ObjectCopier.Clone. Answers: username_0: There were some other reasons there - as I recall, the code was being modified on the fly rather than in the syntax tree, and then the lambda handling at the end of the block re-walked the old code and could not find some variables. username_1: Which change exactly? The latest one, with the replacement by for? username_0: That is my problem. There is a tricky bug there - lambdas re-walk the old code, but it has already changed, and they crash because they cannot find the new variable. username_0: Restored the speed-up. Despite the repeated walk with a lambda, the issue turned out to be in the code generation for for. Status: Issue closed
yiisoft/yii2
80580143
Title: optimisticLock bug Question: username_0: It appears that optimistic locking has a bug. When I implement everything according to documentation and attempt to read 'version' right after save(), such as to send it back to my user through ajax, the new 'version' is not updated. $order->version equals 0 However, when I access 'version' through getOldAttribute('version') the code works as expected. try { if($order->save(false)) { return ['data' => array('validated'), 'tag' => array($order->tag), 'version' => array($order->getOldAttribute('version'))];` else return ['data' => array('error'), 'reason' => array('Server error while saving order.')]; } catch(StaleObjectException $e) { return ['data' => array('error'), 'reason' => array('The order has been modified. Please, look up this order again.')]; } Answers: username_0: So, another words, immediately after updating an existing record (within the same request): $order->version = 0; but $order->getOldAttribute('version') = 1; I am expecting this to be in reverse. username_0: Looking into the source code, looks like 'version' is added to and incremented in `$values[]` array and copied into `$_oldAttributes`, but is not copied into `$_attributes`? protected function updateInternal($attributes = null) { if (!$this->beforeSave(false)) { return false; } $values = $this->getDirtyAttributes($attributes); if (empty($values)) { $this->afterSave(false, $values); return 0; } $condition = $this->getOldPrimaryKey(true); $lock = $this->optimisticLock(); if ($lock !== null) { $values[$lock] = $this->$lock + 1; $condition[$lock] = $this->$lock; } // We do not check the return value of updateAll() because it's possible // that the UPDATE statement doesn't change anything and thus returns 0. $rows = $this->updateAll($values, $condition); if ($lock !== null && !$rows) { throw new StaleObjectException('The object being updated is outdated.'); } $changedAttributes = []; foreach ($values as $name => $value) { $changedAttributes[$name] = isset($this->_oldAttributes[$name]) ? $this->_oldAttributes[$name] : null; $this->_oldAttributes[$name] = $value; } $this->afterSave(false, $changedAttributes); return $rows; } username_1: Issue resolved by commit 51a442d Status: Issue closed username_0: I looked at the fix and without testing it, it seems that the fixed code will now produce the same result for $order->version will be 1 and $order->getOldAttribute('version') will also be 1 instead of 0 username_1: This is expected: as version value has been just updated, so it is not 'dirty'. username_0: OK. Cool. Thanks!
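For anyone following along, the behaviour discussed in this thread is the usual optimistic-locking pattern: make the UPDATE conditional on the old version, bump the version in the same statement, and keep the in-memory attribute in sync so the fresh value can be returned to the client. A minimal, framework-free Python/SQLite sketch of that pattern (the table and column names are invented for the illustration; this is not Yii's actual implementation):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, tag TEXT, version INTEGER NOT NULL)")
conn.execute("INSERT INTO orders (id, tag, version) VALUES (1, 'draft', 0)")

def save_order(conn, order):
    """Update the row only if nobody else bumped the version in the meantime."""
    cur = conn.execute(
        "UPDATE orders SET tag = ?, version = version + 1 WHERE id = ? AND version = ?",
        (order["tag"], order["id"], order["version"]),
    )
    if cur.rowcount == 0:
        raise RuntimeError("stale object: the row was modified by someone else")
    order["version"] += 1   # keep the in-memory attribute in sync with the database
    return order

order = {"id": 1, "tag": "paid", "version": 0}
save_order(conn, order)
print(order["version"])     # 1, safe to send back to the client
```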
medyas/flutter_qiblah
1066622981
Title: Add the project to Awesome Muslim List Question: username_0: In [this repository](https://github.com/username_0/awesome-Muslims), I've collected open source projects, free tools and resources that could help developers and encourage them to produce more Islamic apps. It would be great to add this project under the [Libraries & Plugins > Dart & Flutter](https://github.com/username_0/Awesome-Muslims#dart--flutter) section. Thank you! Status: Issue closed Answers: username_1: @username_0 I prefer keeping the project under my account so I can continue maintaining it. You're welcome to copy the repo there or provide a link to this one.
i18next/i18next
603886476
Title: Nesting with namespaces does not seem to work Question: username_0: Hi, taking an example from the docs, but turning it into a namespaced one seems to break it. https://jsfiddle.net/vt06ao9c/ Or is my syntax wrong?
```
translation: {
  "test": {
    "girlsAndBoys": "$t(test:girls, {'count': {{girls}} }) and {{count}} boy",
    "girlsAndBoys_plural": "$t(test:girls, {'count': {{girls}} }) and {{count}} boys",
    "girls": "{{count}} girl",
    "girls_plural": "{{count}} girls"
  }
}
```
```
i18next.t('test.girlsAndBoys', { count: 2, girls: 1 })
```
Expected: `1 girl and 2 boys` Result: `girls and 2 boys` Answers: username_1: `$t(test:girls, {'count': {{girls}} })` ---> `test:` says look in another namespace (file) called `test`; it should be `$t(test.girls, {'count': {{girls}} })` username_0: Cheers, you're right. Thanks! Status: Issue closed
vinitkumar/json2xml
571649816
Title: Usage example syntax issue Question: username_0: **Describe the bug** On https://pypi.org/project/json2xml/ the usage example has an error in it. The second `from` should be `import`. `... from json2xml.utils from readfromurl, readfromstring, readfromjson ...` should be ` ... from json2xml.utils import readfromurl, readfromstring, readfromjson ...` Answers: username_1: Thanks! You're a lifesaver! :) username_2: cc @username_0 @username_1 Sorry for the trouble. Fixed it in the latest release. https://github.com/username_2/json2xml/releases/tag/v3.3.3 https://pypi.org/project/json2xml/3.3.3/ Please upgrade and it should be fixed there. Status: Issue closed
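Since this issue is about the usage example itself, here is the corrected snippet in full. The import line is taken from the report above; the conversion call is written from memory of the project README, so treat the exact class and method names as an assumption and check the current docs:

```python
# Corrected import: the second keyword is "import", not another "from".
from json2xml import json2xml
from json2xml.utils import readfromurl, readfromstring, readfromjson

data = readfromstring('{"login": "mojombo", "id": 1}')   # parse a JSON string into a dict
xml = json2xml.Json2xml(data).to_xml()                   # conversion step as shown in the README
print(xml)
```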
wangding/courses
189574256
Title: Task 12: Configure Git Question: username_0: The requirements are as follows: set user.name - note that this user name should be your real Chinese name; set user.email - note that this email should be the same as your GitHub registration email. Answers: username_0: ![default](https://cloud.githubusercontent.com/assets/23205657/20455458/520ba0e2-ae97-11e6-961d-1feeaca6f774.PNG) @username_1 please check!!! username_1: Run git config --list and post a screenshot of the command's output here. username_0: ![default](https://cloud.githubusercontent.com/assets/23205657/20455489/03599bc4-ae98-11e6-8cf9-cf06c9a46911.PNG) username_2: Checked, this can be closed now. Status: Issue closed
project-chip/connectedhomeip
1102100152
Title: Cannot specify interaction timeouts using the CHIPClusters or InvokeInteraction APIs Question: username_0: #### Problem While the lower-level IM primitive for sending commands on the client side (`CommandSender`) permits specifying a timeout for the interaction as a whole, the higher level CHIPClusters or InvokeInteraction APIs don't. #### Proposed Solution Add an extra argument to `ClusterBase::InvokeCommand` and `Controller::InvokeCommandRequest` APIs. Answers: username_1: At some point we should think a bit about whether the arguments for InvokeCommand/InvokeCommandRequest should be a struct instead of a long list....
tekartik/sembast.dart
1118719435
Title: Using put with merge: true adds a record instead of updating an existing one Question: username_0: I'm making an app to get data from a page about a game (web scraping) and saving the data in a database. When I try to update the data (with a floating action button) I use the put method of a storeFactory, and the last records are not updated; they are added again, even with the same keys. This is the function that I call to update the database, temtem_dao.dart: `static upsert(Temtem temtem) async { await store.record(temtem.number).put(DatabaseDao().db, temtem.toMap(), merge: true); }` This is the repo of the project: https://github.com/username_0/temtem_wiki Answers: username_0: I'm running the app on Ubuntu. username_1: When you read the record, you get the updated data, correct? My guess is that you are looking directly at the database file in the sembast io format (i.e. the ljson file). Some information here on the format should explain what you are seeing (changes are appended to the file, which explains why some records are duplicated): https://github.com/tekartik/sembast.dart/blob/master/sembast/doc/storage_format.md If I'm wrong, can you try to reproduce what you are seeing in a unit test? Thanks username_0: Yes, I have the updated info; the problem is that I'm storing some bytes to use as icons and sometimes they were duplicated (increasing the size of the database). Now that I know that the database is optimized for reading and not for size, I will consider writing the bytes to external files. Status: Issue closed
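To make the append-only behaviour described above concrete, here is a small language-neutral illustration in Python: the store file is treated as a journal of line-delimited JSON entries, and a reader rebuilds the current state by keeping only the last entry per key. The field layout is invented for the example; the real format is documented in the storage_format.md link above.

```python
import json

# A toy append-only journal: every put() appends a line, even for an existing key.
journal_lines = [
    '{"key": 1, "value": {"name": "Oree", "icon_bytes": 120}}',
    '{"key": 2, "value": {"name": "Saipat", "icon_bytes": 98}}',
    '{"key": 1, "value": {"name": "Oree", "icon_bytes": 120, "type": "Neutral"}}',
]

def current_state(lines):
    """Replay the journal; the last entry written for a key wins."""
    state = {}
    for line in lines:
        entry = json.loads(line)
        state[entry["key"]] = entry["value"]
    return state

print(current_state(journal_lines)[1])
# {'name': 'Oree', 'icon_bytes': 120, 'type': 'Neutral'}
# The file itself still contains two lines for key 1 until the database compacts it,
# which is why the raw file looks "duplicated" and keeps growing.
```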
Joshua-Chiu/PGDBWebServer
604323481
Title: PLIST CUT OFF FIX: AFTER NEW INSTALL Question: username_0: Hi Josh / Mason, So, as you are aware, the database is now running externally. We uploaded the student data just last night. At first, I was horrified that the PLIST cut-offs did not merge over. But, after a few minutes, voila, it happened. They were there and they are correct. There is only one small error. If you look at the PLIST cut-off screen for the 2016-2017 year, you will notice that there is a 95.000% in TERM 1 for grades 11 and 12. There actually shouldn't be anything there. Or there should be the impossible 99.999% showing there. I figure that there is a minor programming error in that spot. Can that be fixed? It should read 99.999%, not 95.000%. MERCI!!!!!!!!!!!!!!!!!! M PETH I tried to cut/paste the screenshot here but I don't think it worked: ![image](https://user-images.githubusercontent.com/61298381/79919845-00f41600-83e4-11ea-8b5a-d9f6734dcc0c.png) Answers: username_1: Issue cannot be replicated Status: Issue closed
jlippold/tweakCompatible
425032684
Title: `Notifica` working on iOS 11.4.1 Question: username_0: ``` { "packageId": "me.nepeta.notifica", "action": "working", "userInfo": { "arch32": false, "packageId": "me.nepeta.notifica", "deviceId": "iPhone8,1", "url": "http://cydia.saurik.com/package/me.nepeta.notifica/", "iOSVersion": "11.4.1", "packageVersionIndexed": true, "packageName": "Notifica", "category": "Tweaks", "repository": "Nepeta", "name": "Notifica", "installed": "0.1.10", "packageIndexed": true, "packageStatusExplaination": "A matching version of this tweak for this iOS version could not be found. Please submit a review if you choose to install.", "id": "me.nepeta.notifica", "commercial": false, "packageInstalled": true, "tweakCompatVersion": "0.1.5", "shortDescription": "Notification customizer", "latest": "0.1.10", "author": "Nepeta", "packageStatus": "Unknown" }, "base64": "<KEY> "chosenStatus": "working", "notes": "" } ```<issue_closed> Status: Issue closed
typora/typora-issues
501830063
Title: format citations to look different than text? Question: username_0: Hi, I'm using Typora for academic writing so have been using it with Zotero and BibTeX citations. One difficulty is that because Typora doesn't "know" that my citations are any different than the actual text I'm writing, it makes reading my own work kind of awkward before processing the text via `citr` (or some other typesetting system like DocDown). Here's an example (citations are in square brackets): <img width="686" alt="Screen Shot 2019-10-03 at 4 23 31 PM" src="https://user-images.githubusercontent.com/41483946/66097275-311bc900-e5fa-11e9-9517-3b27245f7982.png"> Is there some simple CSS to modify so that BibTeX citations like these could automatically be faded out so they're less intrusive? This seems simpler to set up than my ideal setup, which would have the citations automatically format themselves into whatever citation style the document is set up for. But that seems a lot more complicated than a few lines of CSS... 😆 Answers: username_1: duplicate with #912 #294 Status: Issue closed
easynlp/easynlp
940941589
Title: Hello mRsir Question: username_0: I spotted an issue with your repo name, it suggests that nlp is easy. Maybe you should change it to something like moderatelyhardnlp. Or if it is actually easy for you, considering changing it to hardformostpeoplenlp. Or just hardforrubennlp. thaksyu
jbirondo/Fullstack
477505110
Title: MVP: User auth Question: username_0:
- [ ] Users can login/sign up and log out via modal
- [ ] Show errors on login/sign up
- [ ] Users bootstrapped
- [ ] Drop down logout button

Answers: username_1:
- [ ] sign up form needs first name and last name fields
- [ ] "Welcome to OpenTable!"/"Please Sign In" should be bigger on the form
- [ ] the links on the bottom that go to the other form should be styled to show that they're links
- [ ] demo user

Status: Issue closed
Syzygy05/CS435-Project2
599375946
Title: Code Review Question: username_0: ![image](https://user-images.githubusercontent.com/28812354/79632508-c7928080-812d-11ea-9fdd-6e3eaee229e8.png) (graph.py) I believe you aren't supposed to have self-loop nodes, so you can get rid of the if statement and just simply append to your matrix. Answers: username_0: ![image](https://user-images.githubusercontent.com/28812354/79632633-90709f00-812e-11ea-8c80-626bcef8807e.png) (graph.py) The first two lines did the job for you. Not sure what the rest of the function is doing really tbh. username_0: This is the first time I have seen anyone use an adjacency matrix instead of a list so nice job there btw. username_0: ![image](https://user-images.githubusercontent.com/28812354/79632967-91a2cb80-8130-11ea-8937-7808fc05f1f5.png) There is a good chance that you create the same node twice. Make sure you have a check that stops that from happening. username_0: ![image](https://user-images.githubusercontent.com/28812354/79633045-fbbb7080-8130-11ea-86d5-65d9a8ba7b66.png) (main.py) I don't really know what you are doing here, to be honest. This code doesn't create a LinkedList. All you need to do is connect every node in the graph to the next one in order, like 1->2->3->4 etc. Also, this function doesn't require using random. username_0: That's it for my code review! Maybe I missed a few things cause my Python knowledge isn't up to par. Since you only did up to the first part of part 3, this is as far as I was able to review. If you finish the rest, lmk if you want me to reopen this and finish the review. Thanks and GLHF! Status: Issue closed
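For the linked-list comment above, a small sketch of what "connect every node to the next one in order" can look like. The Graph class here is a stand-in written for the illustration, not the reviewed project's actual class:

```python
class Graph:
    def __init__(self):
        self.adj = {}                      # node value -> set of neighbours

    def add_node(self, value):
        self.adj.setdefault(value, set())

    def add_undirected_edge(self, a, b):
        self.adj[a].add(b)
        self.adj[b].add(a)

def create_linked_list(n):
    """Build a graph shaped like 0 - 1 - 2 - ... - (n-1); no randomness needed."""
    g = Graph()
    for value in range(n):
        g.add_node(value)
    for value in range(n - 1):             # connect each node to the next one, in order
        g.add_undirected_edge(value, value + 1)
    return g

g = create_linked_list(5)
print(g.adj[0], g.adj[2])                  # {1} {1, 3}
```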
pimusicbox/mopidy-musicbox-webclient
131995444
Title: search not working (properly) Question: username_0: Yes, that didn't help. Answers: username_1: If this is only a problem when searching Spotify, then you may be experiencing one of a number of search issues that occurred when Spotify changed their API recently: see https://github.com/mopidy/mopidy-spotify/issues/89? If so we can probably close the issue here in anticipation of it being resolved in Mopidy-Spotify - I see a PR has already been proposed for this. Status: Issue closed
HelperLine/backend
597174308
Title: Reduce Data Transfer (backend) Question: username_0: Currently, with most requests, the result gets sent back to the client in the form of the full account/calls data. The frontend state changes only a little bit (e.g. fulfilling a call). A full fetch after every operation is totally unnecessary! Goal: further reduce server load by not re-fetching data that the client already has.<issue_closed> Status: Issue closed
takamin/transworker
386513274
Title: The last parameter does not reach the worker side when the callback is omitted Question: username_0: If the last parameter is not a function object, the module should treat the callback as omitted. A function object is never transported to the worker side as a parameter. Status: Issue closed Answers: username_0: This issue is fixed by PR #16.
Caliburn-Micro/Caliburn.Micro
537246503
Title: Call NotifyOfPropertyChange in a different thread? Question: username_0: Is it possible to call `NotifyOfPropertyChange` from a different thread? I have a `ViewModel` class with a method called `LoadData()` which calls `NotifyOfPropertyChange` for all variables. I also have a class which uses C#'s Timer class to run a background method every 1 minute which fetches new data from the DB. This method also calls the `LoadData` method, but it does not refresh the UI. Should the `LoadData` method be called only from the UI thread? Answers: username_1: Yes you can; by default `PropertyChangedBase` will ensure the events are fired on the UI thread. Status: Issue closed username_0: It's not working for me. Can I reopen this issue? username_1: What platform are you using? Can you provide a simple piece of code that shows this issue?
ArkEcosystem/explorer
590938848
Title: feat: custom translations Question: username_0: Once we have the settings page implemented (#900), it should be made possible for a user to input a custom translation to be used by the explorer. The idea will be that there is an input field that accepts a JSON object that contains the translations. Once added, the user will be able to set this translation to be used in the explorer. This will also mean that by default the explorer will only support English officially. Answers: username_1: @dated maybe you can adapt your CLI (https://github.com/dated/language-plugin-generator) to work with that too username_2: Closing this due to our new explorer being announced for release soon https://ark.io/blog/ark-explorer-40-first-look-block-explorer-reimagined. Status: Issue closed
elischutze/grrrrl
193292071
Title: Create some sample video data to create site Question: username_0: Create a json file that holds some data about videos (with a link!) so that the page can come together for now. Answers: username_1: I'll give this a go 👍 username_1: Should the data be stored in an object in a .js file for now instead of a .json file? If we use a json object we'd have to set up a server to make xhr requests to (not sure which language this would be in and how comfortable beginners would be with that). With a js file, the sample data could be accessed in this way: ```js <script type="text/javascript" src="./sampleData.js"></script> <script type="text/javascript"> // 'sample' is the object defined in sampleData.js sample.data.forEach(video => { console.log(video.title) }) </script> ``` username_0: @username_1 either works for now !
wolfpld/tracy
784422397
Title: client dumping to file directly Question: username_0: Is it possible for the client to dump the trace data directly to a file, as an option? This would be useful in applications where networking is difficult (an emscripten-compiled program running inside a browser). Answers: username_1: See https://github.com/username_1/tracy/issues/157#issuecomment-752087159. username_1: Duplicate of the above. Status: Issue closed
pcm-dpc/COVID-19
585284891
Title: Are the missing regional data added later? Question: username_0: **Type of request**: request for information
## Summary
Given that several regions failed to report updates on various days, are those data added on the following day (and therefore reported under the wrong day), are they ignored entirely, or are the files updated as soon as the data become available? I am referring in particular to the following problems:
- 18/03/2020: data for the Campania region not received.
- 18/03/2020: data for the Province of Parma not received.
- 17/03/2020: data for the Province of Rimini not updated.
- 16/03/2020: data for P.A. Trento and Puglia not received.
- 11/03/2020: data for the Abruzzo region not received.
- 10/03/2020: partial data for the Lombardia region.
For example, did the Campania region data not received on 18/03 simply disappear, were they added late to the file for the 18th, or were they all reported together on the 19th? What has been done with all the data not received so far?
## Public interest
Accuracy of the data Answers: username_1: Hi @username_0, I confirm that the data are added back in on the following day. Status: Issue closed
year-calendar/js-year-calendar
568299480
Title: Touch events Question: username_0: Hey, I was looking through the docs and couldn't find any info about touch events. How could I get the same code I have working for 'clickDay' to work on touch events? Answers: username_1: Why do you need the touch event? The click event works on mobile, no?
bitnami/charts
963195947
Title: I need a new helmchart on citusdata Question: username_0: <!-- Before you open the bug report please review the following troubleshooting guide: - [Troubleshoot Bitnami Helm Chart Issues](https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues) --> **Which chart**: The name (and version) of the affected chart **Describe the bug** A clear and concise description of what the bug is. **To Reproduce** Steps to reproduce the behavior: 1. Go to '...' 2. Click on '....' 3. Scroll down to '....' 4. See error **Expected behavior** A clear and concise description of what you expected to happen. **Version of Helm and Kubernetes**: - Output of `helm version`: ``` (paste your output here) ``` - Output of `kubectl version`: ``` (paste your output here) ``` **Additional context** Add any other context about the problem here. Answers: username_1: Hi @username_0, Thank you for your suggestion! I have created an internal task to evaluate adding Citus Data to our catalog. username_2: Thanks for your suggestion! We will check it internally and evaluate the feasibility but due to limited resources that could take some time. If you want to contribute with the chart do not hesitate to open a PR and we will review it. [Here](https://github.com/bitnami/charts/blob/master/CONTRIBUTING.md#adding-a-new-chart-to-the-repository) you can find the requirements and scaffolding to create a Helm Chart following the Bitnami guidelines. If the container image is not part of the Bitnami catalog at this moment, we can work on that part while we review the Chart PR. If you want some examples, there are recent additions like MetalLB, ASP.net, Grafana Operator, kubernetes-event-exporter, etc that were based on PRs from users. username_0: Hi Team, Any update? username_2: I'm sorry but still no news, due to the small size of the team we didn't find the required bandwidth to work on this, we will update this GH issue with more news. username_0: Hi Team, any update username_2: Hi, unfortunately, there is no news regarding the addition of this Helm chart to the catalog. Taking a look at our roadmap, it is not something we will work on in the short term. Being said that, contributions via PRs are welcome, [here](https://github.com/bitnami/charts/blob/master/CONTRIBUTING.md#adding-a-new-chart-to-the-repository) you can find the contributing guidelines in case you want to work on this chart addition; we will be happy to review any PR. username_0: Hi Team, Any update? username_2: No, sorry, there is no news regarding this topic. Due to other priorities and team capacity, it's not something we will work on it in the short term. username_0: can u please tell me estimation time. username_2: There is no ETA for new chart additions, it should be evaluated, and if decided to include it as part of the catalog, the engineering team will work on that. But it is possible after the evaluation it is not considered to be developed in the short term, which is this case. The request was internally created and it can be considered in the future depending on the priorities and bandwidth.
mnutt/davros
145031269
Title: Sync with unison (feature request) Question: username_0: Might it be possible to allow one to sync with a Davros repository using the [Unison](https://www.cis.upenn.edu/~bcpierce/unison/) file synchronization tool? This would allow intelligent offline support, able to propagate changes in both directions and resilient to failure. Personally, I have been very happy/impressed with Unison in the past and am now looking to move to Davros or another Sandstorm app for each of my file collections, instead of having them all on a server I can access over unison via ssh. Intelligent offline support is very important to me, so I am hesitant to move away from Unison and adopt something else entirely. But if there exist offline WebDAV tools that are just as intelligent as unison, knowing of them may negate the need for this feature. Answers: username_1: FWIW, I've been theoretically interested in seeing git-annex get documented against Davros for a similar reason. username_2: I have been using it with git-annex and it works great. My only wish is that git-annex gave enough info for its special remotes to show an actual web UI. As is, it works, but the web UI isn't very useful. username_3: I personally use syncthing to sync my Davros folders on my local machine and that seems to work great. username_0: BTW, while I love unison I am beginning to realize this may be infeasible. If the versions of unison on each end [are different](https://alliance.seas.upenn.edu/~bcpierce/wiki/index.php?n=Main.UnisonFAQTroubleshooting), or (!) even [if the versions of Ocaml that unison was compiled with](https://github.com/bcpierce00/unison/issues/15) differ, the sync may fail. I have no idea how you would begin to support all versions of unison in the wild within a sandstorm package. Status: Issue closed username_0: Closing this, as I think there is no reasonable path forward given my previous comment.
Azurblau/Zombies.Zone
292233862
Title: Suggestions and some issues to fix Question: username_0: Hi lads, I have a few suggestions and a few issues. Is it possible to change the boss selection when there are more than 10 players on? I believe it is now 5+, but when there are around 10, usually 2 or 3 go to the Z main and you have almost no chance of winning. Also, the anti-return trait "steroids", which is supposed to give you the muscular trait, doesn't work: you won't be able to lift heavy props. And after death, when you redeem the speed traits, they stop working. Status: Issue closed Answers: username_1: Please create separate tickets for separate issues. Thanks.
Moguri/blend2bam
1036187552
Title: Feature Request: Integration into Blender's UI, and with it observance of Blender's object-selection Question: username_0: This request is for two connected features: First, that blend2bam be given a Blender-integrated UI, invoked from Blender's "export" menu. And second, that when exporting via this UI, the option be present to export only those objects that are selected in Blender. The first, I feel, would go a long way to making the exporter user-friendly--and especially to new users. If I'm not much mistaken, recognition tends to be easier than recollection, and thus I would expect that visual UI controls would tend to be easier to use than command-line arguments. UI controls may also be more convenient, by virtue of clicks requiring less subjective effort than typing. The second, I feel, would greatly enhance the utility and power of the exporter. Right now, any workflow that includes elements that are not intended for export (e.g. template objects), or even just multiple individual objects in a single Blender-file, isn't feasible with blend2bam--all will be exported regardless, if I'm not much mistaken. Allowing the user to specify which objects are to be exported would enable these workflows, I daresay--and, well, Blender provides a means to that via its selection functionality. Answers: username_1: Sorry for taking so long to respond to this! I would be open to some more filtering options for blend2bam. However, I believe a Blender addon is out of scope for blend2bam, and I do not feel like maintaining a Blender addon at this time. That said, I encourage others to make a Blender addon that calls out to blend2bam. In other words, create a GUI wrapper (via Blender) around blend2bam. username_0: Ah, that's fair, if a pity! The option to filter the exported objects by Blender's selection (or in some other way, if that's not available without Blender running) would still be a useful addition, I feel.
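blend2bam itself has no selection filter, but for anyone wanting that workflow today, one possible workaround is a few lines of Blender Python that prune unselected objects and save a temporary .blend, which can then be fed to blend2bam as usual. The bpy calls are standard Blender 2.8+ API; the overall flow is only a sketch of how a wrapper script or addon might do it, so run it on a copy of your file:

```python
import bpy

# Keep only what the user currently has selected, then save a pruned copy for export.
selected = set(bpy.context.selected_objects)

for obj in list(bpy.data.objects):
    if obj not in selected:
        bpy.data.objects.remove(obj, do_unlink=True)   # drops template/helper objects

# Save the pruned scene to a new path (do not overwrite the original file),
# then run blend2bam on this file afterwards.
bpy.ops.wm.save_as_mainfile(filepath="/tmp/selection_only.blend")
```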
wagtail/wagtail
46434831
Title: Document all wagtail settings Question: username_0: This ticket proposes to have all wagtail settings documented in a single location much like [Django does](https://docs.djangoproject.com/en/1.7/ref/settings/). Many of these settings are mentioned, if not fully documented, in howto/settings.rst but as you can see below some are not mentioned at all in the docs. howto/settings.rst is a general guide on "how to get your settings file configured for Django & wagtail" and includes a full example settings file as well as urls. I propose cleaning this document up and moving the documentation of the settings themselves to a dedicated page like Django has for all user overridable settings and then have the rest of the documentation link to this. For reference, here is the list of all overridable settings and where they are currently referenced in the docs (if at all): - WAGTAILFRONTENDCACHE - wagtail/docs/contrib_components/frontendcache.rst - WAGTAILFRONTENDCACHE_LOCATION - WAGTAILADMIN_NOTIFICATION_FROM_EMAIL - wagtail/docs/howto/settings.rst - WAGTAIL_USAGE_COUNT_ENABLED - wagtail/docs/releases/0.5.rst - WAGTAIL_PASSWORD_MANAGEMENT_ENABLED - WAGTAIL_SITE_NAME - wagtail/docs/howto/settings.rst - WAGTAILEMBEDS_EMBED_FINDER - wagtail/docs/howto/settings.rst - WAGTAILIMAGES_IMAGE_MODEL - wagtail/docs/howto/settings.rst - wagtail/docs/releases/0.5.rst - WAGTAILIMAGES_FEATURE_DETECTION_ENABLED - wagtail/docs/core_components/images/feature_detection.rst - wagtail/docs/getting_started/installation.rst - WAGTAILSEARCH_BACKENDS - wagtail/docs/core_components/search/backends.rst - wagtail/docs/core_components/search/indexing.rst - wagtail/docs/getting_started/installation.rst - wagtail/docs/howto/settings.rst - wagtail/docs/reference/management_commands.rst - wagtail/docs/releases/0.7.rst - WAGTAILSEARCH_RESULTS_TEMPLATE - wagtail/docs/core_components/search/searching.rst - wagtail/docs/howto/settings.rst - WAGTAILSEARCH_RESULTS_TEMPLATE_AJAX - wagtail/docs/core_components/search/searching.rst - wagtail/docs/howto/settings.rst Answers: username_1: Wagtailembeds settings documented in https://github.com/wagtail/wagtail/pull/2127 username_2: Fixed in https://github.com/wagtail/wagtail/commit/b2e139bbd39ab2eb1923eefa632a615f0bc7abaa Status: Issue closed
fraunhoferfokus/particity
108523146
Title: Offer docker Question: username_0: It would be nice to offer an easy way to self-host, e.g. Docker/Vagrant/Heroku/... Answers: username_1: Unfortunately we currently have no expertise in this field. However, as mentioned in the [TODOs](https://fraunhoferfokus.github.io/particity/todo.html) (subject to change) and in [#1](https://github.com/fraunhoferfokus/particity/issues/1), we are aware that more comfortable ways of deployment & installation are required for upcoming releases. Feel free to get involved with particity and provide documentation or changes on the topic! username_1: Docker support was just added in HEAD. It still only makes sense with a (more or less) automated/guided install, which I am currently working on. Looking forward to a user-friendly solution for the upcoming release 0.9.4. Status: Issue closed
samuel52/PaystackRubyApi
897528158
Title: README is outdated Question: username_0: The documentation for this gem is outdated: `Paystackapi::PaystackTransactions.verify(paystack_ref)` doesn't work as stated in the README; the method is actually `verify_payment`. The documentation also doesn't include information about adding authorization, nor the ENV variable name that the gem accepts. Answers: username_1: Hi @username_0, so sorry for that. I'll be making some updates to the library in the coming days. Thanks for the heads up!
numericalalgorithmsgroup/pypop
747580147
Title: prv.profile_openmp_regions() on chopped traces gives ValueError: array length 21409 does not match index length 21410 Question: username_0: When calling prv.profile_openmp_regions() on chopped traces from hybrid (MPI + OpenMP with MPI comms inside the OpenMP parallel regions) and using the develop branch I get the following warning /fserver/jonathanb/pop/pypop_stuff/pypop/pypop/prv.py:409: UserWarning: Incomplete OpenMP region found. This likely means the trace was cut through a region warn( and then this error --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-19-84cffa4f9b29> in <module> ----> 1 omp_region_stats = prv1.profile_openmp_regions() ~/pop/pypop_stuff/pypop/pypop/prv.py in profile_openmp_regions(self, no_progress, ignore_cache) 503 ) 504 --> 505 rank_stats[irank] = pd.DataFrame( 506 { 507 "Rank": np.full(region_starts.shape, irank), ~/miniconda3/envs/PyPop/lib/python3.9/site-packages/pandas/core/frame.py in __init__(self, data, index, columns, dtype, copy) 466 467 elif isinstance(data, dict): --> 468 mgr = init_dict(data, index, columns, dtype=dtype) 469 elif isinstance(data, ma.MaskedArray): 470 import numpy.ma.mrecords as mrecords ~/miniconda3/envs/PyPop/lib/python3.9/site-packages/pandas/core/internals/construction.py in init_dict(data, index, columns, dtype) 281 arr if not is_datetime64tz_dtype(arr) else arr.copy() for arr in arrays 282 ] --> 283 return arrays_to_mgr(arrays, data_names, index, columns, dtype=dtype) 284 285 ~/miniconda3/envs/PyPop/lib/python3.9/site-packages/pandas/core/internals/construction.py in arrays_to_mgr(arrays, arr_names, index, columns, dtype, verify_integrity) 76 # figure out the index, if necessary 77 if index is None: ---> 78 index = extract_index(arrays) 79 else: 80 index = ensure_index(index) ~/miniconda3/envs/PyPop/lib/python3.9/site-packages/pandas/core/internals/construction.py in extract_index(data) 409 f"length {len(index)}" 410 ) --> 411 raise ValueError(msg) 412 else: 413 index = ibase.default_index(lengths[0]) ValueError: array length 21409 does not match index length 21410
moust/phonegap-xapkreader
108562757
Title: Videos from expansion file do not display using the video tag (source src='object URL returned from XAPKReader') Question: username_0: I am running the xapkreader plugin 2.0.0 on Android 4.4 with Cordova 3.6. Code example: XAPKReader.get("main_expansion/vid/niceVideo.mp4", gotVideo, errorVideo); function gotVideo(result) { msg = "<li class='brdr-gray text-center'>"; msg += "<div class='row'>"; msg += "<div class='col-sm-12 col-xs-12'>"; msg += "<video id='video01' width='200' height='250' controls>"; msg += "<source src='" + result + "'>"; msg += "</video>" msg += "</div>"; msg += "</div>"; msg += "</li>"; document.querySelector('#video').innerHTML += msg; } The result parameter of gotVideo(result) is an objectURL like 'blob:file%3A.....". In this scenario, the video does not play. I also tried modifying XAPKReader.java to return a base64 string to add to source src of the video tag. The video does not play in this case either. It seems that I can only get a video to play if source src='a relative path to a video in the Android APK'. Can the objectURL returned by this plugin be used as the source src= attribute of the video tag?
holochain/holonix
615296145
Title: Your current version of Yarn is out of date. The latest version is "1.22.4", while you're on "1.17.3". Answers: username_1: To solve this, we need to first update `holo-nixpkgs` (https://github.com/Holo-Host/holo-nixpkgs/pull/453) and then point holonix to use it instead of `nixpkgs`. username_2: We've got yarn "1.22.5" as of de6cbd9ab0a1a5129e6be56037b0993881c4c895 Status: Issue closed
laravel-zero/laravel-zero
622837076
Title: Framework CI build tests failing Question: username_0: Currently the CI on the framework repository is failing. I first noticed this when updating the tests to use Pest, however since pushing a commit for release `v7.2.1`, the build is failing with PHPUnit as well. This appears to just be a single test that fails, however all tests seem to pass when I run it locally on PHP 7.2, 7.3, and 7.4. As far as I can tell, this appears to be related to an update to the `setup-php` action. https://github.com/username_1/setup-php - [With v2.1.4 (passing)](https://github.com/username_0/laravel-zero-framework/actions/runs/111872873) - [With v2.2.0 (failing)](https://github.com/username_0/laravel-zero-framework/actions/runs/111880531) Answers: username_1: @username_0 After `2.0.0` there hasn't been a breaking change. If you run the workflow with `2.1.4` again the same test-case will fail, it is most likely due to some dependency update in your project. username_0: @username_1, this is re-running it with `v2.1.4` and it seems to pass. 🤔 https://github.com/username_0/laravel-zero-framework/actions/runs/111872873 username_1: @username_0 Yes, there is a file ownership issue. I'm testing a patch, give me some time to push it and update the latest release username_0: Thank you very much for your quick response! username_1: Fixed in username_1/setup-php@190220c . It should now work with `2.2.2` release and `v2`. username_0: I've re-run the jobs with `v2` and it's working. 👌 Thanks, that's brilliant! Closing this issue as CI is now passing. Status: Issue closed
NG-ZORRO/ng-zorro-antd
789660619
Title: Carousel support arrow button Question: username_0: ## What problem does this feature solve? The carousel should be switchable with arrow buttons, like this ![image](https://user-images.githubusercontent.com/64340763/105133025-e1327f80-5b26-11eb-81cc-39c12b3fafa7.png) ## What does the proposed API look like?
```
<nz-carousel [nzArrow]="'hover'">
</nz-carousel>
```
<!-- generated by ng-zorro-issue-helper. DO NOT REMOVE --> Answers: username_1: Could you please open a similar feature request on the [React project](https://ant.design/components/carousel-cn/#header) first? This needs to be designed. username_0: ant-design [#5458](https://github.com/ant-design/ant-design/issues/5458): the carousel component is based on [SlickCarousel](https://github.com/akiran/react-slick), which has an `arrows` attribute; you just need to set it to `true`. ng-zorro-antd has no way to show the arrows. username_2: Was the React project updated to have arrows on the carousel? This issue is blocked by that 😢
webdriverio/webdriverio
120908418
Title: XUnit outputDir can't handle absolute paths Question: username_0: The XUnit reporter appends the specified `outputDir` to the current working directory even if `outputDir` starts with `/`. So, if you specify an `outputDir` of `/tmp/test-reports/` the reporter will actually write to `$CWD/tmp/test-reports/`. Status: Issue closed Answers: username_1: the xunit reporter moved to another repository. We will make sure to fix that before we release that package.
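The reporter in question is JavaScript, but the underlying pitfall is language-agnostic: naive concatenation of the working directory and `outputDir` nests an already-absolute path under the CWD. For comparison, this is the behaviour a proper path join is expected to have (Python used purely as an illustration):

```python
import os.path

cwd = "/home/ci/project"

# Naive concatenation reproduces the bug: the absolute outputDir ends up nested under the CWD.
print(cwd + "/" + "/tmp/test-reports/")          # /home/ci/project//tmp/test-reports/

# A proper join treats an absolute second argument as authoritative.
print(os.path.join(cwd, "/tmp/test-reports/"))   # /tmp/test-reports/
print(os.path.join(cwd, "reports"))              # /home/ci/project/reports
```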
18F/site-scanning
542695053
Title: Schedule 50% milestone check-in Question: username_0: @username_1 would you mind if we tried to prep this by Friday to avoid any rush on Monday? If so, let me know when you want me to have my individual responses done by. Answers: username_0: @username_1 would you mind if we tried to prep this by Friday to avoid any rush on Monday? If so, let me know when you want me to have my individual responses done by. username_1: @username_0 Is end-of-day Thursday too soon? username_0: nah I can make that happen @username_1 ! username_0: nailed it Status: Issue closed
ZacBuresh/MyriadMobileChallenge2018
394519591
Title: Binding RecyclerViews Question: username_0: Standard practice is to bind all views to the class at the beginning of the `onCreate` method. In EventList the RecyclerView is bound in two spots. The first spot is the Retrofit call (Block 1) and the second is apparently when the user returns to EventList from LoginActivity (Block 2). If the if{}else{} statement executing this is removed and the resulting Retrofit call is kept everything still works (Block 1). Block 1: https://github.com/ZacBuresh/MyriadMobileChallenge2018/blob/773ce882d4adac5fdd1cac301f08abb211565a05/app/src/main/java/com/example/sam01/final_project/activity/EventList.java#L65-L96 Block 2: https://github.com/ZacBuresh/MyriadMobileChallenge2018/blob/773ce882d4adac5fdd1cac301f08abb211565a05/app/src/main/java/com/example/sam01/final_project/activity/EventList.java#L98-L113
acl-org/acl-anthology
917035654
Title: EACL 2021 DOI Question: username_0: Hi, I would like to know if EACL 2021 papers will get a DOI number, and if there is an estimate on when. Thanks. Answers: username_1: As a follow-up, is there a way to guesstimate the DOI prefix so that we can use it when some website asks us the DOI?
LLNL/UnifyFS
525945290
Title: Design: flattening writes Question: username_0: Currently we store all writes in a log and send them all to the server on fsync(). This gives us really fast writes since we can just append our write to the end of the log. The downside is that if you do an overwrite of existing data, that also gets synced to the server. So if you did 100 one-byte writes to the same offset in the file, all 100 writes get sent to the server, instead of one single write with the final value. Furthermore, to read that last byte you need to replay all 100 writes, making reads slow. I'm working on an update to Unify that would store all the writes in a segment tree. This would allow us to get rid of the overlapping writes. There are two ways to go about it though:
**1. Store all writes in a segment tree, only flush to log on fsync()**
Whenever you do a write, store the metadata and data of the write in a segment tree. On fsync() or memory pressure, write out everything to the log, and clear the tree.
Benefits:
- Easiest to implement
- The log grows slower, since there would be no overlapping writes stored in the log on every fsync().
Downsides:
- The writes are only flattened between fsyncs(). If you wrote to byte 1, fsynced, then wrote to byte 1 and fsynced again, the log would contain two writes to byte 1.
- Detecting memory pressure would be hard.
- More overhead on a write, as you could potentially have to reallocate buffers in the segment tree on overlapping writes. For example, if you wrote to bytes 1-10 and then wrote to 1-5, you would have to reallocate the buffer on the 1-10 write to only hold 6-10. This could be an issue with huge writes.
**2. Store all writes to the log as we're doing now, flatten write metadata on fsync()**
Write to the log just like we're doing now. On fsync(), re-write the log metadata from our segment tree with the flattened writes. No actual data in the log is flattened, only metadata.
Benefits:
- Would make sure *all* writes up until that point were flattened, not just those since the last fsync(). We can do this since we're not storing the write data in the tree, only the metadata about the writes (which is tiny).
- Reads would be faster than with the #1 method, since all the writes would have been flattened, not just the writes between fsyncs().
Downsides:
- The log grows at the same rate as now. That is, all write data is saved, even if it's been overwritten. Only the metadata is write-flattened.
- Harder to implement
Answers: username_1: I prefer the second design. As part of the client library work I am doing, I'm hoping we can keep the write log metadata on a per-file basis, rather than the mess we have now where the metadata for all open files is interleaved. I think using segment trees requires us to do this per-file anyway. I have some code that already splits out the write log into an implementation that is shared between client and server sides (i.e., it's part of the common library). We should discuss whether the metadata segment tree code belongs there too. username_0: @username_1 right, the segment tree is currently per-file (the tree is stored in `unifyfs_filemeta_t`). username_0: I'm planning to implement option 2 unless I hear otherwise. @adammoody mentioned in an email to me that he'd be in favor of that as well. He said that he doesn't expect there to be a lot of re-writes, so the extra log size wouldn't be a big deal. username_0: #414 implements write flattening. Closing issue Status: Issue closed
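As an illustration of what flattening the metadata means in practice (a sketch of the idea only, in Python rather than the project's C, and not the actual unifyfs segment-tree code): each write is an extent of (file offset, length, position in the log), later writes win on overlap, and the flattened result is what a reader would consult.

```python
def flatten(writes):
    """writes: list of (file_offset, length, log_pos) in the order they happened.
    Returns non-overlapping extents (file_offset, length, log_pos, log_delta),
    where log_delta says how far into that logged write the extent starts."""
    owner = {}                                  # byte in the file -> (log_pos, delta)
    for file_offset, length, log_pos in writes:
        for i in range(length):
            owner[file_offset + i] = (log_pos, i)   # later writes overwrite earlier ones

    extents = []
    for byte in sorted(owner):
        log_pos, delta = owner[byte]
        if (extents
                and extents[-1][0] + extents[-1][1] == byte           # contiguous in the file
                and extents[-1][2] == log_pos
                and extents[-1][3] + extents[-1][1] == delta):         # contiguous in the log
            extents[-1][1] += 1
        else:
            extents.append([byte, 1, log_pos, delta])
    return [tuple(e) for e in extents]

# 100 writes to the same offset collapse to a single extent pointing at the last one.
print(flatten([(0, 1, n) for n in range(100)]))      # [(0, 1, 99, 0)]

# A later overwrite of bytes 0-4 splits the earlier 0-9 write.
print(flatten([(0, 10, 0), (0, 5, 1)]))
# [(0, 5, 1, 0), (5, 5, 0, 5)]
```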
sympy/sympy
547673314
Title: RR.parent returns CC
Question: username_0:
```
CC
```
Should `RR.parent` be returning `RR`?
Answers: username_1: I think it may have something to do with mpmath and mpelements.py, because QQ(2).parent() prints QQ (as expected) and QQ is not used in mpelements.py. Please correct me if I am wrong.
username_2:
```
<sympy.polys.domains.mpelements.MPContext object at 0x7f1067743438>
```
This context is assigned to the data types, `RealElement` and `ComplexElement`, when the `MPContext` is initialized:
https://github.com/sympy/sympy/blob/df38ad11d3f3ab5b94c878346fe8bfcf40fa6027/sympy/polys/domains/mpelements.py#L57-L62
The problem is that the same code is used to initialize both real and complex contexts. It seems that currently `RR` is created first and, when `CC` is initialized, its context will override the original context of `RealElement` on line 61. To fix this, there should probably be a keyword argument to separate the two cases from each other.
username_1: I think there might be another problem, because reversing the order of declaration, i.e., interchanging lines 61,62 and even 57,58 and 59,60, gives the same result, CC, for RR(2).parent().
username_1: You were right :). I misunderstood your explanation. Adding a keyword argument to prevent the overriding fixes this issue.
Status: Issue closed
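The failure mode described above, a later context overwriting a class-level binding made by an earlier one, can be illustrated with a small self-contained sketch. This is not the actual sympy/mpmath code; every name below is made up, and the "fix" only mirrors the keyword-argument idea suggested in the thread.

```python
# Simplified illustration of the shared-context bug discussed above.
# Not the real sympy/mpmath implementation; all names are hypothetical.

class RealElement:
    _ctx = None           # class-level slot: every instance shares it

class ComplexElement:
    _ctx = None

class MPContextSketch:
    """Buggy version: initializing any context rebinds *both* element types."""
    def __init__(self, name):
        self.name = name
        RealElement._ctx = self        # whichever context is constructed last wins,
        ComplexElement._ctx = self     # so CC (built after RR) hijacks RR's elements

RR = MPContextSketch("RR")
CC = MPContextSketch("CC")
print(RealElement._ctx.name)           # "CC" -- the reported symptom

class FixedContextSketch:
    """Fixed version: a keyword argument keeps the two cases separate."""
    def __init__(self, name, complex_ctx=False):
        self.name = name
        if complex_ctx:
            ComplexElement._ctx = self
        else:
            RealElement._ctx = self

RR = FixedContextSketch("RR")
CC = FixedContextSketch("CC", complex_ctx=True)
print(RealElement._ctx.name, ComplexElement._ctx.name)   # "RR CC"
```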
trueos/trueos-core
188952208
Title: sysadm control panel should use a more standard way to navigate
Question: username_0: I found it _very_ confusing to use the sysadm control panel. In particular, it presents
- Application Management
- SysAdm Server Settings
- System Management
- Utilities
and they appear to do nothing. Clicking on them does nothing except select them. Double-clicking on them does nothing except select them. Nothing is listed under them. It seems completely broken. It's only by accident that I stumbled on the fact that _right_-clicking causes them to expand to show items under them that actually do something when you click on them.
Right-clicking is normally used for context menus, not for something like this, and I don't know how anyone is expected to know that right-clicking would cause the entries to expand. I really think that it should be changed so that either single-clicking or double-clicking with the _left_ mouse button opens them up (probably single-clicking, since that's what's used to activate the entries under them once they're expanded, but either would be far less confusing than right-clicking).
Answers: username_1: Please, which desktop environment? I used SysAdm a few minutes ago on KDE and found it responding, to simple clicks, in the way that you want.
username_0: Hmmm. Very weird. I am using KDE with the default mouse settings, and I confirm that if I make that settings change, it fixes the problem in the sysadm control panel so that it's left-clicks that work rather than right-clicks, but I wouldn't have thought that the KDE settings would have any effect on the sysadm control panel, since it's clearly not a KDE app, since it doesn't depend on anything from KDE - though it is Qt5, I think. So, maybe something weird is going on because of that. And this raises the question of whether it's a sysadm bug, a KDE bug, or a Qt bug. I had assumed that it was just a very weird design choice.
Status: Issue closed
username_2: Sorry, we are only supporting Lumina going forward for TrueOS. Glad there seems to be a workaround at least. Closing.
timveil-cockroach/oltpbench
408462171
Title: unable to load data into auctionmark Question: username_0: ``` Exception in thread "main" 20:08:52,021 (AuctionMarkLoader.java:202) INFO - *** START USERACCT_ITEM java.lang.RuntimeException: Failed to execute threads: Unexpected error while generating table data for 'CATEGORY' at com.oltpbenchmark.util.ThreadUtil.run(ThreadUtil.java:298) at com.oltpbenchmark.util.ThreadUtil.runNewPool(ThreadUtil.java:262) at com.oltpbenchmark.api.BenchmarkModule.loadDatabase(BenchmarkModule.java:290) at com.oltpbenchmark.api.BenchmarkModule.loadDatabase(BenchmarkModule.java:258) at com.oltpbenchmark.DBWorkload.runLoader(DBWorkload.java:794) at com.oltpbenchmark.DBWorkload.main(DBWorkload.java:525) Caused by: java.lang.RuntimeException: Unexpected error while generating table data for 'CATEGORY' at com.oltpbenchmark.benchmarks.auctionmark.AuctionMarkLoader$AbstractTableGenerator.load(AuctionMarkLoader.java:410) at com.oltpbenchmark.benchmarks.auctionmark.AuctionMarkLoader$CountdownLoaderThread.load(AuctionMarkLoader.java:151) at com.oltpbenchmark.api.Loader$LoaderThread.run(Loader.java:64) at com.oltpbenchmark.util.ThreadUtil$LatchRunnable.run(ThreadUtil.java:332) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.sql.BatchUpdateException: Batch entry 0 INSERT INTO CATEGORY VALUES (0, 'Antiques', NULL),(1, 'Antiquities', 0),(3, 'Byzantine', 1),(4, 'Celtic', 1),(5, 'Egyptian', 1),(6, 'Far Eastern', 1),(7, 'Greek', 1),(8, 'Holy Land', 1),(9, 'Islamic', 1),(10, 'Near Eastern', 1),(11, 'Neolithic &amp; Paleolithic', 1),(17, 'Other', 1),(16, 'Price Guides &amp; Publications', 1),(15, 'Reproductions', 1),(12, 'Roman', 1),(13, 'South Italian', 1),(2, 'The Americas', 1),(14, 'Viking', 1),(18, 'Architectural &amp; Garden', 0),(19, 'Balusters', 18),(20, 'Barn Doors', 18),(21, 'Beams', 18),(22, 'Ceiling Tins', 18),(23, 'Chandeliers, Fixtures, Sconces', 18),(24, 'Columns &amp; Posts', 18),(25, 'Corbels', 18),(26, 'Doors', 18),(27, 'Finials', 18),(28, 'Fireplaces &amp; Mantels', 18),(29, 'Garden', 18),(30, 'Hardware', 18),(31, 'Door Bells &amp; Knockers', 30),(32, 'Door Knobs &amp; Handles', 30),(33, 'Door Plates &amp; Backplates', 30),(34, 'Drawer Pulls', 30),(35, 'Escutcheons &amp; Key Hole Covers', 30),(36, 'Heating Grates &amp; Vents', 30),(37, 'Hooks &amp; Brackets', 30),(38, 'Locks &amp; Keys', 30),(39, 'Nails', 30),(41, 'Other', 30),(40, 'Switch Plates &amp; Outlet Covers', 30),(56, 'Other', 18),(42, 'Pediments', 18),(43, 'Plumbing', 18),(55, 'Price Guides &amp; Publications', 18),(54, 'Reproductions', 18),(44, 'Signs', 18),(45, 'Stained Glass Windows', 18),(47, '1900-1940', 45),(48, '1940-Now', 45),(46, 'Pre-1900', 45),(49, 'Unknown', 45),(50, 'Stair &amp; Carpet Rods', 18),(51, 'Tiles', 18),(52, 'Weathervanes &amp; Lightning Rods', 18),(53, 'Windows, Sashes &amp; Locks', 18),(57, 'Asian Antiques', 0),(58, 'Burma', 57),(59, 'China', 57),(60, 'Amulets', 59),(61, 'Armor', 59),(62, 'Baskets', 59),(63, 'Bells', 59),(64, 'Bowls', 59),(65, 'Boxes', 59),(66, 'Bracelets', 59),(67, 'Brush Pots', 59),(68, 'Brush Washers', 59),(69, 'Cabinets', 59),(70, 'Chairs', 59),(71, 'Chests', 59),(72, 'Fans', 59),(73, 'Glasses &amp; Cups', 59),(74, 'Incense Burners', 59),(75, 'Ink Stones', 59),(76, 'Masks', 59),(77, 'Necklaces &amp; Pendants', 59),(113, 'Other', 59),(78, 'Paintings &amp; Scrolls', 59),(79, 'Plates', 59),(80, 'Pots', 59),(81, 'Rings', 59),(82, 
'Robes &amp; Textiles', 59),(83, 'Seals', 59),(84, 'Snuff Bottles', 59),(85, 'Statues', 59),(86, 'Birds', 85),(87, 'Buddha', 85),(88, 'Dogs', 85),(89, 'Dragons', 85),(90, 'Elephants', 85),(91, 'Foo Dogs', 85),(92, 'Horses', 85),(93, 'Kwan-yin', 85),(94, 'Men, Women &amp; Children', 85),(95, 'Mice', 85),(96, 'Monkeys', 85),(107, 'Other', 85),(97, 'Oxen', 85),(98, 'Phoenix', 85),(99, 'Pigs', 85),(100, 'Rabbits', 85),(101, 'Rats', 85),(102, 'Roosters', 85),(103, 'Sheep', 85),(104, 'Snakes', 85),(105, 'Tigers', 85),(106, 'Turtles', 85),(108, 'Swords', 59),(109, 'Tables', 59),(110, 'Tea Caddies', 59),(111, 'Teapots', 59),(112, 'Vases', 59),(114, 'India', 57),(115, 'Japan', 57),(116, 'Armor', 115),(117, 'Bells', 115),(118, 'Bowls', 115),(119, 'Boxes', 115),(120, 'Dolls', 115),(121, 'Fans', 115),(122, 'Glasses &amp; Cups', 115),(123, 'Katana', 115),(124, 'Kimonos &amp; Textiles', 115),(125, 'Masks', 115),(126, 'Netsuke', 115),(136, 'Other', 115) was aborted: ERROR: foreign key violation: value [0] not found in category@primary [c_id] (txn=a30ca1f6-1737-4540-892e-5db8e164b748) Call getNextException to see other errors in the batch. at org.postgresql.jdbc.BatchResultHandler.handleCompletion(BatchResultHandler.java:166) at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:492) at org.postgresql.jdbc.PgStatement.executeBatch(PgStatement.java:840) at org.postgresql.jdbc.PgPreparedStatement.executeBatch(PgPreparedStatement.java:1538) at com.oltpbenchmark.benchmarks.auctionmark.AuctionMarkLoader.generateTableData(AuctionMarkLoader.java:237) at com.oltpbenchmark.benchmarks.auctionmark.AuctionMarkLoader$AbstractTableGenerator.load(AuctionMarkLoader.java:408) ... 6 more Caused by: org.postgresql.util.PSQLException: ERROR: foreign key violation: value [0] not found in category@primary [c_id] (txn=a30ca1f6-1737-4540-892e-5db8e164b748) at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2440) at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2183) at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:481) ... 10 more ``` Answers: username_0: may be related to https://github.com/cockroachdb/cockroach/issues/20041 username_1: [auctionmark-cockroachdb-ddl.sql.zip](https://github.com/username_0-cockroach/oltpbench/files/2865232/auctionmark-cockroachdb-ddl.sql.zip) a workaround, for now, until the self-reference is supported ``` DROP TABLE IF EXISTS category CASCADE; CREATE TABLE category ( c_id BIGINT NOT NULL, c_name VARCHAR(50), c_parent_id BIGINT, -- c_parent_id BIGINT REFERENCES category (c_id), PRIMARY KEY (c_id), INDEX IDX_CATEGORY_PARENT (c_parent_id) ); ``` username_0: able to rollback workaround. self referencing constraint seems to work well in 20.1.x Status: Issue closed
twardokus/csec380-project
493783158
Title: Ability to delete video Question: username_0: As a user I want to be able to delete videos I uploaded. Developer Tasks - [ ] create a "delete video" button - [ ] create delete video functionality Acceptance Criteria - [ ] a user can delete a video that they have uploaded Answers: username_1: Done Status: Issue closed
space-wizards/space-station-14
738118878
Title: Content.Client crashes when typing CRE in Spawn entities search box Question: username_0: Steps to reproduce: 1. Launch Server and Client (in playground mode) 2. Open the Spawn Entities menu 3. Enter CRE (case not important) I expect to search the entities. Instead I client crashes with following log ``` [FATL] unhandled: System.Collections.Generic.KeyNotFoundException: The given key 'Open' was not present in the dictionary. at System.Collections.Generic.Dictionary`2.get_Item(TKey key) at Robust.Client.GameObjects.AppearanceComponent.GetData[T](Enum key) in D:\projects\space-station-14\RobustToolbox\Robust.Client\GameObjects\Components\Appearance\AppearanceComponent.cs:line 49 at Content.Client.GameObjects.Components.Storage.CrematoriumVisualizer.OnChangeData(AppearanceComponent component) in D:\projects\space-station-14\Content.Client\GameObjects\Components\Morgue \CrematoriumVisualizer.cs:line 47 at Robust.Client.GameObjects.SpriteComponent.GetPrototypeTextures(EntityPrototype prototype, IResourceCache resourceCache)+MoveNext() in D:\projects\space-station-14\RobustToolbox\Robust.Clie nt\GameObjects\Components\Renderable\SpriteComponent.cs:line 1840 at System.Linq.Enumerable.SelectEnumerableIterator`2.ToList() at Robust.Client.UserInterface.CustomControls.EntitySpawnWindow.InsertEntityButton(EntityPrototype prototype, Boolean insertFirst, Int32 index) in D:\projects\space-station-14\RobustToolbox\R obust.Client\UserInterface\CustomControls\EntitySpawnWindow.cs:line 315 at Robust.Client.UserInterface.CustomControls.EntitySpawnWindow.UpdateVisiblePrototypes() in D:\projects\space-station-14\RobustToolbox\Robust.Client\UserInterface\CustomControls\EntitySpawnW indow.cs:line 286 at Robust.Client.UserInterface.CustomControls.EntitySpawnWindow.FrameUpdate(FrameEventArgs args) in D:\projects\space-station-14\RobustToolbox\Robust.Client\UserInterface\CustomControls\Entit ySpawnWindow.cs:line 386 at Robust.Client.UserInterface.Control.DoFrameUpdate(FrameEventArgs args) in D:\projects\space-station-14\RobustToolbox\Robust.Client\UserInterface\Control.cs:line 703 at Robust.Client.UserInterface.Control.DoFrameUpdate(FrameEventArgs args) in D:\projects\space-station-14\RobustToolbox\Robust.Client\UserInterface\Control.cs:line 706 at Robust.Client.UserInterface.Control.DoFrameUpdate(FrameEventArgs args) in D:\projects\space-station-14\RobustToolbox\Robust.Client\UserInterface\Control.cs:line 706 at Robust.Client.UserInterface.UserInterfaceManager.FrameUpdate(FrameEventArgs args) in D:\projects\space-station-14\RobustToolbox\Robust.Client\UserInterface\UserInterfaceManager.cs:line 183 at Robust.Client.GameController.Update(FrameEventArgs frameEventArgs) in D:\projects\space-station-14\RobustToolbox\Robust.Client\GameController.cs:line 294 at Robust.Client.GameController.<MainLoop>b__65_3(Object sender, FrameEventArgs args) in D:\projects\space-station-14\RobustToolbox\Robust.Client\GameController\GameController.Standalone.cs:l ine 108 at Robust.Shared.Timing.GameLoop.Run() in D:\projects\space-station-14\RobustToolbox\Robust.Shared\Timing\GameLoop.cs:line 242 at Robust.Client.GameController.MainLoop(DisplayMode mode) in D:\projects\space-station-14\RobustToolbox\Robust.Client\GameController\GameController.Standalone.cs:line 113 at Robust.Client.GameController.ParsedMain(CommandLineArgs args, Boolean contentStart) in D:\projects\space-station-14\RobustToolbox\Robust.Client\GameController\GameController.Standalone.cs: line 59 at Robust.Client.GameController.Start(String[] args, Boolean 
contentStart) in D:\projects\space-station-14\RobustToolbox\Robust.Client\GameController\GameController.Standalone.cs:line 35 at Robust.Client.ContentStart.Start(String[] args) in D:\projects\space-station-14\RobustToolbox\Robust.Client\ContentStart.cs:line 10 at Content.Client.Program.Main(String[] args) in D:\projects\space-station-14\Content.Client\Program.cs:line 9 Unhandled exception. System.Collections.Generic.KeyNotFoundException: The given key 'Open' was not present in the dictionary. at System.Collections.Generic.Dictionary`2.get_Item(TKey key) at Robust.Client.GameObjects.AppearanceComponent.GetData[T](Enum key) in D:\projects\space-station-14\RobustToolbox\Robust.Client\GameObjects\Components\Appearance\AppearanceComponent.cs:line 49 at Content.Client.GameObjects.Components.Storage.CrematoriumVisualizer.OnChangeData(AppearanceComponent component) in D:\projects\space-station-14\Content.Client\GameObjects\Components\Morgue \CrematoriumVisualizer.cs:line 47 at Robust.Client.GameObjects.SpriteComponent.GetPrototypeTextures(EntityPrototype prototype, IResourceCache resourceCache)+MoveNext() in D:\projects\space-station-14\RobustToolbox\Robust.Clie nt\GameObjects\Components\Renderable\SpriteComponent.cs:line 1840 at System.Linq.Enumerable.SelectEnumerableIterator`2.ToList() at Robust.Client.UserInterface.CustomControls.EntitySpawnWindow.InsertEntityButton(EntityPrototype prototype, Boolean insertFirst, Int32 index) in D:\projects\space-station-14\RobustToolbox\R obust.Client\UserInterface\CustomControls\EntitySpawnWindow.cs:line 315 at Robust.Client.UserInterface.CustomControls.EntitySpawnWindow.UpdateVisiblePrototypes() in D:\projects\space-station-14\RobustToolbox\Robust.Client\UserInterface\CustomControls\EntitySpawnW indow.cs:line 286 at Robust.Client.UserInterface.CustomControls.EntitySpawnWindow.FrameUpdate(FrameEventArgs args) in D:\projects\space-station-14\RobustToolbox\Robust.Client\UserInterface\CustomControls\Entit ySpawnWindow.cs:line 386 at Robust.Client.UserInterface.Control.DoFrameUpdate(FrameEventArgs args) in D:\projects\space-station-14\RobustToolbox\Robust.Client\UserInterface\Control.cs:line 703 at Robust.Client.UserInterface.Control.DoFrameUpdate(FrameEventArgs args) in D:\projects\space-station-14\RobustToolbox\Robust.Client\UserInterface\Control.cs:line 706 at Robust.Client.UserInterface.Control.DoFrameUpdate(FrameEventArgs args) in D:\projects\space-station-14\RobustToolbox\Robust.Client\UserInterface\Control.cs:line 706 at Robust.Client.UserInterface.UserInterfaceManager.FrameUpdate(FrameEventArgs args) in D:\projects\space-station-14\RobustToolbox\Robust.Client\UserInterface\UserInterfaceManager.cs:line 183 at Robust.Client.GameController.Update(FrameEventArgs frameEventArgs) in D:\projects\space-station-14\RobustToolbox\Robust.Client\GameController.cs:line 294 at Robust.Client.GameController.<MainLoop>b__65_3(Object sender, FrameEventArgs args) in D:\projects\space-station-14\RobustToolbox\Robust.Client\GameController\GameController.Standalone.cs:l ine 108 at Robust.Shared.Timing.GameLoop.Run() in D:\projects\space-station-14\RobustToolbox\Robust.Shared\Timing\GameLoop.cs:line 242 at Robust.Client.GameController.MainLoop(DisplayMode mode) in D:\projects\space-station-14\RobustToolbox\Robust.Client\GameController\GameController.Standalone.cs:line 113 at Robust.Client.GameController.ParsedMain(CommandLineArgs args, Boolean contentStart) in D:\projects\space-station-14\RobustToolbox\Robust.Client\GameController\GameController.Standalone.cs: line 59 at 
Robust.Client.GameController.Start(String[] args, Boolean contentStart) in D:\projects\space-station-14\RobustToolbox\Robust.Client\GameController\GameController.Standalone.cs:line 35 at Robust.Client.ContentStart.Start(String[] args) in D:\projects\space-station-14\RobustToolbox\Robust.Client\ContentStart.cs:line 10 at Content.Client.Program.Main(String[] args) in D:\projects\space-station-14\Content.Client\Program.cs:line 9 ``` https://media.giphy.com/media/n64kLkXgHfCgDmKQbU/giphy.gif Status: Issue closed Answers: username_0: 🎉
2019-pemrogramanberbasiskerangkakerja-g/sistem-kehadiran-online-huda-findryan-fatimatus
443106925
Title: Unclear endpoint parameters #1
Question: username_0: On the `/apitambahjadwal` endpoint, the format of several parameters is unclear.
1. matkul: is it *id_matkul* or *nama_matkul*?
2. waktu_mulai and waktu_selesai: the accepted time format is unclear. Please provide an example.
Answers: username_1: Thanks for the info; the expected input parameters will be communicated later.
Status: Issue closed
scalameta/scalafmt
657057847
Title: version in .scalafmt.conf is not recognized if `newline.source` exists Question: username_0: This template is a guideline, not a strict requirement. - **Version**: 2.5.2 - **Integration**: sbt 1.3.13, sbt-scalafmt 2.4.0 - **Configuration**: ``` version=2.5.2 newlines.source=fold,unfold ``` ## Steps Run `scalafmt` on sbt ## Problem Scalafmt failed to format, causing the below error: ``` [info] Formatting 29 Scala sources... [error] missing setting 'version'. To fix this problem, add the following line to .scalafmt.conf: 'version=2.5.2'.: /home/vagrant/IdeaProjects/aws-lambda-scalajs-facade/.scalafmt.conf [error] (Compile / scalafmt) missing setting 'version'. To fix this problem, add the following line to .scalafmt.conf: 'version=2.5.2'.: /home/vagrant/IdeaProjects/aws-lambda-scalajs-facade/.scalafmt.conf [error] Total time: 0 s, completed 2020/07/15 13:45:52 ``` However, as the above configuration shows, `version` seems specified. ## Expectation Scalafmt should recognize `version` in `.scalafmt.conf`, and format with `newline.source` configuration. ## Workaround N/A Answers: username_1: you should use either `fold` or `unfold`, not both. Status: Issue closed
qlik-oss/enigma.js
434450594
Title: example of using getProgress with doReload
Question: username_0: I am struggling to find any example of using getProgress to find out the status of a reload. I know it needs to be passed the requestId, but I tried it and no matter what requestId gets passed it always just outputs:
{ qStarted: true,
  qFinished: true,
  qCompleted: 0,
  qTotal: 0,
  qKB: 0,
  qMillisecs: 0,
  qErrorData: [],
  qPersistentProgressMessages: [],
  qTransientProgressMessage: { qMessageCode: 0, qMessageParameters: [ '' ] } }
Any ideas?
Status: Issue closed
Answers: username_1: Sorry that we missed this issue. You can send in `-1`: `getProgress(-1)`. As for error messages, you can configure how reload works in your session: https://core.qlik.com/services/qix-engine/apis/qix/global/#configurereload
facebookresearch/detectron2
792872552
Title: Please read & provide the following
Question: username_0: I've been trying to preprocess my Google Open Images Dataset for detectron2 to detect saucers. The thing with the dataset is that the mask size and image size are different. So when I try to plot the images, I get wrong segmentations like this.
![image](https://user-images.githubusercontent.com/67138426/105640904-d0ce2c00-5ebb-11eb-8b75-ae120484498d.png)
I've narrowed this down to an issue in the preprocessing. From the logs, these are the final values I get for the dictionary:
`{'file_name': 'validation/0483900e6425b41a.jpg', 'image_id': '0483900e6425b41a', 'height': 1024, 'width': 683, 'annotations': [{'bbox': array([ 24.17699, 187.28024, 657.31195, 824.63715], dtype=float32), 'bbox_mode': <BoxMode.XYXY_ABS: 0>, 'category_id': 0, 'segmentation': {'size': [1000, 667], 'counts': b'WR...`
As you can see, the 'height': 1024, 'width': 683 of the image and the 'size': [1000, 667] of the segmentation are different. I've tried plotting the images using this code, but I keep getting differences in the segmentation mask size and image size.
```
from detectron2.utils.visualizer import ColorMode
dataset_dicts = val_img_dicts
for d in random.sample(dataset_dicts, 3):
    im = cv2.imread(d["file_name"])
    v = Visualizer(im[:, :, ::-1],
                   metadata=val_metadata,
                   scale=1,
                   instance_mode=ColorMode.IMAGE_BW   # remove the colors of unsegmented pixels
    )
    v = v.draw_dataset_dict(d)
    plt.imshow(cv2.cvtColor(v.get_image()[:, :, ::-1], cv2.COLOR_BGR2RGB))
    plt.show()
```
How do I fix this issue?
Status: Issue closed
Answers: username_1: We don't support such a dataset.
Status: Issue closed
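For readers who hit the same mismatch, the sketch below shows one possible preprocessing workaround; it is not an official detectron2 fix, and the maintainer's reply above stands. The idea is to decode the RLE at its own resolution, resample it to the image resolution, and re-encode, rescaling the box by the same factors if the box was produced at the mask resolution. It assumes pycocotools, OpenCV, and numpy are available; the helper name is made up.

```python
import numpy as np
import cv2
from pycocotools import mask as mask_utils


def rescale_annotation(ann, img_height, img_width):
    """Bring an RLE mask (and its box) from mask resolution to image resolution."""
    rle = ann["segmentation"]
    mask_h, mask_w = rle["size"]
    if (mask_h, mask_w) == (img_height, img_width):
        return ann                                    # already consistent, nothing to do

    # Decode at the mask's native resolution, resample, re-encode.
    m = mask_utils.decode(rle)                                        # (mask_h, mask_w) uint8
    m = cv2.resize(m, (img_width, img_height), interpolation=cv2.INTER_NEAREST)
    ann["segmentation"] = mask_utils.encode(np.asfortranarray(m.astype(np.uint8)))

    # Scale the XYXY_ABS box by the same ratios so it stays aligned with the mask.
    # Skip this step if the box is already in image coordinates.
    sx, sy = img_width / mask_w, img_height / mask_h
    x0, y0, x1, y1 = ann["bbox"]
    ann["bbox"] = [x0 * sx, y0 * sy, x1 * sx, y1 * sy]
    return ann


# Applied to each record produced by the preprocessing step:
# for record in dataset_dicts:
#     for ann in record["annotations"]:
#         rescale_annotation(ann, record["height"], record["width"])
```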
cu-mkp/m-k-manuscript-data
483543264
Title: 5r-v - translation of representer Question: username_0: 5r-v: "Mays si tu regardes de pres il te represente a lendroit mays le visaige fort grand & le poil de la barbe gros comme fisselle & representera aussi un teston grand comme une assiette Et les f<emmes> voyent les lieulx secrets quelles ne veulent pas monstrer aulx Chirurgiens Il gecte la representation hors de soy Et si tu touches du doigt le lieu lœil de la representation un aultre doigt viendra contre le tien" But if you look at it close up, it will **show** you the right way up but with your face quite large & the hairs of your beard as thick as string & will **reflect** a nipple as large as a plate, and women can see the secret places that they do not want to show to surgeons. It casts the **representation** out of itself, and if you touch le lieu the eye of the **representation** with your finger, another finger will come against yours. WHY NOT USE REFLECT/REFLECTION THROUGHOUT TO TRANSLATE REPRESENTER/ATION? Answers: username_1: change instances in 5r-v to represent (no show, reflect, produce) no need for glossary entry username_0: Might this be too literal--in the sense of trying to stick to root word? Cotgrave also has "resemblance" or "shew" [show] for "representation". I have to say that the English "representation" today is a more abstract concept (which is also one of Cotgrave's meanings, but the question, as always, is what the a-p meant...). So, I am not fully on board with this, but if it has been fully discussed, I'll let it go. For Le mirouer concave compose de la forme susdicte rend une infinite de gentilesses rend = renders or produces. Not sure if you meant this "produces" Status: Issue closed username_2: Decision: translate "representer" as "represent" wherever possible across the 21 instances in the manuscript, inc. 5r-v. Done.
armstrongmd/hubbsly_pg_api
807825217
Title: Lists: Add item to shopping list Question: username_0: A user needs to be able to add an item to a shopping list in the database. Given an item ID, the system must be able to add that item to the shopping list, along with the following attribute: Quantity
php-tmdb/api
202316862
Title: getCredits in other language? similar error Question: username_0: `$movie = $this->repository->load('120', array('language' => 'es-ES'));` `$movie->getCredits()` is empty ! is normal¿ Status: Issue closed Answers: username_1: Hi @username_0, I'd recommend you make use of the global language filter in the first place, as so: ```php $token = new \Tmdb\ApiToken(TMDB_API_KEY); $client = new \Tmdb\Client($token); $plugin = new \Tmdb\HttpClient\Plugin\LanguageFilterPlugin('es-ES'); $client->getHttpClient()->addSubscriber($plugin); $repository = new \Tmdb\Repository\MovieRepository($client); /** @var \Tmdb\Model\Movie $movie */ $movie = $repository->load(87421); var_dump($movie->getOverview()); ``` This results in; ``` string(379) "Traicionado por su propia especie y dado por muerto en un lejano y desolado planeta, aparentemente sin vida, el duro Riddick tendrá que luchar por la supervivencia contra depredadores alienígenas y cazarrecompensas, convirtiéndose en un ser más poderoso y peligroso que nunca... Tercera entrega de la saga iniciada en "Pitch Black" y continuada en "Las crónicas de Riddick"." ``` By specifying the parameters, the `append_to_response` parameter was ignored, I have changed the code slightly to improve this behavior. This is fixed in b8eacc2
tox-dev/tox-conda
561927995
Title: tox-conda interacts badly with numpy on Windows, Python 3.7/3.8
Question: username_0: Having `numpy` in `conda_deps` on Windows with Python 3.7 or Python 3.8 causes a crash when any command inside the environment tries to import `numpy`. Python 3.6 works.
See cross-reference https://github.com/numpy/numpy/issues/15537
Answers: username_0: Also https://github.com/conda-forge/numpy-feedstock/issues/184
username_0: It seems the issue may be that `numpy` does not load unless the conda environment is explicitly activated (see https://github.com/numpy/numpy/issues/15537#issuecomment-583773970). Does `tox-conda` activate the environments that it manages (or mimic *exactly* what `conda activate <envname>` would do)?
username_1: I believe we are also seeing this issue, however we are seeing it in a tox environment that has `basepython=python3.6`.
username_2: I think I might be seeing this as well (though for me it fails on 3.6 as well). Same setup as @username_0 mentioned, numpy gets installed via `conda_deps`. Only fails on Windows. In case it's an important detail, I'm on GitHub Actions, using goanpeca/[email protected] to install conda. Here's a failing test: https://github.com/username_2/napari-omero/runs/804849289?check_suite_focus=true#step:6:62
username_1: I have managed to find a workaround for this. By setting `CONDA_DLL_SEARCH_MODIFICATION_ENABLE = 1` in the `setenv` section of the tox testenv configuration it works again. There is more information in the [conda docs](https://docs.conda.io/projects/conda/en/latest/user-guide/troubleshooting.html#numpy-mkl-library-load-failed) about this workaround.
username_3: A detail about @username_1 's DLL env variable fix. I've noticed this only works with numpy from the default channel from anaconda.com. It does not, however, seem to work if you install numpy from conda-forge.
username_4: @username_3, I can indeed confirm that it *does not* work with NumPy installed from `conda-forge`. I still see this issue for NumPy v1.20.1. Did you manage to find a workaround?
username_4: I managed to work around it by commenting out the
```
conda_channels = conda-forge
```
section. Unfortunately, this led to some other issues, like [this](https://stackoverflow.com/questions/57484399/issue-importing-scikit-learn-module-scipy-has-no-attribute-lib). In the end I had to install `numpy`, `scipy`, and `h5py` from the defaults channel to make it work. I am looking forward to a better (more stable) solution!
username_3: @username_4, I got around it by setting my conda channels as follows:
```
conda_channels=
    defaults
    conda-forge
```
... and then adding an env variable as per username_1's comment above:
```
setenv =
    CONDA_DLL_SEARCH_MODIFICATION_ENABLE=1
```
I have dependencies on conda-forge as well, which is why it's still in my conda_channels list. So functionally this will pull dependencies from defaults first, and then check conda-forge if it can't find the package there. Setting the env variable took care of my DLL issues.
Status: Issue closed
username_5: No update from author: closing.
ballerina-platform/ballerina-lang
508324645
Title: Initializing a project in an existing directory
Question: username_0: The option you have for creating projects is `ballerina new`. However, this command doesn't allow initializing a project in an already existing directory. For example, I created a new directory manually and wanted to initialize a new Ballerina project in it. The CLI complains saying the directory already exists.
```
$ ballerina new 18449
ballerina: destination '/home/pubudu/testing/ballerina/issues/18449' already exists
USAGE:
    ballerina new <project-name>

For more information try --help
```
Wouldn't it be better if the user were warned that the specified directory already exists, asked whether they want to proceed, and prompted to enter yes or no?
Answers: username_1: This feature is supported now.
Status: Issue closed
danbooru/danbooru
602264383
Title: Something broke for imageboard_desourced Question: username_0: Open URL https://danbooru.donmai.us/posts?tags=imageboard_desourced get this error. Logs: GET https://danbooru.donmai.us/posts?tags=imageboard_desourced 500 GET https://danbooru.donmai.us/assets/application.css net::ERR_ABORTED 404 Try incognito: worked. Try incognito login: error repeat. Logout incognito: worked. User ID | 418258 Level | Member ![image](https://user-images.githubusercontent.com/19782385/79618410-6b2a6380-8123-11ea-88d7-73d2f87ae2d7.png) ![image](https://user-images.githubusercontent.com/19782385/79618420-71b8db00-8123-11ea-8de9-6bd02e6596d5.png) Status: Issue closed Answers: username_1: This is a non-issue. You're searching a tag which has a low incidence and using a high limit of posts per page, and since your account is Member-level you have a search timeout of 3 seconds ([Help:Users](https://danbooru.donmai.us/wiki_pages/help:users)) which is what is happening. The reason you're not experiencing this when incognito is because the default limit for that is 20 posts per page. If you lower the limit for the search using the `limit:` metatag it'll be less likely to timeout ([Help:Cheatsheet#Limit](https://danbooru.donmai.us/wiki_pages/help:cheatsheet.html#dtext-n2-6)). You can also use range qualifiers as well to lower the number of posts to search through ([Help:Cheatsheet#RangeQualifiers](https://danbooru.donmai.us/wiki_pages/help:cheatsheet.html#dtext-n2-2)).
robotframework/robotframework
308573805
Title: Question: Which IDE do you recommend for developing robot cases in a python3 env
Question: username_0: Hi, I want to know which IDE you recommend for developing robot cases in a python3 env. RIDE is out of maintenance and I can't find a good tool to develop, run, and debug cases.
B.R. Luis
Answers: username_1: Good question with no single right answer. Because this issue tracker is only for bug reports and enhancement requests, the question should be asked on the [available support forums](http://robotframework.org/#Support).
Status: Issue closed
SeldonIO/seldon-core
554303262
Title: Seldon Core Webhook Fails Randomly
Question: username_0: In a cluster deployment the controller went into an error state, returning the following error when new deployments were being created:
`Error from server (InternalError): error when creating "income.yaml": Internal error occurred: failed calling webhook "mseldondeployment.kb.io": Post https://seldon-webhook-service.seldon-system.svc:443/mutate-machinelearning-seldon-io-v1alpha2-seldondeployment?timeout=30s: ssh: rejected: connect failed (Connection refused)`
I have not been able to reproduce the issue.
Answers: username_1: Let's reopen if we find a repeatable situation, as it's too general for now.
Status: Issue closed