repo_name: string (4–136 chars)
issue_id: string (5–10 chars)
text: string (37–4.84M chars)
hcoona/hexo-renderer-asciidoc
109694041
Title: {} replacement sabotages use of passthrough with JS/CSS Question: username_0: ```
return $.html()
  .replace(/{/g, '&#123;')
  .replace(/}/g, '&#125;');
```
If you're using the asciidoc passthrough functionality (`+++`), '{' and '}' still get replaced, breaking both JS and CSS. Why do these two lines exist? They were added in https://github.com/username_1/hexo-renderer-asciidoc/commit/7f3559ed3fbcab047655871c70ca2a2ff7a78011 with no explanation why.
Answers: username_0: Maybe related to https://github.com/hexojs/hexo/commit/683fd0a19c4f94216dcc13c35970135d7a6ef03e? But the markdown plugin doesn't do it - https://github.com/hexojs/hexo-renderer-marked/blob/master/index.js
username_1: I forgot why. I think it is related to the rendering of code blocks. The asciidoctor engine is different from the markdown engine.
username_1: I'm currently in some trouble and have no time to test & fix it.
username_0: I've figured out a way around this. If you escape with both asciidoc (`++++`) and hexo (`{% raw %}`):
```
Some asciidoc text
++++
{% raw %}
<script>(function () {alert(1);})()</script>
{% endraw %}
++++
Some more asciidoc text
```
it will work. I'll close this issue as it's not a problem. Thanks for your time and the plugin, hope your troubles get better :)
Status: Issue closed
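For readers wondering how the renderer could keep its brace escaping without breaking passthrough content, here is a minimal sketch. It is not the plugin's actual code; the helper name and the `{% raw %}` delimiters are illustrative assumptions drawn from the workaround above:
```
// Sketch: escape curly braces for the template engine everywhere except
// inside {% raw %}...{% endraw %} blocks, so passthrough JS/CSS such as
// `function () {}` survives rendering. Hypothetical helper, not plugin API.
function escapeBracesOutsideRaw(html) {
  var parts = html.split(/({%\s*raw\s*%}[\s\S]*?{%\s*endraw\s*%})/g);
  return parts.map(function (part, i) {
    return i % 2 === 1 // odd indexes are the captured raw blocks
      ? part
      : part.replace(/{/g, '&#123;').replace(/}/g, '&#125;');
  }).join('');
}
```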
zulip/zulip
595097048
Title: pinning streams: Pinning streams causes traceback Question: username_0: 39577b58ba4f5c10e91f8a4b6f2066f5c7387a96 seems to have introduced an error. To reproduce: * narrow to some unpinned stream * use left sidebar menu to pin that stream * open console and look for tracebacks ![image](https://user-images.githubusercontent.com/142908/78561104-2af2e900-77e5-11ea-9d50-7094cfdd7cee.png) Answers: username_0: This has the same basic reproducer as #14465, but it appears unrelated otherwise. Status: Issue closed
mapstruct/mapstruct
177568229
Title: Fix integration-test execution with jigsawed JDK 9 in toolchain Question: username_0: With the Jigsaw-enabled JDK 9 builds, we need to add `-addmods java.annotations.common` to the compiler args, as otherwise some of the annotations we add to the classes won't be available (e.g. `@Generated`). We might also consider detecting if the annotation is available in the classpath and leave it out / comment it out in the generated classes to make it easier for our users. Answers: username_1: oops. wrong reference.. sorry username_2: Done. Status: Issue closed
caojiangxia/caojiangxia.github.io
420459708
Title: Numpy function notes | caojiangxia Question: username_0: https://username_0.github.io/Numpy/#more
Numpy: As a toolkit, Numpy has seen extremely wide use; it is fair to say that anyone working with Python has to deal with this package. But do you really understand the functions in numpy? Drawing on my own experience, this post summarizes some of them so that they are easy to look up when needed.
Import: For convenience, we usually rename numpy to np when importing it.
```
import numpy as np
```
Version number:
```
np.__version__
```
The version I am currently using is the following:
microsoft/DeepSpeed
798881065
Title: DeepSpeed not using all GPUs available Question: username_0: I followed the DeepSpeed tutorial on DCGAN exactly: https://www.deepspeed.ai/tutorials/gan/
Except applied to StyleGAN2: https://github.com/NVlabs/stylegan2-ada-pytorch
It runs fine with DeepSpeed but only uses 1 of 2 GPUs.
nvidia-smi:
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06 Driver Version: 450.51.06 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... Off | 00000000:00:04.0 Off | 0 |
| N/A 56C P0 286W / 300W | 12846MiB / 16160MiB | 70% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 Tesla V100-SXM2... Off | 00000000:00:05.0 Off | 0 |
| N/A 37C P0 32W / 300W | 3MiB / 16160MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 20748 C python 12843MiB |
+-----------------------------------------------------------------------------+
```
Please advise.
Answers: username_0: My sincere apologies, I was calling 'python train.py' instead of 'deepspeed train.py'.
Status: Issue closed
username_1: Hi, how do you modify the code of stylegan2-ada-pytorch? Is it faster with DeepSpeed?
pi18a-group-dynamics/First-Team
599690746
Title: Implement the "Add recipe" form Question: username_0: 1. Implement input for all fields, with validation. 2. Implement selection of existing categories for the recipe being added. 3. Implement adding the selected ingredients and their measures:
- if the ingredient exists but the measure differs - add it with the new measure;
- if neither this ingredient nor the measure exists - add a new ingredient;
4. Implement deleting an ingredient in case it was added by mistake - deletes the selected entry. 5. Implement photo selection, with a dialog for choosing a photo from the device. 6. Implement free-form input of the preparation steps. 7. Implement saving the recipe.<issue_closed>
Status: Issue closed
OverZealous/gulp-cdnizer
192204746
Title: How to test if a css file is loaded? Question: username_0: It is easy to test if a js file is loaded, e.g.,
```
{
    file: 'bower_components/please-wait/build/please-wait.js',
    package: 'please-wait',
    test: 'window.pleaseWait',
    cdn: '//cdnjs.cloudflare.com/ajax/libs/please-wait/${ version }/please-wait.min.js'
},
```
However, I could not figure out how to test a css file:
```
{
    file: 'bower_components/please-wait/build/please-wait.css',
    package: 'please-wait',
    // test: 'window.pleaseWait', // How to test .css?
    cdn: '//cdnjs.cloudflare.com/ajax/libs/please-wait/${ version }/please-wait.min.css'
}
```
Answers: username_1: Hey there. As it says in the README, there's no built-in functionality for a non-JavaScript fallback. For CSS, especially, this would be difficult to solve in a generic way. However, the `fallbackTest` can be customized per import, [which is described here in the docs](https://github.com/username_1/gulp-cdnizer#optionsfallbacktest). Just write a JS snippet to detect if your CSS is loaded properly, and if not, inject the local copy of the CSS instead.
Status: Issue closed
username_1: Also, for future reference, these sorts of questions belong on the [cdnizer repo](https://github.com/username_1/cdnizer)—this repo is only for the Gulp plugin itself.
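As a concrete illustration of the maintainer's suggestion, a browser-side probe can check whether a stylesheet actually applied. This is only a sketch, not a cdnizer API: the probe class `pg-loading-screen` and the expected `position: fixed` value are assumptions about please-wait's CSS, not verified facts:
```
// Illustrative check: create an element the stylesheet is known to
// style, inspect a computed property, and report whether the CSS
// actually loaded.
function cssLoaded(probeClass, prop, expected) {
  var el = document.createElement('div');
  el.className = probeClass;
  document.body.appendChild(el);
  var ok = window.getComputedStyle(el)[prop] === expected;
  document.body.removeChild(el);
  return ok;
}

// e.g. cssLoaded('pg-loading-screen', 'position', 'fixed')
// (class and property values here are assumptions about please-wait.css)
```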
ChiselsAndBits/Chisels-and-Bits
1015162825
Title: Control menu keybinding, block picking issues, sphere grid reduces a significant amount of FPS while moving. Question: username_0: * MC Version: 1.16.5
* C&B Version: 1.0.11
* Do You have Optifine: Yes - 1.16.5_HD_U_G8
- When changing the original bind for the tool menu to keys like Alt, Ctrl, or Shift, the menu won't pop up; it only works if the keys are ordinary letters or numbers. In 0.3.x any key worked for the menu.
- Picking a chiseled block seems to be impossible now, since trying to pick one with a free hand gives its bit instead.
- The Medium and Full sizes of the sphere grid reduce a good amount of FPS: Medium reduces about 10+ (not much of a problem) and Full reduces between 40 and 80+ [I'm running on a Ryzen 7 2700 3.2ghz, 16gb ram (8 of them allocated to Minecraft) and a Radeon R7 370 Sapphire Nitro 4gb]
Answers: username_1: Interesting, let me cover this a bit more accurately:
Picking a full block: Hold shift to pick the entire block and not just the targeted bit.
The spheres are known; there is sadly not much I can do, it is related to how MC treats the voxelshapes, but I will see what I can do in the future.
As for the keybind: I just use the vanilla keybinding system and check if it is pressed to open... so I'm not sure I can be of help there, but I will check.
Status: Issue closed
username_1: So I checked the menu keybinding issue and I had no problem binding CTRL + G or SHIFT + H to the key and opening the UI. As such, unless you have a better way for me to reproduce this (logs, video recordings, etc.), I can't really help you, and will be closing this issue.
mosrainis/SHBHiking
456280927
Title: Task list Question: username_0: [ ] Develop the back-end of SHB Hiking with the PHP language. It needs to be compatible with WordPress
[ ] Redesign the responsive system
[ ] Fix the links in the page contents and also in the navigation
# Optional :
[ ] Convert the project to the EN version
[ ] New features (like tools that improve the user experience)
Status: Issue closed
Answers: username_0: - [ ] Develop the back-end of SHB Hiking with the PHP language. It needs to be compatible with WordPress
- [ ] Redesign the responsive system
- [ ] Fix the links in the page contents and also in the navigation
### Optional :
- [ ] Convert the project to the EN version
- [ ] New features (like tools that improve the user experience)
cccneto/Ibamam
906456991
Title: Submit a paper to Enanppas 2021 Question: username_0: https://anppas.org.br/x-enanppas-2021/
GT 07 - Challenges and perspectives for the production of socio-environmental data related to global climate change
Instructions: https://even3.blob.core.windows.net/geral/X_Enannpas_2021_ChamadaTrabalhosCompletos.4b8ed7892564411c9efa.pdf
Answers: username_0:
- What are the challenges of using this database?
- Currency
- municipality names, for cross-referencing with geobr and building maps
username_1: **Currency**
- Currencies are only comparable after converting them to a reference currency.
- To perform the conversion we need to follow a methodology that was partially developed in the "`converter()`" function.
- On that note, there are 10 different currency types.
- The currencies whose conversion to the _Real_ is difficult to handle are: UFIR, OTN, BTN, MVR.
- These currencies account for 0.54% of the observations in the database.
telerik/kendo-ui-core
206447568
Title: AutoComplete top group header stays displayed, even when there are no matches. Question: username_0: ### Bug report
The AutoComplete top group header stays displayed even when there are no matches for the text typed into the input.
### Reproduction of the problem
Use the following demo: https://demos.telerik.com/kendo-ui/autocomplete/grouping
Type any sequence of characters that doesn't match any item (e.g. test).
See video: https://www.screencast.com/t/tgV4dsdyEMCc
### Current behavior
The first listed group header in the dropdown element is still displayed.
### Expected/desired behavior
Should display "NO DATA FOUND" only.
### Environment
* **Kendo UI version:** 201x.r.ddd
* **jQuery version:** x.y
* **Browser:** [all | Chrome XX | Firefox XX | IE XX | Safari XX | Mobile Chrome XX | Android X.X Web Browser | iOS XX Safari | iOS XX UIWebView | iOS XX WKWebView ]<issue_closed>
Status: Issue closed
slime-lang/slime
207008572
Title: Ability to interpolate HTML inside text blocks Question: username_0: Currently ``` p | Hello #{link "world", to: "/foo"} ``` will result in an error ``` protocol String.Chars not implemented for {:safe, [60, "a", [[32, "href", 61, 34, "/world", 34]], 62, "Hello", 60, 47, "a", 62]} ``` Answers: username_1: Thanks @username_0! Want to take a shot at implementing this? username_2: @username_0 could you confirm it is fixed with current master? Status: Issue closed username_2: fixed by #130
yiisoft/yii2-debug
185371000
Title: Toolbar responsive but transparent Question: username_0: The background is limited to 40px, which causes unreadable overlaps. Take a look at the screenshot ![responsive-bug](https://cloud.githubusercontent.com/assets/125726/19724845/c8db501e-9b82-11e6-97cf-0b0b6f648029.png)<issue_closed>
Status: Issue closed
jasontaylordev/CleanArchitecture
974255292
Title: getting error "No connection could be made because the target machine actively refused it." Question: username_0: **Describe the bug** I have set up the application as per the steps outlined in [https://github.com/username_3/CleanArchitecture](url). Now while trying to access https://localhost:5001/, getting the socket exception : "No connection could be made because the target machine actively refused it." **To Reproduce** Steps to reproduce the behavior: 1. Set up the application as per the steps outlined in [https://github.com/username_3/CleanArchitecture](url). 2. After step 8, when I am trying to access https://localhost:5001/, getting the socket exception : "No connection could be made because the target machine actively refused it." 3. Have installed cert for iis. 4. Have not configured Docker for Windows, not sure if that's mandatory or not. **Expected behavior** The Project UI should display **Screenshots** If applicable, add screenshots to help explain your problem. ![SocketException](https://user-images.githubusercontent.com/7944380/130007602-62025a6a-6d83-4b21-84e7-117afa1c76f0.PNG) **Additional context** Add any other context about the problem here. Answers: username_1: use npm run start, to start the client username_2: I had the same problem. The issue was the ClientApp was not running. Execute the following commands and then try again. 1. Navigate to src/WebUI/ClientApp and run `npm install` 2. Navigate to src/WebUI/ClientApp and run `npm start `to launch the front end (Angular) Status: Issue closed username_3: It's covered here - https://github.com/username_3/CleanArchitecture#getting-started username_4: This error is a network-related error occurred while establishing a connection to the Server. It means that the error is occurring because there is no server listening at the hostname and port you assigned. It literally means that the machine exists but that it has no services listening on the specified port . So, no connection can be established. Generally, it happens that something is preventing a connection to the port or hostname. Either there is a firewall blocking the connection or the process that is hosting the service is not listening on that specific port. This may be because it is not running at all or because it is listening on a different port. So, [no connection](http://csharp.net-informations.com/communications/connection.htm) can be established. Try running netstat -anb from the command line to see if there's anything listening on the port you were entered. If you get nothing, try changing your port number and see if that works for you. In Windows operating systems, you can use the netstat services via the command line (cmd.exe) . On Linux you may need to do netstat -anp instead. The target machine actively refused it occasionally , it is likely because the server has a full 'backlog' . Regardless of whether you can increase the server backlog , you do need retry logic in your client code, sometimes it cope with this issue; as even with a long backlog the server might be receiving lots of other requests on that port at that time.
puppylinux-woof-CE/puppy_icon_theme
152538559
Title: xlink is not always defined Question: username_0: In some icons (inode-chardevice.svg, spell-check.svg) xmlns:xlink is defined in the \<svg\> tag attributes while in others (internet_connect*.svg, networkboth.svg) is not. Lack of [name space declaration](https://developer.mozilla.org/en/docs/Web/SVG/Namespaces_Crash_Course) may result in failures in apps/environments that adhere to protocols more strictly. Answers: username_1: For simplicity, I will probably remove all `xlink` lines and redo gradients and glyphs affected. Status: Issue closed username_1: Ok, have fixed the script so that if xlink is defined then it is carried over. I was too lazy to draw paths for the glyphs! I'll close this but feel free to re-open if there is still a problem.
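The stricter-environment concern is easy to demonstrate: XML parsers that enforce namespace rules reject an undeclared `xlink:` prefix outright. An illustrative check follows; the `#g` reference is made up, and the "true" result assumes a strict parser such as a browser's `DOMParser`:
```
// An SVG that uses xlink:href without declaring xmlns:xlink.
var bad = '<svg xmlns="http://www.w3.org/2000/svg"><use xlink:href="#g"/></svg>';
var doc = new DOMParser().parseFromString(bad, 'image/svg+xml');
// A namespace error yields a parsererror document instead of usable SVG.
console.log(doc.getElementsByTagName('parsererror').length > 0); // true
```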
pswai/ember-simple-redux
350257932
Title: Assertion Failed: calling set on destroyed object Question: username_0: Sometimes `Assertion Failed: calling set on destroyed object` is thrown in `runUpdater`. That happens when:
- a `set()` is called in the resolution of a promise
- the user navigates away before the promise is resolved
- when the promise resolves, the component is no longer there
We need to check the `isDestroyed` and `isDestroying` properties of the `componentInstance` in `runUpdater`.<issue_closed>
Status: Issue closed
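A minimal sketch of the guard the issue describes (the `runUpdater` shape here is hypothetical; only the `isDestroyed`/`isDestroying` checks come from the issue text):
```
// Skip the update when the Ember component was torn down while an
// async operation (e.g. a promise resolution) was still in flight.
function runUpdater(componentInstance, updater) {
  if (componentInstance.isDestroyed || componentInstance.isDestroying) {
    return; // calling set() now would trigger the assertion
  }
  updater(componentInstance);
}
```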
GautamChibde/android-audio-visualizer
302987231
Title: Cannot Initialize Visualizer Engine Question: username_0: Hi, I followed the Wiki, but I still got this error.
`Caused by: java.lang.RuntimeException: Cannot initialize Visualizer engine, error: -3
at android.media.audiofx.Visualizer.<init>(Visualizer.java:218)`
Answers: username_1: @username_0 Use of the visualizer requires the permission **android.permission.RECORD_AUDIO**, so make sure to add it to your manifest file. If you are using a device with Marshmallow or higher, you need to request the permission at runtime; please follow [this](https://stackoverflow.com/a/34722591/5164673).
username_0: Cool! It works now, many thanks.
Status: Issue closed
maiera/gde-app
50483820
Title: Move repository to GoogleDeveloperExperts org Question: username_0: Hey @maiera,
We created an org to host projects related to the GDE program: https://github.com/GoogleDeveloperExperts
I think it would be a good idea to move this repository to that org as well (your invitation has already been sent).
Answers: username_1: The frontend will be further developed (mostly from scratch) at https://github.com/GoogleDeveloperExperts/gde-app-web
The backend will be moved to https://github.com/GoogleDeveloperExperts/gde-app-backend
Having them in separate repos also makes maintenance much easier than having them in the same repo in different branches.
@username_2 do we want to copy the current frontend source to a branch on gde-app-web, to keep it around for historic reasons and for fixes that might be necessary while the new frontend is in development? Because then we could completely retire this repository.
We still have to go through the issues and copy the ones that are still relevant to the new repo(s).
username_2: I think it's a good idea to have a branch with the current version, but instead of this repo we could use the one connected to the Jenkins deploy job (same code, but structured to be served by GAE).
username_1: @username_2 sure, feel free to add any code that you feel will make maintenance easiest for you. We can probably adjust the Jenkins deploy job to read from this branch then as well.
@username_3 I've moved the current backend source to https://github.com/GoogleDeveloperExperts/gde-app-backend. Can you check if everything is in order there and use this as our main repo from now on?
username_3: Yes, I am happy to report that it looks as if it's all ok!!!
username_2: Created an HTML5 branch with the current prod source, structured to be served by GAE. Updated the prod Jenkins build to use the new repo and branch. Time to copy the issues \o/
username_1: Well, I guess that's done then, thanks for copying all the issues :) I'll leave the honour of closing this issue to you ;)
username_2: Eheh sorry for the git mail spam
Status: Issue closed
Blizzard/node-rdkafka
502221551
Title: unable to install on Mac Mojave Question: username_0: I keep getting the same error, shown below:
../src/producer.cc:453:59: error: no matching member function for call to 'ToObject' v8::Local<v8::Object> header = v8Headers->Get(i)->ToObject(); ~~~~~~~~~~~~~~~~~~~^~~~~~~~ ../src/producer.cc:457:46: error: no matching member function for call to 'GetOwnPropertyNames' v8::Local<v8::Array> props = header->GetOwnPropertyNames(); ~~~~~~~~^~~~~~~~~~~~~~~~~~~ 5 warnings and 2 errors generated. make: *** [Release/obj.target/node-librdkafka/src/producer.o] Error 1 rm 11a9e3388a67e1ca5c31c1d8da49cb6d2714eb41.intermediate gyp ERR! build error gyp ERR! stack Error: `make` failed with exit code: 2 gyp ERR! stack at ChildProcess.onExit (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:262:23) gyp ERR! stack at ChildProcess.emit (events.js:200:13) gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:272:12) gyp ERR! System Darwin 18.7.0 gyp ERR! command "/usr/local/bin/node" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild" gyp ERR! cwd /Users/username_0/flow/dcc/dcc-feed/node_modules/node-rdkafka gyp ERR! node -v v12.3.1 gyp ERR! node-gyp -v v3.8.0 gyp ERR! not ok npm ERR! code ELIFECYCLE npm ERR! errno 1 npm ERR! [email protected] install: `node-gyp rebuild` npm ERR! Exit status 1 npm ERR! npm ERR! Failed at the [email protected] install script. npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
Answers: username_1: @username_0 I had this issue; what version of Node are you running? I had to drop down to v10.14.2. Try that and it should be fine.
username_2: @username_1 Thank you, I had this issue and downgrading Node resolved it. Any plans to fix compatibility with future versions of Node?
username_3: Can you try with the latest version, `v2.7.4`?
username_2: @username_3 looks like it's working OK with the latest version, thanks
username_4: I'm getting this same or a similar error with node-rdkafka v2.7.4 and node v13.1.0. It installs correctly with v12.12.0
```
5 errors generated.
make: *** [Release/obj.target/node-librdkafka/src/binding.o] Error 1
rm 11a9e3388a67e1ca5c31c1d8da49cb6d2714eb41.intermediate
gyp ERR! build error
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack at ChildProcess.onExit (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:194:23)
gyp ERR! stack at ChildProcess.emit (events.js:210:5)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:272:12)
gyp ERR! System Darwin 18.7.0
gyp ERR! command "/usr/local/Cellar/node/13.1.0/bin/node" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd <REDACTED>/node_modules/node-rdkafka
gyp ERR! node -v v13.1.0
gyp ERR! node-gyp -v v5.0.5
gyp ERR! not ok
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] install: `node-gyp rebuild`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR! /Users/<REDACTED>/.npm/_logs/2019-11-20T02_11_11_249Z-debug.log
```
username_5: Same error as `username_4`
username_6: This issue is happening to me with node 12.16.1 too; does anyone know how to fix it without downgrading node? Thanks!
username_3: Node 13 is not currently supported.
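Since node-rdkafka is a native addon compiled against the running Node ABI, checking the runtime before installing can save a failed build. A minimal sketch follows; the "major > 12" bound is an assumption drawn from this thread, not from node-rdkafka's documentation:
```
// Print the Node version and warn when it falls outside the range the
// thread reports as working (v10/v12 era; Node 13 unsupported here).
var major = parseInt(process.versions.node.split('.')[0], 10);
if (major > 12) {
  console.warn('Node ' + process.version + ' may not be supported by node-rdkafka yet.');
}
```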
chenglou/valid-css-props
53348288
Title: SVG Properties? Question: username_0: Are you open or opposed to accepting SVG properties in this project? Answers: username_1: Oh man, how many are there? username_0: This is the most comprehensive list I could find: https://developer.mozilla.org/en-US/docs/Web/SVG/Attribute#Presentation_attributes 59. * alignment-baseline * baseline-shift * clip * clip-path * clip-rule * color * color-interpolation * color-interpolation-filters * color-profile * color-rendering * cursor * direction * display * dominant-baseline * enable-background * fill * fill-opacity * fill-rule * filter * flood-color * flood-opacity * font-family * font-size * font-size-adjust * font-stretch * font-style * font-variant * font-weight * glyph-orientation-horizontal * glyph-orientation-vertical * image-rendering * kerning * letter-spacing * lighting-color * marker-end * marker-mid * marker-start * mask * opacity * overflow * pointer-events * shape-rendering * stop-color * stop-opacity * stroke * stroke-dasharray * stroke-dashoffset * stroke-linecap * stroke-linejoin * stroke-miterlimit * stroke-opacity * stroke-width * text-anchor * text-decoration * text-rendering * unicode-bidi * visibility * word-spacing * writing-mode username_1: Hmm, how's the browser support for lots of these? I guess if it's not too much of a moving spec anymore we can add them? I'm not familiar with CSS SVG support though. username_0: I suspect it’s rather good. I used maybe 70% of them building an SVG/canvas t-shirt designer a couple years ago. Pinging @shepazu. How good is browser support for SVG’s presentational attributes in CSS?
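If these attributes were added, JS-side consumers would likely want the camelCased form used in style objects (an assumption on my part; the thread does not settle naming). A tiny sketch of the conversion over a few of the names listed above:
```
// Convert hyphenated SVG presentation attributes to camelCase,
// e.g. "stroke-width" -> "strokeWidth".
var svgProps = ['alignment-baseline', 'stroke-width', 'fill-opacity'];
var camelCased = svgProps.map(function (p) {
  return p.replace(/-([a-z])/g, function (m, c) { return c.toUpperCase(); });
});
console.log(camelCased); // ["alignmentBaseline", "strokeWidth", "fillOpacity"]
```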
germanotero/Cyrius
568421880
Title: Allow deleting obras sociales (health insurance funds) Question: username_0: 13:30:28 ERROR org.hibernate.event.def.AbstractFlushingEventListener:300 : Could not synchronize database state with session
org.hibernate.exception.ConstraintViolationException: Could not execute JDBC batch update
at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:71)
at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:43)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:202)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:235)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:144)
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:297)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:27)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:985)
at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:333)
at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:106)
at com.framework.persistence.PersistenceService.delete(PersistenceService.java:140)
at com.framework.abstractfactorys.DeleteSelectedTargetDecorator.executeAction(DeleteSelectedTargetDecorator.java:18)
at com.framework.actions.RefreshAction.executeFormAction(RefreshAction.java:67)
at com.framework.actions.AbstractFormAction.executeAction(AbstractFormAction.java:100)
at com.framework.actions.DefaultActionListener$1.run(DefaultActionListener.java:20)
at foxtrot.AbstractWorkerThread$2.run(AbstractWorkerThread.java:49)
at java.security.AccessController.doPrivileged(Native Method)
at foxtrot.AbstractWorkerThread.runTask(AbstractWorkerThread.java:45)
at foxtrot.workers.DefaultWorkerThread.run(DefaultWorkerThread.java:153)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.BatchUpdateException: Batch entry 0 delete from cyrius.OBRASOCIAL where id=64 was aborted. Call getNextException to see the cause.
at org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2512)
at org.postgresql.core.v3.QueryExecutorImpl$1.handleError(QueryExecutorImpl.java:401)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1312)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:349)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeBatch(AbstractJdbc2Statement.java:2574)
at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:58)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:195)
... 17 more
13:30:28 WARN com.framework.exceptions.ExceptionHandler:41 : Error
13:30:28 ERROR org.apache.commons.logging.impl.Log4JLogger:169 : Finalizo: no se encontro la key: Error al intentar borrar en la base de datos, Verifique el estado de la misma
com.framework.persistence.PersistenceException: ERROR I18N: Error al intentar borrar en la base de datos, Verifique el estado de la misma
at com.framework.persistence.PersistenceService.delete(PersistenceService.java:144)
at com.framework.abstractfactorys.DeleteSelectedTargetDecorator.executeAction(DeleteSelectedTargetDecorator.java:18)
at com.framework.actions.RefreshAction.executeFormAction(RefreshAction.java:67)
at com.framework.actions.AbstractFormAction.executeAction(AbstractFormAction.java:100)
at com.framework.actions.DefaultActionListener$1.run(DefaultActionListener.java:20)
at foxtrot.AbstractWorkerThread$2.run(AbstractWorkerThread.java:49)
at java.security.AccessController.doPrivileged(Native Method)
at foxtrot.AbstractWorkerThread.runTask(AbstractWorkerThread.java:45)
at foxtrot.workers.DefaultWorkerThread.run(DefaultWorkerThread.java:153)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.hibernate.exception.ConstraintViolationException: Could not execute JDBC batch update
at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:71)
at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:43)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:202)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:235)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:144)
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:297)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:27)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:985)
at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:333)
at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:106)
at com.framework.persistence.PersistenceService.delete(PersistenceService.java:140)
... 9 more
Caused by: java.sql.BatchUpdateException: Batch entry 0 delete from cyrius.OBRASOCIAL where id=64 was aborted. Call getNextException to see the cause.
at org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2512)
at org.postgresql.core.v3.QueryExecutorImpl$1.handleError(QueryExecutorImpl.java:401)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1312)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:349)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeBatch(AbstractJdbc2Statement.java:2574)
at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:58)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:195)
... 17 more
YapayPagamentos/yapay-magento1
749890185
Title: Error when creating/reordering/editing orders through the admin panel Question: username_0: When we place an order through the admin panel paying by boleto, the order summary block shows the boleto discount, but once the order is placed and the boleto is generated, the discount is not applied. Orders placed through the storefront get the discount applied normally, even on the generated boleto. Only orders placed through the admin panel are missing the discount.
angelozerr/typescript.java
195732077
Title: null pointer when building Question: username_0: I am now using the 1.2.0-snapshot version, but building immediately bombs out (on both of my TypeScript projects):
java.lang.NullPointerException
at ts.utils.VersionHelper.versionCompare(VersionHelper.java:30)
at ts.utils.VersionHelper.canSupport(VersionHelper.java:9)
at ts.client.CommandNames.canSupport(CommandNames.java:78)
at ts.resources.TypeScriptProject.canSupport(TypeScriptProject.java:393)
at ts.eclipse.ide.core.builder.TypeScriptBuilder.incrementalBuild(TypeScriptBuilder.java:133)
at ts.eclipse.ide.core.builder.TypeScriptBuilder.build(TypeScriptBuilder.java:50)
at org.eclipse.core.internal.events.BuildManager$2.run(BuildManager.java:735)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at org.eclipse.core.internal.events.BuildManager.basicBuild(BuildManager.java:206)
at org.eclipse.core.internal.events.BuildManager.basicBuild(BuildManager.java:246)
at org.eclipse.core.internal.events.BuildManager$1.run(BuildManager.java:301)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at org.eclipse.core.internal.events.BuildManager.basicBuild(BuildManager.java:304)
at org.eclipse.core.internal.events.BuildManager.basicBuildLoop(BuildManager.java:360)
at org.eclipse.core.internal.events.BuildManager.build(BuildManager.java:383)
at org.eclipse.core.internal.events.AutoBuildJob.doBuild(AutoBuildJob.java:144)
at org.eclipse.core.internal.events.AutoBuildJob.run(AutoBuildJob.java:235)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
Answers: username_1: I think it's because your TypeScript runtime is not set correctly. Select the correct TypeScript runtime in the preferences: JavaScript / TypeScript / Runtime. I must fix this problem in https://github.com/username_1/typescript.java/issues/121
username_1: https://github.com/username_1/typescript.java/issues/121 should fix this issue.
Status: Issue closed
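The trace points at a missing null check in `VersionHelper.versionCompare` when no runtime version has been resolved yet. As an illustration only (a JS stand-in, since the project itself is Java, and these helper shapes are assumptions), the guard might look like:
```
// Compare dotted version strings numerically, e.g. "1.10" > "1.9".
function versionCompare(a, b) {
  var pa = a.split('.').map(Number);
  var pb = b.split('.').map(Number);
  for (var i = 0; i < Math.max(pa.length, pb.length); i++) {
    var d = (pa[i] || 0) - (pb[i] || 0);
    if (d !== 0) return Math.sign(d);
  }
  return 0;
}

// Treat an unresolved runtime version as "cannot support" instead of
// dereferencing null, which is what produced the NPE above.
function canSupport(minVersion, currentVersion) {
  if (currentVersion == null) return false;
  return versionCompare(currentVersion, minVersion) >= 0;
}
```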
tensorflow/tensorflow
341643890
Title: AttributeError: module 'tensorflow.python.framework.ops' has no attribute '_TensorLike' Question: username_0: Please go to Stack Overflow for help and support: https://stackoverflow.com/questions/tagged/tensorflow If you open a GitHub issue, here is our policy: 1. It must be a bug, a feature request, or a significant problem with documentation (for small docs fixes please send a PR instead). 2. The form below must be filled out. 3. It shouldn't be a TensorBoard issue. Those go [here](https://github.com/tensorflow/tensorboard/issues). **Here's why we have that policy**: TensorFlow developers respond to issues. We want to focus on work that benefits the whole community, e.g., fixing bugs and adding features. Support only helps individuals. GitHub also notifies thousands of people when issues are filed. We want them to see you communicating an interesting problem, rather than being redirected to Stack Overflow. ------------------------ ### System information - **Have I written custom code (as opposed to using a stock example script provided in TensorFlow)**: - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: - **TensorFlow installed from (source or binary)**: - **TensorFlow version (use command below)**: - **Python version**: - **Bazel version (if compiling from source)**: - **GCC/Compiler version (if compiling from source)**: - **CUDA/cuDNN version**: - **GPU model and memory**: - **Exact command to reproduce**: You can collect some of this information using our environment capture script: https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh You can obtain the TensorFlow version with python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)" ### Describe the problem Describe the problem clearly here. Be sure to convey here why it's a bug in TensorFlow or a feature request. ### Source code / logs Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem. Answers: username_1: Nagging Assignee @username_2: It has been 14 days with no activity and this issue has an assignee. Please update the label and/or status accordingly. username_2: This question is better asked on [StackOverflow](http://stackoverflow.com/questions/tagged/tensorflow) since it is not a bug or feature request. There is also a larger community that reads questions there. If you think we've misinterpreted a bug, please comment again with a clear explanation, as well as all of the information requested in the [issue template](https://github.com/tensorflow/tensorflow/issues/new). Thanks! Status: Issue closed
pyqtgraph/pyqtgraph
615299804
Title: Unable to save flowchart in flowchart GUI Question: username_0: ### Short description
<!-- In the following, please describe your issue in detail! -->
When clicking the "save" button to save flowcharts, it throws an exception like this:
```
fileName = unicode(fileName)
NameError: name 'unicode' is not defined
```
and the GUI crashes unexpectedly.
<!-- If some of the sections do not apply, just remove them. -->
<!-- This should summarize the issue. -->
### Code to reproduce
<!-- Please provide a minimal working example that reproduces the issue in the code block below. Ideally, this should be a full example someone else could run without additional setup. -->
The code is exactly the "flowchart" example from pyqtgraph.examples.
```python
# -*- coding: utf-8 -*-
"""
This example demonstrates a very basic use of flowcharts: filter data,
displaying both the input and output of the filter. The behavior of
the filter can be reprogrammed by the user.
Basic steps are:
- create a flowchart and two plots
- input noisy data to the flowchart
- flowchart connects data to the first plot, where it is displayed
- add a gaussian filter to lowpass the data, then display it in the second plot.
"""
import initExample ## Add path to library (just for examples; you do not need this)
from pyqtgraph.flowchart import Flowchart
from pyqtgraph.Qt import QtGui, QtCore
import pyqtgraph as pg
import numpy as np
import pyqtgraph.metaarray as metaarray
app = QtGui.QApplication([])
## Create main window with grid layout
win = QtGui.QMainWindow()
win.setWindowTitle('pyqtgraph example: Flowchart')
cw = QtGui.QWidget()
win.setCentralWidget(cw)
layout = QtGui.QGridLayout()
cw.setLayout(layout)
## Create flowchart, define input/output terminals
fc = Flowchart(terminals={
    'dataIn': {'io': 'in'},
    'dataOut': {'io': 'out'}
})
w = fc.widget()
## Add flowchart control panel to the main window
layout.addWidget(fc.widget(), 0, 0, 2, 1)
[Truncated]
/home/hzy/anaconda3/envs/femcalc/lib/python3.6/site-packages/pyqtgraph/functions.py:1276: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
filtered = filtered[sl]
Traceback (most recent call last):
File "/home/hzy/anaconda3/envs/femcalc/lib/python3.6/site-packages/pyqtgraph/flowchart/Flowchart.py", line 538, in saveFile
fileName = unicode(fileName)
NameError: name 'unicode' is not defined
Fatal Python error: Aborted
Current thread 0x00007f517801c700 (most recent call first):
File "<stdin>", line 84 in <module>
```
### Tested environment(s)
* PyQtGraph version: '0.10.0'<!-- output of pyqtgraph.__version__ -->
* Qt Python binding: 'PyQt5 5.12.3 Qt 5.12.5'<!-- output of pyqtgraph.Qt.VERSION_INFO -->
* Python version: Python 3.6.7
* NumPy version: 1.17.5 <!-- output of numpy.__version__ -->
* Operating system: deepin linux 15.11 (acts like ubuntu 16.04)
* Installation method: pip install pyqtgraph <!-- e.g. pip, conda, system packages, ... -->
Status: Issue closed
Answers: username_1: Thanks for the detailed report. This is a duplicate of #512 and was fixed with #886. You can get the 0.11.0 release candidate with `pip install pyqtgraph==0.11.0rc0`. You could also install the latest with `pip install git+https://github.com/pyqtgraph/pyqtgraph@develop`.
EOL/tramea
146735722
Title: metadata missing from TB records Question: username_0: I think one of these may be a metadata field we haven't hit before. The other I have no theory for. They're in http://eol.org/content_partners/709/resources/804
All records (eg: http://eol.org/pages/23284/data#data_point_9529372) should have both the following fields:
http://rs.tdwg.org/dwc/terms/measurementDeterminedBy is a field we may not have harvested yet in New TB
http://purl.org/dc/terms/source we certainly have in other resources, eg: http://eol.org/pages/1179469/data#data_point_847556
Answers: username_1: Huh. For that data point you reference, there are only the following predicates associated with it:
http://www.w3.org/1999/02/22-rdf-syntax-ns#type = http://rs.tdwg.org/dwc/terms/MeasurementOrFact
http://www.w3.org/1999/02/22-rdf-syntax-ns#type = http://eol.org/schema/trait
http://rs.tdwg.org/dwc/terms/measurementType = http://www.owl-ontologies.com/unnamed.owl#Colonial
http://rs.tdwg.org/dwc/terms/measurementValue = http://eol.org/schema/terms/yes
http://rs.tdwg.org/dwc/terms/measurementValue = http://eol.org/schema/terms/yes
http://rs.tdwg.org/dwc/terms/occurrenceID = http://eol.org/resources/804/occurrences/zoopilus_echinatus dana,_1846
http://eol.org/schema/measurementOfTaxon = http://eol.org/schema/terms/true
http://rs.tdwg.org/dwc/terms/scientificName = Zoopilus echinatus Dana, 1846
http://eol.org/schema/terms/resource = http://eol.org/resources/804
...So it wasn't harvested. I'll check the original source data...
username_1: BTW, developers can now get that information for any data point by doing something like:
```ruby
trait = DataPointUri.find 9529372
trait.show_metadata
```
username_1: I can't check the original data... it's not on-disk and there's no pointer to it in the resource. Looks like it's fallen into the void. ...I can't add the data without that... :S Sorry, this has become a dead end. ...What do you want me to do from here?
username_0: Oops, here is the [original spreadsheet](https://www.dropbox.com/s/290g7j3f3hmtee1/scleractinia%20lifestye.xlsx?dl=0) @jrice, did you just want to look, or do you need it as a DwC-A? I can send it to Eli if that is needed.
username_1: Sorry, I do need to see the DwC-A. :S ...That could have been where it went AWOL, and if not, I would hate to miss it...
username_0: Note that this would be a newly created DwC-A. This resource was originally uploaded as a spreadsheet, back when that was possible. Does that mean we really don't have the information you need?
username_1: At the moment, we *really* don't have the information for that field you're looking for. If it's in TraitBank, it's stored in a way that the code can't get to it... unless I've missed something (which is possible). Are you sure we could see that metadata before the old code died? If so, I can pull up an old version of the code and try and figure it out... But if not, I suspect it was never actually there... :S
username_0: You mean we don't have the field at all, I expect, not just that it's missing in this resource? http://rs.tdwg.org/dwc/terms/measurementDeterminedBy, I presume. No problem; I'll put that info in another field. http://purl.org/dc/terms/source is a field we use frequently, definitely works in other resources, and was certainly showing in this one when first published. Shall I get a DwC-A from Eli and try a reharvest? If at least that field shows up on the next try, then we don't have a problem...
username_1: That's not actually what I meant, sorry. A few things could have happened:
1. The data might have been missing from the original file.
2. The http://rs.tdwg.org/dwc/terms/measurementDeterminedBy field was encoded in a way that wouldn't have been read during harvesting.
3. Something weird about harvesting causes http://rs.tdwg.org/dwc/terms/measurementDeterminedBy to not be harvested. ...This would be weird.
4. Something weird about harvesting causes http://rs.tdwg.org/dwc/terms/measurementDeterminedBy to be *stored* in a weird way such that my code can't see it in the old traitbank, and thus can't port it to the new one.
5. Something else which I couldn't predict (but I think this is actually unlikely, there's not much else that could go wrong, IMO).
All I meant to say is that I don't have access to the data right now. At all. It's either gone or was never there. Please *do* use http://rs.tdwg.org/dwc/terms/measurementDeterminedBy ...otherwise I won't be able to determine which of those went wrong! :)
username_0: OK, the new [DwC-A](https://dl.dropboxusercontent.com/u/7597512/spreadsheets/resources/JenH/scleractinia_lifestye.tar.gz) is ready, listed in the harvest queue with an annotation to check on this.
username_1: Okay. I see the file, thanks. I see the field, which is good, and I see the data (which all appear to be set to "<NAME>"). I don't really see anything wrong here. So, now I'm concerned that this wasn't the file that was actually processed when we harvested this resource. ...Could we re-harvest it using this file?
username_0: I'm in favor of that. At best, this is "a DwC-A created from the original file", but I'm only assuming the spreadsheet is the original file because I found it in my TB resource files folder. But if we harvest this one and it works, then we don't have a problem, and that works for me...
username_0: OK, this was reharvested, but alas, source and determinedBy metadata are still missing, eg: http://eol.org/pages/23255/data#data_point_44884713
username_1: Alas, I can see in the harvest log that it had trouble reaching Virtuoso, so a lot of data was lost. We'll have to re-harvest. This is no good. Tramea harvesting should notice these kinds of errors and STOP to report them!
Status: Issue closed
username_0: Both fields now showing!
haskell/haskell-language-server
842684393
Title: File prefixes cannot be matched if files are located in mapped drives over SMB on Windows Question: username_0: Therefore, is it possible to replace all `canonicalizePath` function calls with `makeAbsolute`, given that the `makeAbsolute` function is recommended and is interchangeable with `canonicalizePath`?
---
### Your environment
Output of `haskell-language-server --probe-tools` or `haskell-language-server-wrapper --probe-tools`:
```sh
haskell-language-server version: 172.16.58.3 (GHC: 8.10.4) (PATH: haskell-language-server-8.10.4.exe) (GIT hash: 4cd1cf934638881e52b3eba9f70157a4b799c0e9)
Tool versions found on the $PATH
cabal: 3.4.0.0
stack: Not found
ghc: 8.10.4
```
Which lsp-client do you use: Sublime Text (Dev Channel, Build 4099)
Describe your project (alternative: link to the project): Private project
### Steps to reproduce
1. Open source code on a mapped drive over SMB in the text editor
2. Let the language server client send requests to the language server
### Expected behaviour
The file path in the response from the language server is still the mapped path, so the file prefix can be matched.
### Actual behaviour
The file path in the response from the language server is resolved to the full network location path, and the file prefix cannot be matched.
### Include debug information
<details>
<summary>
Debug output:
(**Note**: As the project is private, all paths have been replaced with appropriate paths in the example. I have checked the output line by line, but it may still have some inconsistencies; apologies for that.)
</summary>
```
Module "H:\Path\to\Project\a" is loaded by Cradle: Cradle {cradleRootDir = "H:\\Path\\to\\Project\\", cradleOptsProg = CradleAction: Cabal}
Run entered for haskell-language-server-wrapper(haskell-language-server-wrapper.exe) Version 1.0.0.0, Git revision 4cd1cf934638881e52b3eba9f70157a4b799c0e9 (dirty) x86_64 ghc-8.10.4
Current directory: H:\Path\to\Project
Operating system: mingw32
Arguments: ["--debug","."]
Cradle directory: H:\Path\to\Project
Cradle type: Cabal
Tool versions found on the $PATH
cabal: 3.4.0.0
stack: Not found
ghc: 8.10.4
Consulting the cradle to get project GHC version...
Project GHC version: 8.10.4
haskell-language-server exe candidates: ["haskell-language-server-8.10.4.exe","haskell-language-server-8.10.exe","haskell-language-server.exe"]
Launching haskell-language-server exe at:***\haskell-language-server-8.10.4.exe
haskell-language-server version: 172.16.58.3 (GHC: 8.10.4) (PATH: ***\haskell-language-server-8.10.4.exe) (GIT hash: 4cd1cf934638881e52b3eba9f70157a4b799c0e9)
ghcide setup tester in H:\Path\to\Project.
Report bugs at https://github.com/haskell/haskell-language-server/issues
Step 1/4: Finding files to test in H:\Path\to\Project
[Truncated]
Completed (0 files worked, 8 files failed)
haskell-language-server-wrapper.exe: callProcess: ***\haskell-language-server-8.10.4.exe "--debug" "." (exit 8): failed
```
</details>
<details>
<summary>
LSP logs:
</summary>
```
haskell-language-server: haskell-lsp:incoming message parse error.
{"id":0,"result":null,"jsonrpc":"2.0"}
Error in $.result: parsing () failed, expected Array, but encountered Null
haskell-language-server: haskell-lsp:incoming message parse error.
{"id":1,"result":null,"jsonrpc":"2.0"}
Error in $.result: parsing () failed, expected Array, but encountered Null
haskell-language-server: haskell-lsp:incoming message parse error.
{"id":0,"result":null,"jsonrpc":"2.0"}
Error in $.result: parsing () failed, expected Array, but encountered Null
haskell-language-server: haskell-lsp:incoming message parse error.
{"id":1,"result":null,"jsonrpc":"2.0"}
Error in $.result: parsing () failed, expected Array, but encountered Null
```
</details>
derUli/ulicms
293261070
Title: Append Bootstrap classes to all form elements Question: username_0: * The Bootstrap class "form-control" should be appended to all form elements. As a stopgap this can be done globally via jQuery, until all HTML tags have actually been updated.
Answers: username_0: still exclude the following input types:
* button
* submit
* reset
username_0: * image
Status: Issue closed
username_0: done in ulicms-2018.2
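A sketch of the interim jQuery approach described above, with the input types from the follow-up comments excluded (the selector details are my own; the issue only names the class and the exclusions):
```
// Tag every form element with Bootstrap's "form-control" class,
// skipping the button-like input types listed above.
$(function () {
  $('input, select, textarea')
    .not('[type=button], [type=submit], [type=reset], [type=image]')
    .addClass('form-control');
});
```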
openshift/installer
471416606
Title: Cluster creation fails on Azure using Mac installer Question: username_0: time="2019-07-22T15:59:55-07:00" level=debug msg="Still waiting for the cluster to initialize: Working towards 4.2.0-0.nightly-2019-06-03-135056: 82% complete" time="2019-07-22T16:00:16-07:00" level=debug msg="Still waiting for the cluster to initialize: Working towards 4.2.0-0.nightly-2019-06-03-135056" time="2019-07-22T16:00:16-07:00" level=debug msg="Still waiting for the cluster to initialize: Working towards 4.2.0-0.nightly-2019-06-03-135056: downloading update" time="2019-07-22T16:00:16-07:00" level=debug msg="Still waiting for the cluster to initialize: Working towards 4.2.0-0.nightly-2019-06-03-135056" time="2019-07-22T16:00:16-07:00" level=debug msg="Still waiting for the cluster to initialize: Working towards 4.2.0-0.nightly-2019-06-03-135056: 23% complete" time="2019-07-22T16:00:16-07:00" level=debug msg="Still waiting for the cluster to initialize: Working towards 4.2.0-0.nightly-2019-06-03-135056: 24% complete" time="2019-07-22T16:00:31-07:00" level=debug msg="Still waiting for the cluster to initialize: Working towards 4.2.0-0.nightly-2019-06-03-135056: 78% complete" time="2019-07-22T16:00:46-07:00" level=debug msg="Still waiting for the cluster to initialize: Working towards 4.2.0-0.nightly-2019-06-03-135056: 90% complete" time="2019-07-22T16:01:02-07:00" level=debug msg="Still waiting for the cluster to initialize: Working towards 4.2.0-0.nightly-2019-06-03-135056: 91% complete" time="2019-07-22T16:01:31-07:00" level=debug msg="Still waiting for the cluster to initialize: Working towards 4.2.0-0.nightly-2019-06-03-135056: 92% complete" time="2019-07-22T16:03:46-07:00" level=debug msg="Still waiting for the cluster to initialize: Multiple errors are preventing progress:\n* Could not update servicemonitor \"openshift-apiserver-operator/openshift-apiserver-operator\" (369 of 373): the server does not recognize this resource, check extension API servers\n* Could not update servicemonitor \"openshift-authentication-operator/authentication-operator\" (344 of 373): the server does not recognize this resource, check extension API servers\n* Could not update servicemonitor \"openshift-controller-manager-operator/openshift-controller-manager-operator\" (372 of 373): the server does not recognize this resource, check extension API servers\n* Could not update servicemonitor \"openshift-image-registry/image-registry\" (350 of 373): the server does not recognize this resource, check extension API servers\n* Could not update servicemonitor \"openshift-kube-apiserver-operator/kube-apiserver-operator\" (360 of 373): the server does not recognize this resource, check extension API servers\n* Could not update servicemonitor \"openshift-kube-controller-manager-operator/kube-controller-manager-operator\" (363 of 373): the server does not recognize this resource, check extension API servers\n* Could not update servicemonitor \"openshift-kube-scheduler-operator/kube-scheduler-operator\" (366 of 373): the server does not recognize this resource, check extension API servers\n* Could not update servicemonitor \"openshift-operator-lifecycle-manager/olm-operator\" (272 of 373): the server does not recognize this resource, check extension API servers\n* Could not update servicemonitor \"openshift-service-catalog-apiserver-operator/openshift-service-catalog-apiserver-operator\" (353 of 373): the server does not recognize this resource, check extension API servers\n* Could not update servicemonitor 
\"openshift-service-catalog-controller-manager-operator/openshift-service-catalog-controller-manager-operator\" (356 of 373): the server does not recognize this resource, check extension API servers" time="2019-07-22T16:10:46-07:00" level=debug msg="Still waiting for the cluster to initialize: Working towards 4.2.0-0.nightly-2019-06-03-135056: 93% complete" time="2019-07-22T16:11:16-07:00" level=debug msg="Still waiting for the cluster to initialize: Working towards 4.2.0-0.nightly-2019-06-03-135056: 94% complete" time="2019-07-22T16:12:16-07:00" level=debug msg="Still waiting for the cluster to initialize: Multiple errors are preventing progress:\n* Could not update servicemonitor \"openshift-apiserver-operator/openshift-apiserver-operator\" (369 of 373): the server does not recognize this resource, check extension API servers\n* Could not update servicemonitor \"openshift-authentication-operator/authentication-operator\" (344 of 373): the server does not recognize this resource, check extension API servers\n* Could not update servicemonitor \"openshift-controller-manager-operator/openshift-controller-manager-operator\" (372 of 373): the server does not recognize this resource, check extension API servers\n* Could not update servicemonitor \"openshift-image-registry/image-registry\" (350 of 373): the server does not recognize this resource, check extension API servers\n* Could not update servicemonitor \"openshift-kube-apiserver-operator/kube-apiserver-operator\" (360 of 373): the server does not recognize this resource, check extension API servers\n* Could not update servicemonitor \"openshift-kube-controller-manager-operator/kube-controller-manager-operator\" (363 of 373): the server does not recognize this resource, check extension API servers\n* Could not update servicemonitor \"openshift-kube-scheduler-operator/kube-scheduler-operator\" (366 of 373): the server does not recognize this resource, check extension API servers\n* Could not update servicemonitor \"openshift-operator-lifecycle-manager/olm-operator\" (272 of 373): the server does not recognize this resource, check extension API servers\n* Could not update servicemonitor \"openshift-service-catalog-apiserver-operator/openshift-service-catalog-apiserver-operator\" (353 of 373): the server does not recognize this resource, check extension API servers\n* Could not update servicemonitor \"openshift-service-catalog-controller-manager-operator/openshift-service-catalog-controller-manager-operator\" (356 of 373): the server does not recognize this resource, check extension API servers" time="2019-07-22T16:31:21-07:00" level=fatal msg="failed to initialize the cluster: Multiple errors are preventing progress:\n* Could not update servicemonitor \"openshift-apiserver-operator/openshift-apiserver-operator\" (369 of 373): the server does not recognize this resource, check extension API servers\n* Could not update servicemonitor \"openshift-authentication-operator/authentication-operator\" (344 of 373): the server does not recognize this resource, check extension API servers\n* Could not update servicemonitor \"openshift-controller-manager-operator/openshift-controller-manager-operator\" (372 of 373): the server does not recognize this resource, check extension API servers\n* Could not update servicemonitor \"openshift-image-registry/image-registry\" (350 of 373): the server does not recognize this resource, check extension API servers\n* Could not update servicemonitor \"openshift-kube-apiserver-operator/kube-apiserver-operator\" (360 of 
373): the server does not recognize this resource, check extension API servers\n* Could not update servicemonitor \"openshift-kube-controller-manager-operator/kube-controller-manager-operator\" (363 of 373): the server does not recognize this resource, check extension API servers\n* Could not update servicemonitor \"openshift-kube-scheduler-operator/kube-scheduler-operator\" (366 of 373): the server does not recognize this resource, check extension API servers\n* Could not update servicemonitor \"openshift-operator-lifecycle-manager/olm-operator\" (272 of 373): the server does not recognize this resource, check extension API servers\n* Could not update servicemonitor \"openshift-service-catalog-apiserver-operator/openshift-service-catalog-apiserver-operator\" (353 of 373): the server does not recognize this resource, check extension API servers\n* Could not update servicemonitor \"openshift-service-catalog-controller-manager-operator/openshift-service-catalog-controller-manager-operator\" (356 of 373): the server does not recognize this resource, check extension API servers: timed out waiting for the condition" # What you expected to happen? OpenShift 4.x cluster should be successfully created on Azure. <img width="1189" alt="image" src="https://user-images.githubusercontent.com/12838766/61673254-8d31ab00-aca3-11e9-8176-36aed289e0a8.png"> <img width="690" alt="image" src="https://user-images.githubusercontent.com/12838766/61673295-b6ead200-aca3-11e9-992e-56804d601149.png"> <img width="1423" alt="image" src="https://user-images.githubusercontent.com/12838766/61673318-d550cd80-aca3-11e9-92bb-18b593e2e581.png"> # How to reproduce it (as minimally and precisely as possible)? <!-- Please list the full steps required to reproduce the issue. --> Download installer from https://cloud.redhat.com/openshift and run the following on Mac: ```console az login -t  <TENANT_ID> az account set --subscription <SUB_ID> export OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE="/resourceGroups/rhcos_images/providers/Microsoft.Compute/images/rhcos" export CLUSTERNAME=<NAME> ./openshift-install create install-config --dir ~/.tmp/$CLUSTERNAME --log-level debug ./openshift-install create manifests --dir ~/.tmp/$CLUSTERNAME --log-level debug ./openshift-install create ignition-configs --dir ~/.tmp/$CLUSTERNAME --log-level debug ./openshift-install create cluster --dir ~/.tmp/$CLUSTERNAME --log-level debug ``` # Anything else we need to know? Enter text here. # References <!-- Are there any other GitHub issues (open or closed) or Pull Requests that should be linked here? For example: - #6017 --> - enter text here. Answers: username_1: /label platform/azure 1) Can you make sure the machines have actually being created. 2) Make sure you have approved the CSRs for the compute nodes joining. The tech preview installation had errors where the compute nodes don't join the cluster because the CSRs were not approved. See the note in https://github.com/openshift/installer/blob/master/docs/user/azure/install.md#create-cluster 3) you can try out the installer from the latest code by following the instructions here https://origin-release.svc.ci.openshift.org/releasestream/4.2.0-0.okd/release/4.2.0-0.okd-2019-07-22-195548 to download the installer binary. username_0: DEBUG Symlinking plugin terraform-provider-local src: "/Users/anubhuti/go/src/github.com/openshift/installer/bin/openshift-install" dst: "/var/folders/lp/pylwgl_x6gs16b61llvjndc40000gn/T/openshift-install-641839823/plugins/terraform-provider-local" DEBUG Initializing modules... 
DEBUG - bootstrap in DEBUG - dns in DEBUG - master in DEBUG - vnet in ERROR ERROR Error: Unreadable module directory ERROR ERROR Unable to evaluate directory symlink: lstat ../var: no such file or directory ERROR ERROR ERROR Error: Failed to read module directory ERROR ERROR Module directory does not exist or cannot be read. ERROR ERROR ERROR Error: Unreadable module directory ERROR ERROR Unable to evaluate directory symlink: lstat ../var: no such file or directory ERROR ERROR ERROR Error: Failed to read module directory ERROR ERROR Module directory does not exist or cannot be read. ERROR ERROR ERROR Error: Unreadable module directory ERROR ERROR Unable to evaluate directory symlink: lstat ../var: no such file or directory ERROR ERROR ERROR Error: Failed to read module directory ERROR ERROR Module directory does not exist or cannot be read. ERROR ERROR ERROR Error: Unreadable module directory ERROR ERROR Unable to evaluate directory symlink: lstat ../var: no such file or directory ERROR ERROR ERROR Error: Failed to read module directory ERROR ERROR Module directory does not exist or cannot be read. ERROR ERROR ERROR Error: Unreadable module directory ERROR ERROR Unable to evaluate directory symlink: lstat ../var: no such file or directory ERROR ERROR ERROR Error: Failed to read module directory ERROR ERROR Module directory does not exist or cannot be read. ERROR ERROR ERROR Error: Unreadable module directory ERROR ERROR Unable to evaluate directory symlink: lstat ../var: no such file or directory [Truncated] ERROR Error: Unreadable module directory ERROR ERROR Unable to evaluate directory symlink: lstat ../var: no such file or directory ERROR ERROR ERROR Error: Failed to read module directory ERROR ERROR Module directory does not exist or cannot be read. ERROR ERROR ERROR Error: Unreadable module directory ERROR ERROR Unable to evaluate directory symlink: lstat ../var: no such file or directory ERROR ERROR ERROR Error: Failed to read module directory ERROR ERROR Module directory does not exist or cannot be read. ERROR FATAL failed to fetch Cluster: failed to generate asset "Cluster": failed to create cluster: failed to initialize Terraform username_2: User newest installer from cloud.redhat.com and got csr issue (as expected): ``` [mjudeiki@redhat openshift-azure]$ oc get csr NAME AGE REQUESTOR CONDITION csr-2nd84 28m system:node:mj-test-8t745-master-1 Approved,Issued csr-6lpcv 7m26s system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-7z5tw 28m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-89hxr 28m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-b4gkl 10m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-bbkvx 19m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-cbglv 28m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-f6z6j 2m23s system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-fw6rl 28m system:node:mj-test-8t745-master-2 Approved,Issued csr-ngsf5 28m system:node:mj-test-8t745-master-0 Approved,Issued csr-qsv5w 14m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ``` after approval, I got fully working cluster. I suspect this is somehow related to where the installer is running username_3: I cannot reproduce this on my Mac. 
I tried the latest 4.2 https://openshift-release.svc.ci.openshift.org/releasestream/4.2.0-0.ci/release/4.2.0-0.ci-2019-07-22-175905 and I also regularly build the installer from master and install from that and haven't seen this. username_2: golang version? username_1: Cannot reproduce this on our end. Are you still seeing this error? username_1: we can't reproduce this error on our end.. /close this might be a duplicate of https://github.com/openshift/installer/issues/1839
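As a footnote to the CSR discussion above: pending compute-node CSRs can be listed and approved in bulk with standard `oc` commands, for example:

```console
$ oc get csr
$ oc get csr -o name | xargs oc adm certificate approve
```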
Ariel-Thomas/adventurers-league-log
185710620
Title: Export/Import Data (Enhancement)
Question:
username_0: It looks like this was mentioned here, but I haven't found anything else: https://github.com/username_2/adventurers-league-log/issues/6

It would be nice to be able to export the log data for a character and also to be able to import that data. I have several characters on my account that I would like to move to a new account. This would prevent me from having to re-enter all the information.
Answers:
username_1: I was just looking for this sort of feature too. Would love to see it.
username_0: Is there any chance of this happening? I know that it's an enhancement, but it would be great to get a sense of whether you are even considering this feature. Thanks.
username_2: Oh, sorry. Yes, it's actually one of the highest in my queue. My hope is I'll be back to working on AdventurersLeagueLog this weekend.
username_0: Great. Thanks for the confirmation.
username_3: Even without the import functionality, just an export to a csv file would be awesome for backup purposes.
username_0: The only thing is that I am trying to move a bunch of characters to two other accounts.
username_2: Yeah, the csv export is my first focus of work.
username_2: Did an initial thing in a CSV. Let me know if there's something I've glaringly omitted.
username_3: Looks nice so far! Would also be nice to be able to export the DM rewards page, but I assume that'll come eventually.
username_2: Probably going to restructure this tonight. Now that I've done an initial pass I've got a few ideas.
username_4: Is there any import mechanism currently? I can get the data out of my PDFs but I don't see a way to get it into adventurersleaguelog.
username_4: If there were a way to auth a program to the site, it would be easy to push content in with an HTTP POST script on the user side.
username_5: Is this still being worked on? Trying to move characters to a second account, for a splinter group that doesn't use the season 8 rules but does allow a one-time import of AL characters... so I can keep them separate.
spring-projects/spring-session
747833102
Title: SessionRepositoryFilter.SessionRepositoryRequestWrapper.commitSession() triggers extraneous call to SessionRepository.findById(...)
Question:
username_0: **Describe the bug**
For a valid session, SessionRepositoryFilter.SessionRepositoryRequestWrapper.commitSession() does the following:

1. Clears the cached values for requestedSessionCache, requestedSession, and requestedSessionId.
1. Saves the state of the session.
1. If the requested session ID is not valid, or if the ID of the session changed from the requested session ID, then the session ID is sent to the client.

However, since step 1 already cleared the cached requested-session values, step 3 requires a completely redundant call to SessionRepository.findById(...). Depending on the implementation, this can be unnecessarily expensive.

**To Reproduce**
Use a tracer to detect calls to SessionRepository.findById(...). For a requested session, this is triggered twice: once at the beginning of the request, and again after the session is saved.

**Expected behavior**
SessionRepository.findById(...) should be considered a potentially expensive operation, and only triggered when necessary.
Answers:
username_1: Hi all, can I know in which release the above fix will be available, as I am facing the same issue mentioned above?
username_2: Spring Session works great for session management, but we noticed a lot of slowness because of this known issue. This is a very urgent issue and a blocker. Is there any official **release date** for this?
username_3: This is a significant performance impact for Spring apps that are using session management. Any chance we will see this fix pushed to the master branch in the next week or so? Thanks in advance.
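One way to sidestep the double lookup while waiting for a fix is to memoize findById() for the duration of a single request. The sketch below is purely illustrative: it is not Spring Session's actual API, and the class and variable names are my own.

```java
// Hypothetical per-request memoizing wrapper around a session lookup.
// Note: computeIfAbsent() will not cache a null result, so a miss is
// still re-queried; a real implementation would cache misses too.
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

class MemoizingSessionLookup<S> {
    private final Map<String, S> cache = new HashMap<>();
    private final Function<String, S> backingFindById;

    MemoizingSessionLookup(Function<String, S> backingFindById) {
        this.backingFindById = backingFindById;
    }

    S findById(String id) {
        // At most one call to the backing store per id per request.
        return cache.computeIfAbsent(id, backingFindById);
    }
}
```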
MichaelDysart/SYSC4806_Project
592793827
Title: Weekly scrum - [April 2, 2020]
Question:
username_0: This week, I:
Added a summary for option type by adding a pie chart #67
Updated documentation and UML #87
Answers:
username_1: This week, I:
Added a summary for number type #66
Added client side tests for fetch requests
Fixed a bug in summary #81
username_0: This week, I:
Added a summary for option type by adding a pie chart #67
Updated documentation and UML #87
username_2: For this week I:
Added the summary option for open ended question type #65
username_3: This week, I:
Separated the view for create and retrieve survey #57
Created the Help Page #56
username_4: This week, I:
* Added generation of links for surveys #75
* Added support for retrieving surveys by link on the backend, and frontend display for them #79
* Gave assistance for splitting the views #57
samvera-labs/hyrax-batch_ingest
359999208
Title: Collect User Stories Question: username_0: ### Done looks like * Shared document enumerating Avalon _and_ AMS user stories. Answers: username_1: @username_2 @davidschober @sroosa Has this work already been done? If not could you three please point this and refine it if necessary? Status: Issue closed
fastai/fastai
391630443
Title: "Expected more than 1 value per channel when training". Question: username_0: Sorry, I can't find where to create a new topic on forum.... So I ask here.. I use the senet154 from "fastai/old/fastai/models/senet.py" on fastai '1.0.22'. data = ImageDataBunch.create(train_ds, val_ds, test_ds=test_ds, path=path, bs=bs, tfms=(tfms, []), num_workers=8, size=512).normalize(kk) learn = create_cnn( data, senet154, ps=0.5, cut=-2, path=path, metrics=[acc] ) learn.model = nn.DataParallel(learn.model) learn.callback_fns.append(partial(SaveModel, every='improvement', monitor='val_loss')) learn.fit_one_cycle(5, lrs) After 1 epoch end, it gives: "Expected more than 1 value per channel when training". And if I use size=128 or 256 of ImageDataBunch.create, everything is fine. Answers: username_1: That's because you didn't use `drop_last=True` for your training dataloader (now default in fastai) and ran in a batch of size 1, which causes error with BatchNorm during training (that's not us, it's on pytorch). You should use that option (or update your library). Status: Issue closed
sharedflight/issues-tracker
1121284707
Title: Crash from expired JWT token when XP left in settings for a long time Question: username_0: h/t PoutineFlyer for leaving his sim in settings for so long while eating a big snack .... ``` 2022-02-01 17:53:34 [SharedFlight][SFFlightClientController.cpp:866]: [WARNING] Failed to fetch active flights due to expired JWT token, attempting to renew login credentials. 2022-02-01 17:53:34 [SharedFlight][SFFlightClientController.cpp:1194]: [WARNING] Failed to fetch notifications. Error code 401: Expired JWT Token 2022-02-01 17:53:34 [SharedFlight][except.c:274]: Caught unknown exception 20474343 Backtrace is: 0 00007FFE27D64F69 C:\Windows\System32\KERNELBASE.dll+0000000000034F69 () 1 0000000081376101 C:\Program Files (x86)\Steam\steamapps\common\X-Plane 11\Resources\plugins\SharedFlight\win_x64\SharedFlight.xpl+0000000000676101 (_Unwind_RaiseException+51) 2 000000EEDE93EBB0 C:\Program Files (x86)\Steam\steamapps\common\X-Plane 11\X-Plane.exe+000000EEDE93EBB0 () 3 00000000814C2745 C:\Program Files (x86)\Steam\steamapps\common\X-Plane 11\Resources\plugins\SharedFlight\win_x64\SharedFlight.xpl+00000000007C2745 (_ZNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEE12_Alloc_hiderD1Ev+15) 4 000000EEDE93EC70 C:\Program Files (x86)\Steam\steamapps\common\X-Plane 11\X-Plane.exe+000000EEDE93EC70 () 5 0000000000000038 C:\Program Files (x86)\Steam\steamapps\common\X-Plane 11\X-Plane.exe+0000000000000038 () 6 000002A8CEA5DED0 C:\Program Files (x86)\Steam\steamapps\common\X-Plane 11\X-Plane.exe+000002A8CEA5DED0 () 7 000000008153C408 C:\Program Files (x86)\Steam\steamapps\common\X-Plane 11\Resources\plugins\SharedFlight\win_x64\SharedFlight.xpl+000000000083C408 (__cxa_throw+68) 8 000002A8CEA5DE90 C:\Program Files (x86)\Steam\steamapps\common\X-Plane 11\X-Plane.exe+000002A8CEA5DE90 () 9 00000000814C5AF1 C:\Program Files (x86)\Steam\steamapps\common\X-Plane 11\Resources\plugins\SharedFlight\win_x64\SharedFlight.xpl+00000000007C5AF1 (_ZNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEED1Ev+21) 10 000000EEDE93EC70 C:\Program Files (x86)\Steam\steamapps\common\X-Plane 11\X-Plane.exe+000000EEDE93EC70 () 11 000000EEDE93EC70 C:\Program Files (x86)\Steam\steamapps\common\X-Plane 11\X-Plane.exe+000000EEDE93EC70 () 12 000002A8CEA5DED0 C:\Program Files (x86)\Steam\steamapps\common\X-Plane 11\X-Plane.exe+000002A8CEA5DED0 () --=={This application has crashed because of the plugin: Shared Flight}==-- (Art controls are modified.) ```
styled-components/jest-styled-components
535905198
Title: Does not work with babel-plugin-styled-components
Question:
username_0: When using the latest version alongside `babel-plugin-styled-components`, the `toHaveStyleRule` matcher always fails (`"No style rules found on passed Component"`).
(similar issue to #285)
Answers:
username_1: | ^
102 |     expect(component).toHaveStyleRule('cursor', 'pointer');
103 |   });
104 |
```
username_2: I don't think that `sc-` filter is at fault here, as removing it does not change things. Going deeper, it seems that in certain situations, the stylesheet is empty (i.e. if you dig into `__PRIVATE__.masterSheet.tag.tag.element.sheet`). Not yet sure where the culprit lies, but I do know that in my case, simply changing where the component is rendered (i.e. moving it into the test itself, instead of at the top level ... yeah 🤷‍♂ ) makes it work well again.
username_3: I can confirm that this issue is present, after updating styled-components to v5 and jest-styled-components to v7 (with babel-plugin-styled-components v1.10.6). Worked fine before!
username_4: As far as I understand [this line](https://github.com/styled-components/jest-styled-components/blob/master/src/toHaveStyleRule.js#L71), that was introduced with [the following commit](https://github.com/styled-components/jest-styled-components/commit/8c2ea4a0a8789e11707e7f18e76b811e0d70c4c0#diff-4eed74593d3d8efde6a0959c9c35119bR71). It filters classes to those starting with `sc-`:
```
const staticClassNames = classNames.filter(x => x.startsWith("sc-"));
```
Unfortunately, if your project is bootstrapped with Create React App and [you cannot simply add babel-plugin-styled-components](https://styled-components.com/docs/tooling#better-debugging), you have to use the [Babel Macro instead](https://styled-components.com/docs/tooling#babel-macro), e.g.:
```
import styled from 'styled-components/macro'
```
which changes class names from `sc-<hash>` to `styled__<ComponentName>-sc-<hash>`.
The quick fix I suggest here is to change this line in `toHaveStyleRule.js:71`:
```
const staticClassNames = classNames.filter(x => x.startsWith("sc-"));
```
to filter classes not only starting with `sc-` but generally including that substring:
```
const staticClassNames = classNames.filter(x => x.includes("sc-"));
```
It works properly for both the new setup (styled-components v5 and jest-styled-components v7) as well as the older one.
username_5: Setting the ssr and displayName plugin parameters to false helped me during testing.
`["babel-plugin-styled-components", { ssr: false, displayName: false }]`
username_6: @username_5 is there a way to apply this setting just to my test folder? I've tried all the tricks I know with `.babelrc` but can't get my tests to not apply the root `.babelrc` config. Basically, I want `displayName` to be enabled during development, but disabled for tests.
username_5: @username_6
```js
// babel.config.js
module.exports = (api) => {
  const isTest = api.env("test");
  // You can use isTest to determine what presets and plugins to use.

  const plugins = [
    ["babel-plugin-styled-components", { ssr: !isTest, displayName: !isTest }]
  ];

  return {
    // ...
  };
};
```
username_7: The above fix seems to work, but now `.find('DisplayName')` does not work, so importing and doing `.find(DisplayName)` is now needed instead. 😕
username_9: As an alternative fix, I submitted https://github.com/styled-components/babel-plugin-styled-components/issues/268 to `babel-plugin-styled-components` that makes sure all component IDs are prefixed `sc-`.
The correct solution depends on how styled-components *should* work.
username_10: My team is experiencing this issue as well for the tests on our design system as we upgrade to Styled Components V5. Tests were working fine just before the upgrade.
username_11: I am also experiencing this issue and found that copying the change @username_9 suggests in styled-components/babel-plugin-styled-components#268 didn't actually fix the issue for me. I have used the workaround suggested by @username_5 to disable "displayName" when running tests and found it works in my case.
Status: Issue closed
username_9: Thanks for taking care of this @username_12 ! Excited to see the next release ship.
username_13: It's still not working for me with `[email protected]`, `[email protected]`, and `[email protected]` :(
username_14: Same here; I also tried upgrading to v5 again today (with jest-styled-components v7) but still run into this problem.
username_15: Me too, upgraded to v5 and had this issue pop up.
username_15: If you're using create-react-app and the babel macro (`import styled from "styled-components/macro";`), you can create a `.babel-plugin-macrosrc.js` (https://styled-components.com/docs/tooling#experimental-config) in your project root with contents:
```
module.exports = {
  styledComponents: {
    ...(process.env.TEST === "true" ? { ssr: false, displayName: false } : {}),
  },
};
```
and run your test script with `TEST=true` in front of it, e.g. `TEST=true react-scripts test`.
cdr-register/register
572455968
Title: Register JWKS Endpoint Question: username_0: The location of this for the Register is not published, could the documentation please reflect the location of the Register JWKS? Answers: username_0: _Preamble: Biza currently has 16 open issues on CDR Register (and had until cleanup 42 on CDSA Standards). In an effort to optimise our own backlog we are closing those which have none or limited response from the ACCC. They may be reopened at a later time or referenced when the issues are highlighted by third parties._ No response provided, it is assumed that the ACCC believes that security through obscurity is a valid defensive technique. Closing. Status: Issue closed username_1: Team has raised the same question. Can this question be officially answered? username_2: The location of this for the Register is not published, could the documentation please reflect the location of the Register JWKS or is the intention to adopt a security through obscurity approach by not providing the details unless a participant is onboarded? username_2: Reopening issue. This will be addressed as part of the documentation work scheduled for issue #129 Will keep this issue open to track work to completion username_0: Closing, looks to be resolved. Status: Issue closed
trentm/node-cmdln
63155078
Title: tests fail against node 0.12 Question: username_0: see dap's https://gist.github.com/davepacheco/2da16db023666037495c I can repro. E.g.: node 0.10: ``` [21:03:52 username_0@grape:~/tm/node-cmdln/test/cmd (master)] $ DEBUG=1 node bwcompat-main-v1.js Test out old v1 cmdln.main(). Usage: bwcompat-main-v1 [OPTIONS] COMMAND [ARGS...] bwcompat-main-v1 help COMMAND Options: -h, --help Show this help message and exit. Commands: help (?) Help on a specific sub-command. bwcompat-main-v1: error: NoCommandError: no command given at /Users/username_0/tm/node-cmdln/lib/cmdln.js:446:29 at CLI.printHelp (/Users/username_0/tm/node-cmdln/lib/cmdln.js:573:5) at CLI.emptyLine (/Users/username_0/tm/node-cmdln/lib/cmdln.js:445:10) at /Users/username_0/tm/node-cmdln/lib/cmdln.js:411:18 at CLI.init (/Users/username_0/tm/node-cmdln/lib/cmdln.js:471:5) at CLI.init (/Users/username_0/tm/node-cmdln/test/cmd/bwcompat-main-v1.js:22:32) at CLI.main (/Users/username_0/tm/node-cmdln/lib/cmdln.js:398:10) at Object.main (/Users/username_0/tm/node-cmdln/lib/cmdln.js:789:9) at Object.<anonymous> (/Users/username_0/tm/node-cmdln/test/cmd/bwcompat-main-v1.js:26:11) at Module._compile (module.js:456:26) ``` vs. node 0.12: ``` $ DEBUG=1 ~/opt/node-0.12/bin/node bwcompat-main-v1.js Test out old v1 cmdln.main(). Usage: bwcompat-main-v1 [OPTIONS] COMMAND [ARGS...] bwcompat-main-v1 help COMMAND Options: -h, --help Show this help message and exit. Commands: help (?) Help on a specific sub-command. bwcompat-main-v1: error: WError: no command given at /Users/username_0/tm/node-cmdln/lib/cmdln.js:446:29 at CLI.printHelp (/Users/username_0/tm/node-cmdln/lib/cmdln.js:573:5) at CLI.emptyLine (/Users/username_0/tm/node-cmdln/lib/cmdln.js:445:10) at /Users/username_0/tm/node-cmdln/lib/cmdln.js:411:18 at CLI.init (/Users/username_0/tm/node-cmdln/lib/cmdln.js:471:5) at CLI.init (/Users/username_0/tm/node-cmdln/test/cmd/bwcompat-main-v1.js:22:32) at CLI.main (/Users/username_0/tm/node-cmdln/lib/cmdln.js:398:10) at Object.main (/Users/username_0/tm/node-cmdln/lib/cmdln.js:789:9) at Object.<anonymous> (/Users/username_0/tm/node-cmdln/test/cmd/bwcompat-main-v1.js:26:11) at Module._compile (module.js:460:26) ``` Note the error name diff. Something changed btwn node 0.10 and 0.12 in how we're sniffing the error name. Answers: username_0: See these diffs. @davepacheco I realize this is likely hitting code paths in verror that *I* added, but I don't follow the difference here. Too late perhaps. 
``` $ cat foo.js var util = require('util'); var verror = require('verror'); function MyVError(msg) { verror.VError.call(this, msg); } util.inherits(MyVError, verror.VError); //MyVError.prototype.name = 'WallaWallaError'; try { throw new MyVError('boom'); } catch (err) { console.log('err:', err) console.log('err.toString():', err.toString()) console.log('err.stack:', err.stack) } [21:37:14 username_0@grape:~/tm/node-cmdln (master)] $ ~/opt/node-0.10/bin/node foo.js err: { [VError: boom] jse_shortmsg: 'boom', jse_summary: 'boom', message: 'boom' } err.toString(): MyVError: boom err.stack: MyVError: boom at new MyVError (/Users/username_0/tm/node-cmdln/foo.js:5:19) at Object.<anonymous> (/Users/username_0/tm/node-cmdln/foo.js:11:11) at Module._compile (module.js:456:26) at Object.Module._extensions..js (module.js:474:10) at Module.load (module.js:356:32) at Function.Module._load (module.js:312:12) at Function.Module.runMain (module.js:497:10) at startup (node.js:119:16) at node.js:906:3 [21:37:17 username_0@grape:~/tm/node-cmdln (master)] $ ~/opt/node-0.12/bin/node foo.js err: { [VError: boom] jse_shortmsg: 'boom', jse_summary: 'boom', message: 'boom' } err.toString(): MyVError: boom err.stack: VError: boom at new MyVError (/Users/username_0/tm/node-cmdln/foo.js:5:19) at Object.<anonymous> (/Users/username_0/tm/node-cmdln/foo.js:11:11) at Module._compile (module.js:460:26) at Object.Module._extensions..js (module.js:478:10) at Module.load (module.js:355:32) at Function.Module._load (module.js:310:12) at Function.Module.runMain (module.js:501:10) at startup (node.js:129:16) at node.js:814:3 ``` Note that `err.stack` for node 0.12 doesn't get the constructor.name... but ends up getting the `VError.prototype.name`. Perhaps an intentional change in 0.12. Not sure. 
username_0: Ah, I see the difference: I suspect node 0.10 was using `<err>.toString()` for the first line of `<err>.stack`, but no longer in node 0.12: Here is foo.js without using verror.js to have a simpler test case: ``` $ cat foo.js var util = require('util'); function MyError(msg) { Error.call(this); this.message = msg; Error.captureStackTrace(this, this.constructor); } util.inherits(MyError, Error); MyError.prototype.toString = function () { return 'WallaWalla: ' + this.message; }; try { throw new MyError('boom'); } catch (err) { console.log('err:', err) console.log('err.toString():', err.toString()) console.log('err.stack:', err.stack) } ``` And running that with 0.10 and 0.12: ``` $ ~/opt/node-0.10/bin/node foo.js err: { [Error: boom] message: 'boom' } err.toString(): WallaWalla: boom err.stack: WallaWalla: boom at Object.<anonymous> (/Users/username_0/tm/node-cmdln/foo.js:15:11) at Module._compile (module.js:456:26) at Object.Module._extensions..js (module.js:474:10) at Module.load (module.js:356:32) at Function.Module._load (module.js:312:12) at Function.Module.runMain (module.js:497:10) at startup (node.js:119:16) at node.js:906:3 [21:49:48 username_0@grape:~/tm/node-cmdln (master)] $ ~/opt/node-0.12/bin/node foo.js err: { [Error: boom] message: 'boom' } err.toString(): WallaWalla: boom err.stack: Error: boom at Object.<anonymous> (/Users/username_0/tm/node-cmdln/foo.js:15:11) at Module._compile (module.js:460:26) at Object.Module._extensions..js (module.js:478:10) at Module.load (module.js:355:32) at Function.Module._load (module.js:310:12) at Function.Module.runMain (module.js:501:10) at startup (node.js:129:16) at node.js:814:3 ``` @username_1 Do you know if my suspicion is correct about node 0.12 no longer using `err.toString()` for `err.stack`? username_1: @username_0 Yes, it is correct if you don't define a `prepareStackTrace` method. In this case, with v0.12.x, V8 [uses the `name` property to generate the content of the `stack` property](https://github.com/joyent/node/blob/v0.12/deps/v8/src/messages.js#L1247-L1260). With v0.10.x, [it calls `toString`](https://github.com/joyent/node/blob/v0.10/deps/v8/src/messages.js#L1056). So one way to have a consistent behavior across node versions could be to define a `prepareStackTrace` method. username_0: Julien also suggested a possible 'prepareStackTrace' to customize this: https://code.google.com/p/v8-wiki/wiki/JavaScriptStackTraceApi#Customizing_stack_traces Perhaps could have that used in verror.js. Status: Issue closed username_0: will be in 3.2.1
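To illustrate username_1's suggestion, here is a sketch of a `prepareStackTrace` that restores the node 0.10 behavior (first stack line taken from `err.toString()`). It uses V8's documented stack-trace customization hook; the formatting of the frame lines is my own approximation of the default.

```js
// V8 calls this to build err.stack; frames are CallSite objects whose
// toString() yields the usual "func (file:line:col)" text.
Error.prepareStackTrace = function (err, frames) {
    return err.toString() + '\n' +
        frames.map(function (f) { return '    at ' + f; }).join('\n');
};
```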
jeffbass/imagezmq
515909881
Title: AttributeError: module 'imagezmq' has no attribute 'ImageSender'
Question:
username_0: Sorry, this must be pretty basic. When I tried to run client.py I got AttributeError: module 'imagezmq' has no attribute 'ImageSender'
Status: Issue closed
Answers:
username_1: Glad you found a work-around. Let me take a look at the link you provide and try to understand what the issue is. I'll post another comment in this thread in a few days after I have looked at it.
username_0: Thank you. Great work. Thanks for sharing. I found that further down it says to import imagezmq via `from imagezmq import imagezmq`; that resolved it.
username_2: Still getting this error. What @username_0 suggested did not work for me. Any solution for this?
username_1: (py3cv4) pi@rpi24:~ $
```
Be sure that any Python programs that use **imagezmq** are run in this same virtual environment. I think that running the above commands should enable you to find and fix your error. Let me know if I can help further. Jeff
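For reference, the working import pattern from the comment above looks like this in context (the connection address is a placeholder, and the zero frame just stands in for a camera image):

```python
# Older installs expose the class on the inner module, hence the
# "from imagezmq import imagezmq" form mentioned above.
from imagezmq import imagezmq
import numpy as np

sender = imagezmq.ImageSender(connect_to='tcp://127.0.0.1:5555')  # placeholder address
frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a camera frame
reply = sender.send_image('my-camera', frame)    # returns the hub's reply
```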
OpenFlutter/tobias
687060684
Title: Unhandled Exception: MissingPluginException(No implementation found for method pay on channel com.jarvanmo/tobias)
Question:
username_0: Unhandled Exception: MissingPluginException(No implementation found for method pay on channel com.jarvanmo/tobias)
Answers:
username_1: Hey, did you manage to solve this problem?
username_2: I ran into this problem as well.
username_2: I changed the kotlin_version in the Android build.gradle to the following and it worked:
`ext.kotlin_version = '1.3.72'`
username_3: Why is that? Such a strange problem.
username_2: ext.kotlin_version = '1.3.72' is the version the plugin was compiled against; if it doesn't match, the methods defined by the plugin can't be found. That's my understanding.
username_3: OK, changing it to this solved it, thanks a lot. My previous project also had payments; it was on version 1.3.50, but payment worked fine there. Could it still be a conflict with that plugin?
username_1: iOS 14, Swift 5.x
username_4: I struggled with this for ages; other plugins I switched to wouldn't even run. You solved a big problem for me.
Status: Issue closed
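In a typical Flutter project this change lands in the top-level android/build.gradle; a minimal sketch (repository blocks and other classpath entries omitted):

```groovy
// android/build.gradle: pin Kotlin to the version the plugin was built with,
// otherwise its method-channel handlers may fail to resolve at runtime.
buildscript {
    ext.kotlin_version = '1.3.72'
    dependencies {
        classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
    }
}
```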
Jeavon/Slimsy
619102668
Title: Feature Request: ConvertImgToSrcSet for media.cshtml Question: username_0: Create a version of ConvertImgToSrcSet that can be applied safely to media.cshtml so the Slimsy magic can work when adding an Image in a grid layout. Or, if it's already possible, document how it's done. Answers: username_1: Hey Craig, If you are using the Picture Helper then I think like this ``` C# @inherits UmbracoViewPage<dynamic> @if (Model.value != null) { var udi = Model.value.udi.ToString(); var item = Umbraco.Media(udi); if (item != null) { var altText = Model.value.altText ?? Model.value.caption ?? string.Empty; int width = 160; int height = 0; if (Model.editor.config != null && Model.editor.config.size != null) { bool successWidth = Int32.TryParse(Model.editor.config.size.width.ToString(), out width); bool successHeight = Int32.TryParse(Model.editor.config.size.height.ToString(), out height); } @SlimsyHelper.RenderPicture(Url, item, width, height, altText) } else { // log something maybe... } if (Model.value.caption != null) { <p class="caption">@Model.value.caption</p> } } ``` I'm not to sure how you get value into `Model.editor.config.size`, do you? username_0: Thanks username_1. I just tried it and find that "SlimsyHelper" doesn't exist in the current context. I'm using the v3.0.0-beta5 version. username_1: SlimsyHelper is a Razor helper, you can copy if from https://github.com/username_1/Slimsy/blob/dev-v3/TestSite/App_Code/SlimsyHelper.cshtml to your own App_Code folder if you want to use it. Or maybe you already have your own Razor helper? username_1: That looks good @username_0 All working? The problem with a method for this is that not all grids are implemented the same. Having this example is good though. Perhaps we should do more to highlight the Razor helper and how to update rte.cshtml and media.cshtml...? Happy to hear any suggestions... username_0: Not too sure tbh. The tricky bit was the caption funnily enough. I keep getting "System.Net.WebException: The request was aborted: The connection was closed unexpectedly." Wondering if that's due to putting the razor helper in App_Code. It's only cleared by touching web.config, then it appears to work ok. So still looking into it. The output is:- `<picture> <source data-srcset="/media/ycjb305h/landlords.jpg?anchor=center&amp;mode=crop&amp;quality=90&amp;width=160&amp;height=107&amp;rnd=132326448829100000 160w" srcset="/media/ycjb305h/landlords.jpg?anchor=center&amp;mode=crop&amp;quality=90&amp;width=160&amp;height=107&amp;rnd=132326448829100000 160w" type="image/jpeg" data-sizes="auto" sizes="300px"> <img src="/media/ycjb305h/landlords.jpg?anchor=center&amp;mode=crop&amp;width=300&amp;height=200&amp;rnd=132326448829100000" data-src="/media/ycjb305h/landlords.jpg?anchor=center&amp;mode=crop&amp;width=300&amp;height=200&amp;rnd=132326448829100000" class="lazyautosizes lazyloaded" data-sizes="auto" alt="Keys" sizes="300px" width="300" height="201"> </picture>` All I was imagining was something to handle a media grid item as it exists out of the box, nothing fancy. Those that need fancy can usually sort it out themselves given a bit of a clue ;) username_0: Just as I was putting the site to staging I realised I was stuck with a crop. The `@SlimsyHelper.RenderPicture(Url, "SideImage", item, altText)` means it's not responsive to the grid layout itself. If there was a way to know which layout it was in it wouldn't be a problem as you could just select the appropriate crop for the layout section. 
So this is helpful but not the final answer as far as the grid is concerned.
hmoliveira1/intro-data-capstone-musclehub
306429650
Title: Increase code readability with line breaks Question: username_0: Your code: ``` app_pivot = pd.pivot_table(app_counts, values='first_name', index='ab_test_group', columns='is_application').reset_index() ``` You already have done this in earlier codes. Adding line breaks really increases the code readability. According to PEP8 there should not be more than 72 characters per line. https://www.python.org/dev/peps/pep-0008/#maximum-line-length We did not follow code styling strictly in this course. However, I felt to remind you here. I would do: ``` app_pivot = app_counts.pivot(columns='is_application', index='ab_test_group', values='first_name')\ .reset_index() ```
cvisco/eslint-plugin-requirejs
69457549
Title: Rules warning on non-define calls
Question:
username_0: Rules have sometimes been warning on non-define calls. For example, in `no-object-define`:

```js
// This warns, as it should
define('name', {});

// This also warns, and it shouldn't:
foo('name', {});
```
Status: Issue closed
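A sketch of the kind of guard such a rule needs, written against the classic function-style ESLint rule API; the plugin's actual code may differ:

```js
// Bail out early unless the call expression is a bare `define(...)` call,
// so calls like foo('name', {}) or obj.define(...) are never flagged.
module.exports = function (context) {
    return {
        CallExpression: function (node) {
            if (node.callee.type !== 'Identifier' || node.callee.name !== 'define') {
                return;
            }
            // ...inspect the define() arguments and report violations here...
        }
    };
};
```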
bounswe/bounswe2019group9
441423210
Title: Executive Summary Question: username_0: Summary of project status and any changes that are planned for moving forward - Introduction - Work done so far - Road ahead - Challenges you met as a group Answers: username_1: Done. The document can be found in the sidebar, under the title Milestone Report 2 as Executive Summary. [Here](https://github.com/bounswe/bounswe2019group9/wiki/Executive-Summary-2). Status: Issue closed
vmware/govmomi
1061543431
Title: [BUG] govc vcsa.net.proxy.info doesn't give output in json format
Question:
username_0:
```
HTTP proxy: Disabled
HTTPS proxy: Disabled
FTP proxy: Disabled
No Proxy addresses: localhost, 127.0.0.1
```

**To Reproduce**
Steps to reproduce the behavior:
1. Install VC
1. Install govc
1. Run `govc vcsa.net.proxy.info -json=true`

**Expected behavior**
`govc vcsa.net.proxy.info -json=true` should provide output in json format

**Affected version**
Latest
Status: Issue closed
livewire/livewire
616124657
Title: AuthorizesRequests
Question:
username_0: I have followed this https://laravel-livewire.com/docs/authorization on how to authorize actions such as update, and this is my code...

```php
<?php

namespace App\Http\Livewire\Auth\User;

use App\User;
use Illuminate\Auth\Access\AuthorizationException;
use Illuminate\Contracts\Foundation\Application;
use Illuminate\Foundation\Auth\Access\AuthorizesRequests;
use Illuminate\Http\RedirectResponse;
use Illuminate\Routing\Redirector;
use Livewire\Component;

class Verify extends Component
{
    use AuthorizesRequests;

    /**
     * @param string $validation
     * @return Application|RedirectResponse|Redirector
     * @throws AuthorizationException
     */
    public function mount(string $validation)
    {
        $user = User::query()->findOrFail(decrypt($validation, true));

        $this->authorize('update', $user);

        $user->update([
            'email_verified_at' => now()
        ]);

        return redirect('/home');
    }

    public function render()
    {
        return view('livewire.auth.user.verify');
    }
}
```

On trying to update I get 403 This action is unauthorized.
Status: Issue closed
Answers:
username_0: Yeah, got the problem: I forgot to add `email_verified_at` to the user model's `$fillable`.
username_1: Did you already create and enable a policy for your users model? If you haven't, you'll always get 403. See here https://laravel.com/docs/7.x/authorization#creating-policies
username_0: I will check it out. But on Livewire `https://github.com/livewire/livewire/issues/url` it doesn't mention any of this.
username_1: I guess a bit of Laravel knowledge is assumed. But you are right, the doc could mention that you have to put up your guards first.
username_2: Thanks Michael for the help, this issue is definitely Laravel specific
Status: Issue closed
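For completeness, the resolution from the thread, sketched below (the other `$fillable` attributes are assumptions): `email_verified_at` must be mass assignable for `$user->update([...])` to persist it.

```php
<?php
// app/User.php (sketch): without 'email_verified_at' in $fillable,
// $user->update(['email_verified_at' => now()]) is silently discarded
// by Eloquent's mass-assignment guard.
namespace App;

use Illuminate\Foundation\Auth\User as Authenticatable;

class User extends Authenticatable
{
    protected $fillable = [
        'name', 'email', 'password', 'email_verified_at',
    ];
}
```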
mpv-player/mpv-build
802315922
Title: Can't compile static build, fails at linking stage | /usr/bin/ld: cannot find -l<name-of-lib> Question: username_0: At the end of waf build I encounter: ``` /usr/bin/ld: cannot find -lasound ``` And depending on my ffmpeg enabled libraries, I get the same error referencing a different library. It seems like it can't find any of the compiled libs. Disabling static-build works. I'm using Ubuntu 20.04 on a virtual machine (WSL2) mpv_options: ``` --lua=luajit --disable-gl --enable-static-build ``` ffmpeg_options: ``` --enable-nonfree --enable-gpl --enable-version3 --enable-vdpau --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libpulse --enable-libv4l2 --enable-libx264 --enable-libx265 --enable-static ``` Log: ``` Using mpv options: --lua=luajit --disable-gl --enable-static-build Setting top to : /opt/mpv-build/mpv Setting out to : /opt/mpv-build/mpv/build Checking for waf version in 1.8.4-2.1.0 : ok Checking for program 'cc' : /usr/bin/cc Checking for program 'pkg-config' : /usr/bin/pkg-config Checking for program 'ar' : /usr/bin/ar Checking for program 'rst2html' : not found Checking for program 'rst2man' : not found Checking for program 'rst2pdf' : not found Checking for program 'windres' : not found Checking for program 'perl' : /usr/bin/perl Checking for 'gcc' (C compiler) : /usr/bin/cc Detected target OS: : os-linux Checking for compiler flags -std=c11 : yes Checking for compiler flags -Werror -Werror=implicit-function-declaration : yes Checking for compiler flags -Werror -Wno-error=deprecated-declarations : yes Checking for compiler flags -Werror -Wno-error=unused-function : yes [Truncated] [204/216] Compiling video/out/vo_caca.c [205/216] Compiling video/out/gpu/spirv.c [206/216] Compiling input/event.c [207/216] Compiling video/out/gpu/libmpv_gpu.c [208/216] Compiling common/version.c [209/216] Compiling input/input.c [210/216] Compiling video/out/gpu/ra.c [211/216] Compiling demux/codec_tags.c [212/216] Compiling misc/charset_conv.c [213/216] Compiling misc/bstr.c [214/216] Compiling demux/demux.c [215/216] Compiling osdep/main-fn-unix.c [216/216] Linking build/mpv /usr/bin/ld: cannot find -lasound collect2: error: ld returned 1 exit status Waf: Leaving directory `/opt/mpv-build/mpv/build' Build failed -> task in 'mpv' failed with exit status 1 (run with -v to display more information) ``` Answers: username_1: Getting this same issue. `libasound2-dev` is installed. username_0: Thanks for clarifying. Which distro would you recommend for building then?
SCIInstitute/SCIRun
168410856
Title: Should BuildFEMatrix check for nan/inf conductivity values?
Question:
username_0: When having inf or nan values in the conductivity vector, BuildFEMatrix doesn't throw an error; instead AddKnownsToLinearSystem throws the error "NaN exist in the b vector". Since inf values can relatively easily occur when being careless during rescaling of conductivities (e.g., when computing them from DTI data), this might be a very helpful check. The current error obviously is not very helpful to solve the problem. A drawback would of course be that this additional check costs performance, so this would be a decision between performance and usability.

### Versions
v5.0-beta.D opensuse leap
Answers:
username_1: @username_4 @username_3 @username_2 Thoughts?
username_2: Yes, a more careful check is needed and I think there are ways (in Eigen) to check for these cases efficiently. We also should put some checking routines in place for the second input of the module and test its functionality again.
username_3: I think a check would be useful. It shouldn't be necessary, but I could foresee scenarios in which people would try to put in inf or nan values and so, we should probably have something built in to check. The second input is supposed to accept a lookup table of conductivity values, so checking that for inf and nan makes sense as well.
username_4: I have lately come to favor only passing errors when it will break the code. I think that it should be in the modules that cannot handle them, i.e., the solver and addknowns (which already has a check apparently), and then add more info to the error message. A warning, rather than an error, in BuildFEMatrix would be better as long as it isn't too expensive. Though our solver doesn't explicitly handle those values, there may be some applications where it is useful, assuming that it does it correctly. If it makes no mathematical sense, or we aren't giving the correct answer, then we shouldn't allow it.
username_3: I see logic in what Jess is saying. I don't know if it makes sense mathematically to allow inf or nan values. It doesn't make sense in any of the cases that I work with, but I've never thrown a perfect conductor into a model (inf conductivity), and I've never tried to solve for unknown conductivities. We should talk to <NAME> or <NAME> about this to see if such conditions should ever be allowed in FE matrix construction. If it shouldn't be allowed, we have BuildFE throw an error. If it is possible to include them, we have it throw a warning. I would say it is a relatively low priority, though.
username_0: I am not aware of any case where inf or nan values would make sense mathematically. If one wants to model an infinite conductivity one should use addlinkednodestolinearsystem; trying to do this using inf values for your conductivity messes up the complete matrix condition, and furthermore I have no idea what this would do to the solver. nan values for the conductivity don't make sense anyway.
username_3: So we return to the question, should we put a check into the BuildFEMatrix module for inf/nan values? My vote is yes, if it doesn't require a lot of work. New users won't necessarily know about the "AddLinkedNodes…" module, and therefore might try to input inf values. It is, however, a low priority feature.
username_4: I think we should do it if we can, but I think it is still low priority.
Status: Issue closed
colbycheeze/bluegrid
131903787
Title: Flex-direction column Question: username_0: I can see how it may have been intentional to omit something like this a flexbox grid system, but what do you think about support for switching a flex-direction of a row. I know it sounds counter intuitive, but being able to switch from two 50% `.col.m6` columns side by side to stacking would be nice. Thoughts? Easier way to do this? ``` .row .col.m6 .col.m6 ``` Answers: username_1: one way to do this is by setting an offset to push the contents off at a certain breakpoint such as: ```sass .item-1 { @include column(6); @include breakpoint(1px 960px) { @include offset(6, right); } } .item-2 { @include column(6); } ``` There is also a way to change flex-direction from row to column when assigning it here: `@include row(column)` that might not be what you want in this case though. Status: Issue closed username_0: I wasn't aware of the latter one. `@include row(column)` Thanks!
Piwigo/Piwigo-Mobile
773353582
Title: [BUG] crash when selecting album for upload
Question:
username_0: Steps to reproduce the behavior:
1. Press the '+' button, then photo upload (app shows photos library with local albums)
2. Select my local album called "highlights"
3. Piwigo crashes

**What did you do already**
- tried to upload to different remote album (still crashes)
- tried to upload a different local album (no crash)
- (Did you try to reproduce it with https://www.piwigo.org/demo/ ?) demo does not allow upload

**Smartphone (please complete the following information):**
- Device: iPhone 6s
- OS: 14.2
- App version 2.5.3 (331)
Answers:
username_1: I have created albums, synced albums from a Mac, etc. and I cannot reproduce this issue with full or limited access to the Photo Library. Can you please tell me more about this album "highlights"? How was it created? How many and what type of images are stored in it? Did you provide full or limited access to the Photo Library of your iPhone?
username_0: It is an album containing 33 photos. I created it on my Mac and then dragged a bunch of my favourite photos into it. There are two HDR photos. Some are "Live". Some were captured with a digital camera. Piwigo app has access to "All Photos"
username_0: does the app have debugging/logging?
username_1: I have now received the logs of the crash via TestFlight and I am going to try to fix this issue in the coming week.
username_0: these steps no longer cause a crash in 2.5.4. Thank you.
Status: Issue closed
forcedotcom/salesforcedx-vscode
493887139
Title: SFDX Command is not found Question: username_0: Issue Type: <b>Bug</b> Hi Team, I'm facing this issue for 3 days. previously it was working without any issue. please find the solution ASAP Extension version: 46.14.0 VS Code version: Code 1.38.1 (b37e54c98e1a74ba89e03073e5a3761284e3ffb0, 2019-09-11T13:35:15.005Z) OS version: Windows_NT x64 10.0.16299 <details> <summary>System Info</summary> |Item|Value| |---|---| |CPUs|Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz (8 x 1800)| |GPU Status|2d_canvas: enabled<br>flash_3d: enabled<br>flash_stage3d: enabled<br>flash_stage3d_baseline: enabled<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>native_gpu_memory_buffers: disabled_software<br>oop_rasterization: disabled_off<br>protected_video_decode: enabled<br>rasterization: enabled<br>skia_deferred_display_list: disabled_off<br>skia_renderer: disabled_off<br>surface_synchronization: enabled_on<br>video_decode: enabled<br>viz_display_compositor: disabled_off<br>webgl: enabled<br>webgl2: enabled| |Load (avg)|undefined| |Memory (System)|15.86GB (8.92GB free)| |Process Argv|| |Screen Reader|no| |VM|0%| </details> <!-- generated by issue reporter --> Answers: username_1: We need a lot more info to understand what is happen. Please provide complete reproduction steps and the full error details and reopen. Status: Issue closed
agda/cubical
1111181744
Title: "Not in Scope: R.Telescope Question: username_0: P.S. Sorry if I did not post this in the right outlet. I'm new around here... Status: Issue closed Answers: username_0: P.S. Sorry if I did not post this in the right outlet. I'm new around here... Status: Issue closed username_0: Hi, I am trying to run the HoTT game. Upon launching first part of the game and trying to load the .agda file, I received the following error message: .../agda/cubical/Cubical/Reflection/Base.agda:49,39-50 Not in scope: R.Telescope at .../agda/cubical/Cubical/Reflection/Base.agda:49,39-50 when scope checking R.Telescope" The dots are just the path from my home directory. I then went to load the Base.agda file and received the same error. In emacs, the occurrence of R.Telescope appearing near the end of the file is highlighted in red. It is this appearance: extend*Context : ∀ {ℓ} {A : Type ℓ} → R.Telescope → R.TC A → R.TC A I'm new to agda and not sure what is causing this problem. Any help would be appreciated. username_1: Update to the latest agda from the master branch and do the same for cubical. username_0: Ah, thank you. I had used dnf install to install agda. I didn't realize I had 2.6.2 not 2.6.3. Status: Issue closed
asrob-uc3m/asrob-uc3m.github.io
359418024
Title: Specify the communication channels in more detail
Question:
username_0: https://github.com/asrob-uc3m/asrob-uc3m.github.io/blob/master/_posts/2017-05-27-about.markdown inherits from and replaces http://asrob.uc3m.es/index.php/Info_para_los_nuevos

In line with https://github.com/asrob-uc3m/actas/issues/141, the intention is to specify the communication channels in more detail. The idea would be to convey that someone can communicate with us:

- Public: via a GitHub issue (create a GitHub account and open an issue, e.g. in [actas](https://github.com/asrob-uc3m/actas), which is basically our Q&A)
- Protected: via Telegram (install Telegram and comment, e.g. in the [big group](https://t.me/joinchat/ADT1EAgigvgsuobGUHMl_w))
- Protected: via email, which our person in charge will answer (send an email to asrob-uc3m [at] gmail [dot] com, which will reach the big Google group https://groups.google.com/forum/#!forum/asrob_uc3m)
- Private: attend a meeting in person

I am leaving out the Facebook option; in https://github.com/asrob-uc3m/actas/issues/141#issuecomment-420301691 I commented on the automated-reply option (and later commented about PMs via Twitter).

Any suggestions?

P.S.: I am using `public`/`protected`/`private` only as shorthand here; the idea is to spell out explicitly where each message ends up.
Answers:
username_0: In https://github.com/asrob-uc3m/actas/issues/157 this was voted in favor. I am leaving it open until the corresponding commits or PRs.
scalaz/scalaz-nio
370177570
Title: Implement missing buffers
Question:
username_0: - `DoubleBuffer`
- `FloatBuffer`
- `IntBuffer`
- `LongBuffer`
- `MappedByteBuffer`
- `ShortBuffer`
Answers:
username_1: Shouldn't we use `byteBuffer.get()` in ByteBuffer instead of `byteBuffer.array().asInstanceOf[Byte]` ? And if so should we make `Buffer#get: A` pure?
username_0: @username_1 totally! would you like to create a pull request for this?
username_1: sure! I will take it
username_0: what did you mean by making it pure? wrapping it into IO?
username_0: There are as well a lot of `Buffer`-specific methods, e.g. `asCharBuffer` for `ByteBuffer` or `asDoubleBuffer` for CharBuffer... I'm wondering if it makes sense to create specific issues for each Buffer?
username_1: I think there is no need for that as long as it's trivial
username_0: @username_1 got you! we try to be unopinionated (don't change _behaviour_) - meaning if get modifies buffer state (e.g. position) we don't change this behaviour. I think our main goal is to make it safe (e.g. make Exceptions explicit), composable and non-blocking (e.g. java Future.get)
username_0: @username_1 does it make any sense for you? or have you anything else on your mind?
username_1: yeah i got it, thanks for clarification!
username_2: I'll pick up `IntBuffer`
username_3: Added missing buffers to #47 branch.
username_0: well done guys! =)
Status: Issue closed
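As a sketch of the "wrapping it into IO" idea discussed above (the names are illustrative, not the actual scalaz-nio API, and the exact IO constructor may differ between scalaz-zio versions):

```scala
import java.nio.{ByteBuffer => JByteBuffer}
import scalaz.zio.IO

// Effectful wrapper: get() advances the underlying buffer's position and
// can throw BufferUnderflowException, so we surface both in IO instead of
// exposing a side-effecting, exception-throwing method.
final class ByteBuffer(private val buffer: JByteBuffer) {
  def get: IO[Exception, Byte] = IO.syncException(buffer.get())
}
```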
codefog/contao-cookiebar
132506592
Title: Wrong cookie-path
Question:
username_0: If the user confirms the cookie usage on a subpage like domain.tld/about-us/about-us.html, the cookie path is set to this subpath. This forces the user to repeatedly confirm the cookie on every subpage. This bug is fixed by setting the cookie path to /, see https://github.com/codefog/contao-cookiebar/pull/12
Status: Issue closed
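For illustration, the difference is just the `path` attribute when the acceptance cookie is written (the cookie name and lifetime below are assumptions, not the extension's actual values):

```js
// Scoped to the current subpath by default: only sent again under /about-us/
document.cookie = 'cookiebar_accepted=1; max-age=' + 60 * 60 * 24 * 365;

// Scoped to the whole site: sent on every page, so one confirmation suffices
document.cookie = 'cookiebar_accepted=1; path=/; max-age=' + 60 * 60 * 24 * 365;
```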
GeotrekCE/Geotrek-admin
781254390
Title: Outdoor - Site inheritance
Question:
username_0: Related to https://github.com/GeotrekCE/Geotrek-admin/issues/2410#issuecomment-748497973

- The data of child sites is not aggregated in the database at the parent-site level
- Some information can nevertheless be computed and displayed at the parent-site level by aggregating that of the child sites; in particular, all list-type information (practices, orientations...)
Status: Issue closed
Answers:
username_0: Implemented in version 2.49.0
mKeRix/room-assistant
751034147
Title: ExceptionHandler: Cannot assign to read only property Question: username_0: **Describe the bug** Room Assistant no longer starts, it encounters an exception after loading the intergrations and while trying to start the Nest application **To reproduce** Install room-assistant and start. **Relevant logs** *** npm install *** ``` npm WARN @nestjs/[email protected] requires a peer of class-transformer@^0.3.0 but none is installed. You must install peer dependencies yourself. npm WARN @nestjs/[email protected] requires a peer of class-validator@^0.11.1 || ^0.12.0 but none is installed. You must install peer dependencies yourself. ``` *** room-assistant -v*** ``` *** WARNING *** The program 'node' uses the Apple Bonjour compatibility layer of Avahi. *** WARNING *** Please fix your application to use the native API of Avahi! *** WARNING *** For more information see <http://0pointer.de/blog/projects/avahi-compat.html> *** WARNING *** The program 'node' called 'DNSServiceRegister()' which is not supported (or only supported partially) in the Apple Bonjour compatibility layer of Avahi. *** WARNING *** Please fix your application to use the native API of Avahi! *** WARNING *** For more information see <http://0pointer.de/blog/projects/avahi-compat.html> 11/25/2020, 12:08:53 PM - info - IntegrationsModule: Loading integrations: home-assistant, grid-eye 11/25/2020, 12:08:57 PM - info - NestFactory: Starting Nest application... 11/25/2020, 12:08:57 PM - error - ExceptionHandler: Cannot assign to read only property 'hasOwnProperty' of object '#<Object>' ``` **Relevant configuration** Paste the relevant parts of your configuration below. ``` global: instanceName: mediaroom integrations: - homeAssistant - gridEye homeAssistant: mqttUrl: 'mqtt://192.168.0.77:1883' mqttOptions: username: youruser password: <PASSWORD> gridEye: deltaThreshold: 3 cluster: networkInterface: wlan0 port: 6425 quorum: 3 ``` **Expected behavior** Room Assistant was running, just with Grid-Eye but the response seemed slow. I did an upgrade and it started spewing unicode garbage characters out to MQTT. After trying to downgrade to 2.10.1, Grid-Eye didn't seem to do anything. I have uninstalled and reinstalled, even deleted /opt/nodejs to try and start fresh. The install does complain about missing peer dependencies for nestjs, which I install, but they don't seem to 'stick' **Environment** - room-assistant version: 2.12.0 - installation type: NodeJS - hardware: Raspberry Pi Zero W - OS: Linux **Additional context** Add any other context about the problem here. Answers: username_1: The bug that you are describing was fixed in v2.11.2, it occurs since a library that room-assistant depends on changed how they use the settings object in a minor upgrade. You can either try to downgrade the `mqtt` dependency or upgrade room-assistant to at least v2.11.2 again (latter is recommended). The peer dependency stuff you can ignore, the warnings are not really relevant for any room-assistant usage but I also can't disable them since they come from a dependency. The unicode characters are just the log of the camera state, it tries to convert the image data into a string. I know it's not pretty and makes the verbose log unreadable, I just haven't gotten around to disabling state logging for camera entities yet. And another quick sidenote, since you are using Grid-EYE on a Pi Zero: starting with v2.12.0 the heatmap generation was changed and will be a bit slower, but work without native dependencies or memory leaks. 
v2.13.0-beta.2 adds a config option that allows you to disable the heatmap should you not need it and would rather have better performance.
username_0: Thank you. Yes, as for the unicode characters, it finally dawned on me what they were, and just excluding them from my mqtt_subscribe took care of that. Thank you for all your work on this, you are a rock star!
Status: Issue closed
FOSS-UCSC/FOSSALGO
718601789
Title: Simple documentation for bin sort algorithm Question: username_0: # [Title] [short description - you can make links to any proper, publicly available explanations of the algorithm.] # Implementations [add links to currently available implementation under the corresponding folder according to the languages] Answers: username_1: I would like to work on this. Status: Issue closed
uclibs/uc_drc
647492295
Title: Batch import Question: username_0: ### Notes * (.csv, etc.) * Will be relevant to born digital archives work ### Resources * https://wiki.lyrasis.org/display/samvera/Hyrax+Batch+Import-Export+WG Answers: username_0: Need to be able to set visibility (including archival master and restricted) at time of bulkrax import.
monarch-initiative/mondo
385353389
Title: 12q14 microdeletion syndrome
Question:
username_0: 12q14 microdeletion syndrome is characterised by mild intellectual deficit, failure to thrive, short stature and osteopoikilosis. This really should not be classified as a bone disease; bone/skeletal features are part of the disease, but it is a multisystem disease.
https://www.ebi.ac.uk/ols/ontologies/mondo/terms?iri=http%3A%2F%2Fpurl.obolibrary.org%2Fobo%2FMONDO_0019784
Answers:
username_1: Action item:
- [ ] exclude superclasses: 'syndromic intellectual disability' 'primary bone dysplasia with increased bone density'
username_1: @kallia-p I don't see those inferred parents when the two superclasses are excluded:
![image](https://user-images.githubusercontent.com/6722114/117927351-f5091980-b2ae-11eb-8221-26661feec8f1.png)
Perhaps your reasoner was out of date? Please see my comments on the PR. If you still see the inferred parents when you remove those additional parents, let me know; there may be some other underlying issue.
Status: Issue closed
BahamutDragon/pcgen
187585451
Title: [Dnd 3.5e] Immediate Magic ACF from Player's Handbook 2
Question: username_0: The Immediate Magic ACF from ph2_abilities.lst does not work as intended. It now allows you to select an ability from the spell pool instead of the predefined abilities. I can't figure out how to implement this. I suppose it relates to this - CHOOSE:SPELLS|CLASSLIST=Wizard[LEVELMAX=1;LEVELMIN=0] where instead of spells there should be an abilities list?
![image](https://cloud.githubusercontent.com/assets/2230015/20041150/d57260ee-a46d-11e6-8960-d8d0803dd34f.png)
Answers: username_1: fixed - https://github.com/username_1/pcgen/commit/e7b0aac6c0671aa6d10fb6578c247f25ff245b7a
Status: Issue closed
zaproxy/zaproxy
159628946
Title: How to know that a script is loaded when running ZAP from cmd
Question: username_0: And finally, inspect the HTTP request (packet) in Wireshark. Unfortunately the request did not contain the session cookie. This made me question whether the script is actually executed, and how I can make sure it is executed.
Answers: username_1: That can be achieved with the ZAP API endpoint `http://zap/UI/script/view/listScripts/`, which lets you know whether it was correctly loaded, whether it's enabled, and whether it ran without errors. Note that the ECMAScript/JavaScript engine's name should be "Mozilla Rhino", if Java 7, or "Oracle Nashorn", if Java 8+. For questions about ZAP usage it is better to use the mailing list. [1]
Please close the issue if the question was addressed. Thank you.
[1] http://groups.google.com/group/zaproxy-users
username_0: That's exactly what I was looking for, thank you! :+1:
Status: Issue closed
username_1: You're welcome.
e4exp/paper_manager_abstract
671976660
Title: Quasi-Newton Solver for Robust Non-Rigid Registration
Question: username_0:
* http://xpaperchallenge.org/cv/survey/cvpr2020_summaries/70
* https://openaccess.thecvf.com/content_CVPR_2020/html/Yao_Quasi-Newton_Solver_for_Robust_Non-Rigid_Registration_CVPR_2020_paper.html
* CVPR 2020

Due to imperfect data (noise, outliers and partial overlap) and high degrees of freedom, non-rigid registration is a classical challenging problem in computer vision.
Existing methods typically adopt l_p-type robust estimators to regularize the fitting and smoothness, and use proximal operators to solve the resulting non-smooth problem.
However, the slow convergence of these algorithms limits their range of applications.
In this paper, we propose a formulation for robust non-rigid registration based on a globally smooth robust estimator for data fitting and regularization, which can handle outliers and partial overlaps.
We apply a majorization-minimization algorithm to the problem, which reduces each iteration to solving a simple least-squares problem with L-BFGS.
Extensive experiments demonstrate the effectiveness of our method for non-rigid alignment between two shapes with outliers and partial overlap.
The source code is available at https://github.com/Juyong/Fast_RNRR.
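To make the optimization step in this summary concrete: a majorization-minimization (MM) iteration over a smooth robust estimator reduces to a reweighted least-squares solve. The following is a generic LaTeX sketch; the penalty psi is a stand-in, since the paper's exact estimator is not spelled out in this summary. At iterate x^(k) with residuals r_i, the penalty is majorized by a quadratic surrogate, and minimizing the surrogate is the smooth subproblem that L-BFGS handles:

```latex
\[
  \psi(r) \;\le\; \tfrac{1}{2}\, w_i^{(k)} r^2 + c_i^{(k)},
  \qquad
  w_i^{(k)} = \frac{\psi'\!\left(r_i^{(k)}\right)}{r_i^{(k)}},
\]
\[
  x^{(k+1)} = \arg\min_x \; \sum_i w_i^{(k)}\, r_i(x)^2 .
\]
```

Each MM step therefore only changes the weights w_i, which is why the per-iteration cost stays that of a simple least-squares problem.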
pachyderm/pachyderm
217364135
Title: Allow for combining multiple inputs, as opposed to joining them
Question: username_0: Right now, when a pipeline has multiple inputs, it processes the cross product of the inputs, essentially performing a join. For instance, if one input has datums `{A, B}` and the other has datums `{C, D}`, then the pipeline will process 4 tuples: `[(A, C), (A, D), (B, C), (B, D)]`.

However, sometimes you might want to process the "sum" of the input repos as opposed to the product of them. For instance, if I have a `word-count` pipeline and I want to perform a word count over two different repos, it'd be nice if I could have my pipeline treat those two repos as one giant repo that contains data from both repos. That is, if the repos have datums `{A, B}` and `{C, D}`, the pipeline should process 4 datums `{A, B, C, D}`.
Answers: username_1: I'd like to add some details on our use case. Imagine that we have four multi-stage pipelines, each of which does some fairly complex processing.

1. `source_a` → ... `processed_a`
2. `source_b` → ... `processed_b`
3. `source_c` → ... `processed_c`
4. `source_d` → ... `processed_d`

In each of the 4 `processed_*` top-level **repos** (not directories in a mega-repo), the data layout looks like:

```
processed_a/
  000/
    5ff9e947-2065-4821-b1a7-5ef635545aa5.csv.gz
    f1e3ca2b-7267-4e42-95a1-9b7d62fc40e8.csv.gz
    7ea94526-aee2-44c4-82f6-524d45af5a15.csv.gz
    ...
  001/
  002/
  ...
  999/
processed_b/
  000/
    11f7e13e-5dc4-4671-9c57-9608527079dc.csv.gz
    ...
  001/
  002/
  ...
  999/
processed_c/
  000/
    2c554605-b564-4b37-a56e-301c35b0ff6e.csv.gz
    ...
  001/
  002/
  ...
  999/
processed_d/
  000/
    c7b85b69-e107-44f2-9522-9a372176eb8d.csv.gz
    ...
  001/
  002/
  ...
  999/
```

Our desired output looks like:

```
out/
  001.gz
  002.gz
  003.gz
  ...
  999.gz
```

To compute `001.gz`, we need to combine all of:

```
5ff9e947-2065-4821-b1a7-5ef635545aa5.csv.gz
f1e3ca2b-7267-4e42-95a1-9b7d62fc40e8.csv.gz
7ea94526-aee2-44c4-82f6-524d45af5a15.csv.gz
11f7e13e-5dc4-4671-9c57-9608527079dc.csv.gz
2c554605-b564-4b37-a56e-301c35b0ff6e.csv.gz
c7b85b69-e107-44f2-9522-9a372176eb8d.csv.gz
```

...onto a single worker. Then the worker will do a very complex calculation to reconcile all those inputs into a single output file.

So essentially `000`, `001`, etc., are "cross-repo join keys" operating at the file level. This could be implemented in two steps:

1. "Smoosh" `processed_a`, `processed_b`, `processed_c`, `processed_d` into a single unified namespace.
2. Apply the match glob `/*` _to the unified namespace_, so that all the `000/*` files from each input repo wind up assigned to a single node.

It would be OK if this were somehow a two-step process. @username_3 Does this help explain our use case?
username_1: Thank you for looking into this feature! I've written up a test case for union/"smoosh" joins, with sample input and output repositories, and a Python Pandas script that does a simulated join: https://github.com/faradayio/pachyderm_union_join_test
Obviously, my heart isn't set on exact syntax in the `*.json` file, or how the two inputs get combined for `/pfs`. But ideally, it should be possible to produce that exact output data using the supplied Python script and the two input repositories. (No duplicate CSV headers in the middle of files allowed! 🙂 )
As mentioned in the README, the real join will involve far more records and more than 2 input repositories. But this should give you a more concrete idea about what we're trying to do. Once again, thank you for looking into this, and please don't hesitate to ask questions!
username_2: @username_0 @username_3 is this addressed by the Union inputs PR? https://github.com/pachyderm/pachyderm/issues/1491 Status: Issue closed username_3: Yes, this has been merged.
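Since the thread above points to a pandas script for the simulated join, here is a minimal Python/pandas sketch of the "union then reconcile" semantics described in username_1's comment. The directory layout follows that comment; the `drop_duplicates` call is only a placeholder for the real, much more complex reconciliation:

```python
import glob
import os

import pandas as pd

INPUT_REPOS = ["processed_a", "processed_b", "processed_c", "processed_d"]

def union_join(key, out_dir="out"):
    """Union all CSVs filed under <repo>/<key>/ across every input repo,
    then reconcile them into a single out/<key>.gz file."""
    frames = [
        pd.read_csv(path, compression="gzip")
        for repo in INPUT_REPOS
        for path in sorted(glob.glob(os.path.join(repo, key, "*.csv.gz")))
    ]
    if not frames:
        return
    combined = pd.concat(frames, ignore_index=True)  # the union, not a cross product
    reconciled = combined.drop_duplicates()          # stand-in for the real logic
    os.makedirs(out_dir, exist_ok=True)
    reconciled.to_csv(os.path.join(out_dir, key + ".gz"),
                      index=False, compression="gzip")

for key in ("%03d" % i for i in range(1000)):
    union_join(key)
```

The key point the sketch illustrates is that each join key's files must all land on one worker, while different keys can be processed independently in parallel.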
brendan-r/googdown
220432721
Title: [standardize_rmd] Fix spacing Question: username_0: Currently, `standardize_rmd` produces rather 'squashed' looking files. You should probably write something at the end that forces some reasonable spacing conventions (e.g. before / after code-blocks and headings).
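A minimal Python sketch of the kind of spacing pass suggested above (hypothetical, not the package's actual code): guarantee a blank line before headings and opening code fences, and after headings and closing fences.

```python
def normalize_spacing(text):
    """Force a blank line before/after fenced code blocks and headings."""
    out = []
    in_fence = False
    pad_next = False  # previous line was a heading or a closing fence
    for line in text.splitlines():
        fence_marker = line.lstrip().startswith("```")
        opens = fence_marker and not in_fence
        closes = fence_marker and in_fence
        heading = line.startswith("#") and not in_fence
        if (opens or heading or pad_next) and line.strip() and out and out[-1].strip():
            out.append("")  # insert the missing blank line
        out.append(line)
        if fence_marker:
            in_fence = not in_fence
        pad_next = closes or heading
    return "\n".join(out) + "\n"
```

Tracking `in_fence` keeps the pass from treating `#` comment lines inside code chunks as headings, which is the main trap with Rmd files.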
plaidml/plaidml
673110064
Title: Limited PXA vectorization transform + pass
Question: username_0: Write a limited vectorization transform for PXA with the following interface:

```
LogicalResult performVectorization(AffineParallelOp op, BlockArgument index, unsigned vecSize);
```

Example:

```
// Original
affine.parallel (%i, %j, %k) = (0, 0, 0) to (64, 64, 66) {
  %a = affine.load %X[%i, %k] : memref<64x64xf32>
  %b = affine.load %Y[%k, %j] : memref<64x64xf32>
  %c = mul %a, %b : f32
  pxa.reduce add %c, %Z[%i, %j] : memref<64x64xf32>
}
```

```
// Vectorized over j, size = 8
affine.parallel (%i, %j, %k) = (0, 0, 0) to (64, 64, 66) step (1, 8, 1) {
  %a = affine.load %X[%i, %k] : memref<64x64xf32>
  %a2 = vector.broadcast %a : vector<8xf32>
  %b = affine.vector_load %Y[%k, %j] : memref<64x64xf32>, vector<8xf32>
  %c = mul %a2, %b : vector<8xf32>
  pxa.vector_reduce add %c, %Z[%i, %j] : memref<64x64xf32>
}
```

It's OK to simply fail in the following cases for the initial version:

- The affine.parallel has anything besides loads/reduces or scalar ops (i.e. fmul, etc).
- The range of the index to vectorize on is not divisible by the vector size
- Any of the loads is not stride 1 or stride 0 with respect to the index to vectorize over
- Any reduce is not stride 1 with respect to the vector index.
- The step for the index being vectorized is not 1

Additionally, provide a trivial vectorization pass for testing that simply tries to vectorize over each index in order for each top-level parallel for.
Answers: username_0: Implemented in: https://github.com/plaidml/plaidml/pull/1266
Status: Issue closed
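To make the bail-out conditions above concrete, here is a minimal Python sketch (illustrative only, not the actual C++/MLIR implementation) of the legality checks such a `performVectorization` pass would run before rewriting:

```python
def can_vectorize(index_range, step, vec_size, load_strides, reduce_strides):
    """Mirror the documented bail-out conditions for the PXA vectorizer."""
    if step != 1:                       # step for the vectorized index must be 1
        return False
    if index_range % vec_size != 0:     # range must be divisible by the vector size
        return False
    # Loads may be stride 0 (becomes a broadcast) or stride 1 (vector load).
    if any(s not in (0, 1) for s in load_strides):
        return False
    # Reduces must be stride 1 with respect to the vectorized index.
    if any(s != 1 for s in reduce_strides):
        return False
    return True

# Example: the matmul above, vectorizing over %j with vec_size = 8.
# %X has stride 0 in j (broadcast), %Y and %Z have stride 1 in j.
assert can_vectorize(64, 1, 8, load_strides=[0, 1], reduce_strides=[1])
```

The stride-0 case is what triggers the `vector.broadcast` in the rewritten IR, while stride-1 loads become `affine.vector_load`.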
TiesdeKok/ipystata
1174418773
Title: hello please I am looking for the equivalant of these stata codes in Python Question: username_0: local i = 0 foreach y of global outcomes_T4 { local ++i /* Set conditions; first 4 regressions should only be run on treatment group */ if `i'<=4 { local subsample = "& assigned_p1 == 1" } else { local subsample = "" } local var`i' "`y'" qui sum `y' if sample_p1==1 & assigned_p1==0 `subsample' local cm`i' = r(mean) qui reg `y' assigned_p1 ${controls_main} if sample_p1==1 & found_p1e == 1 `subsample', cluster(siteid) local b1_`i' = _b[assigned_p1] local perc1_`i' = _b[assigned_p1]/`cm`i'' local se1_`i' = _se[assigned_p1] local p1_`i' = 2*ttail(e(df_r),abs(_b[assigned_p1]/_se[assigned_p1])) local N`i' = e(N) qui reg `y' assigned_p1 assigned_p1_gd ${controls_main} if sample_p1==1 & found_p1e == 1 `subsample', cluster(siteid) local b2_`i' = _b[assigned_p1] local perc2_`i' = _b[assigned_p1]/`cm`i'' local se2_`i' = _se[assigned_p1] local p2_`i' = 2*ttail(e(df_r),abs(_b[assigned_p1]/_se[assigned_p1])) local b4_`i' = _b[assigned_p1_gd] local perc4_`i' = _b[assigned_p1_gd]/`cm`i'' local se4_`i' = _se[assigned_p1_gd] local p4_`i' = 2*ttail(e(df_r),abs(_b[assigned_p1_gd]/_se[assigned_p1_gd])) /*qui*/ lincom assigned_p1 + assigned_p1_gd local b3_`i' = r(estimate) local perc3_`i' = r(estimate)/`cm`i'' local se3_`i' = r(se) local p3_`i' = 2*ttail(r(df),abs(r(estimate)/r(se))) foreach x in 1_`i' 2_`i' 3_`i' 4_`i' { local se`x' = round(`se`x'',.001) /* Occasionally the round function gives output like 1.2340000000000001. Here, we truncate those outputs */ local dec = "" local dec = substr("`se`x''",strpos("`se`x''","."),.) if (strlen("`dec'")>4) local se`x' = substr("`se`x''",1,strpos("`se`x''",".")+3) if (strlen("`dec'")==3) local se`x' = "`se`x''0" if (strlen("`dec'")==2) local se`x' = "`se`x''00" if (strlen("`dec'")<2) local se`x' = "`se`x''000" /* Put some brackets and asterisks on the standard errors */ local se`x' = "[`se`x'']" if (`p`x''<=.1) local se`x' = "`se`x''*" if (`p`x''<=.05) local se`x' = "`se`x''*" if (`p`x''<=.01) local se`x' = "`se`x''*" } } local I = `i' preserve clear qui set obs `=`I'*2' foreach v in var cm b1_ perc1_ b2_ perc2_ b3_ perc3_ b4_ perc4_ N{ qui gen `v' = "" forv i = 1/`I' { qui replace `v' = "``v'`i''" in `=`i'*2-1' } } forv j = 1/4 { forv i = 1/`I' { qui replace b`j'_ = "`se`j'_`i''" in `=`i'*2' } } export excel using T4_groupoutcomes.xlsx, sheetmodify sheet("raw") firstrow(var) restore $exit
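The Stata block above has no recorded answer; as a rough translation aid, here is a minimal Python sketch of its core pattern: loop over outcomes, run OLS with cluster-robust standard errors, and collect coefficients, SEs, p-values and N. Column names such as `assigned_p1` and `siteid` come from the Stata code; `df`, `outcomes_T4` and `controls_main` are assumed to exist, and the subsample/interaction/`lincom` pieces are omitted for brevity:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed inputs, mirroring the Stata globals: df, outcomes_T4, controls_main.
results = []
for y in outcomes_T4:                                  # foreach y of global outcomes_T4
    ctrl_mean = df.loc[(df["sample_p1"] == 1) & (df["assigned_p1"] == 0), y].mean()
    sub = df[(df["sample_p1"] == 1) & (df["found_p1e"] == 1)]
    sub = sub.dropna(subset=[y, "assigned_p1", "siteid"] + list(controls_main))
    formula = f"{y} ~ assigned_p1 + {' + '.join(controls_main)}"
    fit = smf.ols(formula, data=sub).fit(
        cov_type="cluster", cov_kwds={"groups": sub["siteid"]}  # cluster(siteid)
    )
    results.append({
        "var": y,
        "control_mean": ctrl_mean,               # local cm`i'
        "b": fit.params["assigned_p1"],          # _b[assigned_p1]
        "se": fit.bse["assigned_p1"],            # _se[assigned_p1]
        "p": fit.pvalues["assigned_p1"],         # 2*ttail(e(df_r), |t|)
        "N": int(fit.nobs),                      # e(N)
    })

pd.DataFrame(results).to_excel("T4_groupoutcomes.xlsx", index=False)
```

The bracketing/asterisk formatting of standard errors in the Stata code can then be done on the collected DataFrame with plain string operations rather than macro substitution.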
rubygems/rubygems
73044789
Title: An interrupted gem update is registered as a successful update
Question: username_0: I sometimes encounter gem updates that cause RubyGems to hang on my OS X 10.10.3 machines. Lately I have been having this problem with the documentation installation tasks of the `commander` gem. When I interrupt the update with `ctrl+c` I get an "ERROR: Interrupted" message, but when I run the update again there are no longer any updates. This makes me believe updates are not installed in a transaction and that installations can be corrupted this way. Is this expected behavior?

```
$ sw_vers
ProductName: Mac OS X
ProductVersion: 10.10.3
BuildVersion: 14D136
$ gem --version
2.1.10
$ sudo gem update
Updating installed gems
Updating commander
Fetching: commander-4.3.4.gem (100%)
Successfully installed commander-4.3.4
Parsing documentation for commander-4.3.4
Installing ri documentation for commander-4.3.4
Installing darkfish documentation for commander-4.3.4
^CERROR: Interrupted
$ sudo gem update
Updating installed gems
Nothing to update
```

Answers: username_1: Ya, it is. The documentation tasks are done in an after-install hook. By the time they get around to running, the gem installation transaction is complete. I think there is a different command to rebuild the docs for a particular gem. /cc @username_2
Status: Issue closed
username_2: When you get to "Successfully installed" then the gem is usable by `require` and is considered installed. As @username_1 noted, installing documentation is a post-install step and isn't necessary for `require` to work.
OSeMOSYS/otoole
902260813
Title: Better user documentation
Question: username_0: In discussion with @AgnesBelt, it would be good to document the key basic concepts for a new user describing a standard workflow. This could be through a GIF or video embedded in the documentation.

There are three key commands:

1. generate excel file
   – how to use Excel as an interface with a better workflow structure
   – understand how the file is structured
   – define sets e.g. emissions and technologies first
2. generate input datafile
3. convert the results
keybase/keybase-issues
474533001
Title: stellar wallet - forces path payment when sending and receiving same asset
Question: username_0: I have two wallets. I go into the wallet and trust, say, BB1 on both wallets (bitbondsto.com) and try to transfer from one wallet to the other. I choose "other asset" and then the number of BB1 tokens to send and the number of BB1 tokens to receive. The system forces me to hit the calculate key, which does a path payment. But because the source and destination asset are the same, a path payment is not required. Also, there are not that many BB1 tokens on offer, so the call fails with a 218 error. If I lower the amount to be sent it shows me the result of the path payment
![Selection_394](https://user-images.githubusercontent.com/17188302/62125994-fed6ae00-b2ce-11e9-84ca-ee7a77efa33f.png)
![Selection_393](https://user-images.githubusercontent.com/17188302/62125995-fed6ae00-b2ce-11e9-9660-665f8c6dcd66.png)
Answers: username_1: Hi @username_0
There is a [horizon path finding bug](https://github.com/stellar/go/issues/1496) that it looks like you've run into. It may actually work to send a slightly different amount.
As for being forced to hit calculate, that will be fixed in the next Keybase version. So in the next version sending a single asset should work.
conan-io/conan
598160895
Title: [feature] provide environment variable for storage.path
Question: username_0: There seems to be an env variable for most of the things in conan.conf - why not for storage.path? I am forced to use something like:
```
previous=$(conan config get storage.path)
# do_my_stuff
conan config set storage.path=${previous}
```
in one of my workflows. An env variable would be slightly more elegant from my perspective.
- [x] I've read the [CONTRIBUTING guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
ros/ros
21019636
Title: rosrun doesn't handle file permissions correctly
Question: username_0: In Hydro, 'rosrun' ignores all executables which do not have all executable bits set, unless you're using 'darwin'. This is due to line 38 in the 'rosrun' script, where the permission mask is set for the subsequent 'find' command.
I believe 'rosrun' should be able to execute a file for me if I am the owner of said file and have executable rights, without giving executable rights to group members or other users.
The behavior can be reproduced easily if you have an executable which can be executed via 'rosrun' at the moment and remove, for example, the group executable rights via 'chmod'.
Answers: username_1: This is pretty old now, but since @dirk-thomas seems to respond quickly to all my comments: This issue with `find` is also applicable to FreeBSD (not surprising, given the common ancestry with Darwin). I verified by modifying `rosbash` and `rosrun` source files to use `+111` instead of `/111` for `$(uname) == FreeBSD`, and it works. Previously, when I would attempt to use `rosrun`, I was getting an error associated with `/111`, and then it would tell me that it had found a file named [executable I was trying to run] but that it wasn't an executable.
username_2: @username_1 Since this is a closed ticket it's unlikely to get reviewed. Please open a new one or even better a PR, otherwise no one is likely to follow up. And as we don't have FreeBSD systems to test on it's even less likely.
username_1: I can open a new one. I actually left it here as a breadcrumb for people searching in the future. I am planning on a PR, though.
rsyslog/rsyslog-pkg-ubuntu
718087991
Title: Error: "trying to overwrite '/usr/lib/rsyslog/pmnormalize.so', which is also in package rsyslog 8.2008.0-0adiscon2bionic1" Question: username_0: From the `Enable scheduled stable PPA and install packages` job: ``` Run sudo apt-get install rsyslog-pmnormalize Reading package lists... Building dependency tree... Reading state information... The following NEW packages will be installed: rsyslog-pmnormalize 0 upgraded, 1 newly installed, 0 to remove and 171 not upgraded. Need to get 7724 B of archives. After this operation, 29.7 kB of additional disk space will be used. Get:1 http://ppa.launchpad.net/adiscon/v8-stable/ubuntu bionic/main amd64 rsyslog-pmnormalize amd64 8.2008.0-0adiscon2bionic1 [7724 B] Fetched 7724 B in 0s (24.7 kB/s) Selecting previously unselected package rsyslog-pmnormalize. (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 272471 files and directories currently installed.) Preparing to unpack .../rsyslog-pmnormalize_8.2008.0-0adiscon2bionic1_amd64.deb ... Unpacking rsyslog-pmnormalize (8.2008.0-0adiscon2bionic1) ... dpkg: error processing archive /var/cache/apt/archives/rsyslog-pmnormalize_8.2008.0-0adiscon2bionic1_amd64.deb (--unpack): trying to overwrite '/usr/lib/rsyslog/pmnormalize.so', which is also in package rsyslog 8.2008.0-0adiscon2bionic1 E: Sub-process /usr/bin/dpkg returned an error code (1) Error: Process completed with exit code 100. ```
glennmatthews/cot
210158425
Title: Smarter handling of TemporaryDirectory disk space
Question: username_0: When converting disks between formats, COT may need to use one or more temporary directories to store files. COT uses the [`tempfile`](https://docs.python.org/2/library/tempfile.html) module for this purpose, which by default creates temporary directories in, by preference:

1. The directory named by the TMPDIR environment variable.
2. The directory named by the TEMP environment variable.
3. The directory named by the TMP environment variable.
4. A platform-specific location:
   * On RiscOS, the directory named by the Wimp$ScrapDir environment variable.
   * On Windows, the directories C:\TEMP, C:\TMP, \TEMP, and \TMP, in that order.
   * On all other platforms, the directories /tmp, /var/tmp, and /usr/tmp, in that order.
5. As a last resort, the current working directory.

When working with large files (such as RAW disk images), `/tmp` is often not large enough to contain these files, and COT fails ungracefully in this case. COT should check the available disk space in the created temporary directory against the expected size of any file(s) it will store there and warn the user. If an out-of-space error occurs, COT should recognize this and advise the user how to set their environment variables to select a different location.
Answers: username_0: Fixed in 2.0.0
Status: Issue closed
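A minimal Python sketch of the proposed check, assuming the expected output size is known up front (`shutil.disk_usage` is available from Python 3.3; the names `required_bytes` and `make_working_dir` are illustrative, not COT's actual API):

```python
import logging
import shutil
import tempfile

logger = logging.getLogger(__name__)

def make_working_dir(required_bytes):
    """Create a temporary directory, warning if it looks too small.

    Honors TMPDIR/TEMP/TMP via tempfile's normal lookup, so the advice to
    the user is simply to point those variables at a larger filesystem.
    """
    path = tempfile.mkdtemp(prefix="cot_")
    free = shutil.disk_usage(path).free
    if free < required_bytes:
        logger.warning(
            "Temporary directory %s has only %d bytes free but about %d are "
            "needed; set TMPDIR to a location with more space.",
            path, free, required_bytes,
        )
    return path
```

Doing the check at directory-creation time, rather than only catching ENOSPC later, lets the warning name the exact directory and the exact shortfall.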
honeycombio/libhoney-java
718887720
Title: Add support for markers
Question: username_0: Would anyone be against adding support for markers? https://docs.honeycomb.io/api/markers/
Answers: username_0: I'm trying to take a swing at a PR, but it appears there might be significant changes required to get it going. I'll try to enumerate the problems I see and how to potentially solve them.
1. The `BatchingHttpTransport` wraps around the HTTPClient, which means I can't necessarily share it.
2. Given that Markers are typically one-off requests, it doesn't make a whole lot of sense to me to have them batched.
username_0: ## Desired Capabilities
* Creating Markers
* Updating Markers

## High level proposal
I surveyed the library to get a sense of what needs to be done to support `Markers`. I think there might be non-trivial changes required to get it going. It appears to me `LibHoney` is designed primarily around "fire and forget" of events. Adding support for `Markers` means setting up support for a request API in order for users to potentially make modifications to the `Marker` (selfishly this is my use-case). There are a few problems I'd like to discuss given these assumptions:

1. I'd like to continue using the `CloseableHttpAsyncClient` to avoid having multiple clients for different purposes and potentially taking up more resources from the host JVM. In order to meet this requirement I'll have to move the creation of the `CloseableHttpAsyncClient` from `BatchingHttpTransport` into `HoneyClient`.
2. Regarding method signatures, I plan to create a method in `HoneyClient` to allow users to create a `Marker` that will be initialized with the client itself to allow users to call `Marker#send`. `Marker#send` will be "fire and forget" in case the user does not care about the result of the request. `Marker#send(Callable<MarkerResponse>)` will allow users to receive information about the result.
3. `MarkerResponse` will consist of 2 fields:
   1. Nullable String errorMessage - if `null` there was no error, otherwise the string will contain why the request failed.
   2. Marker marker - a new `Marker` with the response details from Honeycomb's API servers; it will be initialized with the client in case users want to later update the Marker.
4. Running the `Callable` passed in from `Marker#send(Callable<MarkerResponse>)` can lead to freezing the `CloseableHttpAsyncClient`, as the user may have blocking code inside of the `Callable`. If the `CloseableHttpAsyncClient`'s threads are blocked, `HoneyClient` will be unable to send events to the servers. A way to mitigate that risk is by running the `Callable` inside of a threadpool which `HoneyClient` manages.
5. This new threadpool can be used for any method that requires running an async callback. The following will be the threadpool's default configuration, which can be reconfigured via `ResponseHandlerOptions`:
   1. Have a maximum of 5 daemon threads.
   2. The queue will be limited to 100 `Runnables`. When the queue is full, any callbacks will be dropped and a log will be written.
   3. Exceptions thrown inside of `Callable<MarkerResponse>` will be ignored and written to log.

Let me know your thoughts on the above and if there is a better way to proceed, I'm no Java master by any means, especially Java 7 :pray:.

## Things not in Scope
* Deleting Markers
* Listing Markers
* I can write comments on the methods but not sure I have the write (😉) skills to write public documentation.
username_1: Have `Marker#send` return a `Future`.
We should ask about Java 7 support - if it were possible to target Java 8, a `CompletableFuture` would be more apt. There's no need for a `MarkerResponse`. The `Future` will resolve to the updated `Marker`, or it will contain an exception. There's then no problem wrt threading, as there's no `Callable`.
username_0: @username_1 thanks for the quick response. I do agree having the ability to return a `CompletableFuture<Marker>` would be desirable, but I'm not entirely sure if Honeycomb wants to drop Java 7 support (nor do I know who to ask). Returning a `Future<Marker>` means the callers would have to block a thread to get the results. I assume most would use a threadpool to retrieve the Future's value, so why not implement it on their behalf? Alternatively, given that `Future#get` could block, why not just have `Marker#send` be a blocking method?
username_1: Hopefully someone from HC will chime in on Java 7 compatibility. It may be the case that there was no strong need to use any Java 8 features, thus using 7. But given that it's almost 2021, I'd drop it if it were my call 😀 Yes, one would need to use a thread pool to poll for the result of the future. However, that'd only need to be done in the case where someone does want to get a handle to the marker (your original 2nd case).
username_0: Ah, fair enough, I think this is reasonable for V1. Thanks, 👍 I'll try to get a PR by EOW.
username_2: Weirdly, I am not able to set the @honeycombio/integrations-team as the assignee to get it on our things to look into / watch. Thanks for the contribution @username_0 - we'll try to get you some feedback soon.
username_3: Hi, @username_0 thanks for opening the issue! We've discussed this internally, and it's something we may add one day, but aren't planning to support in the libhoneys at the moment. We try to make sure that the SDKs follow our [SDK specification](https://docs.honeycomb.io/api/sdk-spec/), which at the moment does not consider any resources other than events. Appreciate the support for the idea, but please don't spend time on a PR at this time. Thanks!
Status: Issue closed
username_0: ah ok, thx @username_3 for the verification. I spent the morning looking at how the library was structured and introducing support for `Marker` is definitely non-trivial. Glad you reached out before I had a PR ready.
wallabyjs/public
278810603
Title: CoffeeScript 2 support Question: username_0: ### Issue description or question Is CoffeeScript 2 supported? Either way how would I go about getting it to work with Wallaby and Jest? With the configuration presented below, I get: ``` .\src\func.test.js:2 import { ^^^^^^ SyntaxError: Unexpected token import ``` The test file contains the now-supported `import` statement at the top: ```coffee import { x, y, z. } from './foo' ``` ### Wallaby.js configuration file ```javascript module.exports = (wallaby) => ({ files: [ 'package.json', 'src/**/*.*', '!src/**/*.test.coffee', ], tests: [ 'src/**/*.test.coffee', ], env: { type: 'node', }, testFramework: 'jest', debug: true, }); ``` Additional, Jest config: ```json { "verbose": true, "testMatch": [ "**/*.test.coffee" ], "moduleFileExtensions": [ "coffee.md", "coffee", "litcoffee", "js" ], "transform": { "\\.coffee$": "<rootDir>/infra/coffee-processor.js", "\\.litcoffee$": "<rootDir>/infra/coffee-processor.js", [Truncated] exports.process = (src, path) => { return CoffeeScript.compile(src, { bare: true, inlineMap: true, filename: path, literate: helpers.isLiterate(path), transpile: { presets: ['env'], }, }) } ``` ### Code editor or IDE name and version Atom v1.22.1 x64 ### OS name and version Windows 10 64-bit Status: Issue closed Answers: username_1: It should be supported. By default Wallaby CoffeeScript compiler doesn't pass the `transpile` option, so CoffeeScript compiler emits ES6 module imports/exports as is (that are not supported by node, hence the error). You may either [configure babel preprocessor on top of the CoffeeScript compiler](https://wallabyjs.com/docs/integration/coffeescript.html#using-with-babel), or pass the `transpile` option to the [Wallaby CoffeeScript compiler](https://wallabyjs.com/docs/integration/coffeescript.html): ```diff module.exports = (wallaby) => ({ files: [ 'package.json', 'src/**/*.*', '!src/**/*.test.coffee', ], tests: [ 'src/**/*.test.coffee', ], + compilers: { + '**/*.coffee': wallaby.compilers.coffeeScript({ + transpile: { + presets: ['env'], + } + }) + }, env: { type: 'node', }, testFramework: 'jest', debug: true, }); ``` username_0: Adding the `compiler` option worked, thanks! Now I have module resolution issue: ``` Error: Cannot find module './func' from 'func.test.js' at Object.<anonymous> src/func.test.coffee:1:0x ``` I have `func.test.coffee` sitting in the same directory as `func.coffee.md`. In Jest config, I have: ``` "moduleFileExtensions": [ "coffee.md", "coffee", "litcoffee", "js" ], ``` Any idea how I can fix this? username_1: Yep, by default Wallaby CoffeeScript compiler changes compiled file extension to `.js`, and it is ok for cases like `foo.coffee`, that becomes `foo.js`. But `foo.coffee.md` becomes `foo.coffee.js` and because `coffee.js` is not in the `moduleFileExtensions`, `foo` is not found. The easiest way to make it work is to pass the `noFileRename` option to Wallaby CoffeeScript compiler. Note that I have also extended the compiler pattern (`**/*.?(lit)coffee?(.md)`) to include `.litcoffee` and `coffee.md`. 
```diff
module.exports = (wallaby) => ({
  files: [
    'package.json',
    'src/**/*.*',
    '!src/**/*.test.coffee',
  ],

  tests: [
    'src/**/*.test.coffee',
  ],

+ compilers: {
+   '**/*.?(lit)coffee?(.md)': wallaby.compilers.coffeeScript({
+     transpile: {
+       presets: ['env'],
+     },
+     noFileRename: true
+   })
+ },

  env: {
    type: 'node',
  },

  testFramework: 'jest',

  debug: true,
});
```
username_0: I've added the `noFileRename` flag to the compiler options, but that caused the compiler to complain about `var` not being a keyword. I assume it tried to compile the file twice?
```
.\src\contexts.test.coffee:3:1: error: reserved word 'var'
var _contexts = ($_$w(3, 0), require('./contexts'));
^^^
```
I added this to the wallaby configuration, and that seems to work:
```
setup: w => {
  const jestCfg = require('./package.json').jest
  jestCfg.moduleFileExtensions.push('coffee.js')
  w.testFramework.configure(jestCfg)
},
```
I suppose switching to the `.litcoffee` extension would probably work even better.
username_1: Yes, it is a second Jest compilation on top of the Wallaby one. This should work:
```
setup: w => {
  const jestCfg = require('./package.json').jest
  delete jestCfg.transform; // <--
  w.testFramework.configure(jestCfg)
},
```
username_0: Good to know. Do you plan on setting up a cookbook for edge cases like this one?
username_1: We are handling the case automatically for TypeScript, and should probably do the same for CoffeeScript. It's just that you're the first one to hit it, very few of our users are using CoffeeScript these days.
flathub/org.gnucash.GnuCash
733968098
Title: AqBanking assistant for HBCI doesn't work
Question: username_0: I have installed GnuCash from Flathub on my Linux Mint 18.3 Sylvia 64-bit (MATE 1.18.0) system. I tried to set up an HBCI connection to my bank account. I go to Tools ("Werkzeuge") - Set up online banking ("Online Banking einrichten") and click continue. Then a button "Start AqBanking setup wizard" ("AqBanking Einrichtungs-Assistenten starten") appears. I click on this button and nothing happens. (The text in brackets is the original string in my German installation.) If I start GnuCash from the shell I get this output when doing the steps above:
```
flatpak run org.gnucash.GnuCash --logto stdout
Finance::Quote Version 1.49 wurde gefunden.
* 13:29:02 ERROR <gnc.import.aqbanking> gnc_AB_BANKING_new: assertion 'AB_Banking_Init(api) == 0' failed
* 13:29:09 ERROR <gnc.import.aqbanking> aai_wizard_page_prepare: assertion 'info->api' failed
* 13:29:13 ERROR <gnc.import.aqbanking> aai_wizard_button_clicked_cb: assertion 'banking' failed
```
It looks like something of AqBanking is missing, but I can't find a package for this on Flathub. I have installed the "Aqbanking-tools" package from the native Linux Mint repo, but the problem still appears. I think there is a bug in the Flathub package, isn't there?
Answers: username_1: No, the bundle contains the aqbanking and gwenhyfar packages.
- Which versions are you currently using? See https://wiki.gnucash.org/wiki/De/Feedback#Verwendete_Versionen or https://wiki.gnucash.org/wiki/AqBanking#Determinating_the_Versions
username_0: These are the outputs of the version commands on my system:
```
flatpak run --command=sh org.gnucash.GnuCash
sh-5.0$ gnucash --version
GnuCash 4.2
Build ID: Flathub 4.2
sh-5.0$ aqbanking-cli versions
Versions:
 AqBanking-CLI: 6.2.1
 Gwenhywfar : 5.3.0.0
 AqBanking : 6.2.1.0
```
username_1: Shortly after our last release there have been several updates in the aqbanking project. Can you test one of our [nightlies](https://code.gnucash.org/builds/flatpak/maint/?C=M;O=D) >= 2020-10-16?
username_0: I uninstalled the normal version and installed the latest nightly build.
`sudo flatpak install gnucash-maint-C4.2-123-gafcf1765f-D4.2.flatpakref`
The error still appears. The log output looks quite the same.
```
(flatpak run org.gnucash.GnuCash --logto stdout
Diese Version befindet sich noch in Entwicklung.
Sie kann funktionieren, muss aber nicht.
Fehler und andere Probleme werden auf <EMAIL> diskutiert.
Fehlerberichte können hier eingesehen und erstellt werden:
https://bugs.gnucash.org
Um die letzte stabile Version zu finden, sehen Sie bitte hier nach:
https://www.gnucash.org/
Finance::Quote Version 1.49 wurde gefunden.
* 11:22:38 ERROR <gnc.import.aqbanking> gnc_AB_BANKING_new: assertion 'AB_Banking_Init(api) == 0' failed
* 11:22:39 ERROR <gnc.import.aqbanking> aai_wizard_page_prepare: assertion 'info->api' failed
* 11:22:40 ERROR <gnc.import.aqbanking> aai_wizard_button_clicked_cb: assertion 'banking' failed)
```
The version commands output:
```
flatpak run --command=sh org.gnucash.GnuCash
sh-5.0$ gnucash --version
GnuCash 4.2 development version
Build ID: git 4.2-123-gafcf1765f+(2020-11-05)
sh-5.0$ aqbanking-cli versions
Versions:
 AqBanking-CLI: 6.2.5
 Gwenhywfar : 5.4.1.0
 AqBanking : 6.2.5.0
```
username_0: Thank you for your adivce. Now I have cleared the .aqbanking folder on my system and it works. ``` flatpak run org.gnucash.GnuCash --logto stdout Diese Version befindet sich noch in Entwicklung. Sie kann funktionieren, muss aber nicht. Fehler und andere Probleme werden auf <EMAIL> diskutiert. Fehlerberichte können hier eingesehen und erstellt werden: https://bugs.gnucash.org Um die letzte stabile Version zu finden, sehen Sie bitte hier nach: https://www.gnucash.org/ Finance::Quote Version 1.49 wurde gefunden. * 10:29:23 ERROR <aqbanking> banking_update.c: 610: No AqBanking config folder found at [/home/michael/.aqbanking/settings6/users] (-1) * 10:29:23 ERROR <aqbanking> banking_update.c: 610: No AqBanking config folder found at [/home/michael/.aqbanking/settings/users] (-1) * 10:29:23 ERROR <aqbanking> banking_update.c: 411: There is no old settings folder, need initial setup * 10:29:41 ERROR <gwenhywfar> pathmanager.c: 83: Path "aqhbci/xmldatadir" already exists ``` Status: Issue closed
Tehnix/ide-haskell-hie
343322773
Title: Make error/warning pop-up dialogue copyable
Question: username_0: When you hover over an error or hint you can select the text but not copy it. It would be useful if copying could be enabled.
Answers: username_1: UI things all come from [atom-ide-ui](https://github.com/facebook-atom/atom-ide-ui). I think these issues might be related:
- https://github.com/facebook-atom/atom-ide-ui/issues/283
- https://github.com/facebook-atom/atom-ide-ui/issues/182
- https://github.com/facebook-atom/atom-ide-ui/issues/81
Although not much feedback on them :/
NOTE: I know this seems like I'm just throwing the problem to someone else, even though your feature requests are great! :) But the thing is, with the LSP model, responsibilities are sorta spread out a couple of places, to avoid duplication. Namely: UI is the editor's LSP client package, features are the language's LSP server package, and this extension is really just instructions to Atom on how to launch the LSP server.
username_0: OK, in that case I will close both tomorrow unless for some reason you want them open? (e.g. so someone else doesn't log the same thing)
username_1: I think it's fine to close them, people can still search the closed ones :)
Status: Issue closed
juanjoDiaz/removeNPMAbsolutePaths
452487217
Title: 1.0.5 does not work on old Node versions
Question: username_0: Hello username_1, I ran into a syntax error with version 1.0.5 on an old production Node version. I expect it is the same issue as reported earlier for 1.0.0: a comma at the end of a function call. Version 1.0.4 works fine for me on old Node. Can you remove the comma at the end of line 71 in removeNPMAbsolutePaths.js?
Answers: username_1: Fixed in 1.0.6. However, Node 6 reached its end of life on April 30, 2019. So it's not advisable to continue using it.
Status: Issue closed
amzn/amazon-payments-magento-2-plugin
253288332
Title: errors AmountNotSet and PaymentPlanNotSet on checkout when customer logged in
Question: username_0:
#### What I expected
To see shipping methods on checkout. A logged-in customer should be able to proceed with Amazon checkout.
#### What happened instead
When using Amazon checkout, an endless loading wheel is shown on the "Shipping Methods" section of the one-page checkout.
#### Steps to reproduce the issue
1) log in to magento store
2) place product to cart
3) proceed to checkout (in any way)
4) press "amazon checkout button"
5) login to amazon account in pop up window
6) select shipping address
7) forever loading wheel on "Shipping Address" section
#### Your setup
* Magento version: 2.1.8
* Magento Edition: Community
#### paywithamazon.log details
```
[2017-08-28 10:09:34] main.DEBUG: POST mws-eu.amazonservices.com /OffAmazonPayments/2013-01-01 AWSAccessKeyId=XXXXX&Action=GetOrderReferenceDetails&AddressConsentToken=<KEY>&AmazonOrderReferenceId=P02-XXXXX-7000685&SellerId=XXXXXX&SignatureMethod=HmacSHA256&SignatureVersion=2&Timestamp=2017-08-28T10%3A09%3A34.000Z&Version=2013-01-01 {"is_exception":false} []
[2017-08-28 10:09:35] main.DEBUG: / (Language=PHP/7.0.21; Platform=Linux/x86_64/3.10.0-514.26.2.el7.x86_64; MWSClientVersion=2.1.0) {"is_exception":false} []
[2017-08-28 10:09:35] main.DEBUG: <GetOrderReferenceDetailsResponse xmlns="http://mws.amazonservices.com/schema/OffAmazonPayments/2013-01-01">
<GetOrderReferenceDetailsResult>
<OrderReferenceDetails>
<OrderReferenceStatus>
<State>Draft</State>
</OrderReferenceStatus>
<OrderLanguage>en-GB</OrderLanguage>
<Destination>
<DestinationType>Physical</DestinationType>
<PhysicalDestination>
REMOVED
</PhysicalDestination>
</Destination>
<ExpirationTimestamp>2018-02-24T10:09:34.717Z</ExpirationTimestamp>
<IdList/>
<Constraints>
<Constraint>
<ConstraintID>AmountNotSet</ConstraintID>
<Description>The seller has not set the amount for the Order Reference.</Description>
</Constraint>
[Truncated]
</Constraint>
<Constraint>
<ConstraintID>PaymentPlanNotSet</ConstraintID>
<Description>The buyer has not been able to select a Payment method for the given Order Reference.</Description>
</Constraint>
</Constraints>
<SellerOrderAttributes/>
<Buyer>
REMOVED
</Buyer>
<ReleaseEnvironment>Live</ReleaseEnvironment>
<AmazonOrderReferenceId>P02-1528616-7000685</AmazonOrderReferenceId>
<CreationTimestamp>2017-08-28T10:09:34.717Z</CreationTimestamp>
<RequestPaymentAuthorization>false</RequestPaymentAuthorization>
</OrderReferenceDetails>
</GetOrderReferenceDetailsResult>
<ResponseMetadata>
<RequestId>cd43e71c-ff9f-4ec6-a5c2-4d10c658e593</RequestId>
</ResponseMetadata>
</GetOrderReferenceDetailsResponse> {"is_exception":false} []
```
Answers: username_0: With a bit of digging I see the following:
1) the loading wheel is set with [`amazonStorage.isShippingMethodsLoading(true);` in `Amazon_Payment/js/view/checkout-widget-address.js` on line
94](https://github.com/amzn/amazon-payments-magento-2-plugin/blob/eb6135ae7fa1fb2428581b32bfc30e75387c026e/src/Payment/view/frontend/web/js/view/checkout-widget-address.js#L94), and the `getShippingAddressFromAmazon` method is called each time the Amazon address changes in the widget
2) the loading wheel goes off on an address error
3) when the address is successfully loaded, the wheel doesn't go to the off state anywhere inside the widget code; just [`checkoutDataResolver.resolveShippingAddress();` is called in `Amazon_Payment/js/view/checkout-widget-address.js` on line 118](https://github.com/amzn/amazon-payments-magento-2-plugin/blob/eb6135ae7fa1fb2428581b32bfc30e75387c026e/src/Payment/view/frontend/web/js/view/checkout-widget-address.js#L118)
4) because the `shippingAddress` variable is already set from the logged-in customer object on page load, before the Amazon scripts, there is no further action inside [`Magento_Checkout/js/model/checkout-data-resolver.js` on line 99](https://github.com/magento/magento2/blob/99e85cbc45223baa3551e4c534b650c0d2c6358b/app/code/Magento/Checkout/view/frontend/web/js/model/checkout-data-resolver.js#L99)
So it seems like the address endpoint in shipping-rate-processor's getRates method should be called each time the Amazon address changes, or maybe `amazonStorage.isShippingMethodsLoading(false);` should be done after `checkoutDataResolver.resolveShippingAddress();` in `Amazon_Payment/js/view/checkout-widget-address.js`
username_1: Hi @username_0, please take a look at https://github.com/amzn/amazon-payments-magento-2-plugin/issues/88. It seems it would help.
Status: Issue closed
username_2: Closing this as a duplicate of #88, thanks @username_1
Azure/azure-libraries-for-java
275457636
Title: Plan to publish the latest version on Maven?
Question: username_0: Currently, we have very old versions of 'azure-mgmt-datalake-store' on Maven. Is there a plan to publish the latest versions there?
Status: Issue closed
Answers: username_1: Shipped recently to maven - http://search.maven.org/#search%7Cga%7C1%7Ca%3A%22azure-mgmt-datalake-store%22
username_0: This shows the latest version is 1.0.0-beta1.4. But we are on 1.6 now. Looks like a very old version got published.
username_2: @username_0 This is most likely because ADLA and ADLS are not in sync with the rest of the libraries yet, as they do not have fluent support. The plan is to integrate fluent soon.
opensearch-project/OpenSearch-Dashboards
1096789980
Title: [BUG] Can't see any field in Discover page: just the "_source" one
Question: username_0: **Describe the bug**
I've correctly filled an OpenSearch index using the bulk API. This index has a timestamp field which is correctly recognized when creating the index pattern in Dashboards. But if I select this field as the time reference, on the Discover page I can only see the "_source" field below the "Selected fields" section and nothing below "Available fields".
The thing is that if I don't select any field as the time reference when creating the index pattern, then on the Discover page all the fields appear correctly, but of course I don't have any time reference, as my timestamp field is only "one more field" like the others, so I can't get any visual representation.
I don't think it is a Dashboards bug... surely I'm doing something wrong, but if so, there should be a way to get things done in an easier manner. In summary: my index has all the fields inside the _source field (including the timestamp) but I can't access them from Discover and I don't know why.
Thanks a lot
Answers: username_0: I mean... why, if I have stored documents like this in my OpenSearch server...
![Captura de pantalla de 2022-01-11 12-20-04](https://user-images.githubusercontent.com/7274874/148933875-cf184f2d-e42a-48c8-bf7d-dc19f952055b.png)
...do I just get this screen in OpenSearch Dashboards after specifying an index pattern where the "timestamp" field is chosen as the time reference?
![Captura de pantalla de 2022-01-11 12-20-49](https://user-images.githubusercontent.com/7274874/148933902-532e3b60-88b1-4b5b-9067-7b2bfd158335.png)
Thanks!
username_0: Just for reference: https://discuss.opendistrocommunity.dev/t/cant-see-any-field-in-discover-page-just-the-source-one/8284/3?u=username_0
Status: Issue closed
username_0: Well, the error was having arbitrary timestamps too far in the past or the future. It caused Dashboards not to show any field under "Available fields" on the Discover page. This behaviour is misleading because at first it seems like an indexing problem, but I recognize it was my fault. Thanks anyway.
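Since the root cause turned out to be out-of-range timestamps, a quick way to sanity-check documents before bulk indexing is to verify the epoch unit. A minimal Python sketch, assuming the index mapping expects `epoch_millis` (a common failure mode is writing epoch seconds, which Dashboards then reads as milliseconds near 1970):

```python
import time

def check_timestamp_millis(ts):
    """Reject values that look like epoch seconds (or garbage) when the
    mapping expects epoch milliseconds."""
    now_ms = int(time.time() * 1000)
    ten_years_ms = 10 * 365 * 24 * 3600 * 1000
    if abs(ts - now_ms) > ten_years_ms:
        raise ValueError(
            f"timestamp {ts} is more than 10 years from now; "
            "was it written in seconds instead of milliseconds?"
        )
    return ts

check_timestamp_millis(int(time.time() * 1000))  # ok
# check_timestamp_millis(int(time.time()))       # raises: looks like seconds
```

Running a check like this in the bulk-ingest path surfaces the unit mistake at write time instead of as a confusing empty field list in Discover.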
terraform-google-modules/terraform-google-datalab
512150774
Title: Allow for additional custom startup script.
Question: username_0: We need the ability to deploy a security agent for monitoring the Datalab VMs. Can there be a way to specify a custom https://github.com/terraform-google-modules/terraform-google-datalab/blob/master/modules/template_files/templates/startup_script.tpl or an additional script when the instance is created?
Answers: username_1: Would you prefer a totally custom script or simply appending an additional custom startup script?
username_0: Appending would probably be better - that way we can always pick up the changes you make to the default startup script.
username_1: Ok, we should add a variable which allows for appending a custom script.
Status: Issue closed
Azure/azure-sdk-for-python
810342466
Title: [formrecognizer] adjust TextAppearance to work with service changes
Question: username_0: https://github.com/Azure/azure-rest-api-specs/blob/65f9a3d3cf22715604dc46a18ee5d88ec58bb413/specification/cognitiveservices/data-plane/FormRecognizer/preview/v2.1-preview.3/FormRecognizer.json#L2239-L2266
There is no longer a Style type; the properties for style/confidence are directly on the Appearance model. Samples and tests will need updates as well.
Answers: username_0: this swagger change was unintentional so this issue can be closed
Status: Issue closed
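As an illustration of the shape change being discussed (a hypothetical Python sketch of the data shapes involved, not the SDK's actual classes): the nested style object is flattened onto the appearance model itself.

```python
from dataclasses import dataclass

# Before: style lived on a nested object.
@dataclass
class TextStyleOld:
    name: str
    confidence: float

@dataclass
class TextAppearanceOld:
    style: TextStyleOld

# After the service change: the properties sit directly on the appearance.
@dataclass
class TextAppearanceNew:
    style_name: str
    style_confidence: float

old = TextAppearanceOld(style=TextStyleOld(name="handwriting", confidence=0.9))
new = TextAppearanceNew(style_name=old.style.name,
                        style_confidence=old.style.confidence)
```

Flattening like this is a breaking change for any caller that accessed the nested `style` attribute, which is why the issue flags samples and tests for updates.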
Simon1PL/FortranProject2
458256672
Title: 1 Question: username_0: ![1](https://user-images.githubusercontent.com/44688394/59806193-d9c84600-92f3-11e9-9adc-d4a8d8042ab9.png) Answers: username_0: ![2](https://user-images.githubusercontent.com/44688394/59806229-f95f6e80-92f3-11e9-922a-f3a6ffe5b4aa.png) username_0: ![3](https://user-images.githubusercontent.com/44688394/59806243-03816d00-92f4-11e9-91c7-cfa00df5a968.png) username_0: ![4](https://user-images.githubusercontent.com/44688394/59806250-0bd9a800-92f4-11e9-8b7e-33afcafb4c81.png) username_0: ![5](https://user-images.githubusercontent.com/44688394/59806260-13994c80-92f4-11e9-9719-120a28fceac0.png) username_0: ![6](https://user-images.githubusercontent.com/44688394/59806269-1c8a1e00-92f4-11e9-99a0-5f8bfecc75d8.png) username_0: ![7](https://user-images.githubusercontent.com/44688394/59806283-23189580-92f4-11e9-9d28-9db716c0f8ac.png)
SuperNETorg/Iguana-application
203233040
Title: Wrong automatic passphrase entry Question: username_0: **Wrong:** Given I confirmed that I had saved a passphrase When I am redirected to passphrase confirmation page Then I see the passphrase is entered And the next (create) button is enabled ![wrong automatic passphrase entry](https://cloud.githubusercontent.com/assets/16269033/22310830/96cb6254-e361-11e6-92a1-dfcdabb402ae.png) **Right** Given I am redirected to passphrase confirmation page Then I see the passphrase field empty And the next (create) button is disabled ![reating_a_new_wallet_01_6_web](https://cloud.githubusercontent.com/assets/16269033/22310499/8b0ed05a-e360-11e6-96b0-d1c608be978d.png) Answers: username_1: @username_0 Done, please check username_2: closing, verified as fixed Status: Issue closed
vuejs/vetur
282336003
Title: commit <PASSWORD> BUG
Question: username_0:
- [ ] I have searched through existing issues
- [ ] I have read through [docs](https://vuejs.github.io/vetur)
- [ ] I have read [FAQ](https://github.com/vuejs/vetur/blob/master/docs/FAQ.md)
## Info
- Platform:
- Vetur version: 0.11.4
- VS Code version: 1.19.0 x64
## Problem
The lint tool has some problem:
http://omy0lm6ox.bkt.clouddn.com/QQ20171215-145115.png
http://omy0lm6ox.bkt.clouddn.com/QQ20171215-145133.png
google/knative-gcp
614865099
Title: Provide a troubleshooting doc for workload identity
Question: username_0: **Problem**
When reconciling Workload Identity and hitting errors, users should be able to refer to a link which provides explanations and solutions for most of the errors.
**Exit Criteria**
Having a troubleshooting doc for workload identity
Answers: username_1: P1 (R1): External documentation. Add to preview guide and work with tech writers to have public documentation. P1 (R1): TSE doc. P2 (R1): Link in status message.
username_0: This one should be split into two
- [ ] non-default user experience troubleshooting (can start now)
- [ ] default user experience troubleshooting (need a decision for that)
username_2: First part is 1-2 days, second part is more complicated.
username_0: troubleshooting is in the Knative-GCP repo, will open a separate issue for public documentation with a tech writer
Status: Issue closed
wallabag/wallabag
173638159
Title: Update js & css assets before 2.1
Question: username_0: There have been some changes in the css and js files lately. These have to be changed in 2.1 too.
Answers: username_1: As far as I know, I fixed most of them when fighting against the bad rebase on the 2.1 branch. But yeah, could be a good idea to check before the release.
username_0: Also, add linters to the build assets travis task (and Grunt watch).
username_1: There is still (from https://github.com/wallabag/wallabag/pull/2314#issuecomment-250390210)
- [ ] Fix for twitter icons becoming facebook one (WTF ?)
Status: Issue closed
username_2: Done 👍 Good job with the front part, @username_0 <3
Status: Issue closed
username_1: Fixed here https://github.com/wallabag/wallabag/pull/2353
mfdteam/MagicArena
128005204
Title: Replace the DD effect for the Swordsman
Question: username_0: Instead of spawning 2 cops, make it so that under DD a cop kills the target with 1 hit
Answers: username_0: By the way, if we implement this, we will place a tile with the DD effect under each OnePunchMan-style cop
username_0: Implemented differently, but DD is done
Status: Issue closed
charleso/haskell-in-haste
106524805
Title: Bot: Calculator Question: username_0: See the previous Scala course for inspiration: https://github.com/Svetixbot/calculator Status: Issue closed Answers: username_0: https://github.com/username_0/haskell-in-haste/commit/abfb197a3eb6460c0f914a9745fdfc1d6ae775fb Have added, but this one would be very hard for anyone who hasn't done a little Haskell before
Clinical-Genomics/cg
938788018
Title: Add BALSAMIC's UMI validation cases to no-compression list
Question: username_0: **What:** There are SeraCare samples for validation (`equalbug`, `stableraven`, `sunnyiguana`) that should not be compressed, and we can use these for validating the UMI workflow.
**How:** Add these to the list of cases to ignore compression. Here is a similar PR: https://github.com/Clinical-Genomics/cg/pull/1169
Answers: username_1: Done in PR https://github.com/Clinical-Genomics/cg/pull/1233
Status: Issue closed
solgenomics/sgn
439734734
Title: on protocol detail page, the markers table should be a server side request
Question: username_0:
Expected Behavior
--------------------------------------------------------------------------
The protocol detail page crashes (browser memory exceeded) when too many markers are being populated into the markers datatable. The marker data table should be populated by a server-side AJAX request.
knime-ip/knip
212734020
Title: Splitter loses offsets
Question: username_0: The _Splitter_ node forgets about image offset metadata in the result. To reproduce, simply feed it an image that has known offsets, and look at the split images. (Tested with an XYZChannel input and split by Channel, i.e. XYZ selected.)
Status: Issue closed
Answers: username_1: Hi @username_0, will do today. However, the fix is not really correct. I'm working on a solution.
username_1: build is running
associatedpress/datakit-data
206042025
Title: Implement a project initialization command
Question: username_0: Create a `data:init` command that does the following:
- [ ] Checks if the project is a git repo (to help ensure proper usage inside a data project)
- [ ] Creates a `config/datakit-data.json` config file with `s3_bucket`, `s3_path` (default = `YYYY/<dirname>`) and `user_profile` (default = `default`)
- [ ] Adds the `data/` dir to *.gitignore*, if present
Answers: username_0: Initial implementation is complete. Decided against mucking about with version control management (i.e. adding the `data/` dir to `.gitignore`) to reduce complexity. Also, users may disagree with our opinion that data doesn't belong under version control (perhaps they want both VCS *and* S3 backups). In any event, it's simpler to be agnostic on this front and instead manage `.gitignore` using Cookiecutter templates such as [cookiecutter-r-project](https://github.com/associatedpress/cookiecutter-r-project)
Status: Issue closed
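A minimal Python sketch of the initialization logic described by the checklist above (paths and defaults come from the issue; the function name and error handling are illustrative, and the `.gitignore` step is omitted to match the follow-up comment):

```python
import datetime
import json
import os

def data_init(project_dir="."):
    """Initialize a data project: require a git repo, write the config file."""
    # Help ensure the command is run inside a data project under git.
    if not os.path.isdir(os.path.join(project_dir, ".git")):
        raise SystemExit("Not a git repository; run inside a data project.")

    dirname = os.path.basename(os.path.abspath(project_dir))
    config = {
        "s3_bucket": "",
        "s3_path": f"{datetime.date.today().year}/{dirname}",  # YYYY/<dirname>
        "user_profile": "default",
    }
    os.makedirs(os.path.join(project_dir, "config"), exist_ok=True)
    with open(os.path.join(project_dir, "config", "datakit-data.json"), "w") as f:
        json.dump(config, f, indent=2)
```

Keeping the defaults in one dict makes it easy for a later `data:push` command to read the same file and fill in only what the user overrode.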
TransbankDevelopers/transbank-plugin-woocommerce-onepay
360128480
Title: APIKey and Shared Secret
Question: username_0: When making a payment with Onepay in WooCommerce, it shows us the message "Operation cancelled". Could it be because of the APIKey and Shared Secret, and if so, is it possible to send us the correct ones?
Answers: username_1: Hi @username_0, in the future it will not be necessary to enter an APIKey and Shared Secret to run tests in the integration environment. For now, use the test credentials found in the installation manual, specifically at https://github.com/TransbankDevelopers/transbank-plugin-woocommerce-onepay/blob/master/docs/INSTALLATION.md#credenciales-de-prueba
Status: Issue closed
username_1: @username_0 I will close this issue, assuming that my previous answer resolved your question. If you need help, comment again.
OpenMobileAlliance/OMA_LwM2M_for_Developers
189761421
Title: Add unsolicited notification
Question: username_0: There are currently some issues with observation in LWM2M.
1) We face several issues with observation on DTLS in NAT environments. #161 #137 #115 ==> this sounds not completely addressed.
2) We also raised a problem when observing large objects #76 ==> this will not be addressed.
It also seems to me that CoAP observation was mainly for live synchronization of a client resource with the server (like telemetry).
We (at Sierra Wireless) face use cases where there is no need to start or stop observation; the device just needs to push a resource to the server with long silent periods.
Should we imagine a kind of "unsolicited notification"?
I think @username_3 proposed this kind of idea by sending a CoAP POST on rd/registerid with the resource payload and the corresponding resource path. This kind of solution is not impacted by the issues raised above.
WDYT?
Answers: username_1: Yes, a very much needed capability for wireless. Is there really no way to do this with the present standard implementation? That would be a gross oversight if not possible.
username_2: The idea with the observe mechanism is to have a good synchronization between the devices. Usually, in a wireless network, we don't want to flood the air with unsolicited messages. There could be some messages that are of interest for a specific server, but in this case, using the right values for min/max period, we can simulate an unsolicited notification.
username_0: I mean that observation is technically complex. I pointed to technical issues above. The idea of proposing "unsolicited notification" is to avoid this complexity for use cases where it is in practice totally useless. Using observation to simulate "unsolicited notification" does not help at all.
username_3: Therefore my idea was to offer an additional mapping for the "reporting interface" using a device request for a "Notify" (in the meaning of LWM2M 5.5.2), where that "notify request" could provide the long-term identity used by LWM2M, the registration location. The other definitions of LWM2M 5.5 can still be in place, but must also be mapped alternatively. The main difference regarding traffic would be that RFC7641 may use one NON message (if this really matches your requirement), but the device request message would always be answered by a response (or ack) message (as a CON notify would also be).
username_3: "new resource state" not "list of changes"
Status: Issue closed
username_4: @username_0, you are asking for a new feature in LwM2M. As an OMA member please contribute text for your "unsolicited notification" to the work on LwM2M v1.1.
username_5: I am trying to implement observe and notify in my LwM2M Wakaama-based client in C with a Leshan server. The server is not getting an automatic observe, and on the client side, for example, the temperature object data is changing but the client is not pushing the value towards the server. What should be added on the client side so that this will work properly?
username_3: What do you mean by "auto observe"? Are you sure your idea complies with the LWM2M TS? This issue was about changing/extending LWM2M to overcome technical issues of long-term observations using RFC7641. Though the wording in your comment seems to use RFC7641, there is simply no such thing as an "auto observe"! You must explicitly establish an observe relation by sending an observe request to the LWM2M client (coap-server), and then the LWM2M client will notify you by sending a notification.
But if your client address changes, or your DTLS session gets "renewed", then your notify will be ignored. There are workarounds, e.g. reestablishing the observe relation frequently, but this requires that you know how frequently your application needs such a reestablished observation. username_0: Just in case someone finds this issue now: this feature has been part of LWM2M since v1.1, called the `Send Operation`; see [core§6.4.6](http://www.openmobilealliance.org/release/LightweightM2M/V1_1_1-20190617-A/HTML-Version/OMA-TS-LightweightM2M_Core-V1_1_1-20190617-A.html#6-4-6-0-646-Send-Operation)
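For anyone landing here from that last comment: per the LwM2M 1.1 transport binding, the `Send Operation` is an ordinary CoAP POST from the client to the server's `/dp` path carrying a SenML payload, so no observe relation (and none of the NAT/DTLS fragility discussed above) is involved. The following minimal sketch builds such a message as raw RFC 7252 bytes; the function name `build_send_message`, the omitted token handling, and the unchecked buffer are illustrative assumptions, not code from any LwM2M SDK.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Sketch of an LwM2M 1.1 "Send" message: a confirmable CoAP POST to /dp
 * with a SenML JSON payload. Token handling, retransmission, DTLS, and
 * buffer bounds checks are omitted; buf is assumed large enough. */
size_t build_send_message(uint8_t *buf, uint16_t mid,
                          const uint8_t *payload, size_t payload_len)
{
    size_t i = 0;
    buf[i++] = 0x40;                 /* ver=1, type=CON, token length=0 */
    buf[i++] = 0x02;                 /* code 0.02 = POST */
    buf[i++] = (uint8_t)(mid >> 8);  /* message ID, big-endian */
    buf[i++] = (uint8_t)(mid & 0xFF);
    buf[i++] = 0xB2;                 /* option 11 (Uri-Path), length 2 */
    buf[i++] = 'd';
    buf[i++] = 'p';
    buf[i++] = 0x11;                 /* option 12 (Content-Format), delta 1, length 1 */
    buf[i++] = 110;                  /* application/senml+json (RFC 8428) */
    buf[i++] = 0xFF;                 /* payload marker */
    memcpy(&buf[i], payload, payload_len);
    return i + payload_len;
}
```

A hypothetical payload such as `[{"n":"/3303/0/5700","v":21.5}]` would report a temperature resource by its LwM2M path. Since this is an ordinary client request, the server's response confirms delivery even after a NAT rebinding, which is exactly where long-lived notifications fail.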
MicrosoftDocs/edge-developer
548478811
Title: How can I delete/unpublish an extension? Question: username_0: I pressed the "Create new extension" button, but nothing happened, so I pressed it twice more. I now have 3 extensions with the same name Product_xxxxxxxx. I was able to get one of them to the Submission In Review status, so that one refreshed its name to the correct extension name. I also tried to see whether I can remove an extension after uploading the zip, but that was not possible. So my dashboard looks like this now: ![2020-01-11 21_50_12-Partner Center](https://user-images.githubusercontent.com/31220528/72210473-6323ad80-34bc-11ea-8bdc-d68aeae26651.png) Status: Issue closed Answers: username_0: I've figured out a workaround. 1. First, successfully upload a zip file in the first step. 2. Then go to the next step. 3. Then go back to the overview of all your extensions. 4. Then click on the extension that is in "draft" status. 5. Only then is there a "Discard draft" button that you can use to remove the extension completely.
aprell/tasking-2.0
531616970
Title: Backoff: the "on behalf" functions seem to miss half of the worker tree. Question: username_0: It seems to me like the condition variable backoff strategy has a bug when peeking, counting, or receiving recursively from the child steal request channels: https://github.com/username_1/tasking-2.0/blob/303ce3e09a7c118241b9d5482219e3e3b848212e/src/runtime.c#L541-L630

If we focus on peeking
```
static inline bool PEEK_REQ(int n)
// requires -1 <= n < num_workers
{
	bool ret = false;

	// Valid worker ID?
	if (n == -1 || n >= num_workers) return ret;

	// Peek at steal requests on behalf of worker n
	ret = channel_peek(chan_requests[n]);

#if BACKOFF == sleep_exp || BACKOFF == wait_cond
	if (!ret && tree.left_subtree_is_idle) {
		ret = PEEK_REQ(left_child(n, num_workers-1));
	}

	if (!ret && tree.right_subtree_is_idle) {
		ret = PEEK_REQ(right_child(n, num_workers-1));
	}
#endif

	return ret;
}
```
with 14 workers:

![image](https://user-images.githubusercontent.com/22738317/70004144-9beb7100-1565-11ea-88e1-1d45c1866336.png)

We assume the branch through workers 4-9-10 is idle and every other branch is working, especially branch 2. Worker 1 wants to check its own and its children's steal requests:
- The left side (worker 3) is not idle -> skipped
- The right side (worker 4) is idle -> peek and dive in
- The right-left side (worker 9) is idle -> but the comparison is `tree.left_subtree_is_idle`, with `tree` being thread-local to worker 1, so what is actually checked is worker 3's state, which is active, and worker 1 fails to retrieve worker 9's pending requests.

Answers: username_1: Uh oh, you're right. I guess I didn't think this one through... username_0: To keep the intended functionality, you can check the branch and then traverse the worker tree iteratively. I've cooked up an algorithm to traverse any subtree of a binary tree in a depth-first way without having to allocate a stack. I'm not sure how to adapt the iterator to C, though.
```Nim
import bitops

template isLeaf(node, maxID: int32): bool =
  node >= maxID div 2

func lastLeafOfSubTree(start, maxID: int32): int32 =
  ## Returns the last leaf of a subtree
  # Algorithm:
  # we take the right side by doing
  # repeated (2n+2) computation until we reach the rightmost leaf node
  preCondition: start <= maxID
  result = start
  while not result.isLeaf(maxID):
    result = 2*result + 2

func prevPowerofTwo(n: int32): int32 {.inline.} =
  ## Returns n if n is a power of 2
  ## or the biggest power of 2 preceding n
  1'i32 shl fastLog2(n+1)

iterator traverseDepthFirst*(start, maxID: int32): int32 =
  ## CPU and memory efficient depth first iteration of implicit binary trees:
  ## Stackless and traversal can start from any subtrees
  # We use the integer bit representation as an encoding
  # to the binary tree path and use shifts and count trailing zero
  # to navigate downward and jump back to the parent
  preCondition: start in 0 .. maxID
  var depthIdx = start.prevPowerofTwo() # Index of the first node of the current depth
  var relPos = start - depthIdx + 1     # Relative position compared to the first node at that depth

  # A node has coordinates (Depth, Pos) in the tree
  # with Pos: relative position to the start node of that depth
  #
  # node(Depth, Pos) = 2^Depth + Pos - 1
  #
  # In the actual code
  # depthIdx = 2^Depth and relPos = Pos
  preCondition: start == depthIdx + relPos - 1

  let lastLeaf = lastLeafOfSubTree(start, maxID)
  var node: int32

  while true:
    node = depthIdx + relPos - 1
    yield node
    if node == lastLeaf:
      break
    if node.isLeaf(maxId):
      relPos += 1
      let jump = countTrailingZeroBits(relPos)
      depthIdx = depthIdx shr jump
[Truncated]
  doAssert pos == target.len

check(0, from0)
check(1, from1)
check(2, from2)
check(3, from3)
check(4, from4)
check(5, from5)
check(6, from6)
check(7, from7)
check(8, from8)
check(9, from9)
check(10, from10)
check(11, from11)
check(12, from12)
check(13, from13)
check(14, from14)
```

I think the ideal would be breadth-first, i.e. checking all parents before children. Also, the storage of workers is breadth-first (`0 1 2 3 4 5 6 7 8 9 10 11 12 13 14`), so it would help the CPU branch predictor. But I'm not sure yet how to do breadth-first subtree traversal without also needing to allocate extra space.
username_0: Well, that algorithm has a bug. I was hunting a deadlock and found out that
```
for i in traverseDepthFirst(0, maxID = 3):
  echo i
```
outputs:
```
0
1
2
```
and
```
for i in traverseDepthFirst(0, maxID = 4):
  echo i
```
```
0
1
3
4
2
```
So I need to fix it.
username_0: Two fixes. The leaf test was wrong:
```Nim
template isLeaf(node, maxID: int32): bool =
  2*node + 1 > maxID
```
And you also need to skip a node if it is bigger than maxID:
```Nim
iterator traverseDepthFirst*(start, maxID: int32): MetaNode =
  ## CPU and memory efficient depth first iteration of implicit binary trees:
  ## Stackless and traversal can start from any subtrees
  # We use the integer bit representation as an encoding
  # to the binary tree path and use shifts and count trailing zero
  # to navigate downward and jump back to the parent
  assert: start in 0 .. maxID
  var depthStart = start.prevPowerofTwo() # Index+1 of the first node of the current depth.
                                          # (2^depth - 1) is the index of starter node at each depth.
  var relPos = start - depthStart + 1     # Relative position compared to the first node at that depth.

  # A node has coordinates (Depth, Pos) in the tree
  # with Pos: relative position to the start node of that depth
  #
  # node(Depth, Pos) = 2^Depth + Pos - 1
  #
  # In the actual code
  # depthStart = 2^Depth and relPos = Pos
  assert: start == depthStart + relPos - 1

  let lastLeaf = lastLeafOfSubTree(start, maxID)
  var node: int32

  while true:
    node = depthStart + relPos - 1
    if node <= maxID:
      yield (node, (depthStart, relPos, node.isLeaf(maxID)))
    if node == lastLeaf:
      break
    if node.isLeaf(maxId):
      relPos += 1
      let jump = countTrailingZeroBits(relPos)
      depthStart = depthStart shr jump
      relPos = relPos shr jump
    else:
      depthStart = depthStart shl 1
      relPos = relPos shl 1
```
Otherwise, with maxID = 3 we still get node 4 before the iterator goes to 2, and with maxID = 7 we get node 8. I should really see if we can have something breadth-first.
username_0: Turns out it was much easier to do breadth first from any subtree without needing a queue:
```Nim
iterator traverseBreadthFirst*(start, maxID: int32): int32 =
  assert start in 0 .. maxID
  var
    levelStart = start # Index of the node starting the current depth
    levelEnd = start   # Index of the node ending the current depth
    pos = 0'i32        # Relative position compared to the current depth

  var node: int32
  while true:
    node = levelStart + pos
    if node >= maxID:
      break
    yield node
    if node == levelEnd:
      levelStart = 2*levelStart + 1
      levelEnd = 2*levelEnd + 2
      pos = 0
    else:
      pos += 1
```
username_1: Nice! I came up with this:
```python
def left_child(node):
    n = 2*node + 1
    return n if 0 <= n < max_nodes else -1

def check_subtree(root):
    return check_level(root, 0, [])

def check_level(node, lvl, lst):
    if node == -1:
        return lst
    for n in range(node, min(node + 2**lvl, max_nodes)):
        lst.append(n)
    return check_level(left_child(node), lvl+1, lst)
```
I just love (tail) recursion. :wink: Status: Issue closed
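Closing the loop on "I'm not sure how to adapt the iterator to C, though": the final breadth-first iterator translates almost line for line. The sketch below is an untested illustration under a few assumptions: it reuses `channel_peek`, `chan_requests`, and `num_workers` from the `PEEK_REQ` quoted at the top, treats valid worker IDs as `0 .. num_workers-1` (so the bound is exclusive), and deliberately drops the per-subtree idle flags to show only the traversal itself.

```c
#include <stdbool.h>

/* Iterative, allocation-free breadth-first peek over the whole subtree
 * rooted at `root`, mirroring the Nim traverseBreadthFirst iterator.
 * channel_peek, chan_requests, and num_workers are assumed to be the
 * definitions from the runtime.c code quoted at the top of this thread. */
static inline bool PEEK_REQ_subtree(int root)
{
    if (root == -1 || root >= num_workers) return false;

    int level_start = root;  /* first node of the current tree level */
    int level_end = root;    /* last node of the current tree level */
    int pos = 0;             /* offset within the current level */

    while (true) {
        int node = level_start + pos;
        /* Node indices only grow during the walk, so the first
         * out-of-range node ends the traversal. */
        if (node >= num_workers) break;
        if (channel_peek(chan_requests[node]))
            return true;
        if (node == level_end) { /* level exhausted: descend one level */
            level_start = 2 * level_start + 1;
            level_end = 2 * level_end + 2;
            pos = 0;
        } else {
            pos++;
        }
    }
    return false;
}
```

Each level is scanned left to right before descending, so parents are always checked before their children and the channel array is read in increasing index order, matching the breadth-first worker layout `0 1 2 3 ...` that the thread notes should be friendly to the branch predictor.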